Dataset schema (per-row fields, types, and value ranges):

| Column     | Type   | Values / length              |
|------------|--------|------------------------------|
| repo       | string | 1 distinct value             |
| number     | int64  | 1 to 25.3k                   |
| state      | string | 2 distinct values            |
| title      | string | 1 to 487 characters          |
| body       | string | 0 to 234k characters         |
| created_at | string | 19 characters (fixed length) |
| closed_at  | string | 19 characters (fixed length) |
| comments   | string | 0 to 293k characters         |
transformers
21,471
closed
Add TF GPTNeoX
### Feature request Add the GPTNeoX model in TensorFlow. ### Motivation Having GPTNeoX in TensorFlow would benefit the community. ### Your contribution @gante is it possible to assign this to me?
02-06-2023 12:26:01
02-06-2023 12:26:01
(@Rocketknight1 FYI)<|||||>@JIPHF If you're happy to make a PR for this, then please do! Let us know if you need any help with that.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,470
closed
make SpeechT5 doc examples deterministic
# What does this PR do? Fixes an issue with the doc examples for SpeechT5. Due to the dropout layer being used in inference mode, the predicted sequence length is not always the same, which causes the doc tests to fail. Setting the seed fixes this. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-06-2023 10:48:07
02-06-2023 10:48:07
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you @hollance . Before I could merge, could you change the docstrings that contain(s) ``` dataset = load_dataset(...) ``` to ``` dataset = load_dataset(...) # doctest: +IGNORE_RESULT ``` 🙏 Thank you.<|||||>@ydshieh If I do this, `make fixup` will wrap this code like so: ```python >>> dataset = load_dataset( ... "hf-internal-testing/librispeech_asr_demo", "clean", split="validation" ... ) # doctest: +IGNORE_RESULT ``` Is that OK?<|||||>> @ydshieh If I do this, `make fixup` will wrap this code like so: > > ```python > >>> dataset = load_dataset( > ... "hf-internal-testing/librispeech_asr_demo", "clean", split="validation" > ... ) # doctest: +IGNORE_RESULT > ``` > > Is that OK? totally OK :-)
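A minimal, self-contained sketch of why fixing the seed makes the SpeechT5 doc examples stable: with `set_seed`, the random draws that inference-time dropout consumes become reproducible, so the predicted sequence length no longer varies between runs. The seed value below is arbitrary and not taken from the actual doc example.

```python
import torch
from transformers import set_seed

set_seed(555)         # any fixed integer works; 555 is just a placeholder
print(torch.rand(3))  # identical on every run once the seed is fixed
```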
transformers
21,469
closed
ForcedBOSTokenLogitsProcessor takes input_ids.shape[-1] as the number of generated tokens
### System Info `ForcedBOSTokenLogitsProcessor` enforces the specified token as the first generated token. In the code below, it takes `input_ids.shape[-1]` as the length of the generated tokens, but as far as I know, `input_ids.shape[-1]` equals `prompt_length + generated_length`. https://github.com/huggingface/transformers/blob/0db5d911fc94604f9568b4b212e005ec4600d157/src/transformers/generation/logits_process.py#L769 So, is this a bug, or is there something I've missed? ### Who can help? @gante ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction None ### Expected behavior None
02-06-2023 10:26:33
02-06-2023 10:26:33
Hey @Dounm 👋 `ForcedBOSTokenLogitsProcessor` is normally used with encoder-decoder models. For those models, when the generation loop begins, it only contains one token per batch/beam by default -- `model.generation_config.decoder_start_token_id` or `model.generation_config.bos_token_ids`. As such, `ForcedBOSTokenLogitsProcessor` forces an additional token at the beginning of the sequence in those cases. I hope this makes it clearer 🤗 <|||||>Many thanks for your reply!
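A small sketch of the behaviour described above, with made-up token ids: when `input_ids` holds a single start token per sequence (the encoder-decoder case), the processor masks every logit except the forced BOS token.

```python
import torch
from transformers import ForcedBOSTokenLogitsProcessor

processor = ForcedBOSTokenLogitsProcessor(bos_token_id=5)  # 5 is an illustrative token id
input_ids = torch.tensor([[0]])  # one decoder start token per sequence, so the current length is 1
scores = torch.randn(1, 10)      # hypothetical vocabulary of 10 tokens
forced = processor(input_ids, scores)
print(forced.argmax(dim=-1))     # tensor([5]) -- only the forced BOS token remains viable
```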
transformers
21,468
closed
Error when fine-tuning XLM-RoBERTa base on TF/Keras
Hello, I am trying to fine-tune XLM-RoBERTa for text classification with TensorFlow/Keras. I am using the `TFXLMRobertaForSequenceClassification` class for training, and I am fine-tuning on a Google Colab GPU. The TensorFlow version is 2.9.2. en_y_pred = model.predict(en_x_test_in, batch_size=128, verbose=1) **InvalidArgumentError: indices[2,268] = 124030 is not in [0, 50265) [[node tf_roberta_for_sequence_classification_1/roberta/embeddings/Gather (defined at /usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_tf_roberta.py:149) ]] [Op:__inference_train_function_82886] Errors may have originated from an input operation. Input Source operations connected to node tf_roberta_for_sequence_classification_1/roberta/embeddings/Gather: In[0] tf_roberta_for_sequence_classification_1/roberta/embeddings/Gather/resource: In[1] IteratorGetNext (defined at /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:866)**
02-06-2023 10:14:04
02-06-2023 10:14:04
transformers
21,467
closed
Whisper: Decode with condition_on_previous_text=False
### Feature request Whisper speech recognition without conditioning on previous text. As in https://github.com/openai/whisper/blob/7858aa9c08d98f75575035ecd6481f462d66ca27/whisper/transcribe.py#L278 ### Motivation Whisper implementation is great however conditioning the decoding on previous text can cause significant hallucination and repetitive text, e.g.: >"Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice?" Running openai's model with `--condition_on_previous_text False` drastically reduces hallucination @ArthurZucker
02-06-2023 08:28:48
02-06-2023 08:28:48
cc @sanchit-gandhi we could add this as a generation config argument, and in the `prepare_inputs_for_generation` can just remove all the input_ids if asked. WDYT? My question is more about the usage/quality tradeoff but doesn't seem like something hard to maintain. <|||||>Quality is much better without conditioning on previous text https://github.com/openai/whisper/discussions/679#discussioncomment-4449150 Similarly whisperx requires this because theres just too much hallucination otherwise >just remove all the input_ids if asked Yes trying this, fairly straightforward, but not when batch_size > 1 Since each sample in the batch resets at different indexes (when there is a pair of consecutive timestamps). A lot of the methods are nested quite deep so it's taking me a while to sift through, but seems like the best approach, given this variable length prompt per batch, would be to supply an attention mask to the decoder ? Or just pad according to the variable length <|||||>I think the attention mask is the best way to get what you want indeed. Padding can also work, as it should create the attention mask of the padding and pass it to the network. I think it makes sense to add this, we just have to keep it to Whisper, so either in the modeling file, or a new logit processor 😉 I won't have time to do this until at least a week, do you want to open a PR and ping me for pointers and reviews? 🤗 <|||||>Hacked attempt here, seems to work on my end -- can now run very fast whisper without hallucination :') https://github.com/huggingface/transformers/pull/21491/commits/cf2ad49fae43e8355655c5392d4dca0bdd1a733e<|||||>Super cool feature! Thanks for the PR @m-bain! Reviewed directly there!<|||||>Hi there, I was looking into this issue in some detail and I'm not sure this is relevant for the 🤗 Transformers implementation of Whisper, since it never actually conditions on the previous text. It's true that the OpenAI implementation does this, but the Transformers ASR `pipeline` treats all 30-second chunks of audio independently. It never passes in the previous context when doing the predictions. The chunks do overlap partially, so they do have some duplicate tokens, but this overlap is pretty small — not nearly as large as the context provided by the OpenAI implementation. And even if `condition_on_previous_text = False` in the OpenAI code, they still add the output from the previous chunk to the context, which is actually longer than the small bit of overlap used by our `pipeline`. In any case, I will look a bit closer at your PR in the coming days to see exactly what it does. Perhaps it is still an improvement that we could use. 😃 <|||||>So this would mean we would support `conditioning on previous text` by adding the sequential processing on the PR 😄 <|||||>Great point @hollance, shall we keep this open if we have sequential processing on the roadmap?<|||||>Leaving this open as it could be relevant for https://github.com/huggingface/transformers/issues/23231#issuecomment-1545684559<|||||>Doing this only makes sense if we decide to support a sequential pipeline, and I think we weren't really in favor of this? Right now, there is no conditioning on previous text going on (except for explicit prompting, once that PR is merged, which you have to enable manually by passing in prompt tokens). <|||||>I don't think so, no. Running the sequential pipeline is just the same as the original repo, so I struggle to see what the appeal to the user is here vs our batched method (feels like a step backwards). 
Let's close this one then?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@hollance if all of the chunks are processed independently as you say, why it happens quite often that the model starts to repeat itself after the short segments were processed? E.g actual audio is "and who"<silence>"okay" but the model will output "and who and who okay" even though there were 2 separate segments?<|||||>@vsokolovskii Do you have a code snippet and audio file that can reproduce this problem?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Leaving this one closed unless we identify a use case that suggests a need for the sequential pipeline. Note that adding a stride between chunks should alleviate any mis-match between them (you can read more about this [here](https://huggingface.co/blog/asr-chunking)).
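For reference, a sketch of the chunked (non-sequential) long-form transcription the maintainers contrast with OpenAI's sequential decoding; each window is decoded independently, so there is no conditioning on previous text. The chunk/stride values are illustrative and the audio path is a placeholder.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",
    chunk_length_s=30,        # each 30 s window is decoded on its own
    stride_length_s=(5, 5),   # overlap between neighbouring windows, merged afterwards
)
# transcription = asr("audio.wav")["text"]  # "audio.wav" is a placeholder path
```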
transformers
21,466
closed
Datasets performance :(
I am using https://huggingface.co/docs/transformers/model_doc/time_series_transformer as an example to set up time series transformers for my use case. I have modified the training script to just traverse all batches and dump a summary - essentially the max and min values of the first & last time feature elements for each of 366 static categories. 40 epochs, 100 batches per epoch, 256 train sequences per batch = 1_024_000 training sequences in total. **Elapsed time: 452.257402 seconds** Iterating over 1M elements with really simple logic took over 7 minutes on an M2 MacBook :( I am new to Python - is this kind of runtime expected?
02-06-2023 07:38:48
02-06-2023 07:38:48
Hi there. Please use the [forums](https://discuss.huggingface.co/) for questions like this as we keep issues for bugs and feature requests only. Also make sure to include the code you are running or no one will be able to help.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,465
closed
Obtaining text embeddings from CLIP
I am trying to obtain text embeddings from CLIP as shown below. However, I am confused about the difference between text_embeds vs. pooler_output, since they output different things. According to the documentation, text_embeds is "the text embeddings obtained by applying the projection layer to the pooler_output", but I am not sure what this means? Are both acceptable to use as text embeddings (if I want to compare text similarity), or is one more correct than the other? ``` from transformers import CLIPProcessor, CLIPModel from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14") processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14") inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True) outputs = model(**inputs) text_embeds = outputs['text_embeds'] pooler_output = outputs['text_model_output']['pooler_output'] ```
02-06-2023 01:57:32
02-06-2023 01:57:32
Hi, You can use both as text embeddings. The former (`text_embeds`) are embeddings which are in the same embedding space as the image embeddings (so it allows you to compare images and text - which is what people mainly use CLIP for). However if you just want text embeddings, and don't care about image embeddings, then you can use the `pooler_output`. <|||||>Also, please ask such questions on our [forum](https://discuss.huggingface.co/) - we'd like to keep Github issues for bugs/feature requests. Thanks!<|||||>Will do, apologies! Wasn't sure which was the appropriate place.
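To make the answer concrete, here is a short sketch of using the projected text embeddings for text-to-text similarity; `get_text_features` returns the same projected embeddings as `outputs['text_embeds']` above, and cosine similarity is computed after unit-normalising them.

```python
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
with torch.no_grad():
    text_embeds = model.get_text_features(**inputs)  # projected embeddings, shape (2, 768)

text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)  # unit-normalise
print((text_embeds[0] @ text_embeds[1]).item())  # cosine similarity between the two texts
```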
transformers
21,464
closed
How can I fine-tune other languages in trocr? CER over 1
### System Info I want to fine-tune TrOCR. The problem is that the CER exceeds 1. The ground truth for my dataset is a single word, but the predicted value seems to be far too long. What is the problem? Should I implement the new language with the following code? I have trained several times, but the model keeps generating long sentences of text, so the CER goes over 1. The link below is code that only applies the pretrained weights to another language, without fine-tuning. The predicted text is quite long compared to the reference text, and I wonder if there is a problem with the EOS token. https://colab.research.google.com/drive/1ovc-9aXsYKDXAfrO0FUtoiLVkXEdrz6s?usp=sharing ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction https://colab.research.google.com/drive/1ovc-9aXsYKDXAfrO0FUtoiLVkXEdrz6s?usp=sharing ### Expected behavior CER over 1
02-06-2023 01:44:16
02-06-2023 01:44:16
Hi, Refer to this thread: https://github.com/huggingface/transformers/issues/18163. Also, please ask such questions on our [forum](https://discuss.huggingface.co/), as we'd like to keep Github issues for bugs/feature requests. Thanks!
transformers
21,463
closed
[examples] improve block_size warning message
there is an odd warning inside the examples wrt model_max_length value e.g. in `run_clm.py` ``` 01/28/2023 16:03:50 - WARNING - __main__ - The tokenizer picked seems to have a very large `model_max_length` (1000000000000000019884624838656). Picking 1024 instead. You can change that default value by passing --block_size xxx. ``` As the models now can work with much longer sequence lengths (bloom, opt, others) should it try to truncate it to 1024? But actually for me what stood out is `1000000000000000019884624838656` - when I see such huge numbers it usually means a bug, so it is worrying and suggests that either I am doing something wrong or there is a bug somewhere. so this PR is proposing to reword the message to be as informative, but not as scary: ``` 01/28/2023 16:03:50 - WARNING - __main__ - The chosen tokenizer supports a `model_max_length` that is longer than the default `block_size` value of 1024. If you would like to use a longer `block_size` up to `tokenizer.model_max_length` you can override this default with `--block_size xxx`. ```
02-06-2023 01:14:46
02-06-2023 01:14:46
_The documentation is not available anymore as the PR was closed or merged._
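For context on why the number in the original warning looks alarming: it is not garbage but (as of the versions discussed) the `VERY_LARGE_INTEGER` sentinel that tokenizers report when no practical length limit is set, i.e. `int(1e30)` printed exactly, with the odd trailing digits coming from float rounding.

```python
sentinel = int(1e30)  # the default model_max_length when a tokenizer sets no real limit
print(sentinel)       # 1000000000000000019884624838656
```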
transformers
21,462
closed
HubertModel outputs wrong `last_hidden_state` shape.
### System Info - `transformers` version: 4.26.0 - Platform: Linux-5.10.147+-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.1+cu116 (False) - Tensorflow version (GPU?): 2.9.2 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? @sanchit-gandhi ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction As stated in [HuBERTModel docs](https://huggingface.co/docs/transformers/model_doc/hubert#transformers.HubertModel), `last_hidden_state` shape should be `(batch_size, sequence_length, hidden_size)`. ```python import torch from transformers import AutoProcessor, HubertModel from datasets import load_dataset import soundfile as sf processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft") model = HubertModel.from_pretrained("facebook/hubert-large-ls960-ft") input_values = torch.rand(16, 4096) # sequence_length = 4096 input_values.shape last_hidden_state = model(input_values).last_hidden_state last_hidden_state.shape ``` However, the shape of `last_hidden_state` was `torch.Size([16, 12, 1024])`. ### Expected behavior The shape of `last_hidden_state` should be `torch.Size([16, 4096, 1024])`.
02-05-2023 20:18:50
02-05-2023 20:18:50
Hey @celsofranssa! Really great question! In the HuBERT model, we take an input sequence of raw audio waveforms, _downsample_ them using a series of 1d-convolutional networks, and pass the downsampled hidden-states to a Transformer network. In this case, `sequence_length` is referring to the downsampled sequence length (i.e. the sequence length _after_ we apply the 1-d convolutional networks). This is equal to the final sequence length of the HuBERT model (since there's no further downsampling by the Transformer network). We can verify this with the `_get_feat_extract_output_length` method, which computes the downsampled sequence length of the HuBERT model: https://github.com/huggingface/transformers/blob/21a2d900eceeded7be9edc445b56877b95eda4ca/src/transformers/models/hubert/modeling_hubert.py#L867 Using this method, we get: ```python from transformers import AutoProcessor, HubertModel from datasets import load_dataset import torch processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft") model = HubertModel.from_pretrained("facebook/hubert-large-ls960-ft") input_values = torch.rand(16,4096) # sequence_length = 4096 print("Input shape: ", input_values.shape) with torch.no_grad(): last_hidden_state = model(input_values).last_hidden_state print("Last hidden dim: ", last_hidden_state.shape) sequence_len = model._get_feat_extract_output_lengths(input_lengths=input_values.shape[-1]) print("seq len: ", sequence_len) print("Shapes match? ", sequence_len == last_hidden_state.shape[1]) ``` **Print Output:** ``` Input shape: torch.Size([16, 4096]) Last hidden dim: torch.Size([16, 12, 1024]) seq len: tensor(12) Shapes match? tensor(True) ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
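As a worked complement to the answer, the downsampled length can be reproduced by hand from the feature encoder's convolution parameters; the kernel/stride values below are the defaults of the HuBERT/wav2vec2 feature extractor (stated here as an assumption rather than read from the checkpoint config).

```python
seq_len = 4096  # raw waveform samples, as in the reproduction above
for kernel, stride in zip((10, 3, 3, 3, 3, 2, 2), (5, 2, 2, 2, 2, 2, 2)):
    seq_len = (seq_len - kernel) // stride + 1  # standard conv1d output-length formula
print(seq_len)  # 12 -- matches last_hidden_state.shape[1] above
```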
transformers
21,461
closed
Fix multiple `eos_token_id`s in model.generate(...)
# What does this PR do? Fixes https://github.com/huggingface/transformers/pull/20727 for using multiple `eos_token_id`s ## Small repro ```python import math import torch unfinished_sequences = torch.tensor([1,1,1]) next_tokens = torch.tensor([797, 641, 98]) unfinished_sequences.mul((math.prod(next_tokens != i for i in eos_token_id)).long()) ``` ## Error if you run ```python from transformers import pipeline generator = pipeline('text-generation', 'gpt2') generator('hello', eos_token_id=[628, 198], do_sample=True, num_return_sequences=3) ``` then it errors ```python input = tensor([[-32]]) weight = Parameter containing: tensor([[-0.0206, 0.0125, -0.0289, ..., 0.0018, -0.0300, 0.0111], [-0.0239, -0.0158,...0, 0.0075, 0.0113], [-0.0177, -0.0268, 0.0023, ..., 0.0135, 0.0077, -0.0042]], requires_grad=True) padding_idx = -1, max_norm = None, norm_type = 2.0, scale_grad_by_freq = False, sparse = False ... if has_torch_function_variadic(input, weight): return handle_torch_function( embedding, (input, weight), input, weight, padding_idx=padding_idx, max_norm=max_norm, norm_type=norm_type, scale_grad_by_freq=scale_grad_by_freq, sparse=sparse, ) if padding_idx is not None: if padding_idx > 0: assert padding_idx < weight.size(0), "Padding_idx must be within num_embeddings" elif padding_idx < 0: assert padding_idx >= -weight.size(0), "Padding_idx must be within num_embeddings" padding_idx = weight.size(0) + padding_idx else: padding_idx = -1 if max_norm is not None: # Note [embedding_renorm contiguous] # `embedding_renorm_` will call .contiguous() on input anyways, so we # call it here and take advantage of the improved locality in the # `embedding` call below too. input = input.contiguous() # Note [embedding_renorm set_grad_enabled] # XXX: equivalent to # with torch.no_grad(): # torch.embedding_renorm_ # remove once script supports set_grad_enabled _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) > return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) E IndexError: index out of range in self venv/lib/python3.8/site-packages/torch/nn/functional.py:2210: IndexError ``` ## Tests ``` pytest tests/generation/test_utils.py::GenerationIntegrationTests::test_eos_token_id_int_and_list_greedy_search --disable-warnings -vv pytest tests/generation/test_utils.py::GenerationIntegrationTests::test_eos_token_id_int_and_list_contrastive_search --disable-warnings -vv pytest tests/generation/test_utils.py::GenerationIntegrationTests::test_eos_token_id_int_and_list_top_k_top_sampling --disable-warnings -vv pytest tests/generation/test_utils.py::GenerationIntegrationTests::test_eos_token_id_int_and_list_beam_search --disable-warnings -vv ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? 
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @gante
02-05-2023 20:10:54
02-05-2023 20:10:54
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @tokestermw 👋 Thank you for spotting the issues and adding a fix! One request, for two reasons: a) thin function wrappers are very undesirable, as they add another abstraction layer b) tensor ops should ideally be done with `torch` operations, otherwise there will be CPU<>GPU data movement 👉 can you replace the implementation with something like the snippet below, which computes the same thing using torch operators? ```py import torch eos_token_id = torch.tensor([797, 641]) unfinished_sequences = torch.tensor([1, 1, 1]) next_tokens = torch.tensor([797, 641, 98]) next_in_eos = next_tokens.tile((eos_token_id.shape[0], 1)).ne(eos_token_id.unsqueeze(1)).prod(dim=0) unfinished_sequences = unfinished_sequences.mul(next_in_eos).long() ``` <|||||>I just found the same issue I think and this is the code snippet I wanted to use for reporting the bug. Probably redundant as of now but before throwing it away, maybe it helps another user finding the issue. No further comment/processing required from my point of view: ```python from transformers import AutoModelForCausalLM, GenerationConfig MODEL = "gpt2" NUM_RETURN_SEQUENCES = 2 MAX_NEW_TOKENS = 64 CONFIG_DIR = "./generation_test" model = AutoModelForCausalLM.from_pretrained(MODEL) model.save_pretrained(CONFIG_DIR) config = GenerationConfig( num_return_sequences=NUM_RETURN_SEQUENCES, max_new_tokens=MAX_NEW_TOKENS, return_full_text=True, do_sample=True, bos_token_id=50256, pad_token_id=50256, eos_token_id=[50000,50256], # the 50000 is just an example to prove the issue ) config.save_pretrained(CONFIG_DIR) model = AutoModelForCausalLM.from_pretrained(CONFIG_DIR) tokenizer = AutoTokenizer.from_pretrained(MODEL) pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) generated = pipe("As always this is a") print(generated[0]["generated_text"]) ``` <|||||>Thanks @gante! will make the change in a bit Another issue I just found with beam search + multiple eos_token_id is that, on occasion we get this error: ```python ValueError: At most 3 tokens in tensor([ 198, 198, 198, 0, 628, 14373], device='cuda:0') can be equal to `eos_token_id: [198, 628]`. Make sure tensor([ 198, 198, 198, 0, 628, 14373], device='cuda:0') are corrected. ``` <img width="864" alt="Screenshot 2023-02-07 at 16 26 49" src="https://user-images.githubusercontent.com/4722119/217397589-53beae41-fa84-4792-bea6-db6056d33972.png"> This is because we generate 2 * num_beams, https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L2766 which can fail this check when we have more than one `eos_token_id` https://github.com/huggingface/transformers/blob/main/src/transformers/generation/beam_search.py#L612 (I can post a separate issue if that's better)<|||||>@tokestermw if that is not breaking the existing tests, yes, let's move it to a new issue. In essence, we probably want to keep `1+len(eos_token_id)` beam candidates running, to ensure we have at least 1 non-`eos_token_id` candidate to proceed.<|||||>Mmm, looks like a lot of tests have started failing @gante and @tokestermw <|||||>fixed though there is a seemingly unrelated test error https://app.circleci.com/pipelines/github/huggingface/transformers/57219/workflows/68817729-bfae-4e9a-8139-5e76e0e6ed5d/jobs/693592<|||||>Yes, this one has been fixed on main :-)<|||||>Hi @tokestermw Thank you for working on this. After this PR being merged to `main`, there are some CI regression. Could you take a look 🙏 . 
Also cc @gante ## To reproduce: ### We can check with specific commit on `main` branch ```bash git checkout 06d940ef # One commit before this PR on `main` git checkout 9960506c # This PR - failed the following tests ``` ### Then prepare the file format for doctests ```python python utils/prepare_for_doc_test.py src docs ``` ### This ```python python3 -m pytest -v --make-reports doc_tests_gpu --doctest-modules docs/source/en/model_doc/t5.mdx::t5.mdx -sv --doctest-continue-on-failure --doctest-glob="*.mdx" ``` gives error ```bash Expected: ['Das Haus ist wunderbar.', 'Ich arbeite gerne in NYC.'] Got: ['Das Haus ist wunderbar. Das Haus ist wunderschön. Sehr', 'Ich arbeite gerne in NYC. Ich arbeite in NYC.'] ``` ### and this ```python python3 -m pytest -v --make-reports doc_tests_gpu --doctest-modules docs/source/en/model_doc/tapex.mdx::tapex.mdx -sv --doctest-continue-on-failure --doctest-glob="*.mdx" ``` gives error ```bash Expected: [' 53', ' george clooney', ' brad pitt'] Got: [' 53 lithuania, french montana, french montana, french montana, french montana, french montana ...(very long non-sense string)] ```<|||||>@ydshieh thanks, ah i see the issue 😓 . we're not carrying over the `unfinished_sequences` making a fix here: https://github.com/huggingface/transformers/pull/21529
transformers
21,460
closed
Fix `SpeechT5ForSpeechToSpeechIntegrationTests` device issue
# What does this PR do? Just a torch device issue being fixed.
02-05-2023 18:50:03
02-05-2023 18:50:03
_The documentation is not available anymore as the PR was closed or merged._<|||||>Ah yes, nice catch. I don't have merge rights though. Could you also fix this in modeling_speecht5? I think that has the same issue. Around line 2871: ```python if speaker_embeddings is None: speaker_embeddings = torch.zeros((1, 512), device=input_values.device) ``` Thanks! <|||||>> Could you also fix this in modeling_speecht5? I think that has the same issue. Around line 2871: Done! > Ah yes, nice catch. I don't have merge rights though. You have approval right :-) @hollance Then I can merge 🚀
transformers
21,459
closed
adding a tip for deepspeed integration in multi-node environment
This PR 1. adds a tip for training in a multi-node environment with deepspeed without a shared filesystem 2. automatically configures deepspeed to inject: ``` { "checkpoint": { "use_node_local_storage": true } } ``` when `--save_on_each_node` is passed. ------------------- note from @stas00: I took this opportunity to expand these sections much further, as it has been long overdue!
02-05-2023 10:49:18
02-05-2023 10:49:18
@stas00, could you please review ? ^^<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Maybe, it's also worth adding `use_node_local_storage: true` when `save_on_each_node=True` and use_node_local_storage isn't defined in deepspeed config file.<|||||>Great additions, @izapolsk! > Maybe, it's also worth adding use_node_local_storage: true when save_on_each_node=True and use_node_local_storage isn't defined in deepspeed config file. yes, please and thank you! Also if you'd like it might be a good idea to update the doc to replace `torch.distributed.launch` with the new API of `torch.distributed.run` as the former is deprecated now since about a year. If you want to that is - if not, no worries, I can update it later. We can also say expand your note that any launcher can be used, including `accelerate` I think (need to check though). i.e. the launcher is independent from the program it runs.<|||||>I'll do. thank you <|||||>Please let me know when you finished editing and I will add a few more notes - as some users will be still using `launch`, so we should mentioned both. <|||||>@stas00, please review<|||||>Excellent integration addition, @izapolsk - thank you for the initiative. I took this opportunity to expand much further these sections, as it has been long overdue! Hope you don't mind that I did it in your PR. All your content is there, I just expanded the content a lot more. Please let me know if it looks good to you and if you have any suggestions to make. (please note that I reverted to using the full `torch.distributed.run` way since it's easier for users who are transitioning from `torch.distributed.launch`) p.s. I also edited the OP to reflect the changes.<|||||>awesome, thank you !
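A minimal sketch (file name and output path are placeholders) of how the tip plugs into the Trainer: point `deepspeed` at a config containing the `use_node_local_storage` block shown in the PR description and enable `save_on_each_node` so checkpoints are written on every node rather than only the main one.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output",          # a local path that exists on every node
    deepspeed="ds_config.json",   # hypothetical file containing the "checkpoint" block above
    save_on_each_node=True,       # save checkpoints on each node, not just the main one
)
```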
transformers
21,458
closed
[i18n-fr] Translate index page to French
# What does this PR do? Translated the `index.mdx` file of the documentation to French. Part of #21456 Thank you in advance for your review. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger, could you review this PR? <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-04-2023 19:22:31
02-04-2023 19:22:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,457
closed
Fix `PushToHubCallback` import in Share a model docs
# What does this PR do? Fixes a typo in the Share a model docs section. The example to push a Tensorflow model to the Hub used to call the method`PushToHubCallback` from `transformers.keras.callbacks`, resulting in `ImportError`. This PR corrects that example in all languages so that `PushToHubCallback` is imported directly from `transformers`. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? # Who can review? @sgugger, @stevhliu and @MKhalusova Thank you!
02-04-2023 19:07:48
02-04-2023 19:07:48
_The documentation is not available anymore as the PR was closed or merged._
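For reference, a sketch of the corrected import the PR describes: the callback lives at the top level of `transformers` rather than under a `transformers.keras.callbacks` module. The fit call is commented out and uses placeholder names.

```python
from transformers import PushToHubCallback  # requires TensorFlow to be installed

# callback = PushToHubCallback(output_dir="my_model", tokenizer=tokenizer)  # placeholder names
# model.fit(train_dataset, validation_data=eval_dataset, callbacks=[callback])
```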
transformers
21,456
open
[i18n-fr] Translating docs to fr
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the french-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [x] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) (https://github.com/huggingface/transformers/pull/21458) - [x] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx)(https://github.com/huggingface/transformers/pull/21589) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 -->
02-04-2023 15:33:19
02-04-2023 15:33:19
transformers
21,455
closed
Fix Whisper Positional Embeddings when using decoder context
null
02-04-2023 14:08:52
02-04-2023 14:08:52
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21455). All of your documentation changes will be reflected on that endpoint.<|||||>cc @ArthurZucker <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>is this getting in or really not needed?<|||||>Seems to work well without 😉 Also not sure if the updates on whisper fixed the original issue, would have to check!
transformers
21,454
closed
Generate: TF can now accept custom logits processors
# What does this PR do? TF generation test addition PR 1 (out of ???). In an effort to move generation integration tests to be framework-agnostic, I'll be adding low-hanging fruit to TF. This PR brings custom logits processors to TF `.generate()`. The code added is almost copy-paste from PT.
02-04-2023 11:17:51
02-04-2023 11:17:51
_The documentation is not available anymore as the PR was closed or merged._
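A sketch of what the PR enables, mirroring the PyTorch counterpart: a user-defined `TFLogitsProcessor` that can be handed to TF `generate()` via a logits-processor list. The blocked token id is made up and the generate call is left commented as a usage hint.

```python
import tensorflow as tf
from transformers import TFLogitsProcessor, TFLogitsProcessorList


class BlockTokenProcessor(TFLogitsProcessor):
    """Illustrative processor that forbids one (made-up) token id at every step."""

    def __call__(self, input_ids: tf.Tensor, scores: tf.Tensor, cur_len: int) -> tf.Tensor:
        vocab_size = tf.shape(scores)[-1]
        mask = tf.one_hot(123, depth=vocab_size, on_value=float("-inf"), off_value=0.0)
        return scores + mask  # token 123 can never be selected


# model.generate(input_ids, logits_processor=TFLogitsProcessorList([BlockTokenProcessor()]))
```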
transformers
21,453
closed
A new test to check config attributes being used
# What does this PR do? Add a new test to check config attributes being used. For edge cases, I only add rules to 2 files. If the concept is approved, **I will add more to pass CI**, and continue the work of cleaning up in follow-up PR(s).
02-04-2023 07:44:26
02-04-2023 07:44:26
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger Ready for review - The failing tests will be addressed (by adding specific rules in subclasses) once the PR is approved 🙏 .<|||||>The report looks like (if unused attributes are detected) ```bash ValueError: The following configuration classes contain unused attributes in the corresponding modeling files: CLIPSegConfig: ['decoder_attention_dropout', 'decoder_hidden_act'] ... DinatConfig: ['patch_norm'] ... ```<|||||>> Thanks for iterating! Just left a couple more comments. > > As you noticed the job is failing right now, are you planning to add everything in the special map or to fix all the attributes not used? Yes, that's mentioned in the description/comment (in previous version, it's better doing so, but with this new version you suggested, I can indeed add them earlier though). FYI: the special map I will update will contain - some confirmed allowed cases (i.e. we know the reasons and nothing we can do but just allow) - some skipped cases **for now** to allow temporarily with #TODO comment<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21453). All of your documentation changes will be reflected on that endpoint.
transformers
21,452
closed
Added documentation for DagsHubCallback
# What does this PR do? Adds documentation for [DagsHubCallback](https://github.com/huggingface/transformers/blob/59d5edef34ae0fa56065a2e863736d4f133c558b/src/transformers/integrations.py#L1054-L1100)! <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger, please and thank you! 🙂 <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-04-2023 07:25:33
02-04-2023 07:25:33
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,451
closed
AutomaticSpeechRecognitionPipeline throws dict key error even with the correct keys
ValueError: When passing a dictionary to AutomaticSpeechRecognitionPipeline, the dict needs to contain a "raw" key containing the numpy array representing the audio and a "sampling_rate" key, containing the sampling_rate associated with that array reproduced by using the following code: ``` import torch from transformers import pipeline from datasets import load_dataset device = "cuda:0" if torch.cuda.is_available() else "cpu" pipe = pipeline( "automatic-speech-recognition", model="openai/whisper-small.en", chunk_length_s=30, device=device, ) ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") sample = ds[0]["audio"] prediction = pipe(sample)["text"] # we can also return timestamps for the predictions prediction = pipe(sample, return_timestamps=True)["chunks"] ``` versions: torch==1.13.1 transformers==4.26.0 datasets==2.9.0
02-03-2023 23:52:54
02-03-2023 23:52:54
cc @Narsil and @sanchit-gandhi <|||||>This is just because the `sample` is consumed when passed to the pipeline: ```python import torch from transformers import pipeline from datasets import load_dataset device = "cuda:0" if torch.cuda.is_available() else "cpu" pipe = pipeline( "automatic-speech-recognition", model="openai/whisper-small.en", chunk_length_s=30, device=device, ) ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") sample = ds[0]["audio"] prediction = pipe(sample.copy())["text"] # <------------CHANGE HERE # we can also return timestamps for the predictions prediction = pipe(sample, return_timestamps=True)["chunks"] ``` This should work. @sgugger we could remove that by cloning everything ourselves, but it forces a copy of the entire audio array when passed to the pipeline. We have even more subtle modifications where we don't copy, but we do need to pass *some* keys for live inference (used by the API at least) where we pass extra keys as-is to the caller so it can know how to handle the temporary results (since pipeline is stateless it's cumbersome to deal with those by other means).<|||||>Hey @dsingal0! Looks like this code snippet was taken from the Whisper small.en README example which I added last week: https://huggingface.co/openai/whisper-small.en#long-form-transcription I've updated the model README with @Narsil's fix: https://huggingface.co/openai/whisper-small.en/commit/d34e5b8002f2524cb84680607caa2f802de266cd (and all other Whisper model READMEs accordingly) Feel free to open issues/PRs on the Hugging Face Hub if a code example doesn't look right! With regards to `pipeline`, this behaviour is potentially a bit confusing coming from the `model`/`processor` approach, since this way does not consume the input dict, allowing the user to re-use inputs as they wish. This is ok with me provided it's suitably well explained in the docs! I think it would be more elegant if we had a copy-free approach in the processing that did not consume the audio inputs as we currently do (if feasible): https://github.com/huggingface/transformers/blob/3b9a1dc13209d0cab347bf2363d18963cc3f9194/src/transformers/pipelines/automatic_speech_recognition.py#L447<|||||>It's exactly as how I explained above we have 3 choices: - `consume` (current behavior) this makes samples non reusable. IMO the best choice since reusing is only likely to be used while exploring. - `copy`. This makes an extra copy of the audio. On small files it doesn't matter that much, but way too costly for hour long audio files. - `not-passthrough`. Do not pass extra keys around (like `partial` during live microphone inference). This makes this particular use case quite hard to work with (because of the statelessness of pipeline and it would be a breaking change).<|||||>Sure, thanks for clarifying! Happy to stick with `consume` in this case!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,450
closed
Typos/fixes to link syntax
Noticed a couple of small errors and incorrect link syntax in the TPU tutorial, sorry about that!
02-03-2023 22:11:26
02-03-2023 22:11:26
_The documentation is not available anymore as the PR was closed or merged._<|||||>I was just checking what that formatting looked like before committing to it!
transformers
21,449
closed
Longformer FP16 training broken since transformers 4.21
### System Info transformers 4.20 / transformers 4.21 Ubuntu 20, python 3.8 ### Who can help? @ydshieh ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Apologies, I'm using my own dataset but the problem should be easy to reproduce with any Longformer + FP16 example. Upgrading from transformers 4.20 to 4.21 causes Longformer training loss to stay stuck around its initial value. When using transformers 4.20 + FP16 and transformers >= 4.21 + FP32, training loss declines as expected. https://github.com/huggingface/transformers/pull/17306 seems to be what caused this. You can see on that issue that it affected other models too, some of which have been fixed one by one. Longformer is still affected as of transformers 4.26. ### Expected behavior Be able to train Longformer using fp16 precision on recent version of transformers.
02-03-2023 22:07:53
02-03-2023 22:07:53
Hi @geniki Thank you for reporting the issue. > but the problem should be easy to reproduce with any Longformer + FP16 example It would be really nice if you can provide an example script that could reproduce the issue you reported, especially you mentioned `should be easy to reproduce` 🙏 Looking forward for it! > some of which have been fixed one by one Could you remind me which PRs or commits fixed this issue 🙏 That will help a lot, thank you.<|||||>Thanks for your response @ydshieh. Here are some example where this issue has been addressed for other models: https://github.com/huggingface/transformers/pull/20605 https://github.com/huggingface/transformers/pull/18057 https://github.com/huggingface/transformers/pull/19229 https://github.com/huggingface/transformers/pull/17437 I'll try to make an online example with Longformer work somehow. Do you have any model training tests with small dummy data?<|||||>Hi @geniki You can take any dataset on HF Hub (that are for specific task you are working on), and select a subset of it (say the first 1024 examples). However, as you already know some fixes (in you above comment), would you like to try to experiment a fix for this model (with your own dataset, potentially a subset) and open a PR ❤️ ? If not, no worry, but in this case, as I mentioned, a script that could reproduce would be really nice 👍 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,448
closed
Deprecate parallelize API
# What does this PR do? This PR deprecates the parallelize API now that the big model API has been tested for a bit. Using `device_map="balanced"` in the call to `from_pretrained` will do the same thing as the API, and it's still possible to pass along a custom `device_map` (although they are not in the same format).
02-03-2023 21:09:16
02-03-2023 21:09:16
_The documentation is not available anymore as the PR was closed or merged._
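A sketch of the replacement path the PR points to; the model name is only an example and `accelerate` needs to be installed for `device_map` to work.

```python
from transformers import AutoModelForCausalLM

# Spreads the checkpoint evenly across the available devices, which is what
# the deprecated `parallelize()` API used to do by hand.
model = AutoModelForCausalLM.from_pretrained("gpt2", device_map="balanced")
print(model.hf_device_map)  # which module ended up on which device
```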
transformers
21,447
closed
For IterableDataset, return DataLoader using self._train_batch_size. …
…This is consistent with how we generate a regular DataLoader, and leads to the correct args.per_device_train_batch_size eventually ending up on each GPU. Fixes # 21444#issuecomment-1416252207 @sgugger
02-03-2023 19:56:26
02-03-2023 19:56:26
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,446
closed
Added timesformer configuration
Co-authored-by: JuheonChu <[email protected]> # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # 19487 (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ydshieh Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-03-2023 19:31:43
02-03-2023 19:31:43
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @AdiaWu and @JuheonChu, thank you for the contribution 🚀 The doctest is confirmed to pass with this change, but there are 2 lines that should not be deleted in this PR. Once that part is reverted, we are ready to merge 💯 <|||||>Hi @ydshieh Thank you very much for your advice, we will work on it now! <|||||>Hi, @AdiaWu, [Update] You created a new file `src/transformers/utils/documentation_tests.txt`. This should be removed, and we only need to add 1 line in the existing file `utils/documentation_tests.txt`. ~~I am not sure if something changed since yesterday, but currently, the file `documentation_tests.txt` is completely modified, in particular in [this commit](https://github.com/AdiaWu/transformers/commit/657de28e023ff305fde3356e187b96c29bca1f07)~~ ~~Why that file is re-created in that commit? Before we can merge this PR, you will have to make that file clean - there should be only 1 line change instead of all file being changed.~~ <|||||>I am sorry for the trouble. However, I only added line 172 and line 173 as you instructed yesterday. May I ask you what is the one line that you want me to revise? We will try to work on it again. <|||||>Dear @ydshieh and from this site: https://github.com/huggingface/transformers/pull/21446/commits/82d5a6216e3c4596f2213dbee440f12bdaa35fcb. You can check that I only added two lines to the document... Not sure if somewhere went wrong.. Agian, sorry for the trouble. <|||||>One commit before, there is unusual changes: https://github.com/huggingface/transformers/pull/21446/commits/657de28e023ff305fde3356e187b96c29bca1f07<|||||>@ydshieh Thank you! We will remove the file right away.<|||||>@ydshieh Hello, I just wanted to double check. So, what @JuheonChu and I should do is deleting "`src/transformers/utils/documentation_tests.txt`" file. Are we understanding correctly?<|||||>> @ydshieh Hello, I just wanted to double check. So, what @JuheonChu and I should do is deleting "`src/transformers/utils/documentation_tests.txt`" file. Are we understanding correctly? Yes<|||||>Hi @AdiaWu If you check on the changed file page https://github.com/huggingface/transformers/pull/21446/files It shows the file `utils/documentation_tests.txt` is not there (or points to `src/transformers/utils/documentation_tests.txt`. I suggest you verify the file locally, make sure `utils/documentation_tests.txt` exist, not a symbolic link to other files, and with the expected one-line change you want to add for this PR 🙏.<|||||>@ydshieh Is the file `utils/documentation_tests.txt` already in our commits? <|||||>Dear @ydshieh , Since the file "utils/documentation_tests.txt" is missing. @JuheonChu and I just updated the "utils/documentation_tests.txt". There should be no problem with the file now. Please check it whenever you are available and see if there are still some errors this time. <|||||>Hi @AdiaWu - The file `src/transformers/utils/documentation_tests.txt` should not be added. - The file `utils/documentation_tests.txt` should not be deleted. It seems at some point of your commits, you have done something to these 2 files. 
What needs to be done is: - remove the newly added file `src/transformers/utils/documentation_tests.txt` - make sure `utils/documentation_tests.txt` exist, and is not a symbolic link to any other file - make sure `utils/documentation_tests.txt` is updated with one line `src/transformers/models/timesformer/configuration_timesformer.py` I hope this is clear and we can merge the PR once everything is fine, thank you. I can help to resolve this issue if necessary, please let me know :-) <|||||>Dear @ydshieh , I am sorry for misunderstanding your instructions, we have now deleted the "src/transformers/utils/documentation_tests.txt" file, and the "utils/documentation_tests.txt" is currently in the path with only one line added. I hope this time this PR should be all good and ready to be merged. Thank you very much! <|||||>> Dear @ydshieh , I am sorry for misunderstanding your instructions, we have now deleted the "src/transformers/utils/documentation_tests.txt" file, and the "utils/documentation_tests.txt" is currently in the path with only one line added. I hope this time this PR should be all good and ready to be merged. Thank you very much! @ydshieh Thank you for your guidance, and do you mind if you can verify it?<|||||>No worry🤗, and yes it is good now🚀 Thank you so much for making it and the contribution ❤️<|||||>> No worry🤗, and yes it is good now🚀 Thank you so much for making it and the contribution ❤️ Thank you for your patience!
transformers
21,445
closed
Avoid flaky generation sampling tests
# What does this PR do? Avoid the CI failure
```bash
tests/models/switch_transformers/test_modeling_switch_transformers.py::SwitchTransformersModelTest::test_beam_sample_generate_dict_output (line 3099)
RuntimeError: probability tensor contains either inf, nan or element < 0
```
For
```bash
tests/models/marian/test_modeling_marian.py::MarianStandaloneDecoderModelTest::test_sample_generate (line 2482)
RuntimeError: probability tensor contains either inf, nan or element < 0
```
it's not clear what I can change in https://github.com/huggingface/transformers/blob/6c62cfb2eff095c181481d8ae86c7f836b65d2d7/tests/generation/test_utils.py#L108-L155 I changed to `logits_warper_kwargs, logits_warper = self._get_warper_and_kwargs(num_beams=2)`, even though this is not a beam sampling test.
02-03-2023 19:05:10
02-03-2023 19:05:10
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,444
closed
Trainer get_train_dataloader creates wrong batch size when using IterableDataset and multi-gpu training on single machine
### System Info @sgugger I'm not sure if I'm missing something here or not, but I am doing masked language modeling with RobertaForMaskedLM and working in PyTorch on an AWS machine with 8 V100s. I set args.per_device_train_batch_size=32. If I train with a regular Dataset object, the data loader will produce a big batch of 32 * 8 = 256 examples each time, and then they will be split up and sent to each GPU in batches of 32 as expected. But if I switch to an IterableDataset, I end up with the DataLoader producing batches of 32, which get split into batches of 4 being sent to each GPU. This happens because of this code in `Trainer.get_train_dataloader`. If we have an iterable dataset, we end up creating a DataLoader based on **per_device_train_batch_size** (which is 32). But if we have any other type of dataset, we create a DataLoader with self.**_train_batch_size** (which is 256). I confess I don't know what the first `if self.args.world_size > 1` block is supposed to be doing, but that doesn't get executed in my situation (running on a single machine with multiple GPUs). Am I doing something wrong, or is this a bug? Thanks, Andy
```python
if isinstance(train_dataset, torch.utils.data.IterableDataset):
    if self.args.world_size > 1:
        train_dataset = IterableDatasetShard(
            train_dataset,
            batch_size=self._train_batch_size,
            drop_last=self.args.dataloader_drop_last,
            num_processes=self.args.world_size,
            process_index=self.args.process_index,
        )

    return DataLoader(
        train_dataset,
        batch_size=self.args.per_device_train_batch_size,
        collate_fn=data_collator,
        num_workers=self.args.dataloader_num_workers,
        pin_memory=self.args.dataloader_pin_memory,
    )

train_sampler = self._get_train_sampler()

return DataLoader(
    train_dataset,
    batch_size=self._train_batch_size,
    sampler=train_sampler,
    collate_fn=data_collator,
    drop_last=self.args.dataloader_drop_last,
    num_workers=self.args.dataloader_num_workers,
    pin_memory=self.args.dataloader_pin_memory,
    worker_init_fn=seed_worker,
)
```
### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Use a PyTorch model on a single machine with multiple GPUs 2. Set TrainingArguments.per_device_train_batch_size=32 3. Create a regular dataset in memory from a pandas data frame (or whatever) 4. Put a breakpoint (or debugging statement) in the forward pass of the model to print out inputs.shape -> Verify that the first dimension=32 5. Now create an IterableDataset and run again 6. See that inputs.shape has a first dimension of 4 ### Expected behavior The train batch size should be the same whether using a regular Dataset or an IterableDataset
02-03-2023 18:28:14
02-03-2023 18:28:14
Sounds like the `self.args.per_device_train_batch_size` should be `self._train_batch_size` indeed. Do you want to open a PR? As an aside, using DataParallel is not the recommended way to run a multiple GPUs by PyTorch, you should launch your training script with `torchrun`<|||||>Thanks, Sylvain. I issue the pull request. My first time doing so, so hope I did it OK!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,443
closed
[CI ] Remove `past` in favor of `pat_key_values`
# What does this PR do? Related to #20944, The `past` arg was removed.
02-03-2023 18:22:49
02-03-2023 18:22:49
Other tests might also pass! <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Git overides the `use_cache` arguments (only if labels are provided) to `False` see [here](https://github.com/ArthurZucker/transformers/blob/c2f9aacee9326b9db886036497a4d157666cb040/src/transformers/models/git/modeling_git.py#L1475-L1476). When generating, `use_cache` is set to false, but when we run `model.group_beam_search`, the ` self.prepare_inputs_for_generation(input_ids, **model_kwargs)` method forces `use_cache` to True see [here ](https://github.com/ArthurZucker/transformers/blob/c2f9aacee9326b9db886036497a4d157666cb040/src/transformers/models/git/modeling_git.py#L1516-L1534). EDIT: just update this tests will pass now. <|||||>The failing tests are unrelated, will merge
transformers
21,442
closed
Draft Pull request
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-03-2023 16:20:22
02-03-2023 16:20:22
transformers
21,441
closed
Add BLIP-2
# What does this PR do? This PR adds BLIP-2 to the library. To do: - [x] make sure generation works exactly as the original implementation, (maybe @gante can have a look here - based on original code [here](https://github.com/salesforce/LAVIS/blob/main/lavis/models/blip2_models/blip2_opt.py#L207-L211)). Edit: seems to be solved by properly setting the `eos_token_id`! - [x] add more tests for BLIP-2 with `AutoModelForSeq2SeqLM` once designed gets approved - [x] transfer checkpoints, update integration tests - [ ] make it possible to instantiate Blip2Config with config objects, rather than dicts (also check default text config) - will be done in a separate PR cc @younesbelkada
02-03-2023 15:44:26
02-03-2023 15:44:26
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger all comments are addressed, feel free to approve :)<|||||>@NielsRogge Curious, what is the timeline for this to make it into a stable release version?<|||||>Usually there's a Transformers release once every 1 to 2 months, so at the very least in March.<|||||>Hi, thanks for the great work! I'm running into problems trying to use this in the multigpu setting and saw this was mentioned by @younesbelkada earlier -- is there an issue to follow for that? (Specifically, in line 2765 of transformers->generation->utils.py, the devices don't match -- `Expected all tensors to be on the same device, but found at least two devices, cuda:3 and cuda:0!` because beam_scores is on cuda:0 while next_token_scores and next_token_scores_processed are on cuda:3 after using "auto" for the device_map when loading.) I'm also getting a weirder error where it causes a CUDA illegal memory access error for any model used downstream of it on GPU 0, even when it's given no GPU memory on GPU 0 in max_memory. (This doesn't occur for the original BLIP2, which I'm trying to migrate from.)<|||||>Same problem here @sachit-menon "Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!" https://github.com/TimDettmers/bitsandbytes/issues/153<|||||>Hi @sachit-menon @xszheng2020 This is a known issue on my end, I can confirm this should be at least fixed for `blip2-opt` at https://github.com/huggingface/transformers/pull/21707 Can you try to checkout from this branch and let us know on the PR if the fix works? thanks!<|||||>Hi, @younesbelkada thanks! will test it on blip2-opt to see whether it works! and hope the blip2-flant5 could be fixed soon
transformers
21,440
closed
![image](https://user-images.githubusercontent.com/41872440/216619913-99568f8c-8a82-42ab-ae7a-dd56846dd395.png)
![image](https://user-images.githubusercontent.com/41872440/216619913-99568f8c-8a82-42ab-ae7a-dd56846dd395.png) I have 8 GPUs in this machine. ![image](https://user-images.githubusercontent.com/41872440/216619996-7af22496-7fb8-4b6d-90f2-3606846b3cdc.png) I think it's not using all 8 GPUs. I have already tried changing batch sizes, including multiples of 8. _Originally posted by @namanpundir in https://github.com/huggingface/transformers/issues/21407#issuecomment-1415898415_
02-03-2023 13:55:16
02-03-2023 13:55:16
Please do not spam the repository by opening duplicate issues. You can find help to debug your training by posting on the [forums](https://discuss.huggingface.co/) as we keep issues for bugs in the library and feature requests only. You will need to share how you are launching your script for anyone to be able to help.
transformers
21,439
closed
BertTokenizer cannot properly tokenize words with dashes
### System Info - `transformers` version: 4.26.0 - Platform: Linux-5.10.147+-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.1+cu116 (False) - Tensorflow version (GPU?): 2.9.2 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction The `BertTokenizer` doesn't tokenize words with dashes correctly. I tried to use tokenizer for **Italian** and **English** languages and got unexpected results. This issue similar to https://github.com/huggingface/transformers/issues/5136. ```python3 from transformers import AutoTokenizer, BertTokenizer model1 = 'cointegrated/rubert-tiny2' model2 = 'Babelscape/wikineural-multilingual-ner' tokenizer = BertTokenizer.from_pretrained(model1, model_max_length=50) >>> tokenizer.tokenize('so-called') ['so', '-', 'called'] >> tokenizer.tokenize('era il figlio di ghazi-ud-din haidar.') ['era', 'il', 'fi', '##gli', '##o', 'di', 'gh', '##azi', '-', 'ud', '-', 'din', 'hai', '##dar', '.'] ``` ### Expected behavior Expected something like that: ```python3 >>> tokenizer.tokenize('so-called') ['so', '##-', '##called'] >> tokenizer.tokenize('era il figlio di ghazi-ud-din haidar.') ['era', 'il', 'fi', '##gli', '##o', 'di', 'gh', '##azi', '##-', '##ud', '##-', '##din', 'hai', '##dar', '.'] ```
02-03-2023 09:10:26
02-03-2023 09:10:26
I am not sure what you would like us to do about that. If we change the tokenizer, the model will get inputs different from its training and thus won't perform as well. You should use a different model with a tokenizer that suits your needs :-)<|||||>This is not an issue on our side, it's just the way the WordPiece algorithm works, given the corpus that the BERT authors trained on. Check out our course for more info on tokenization algorithms: https://huggingface.co/course/chapter6/1?fw=pt Closing this issue, feel free to reopen.
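To make the comment above concrete, here is a small illustration (outputs shown for `bert-base-uncased`; other checkpoints' vocabularies may differ): the split on `-` happens in `BertTokenizer`'s `BasicTokenizer` pass, which separates standalone punctuation before WordPiece ever runs, so WordPiece can only operate on each piece in isolation.

```python
from transformers import BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
print(tok.basic_tokenizer.tokenize("so-called"))  # ['so', '-', 'called'] -> the split happens here
print(tok.tokenize("so-called"))                  # ['so', '-', 'called']

# Disabling the basic pass keeps hyphenated words together for WordPiece, but, as
# noted above, the resulting ids no longer match what the model saw in pretraining.
raw = BertTokenizer.from_pretrained("bert-base-uncased", do_basic_tokenize=False)
print(raw.tokenize("so-called"))  # result depends on the checkpoint's vocabulary
```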
transformers
21,438
closed
Fix device issue in a `ConvBertModelTest` test
# What does this PR do? CI failed after #21398
02-03-2023 09:08:29
02-03-2023 09:08:29
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,437
closed
BertTokenizer cannot properly tokenize words with dashes
### System Info The `BertTokenizer` doesn't tokenize words with dashes correctly. I tried to use tokenizer for **Italian** and **English** languages and got unexpected results. This issue similar to https://github.com/huggingface/transformers/issues/5136. ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python3 from transformers import AutoTokenizer, BertTokenizer model1 = 'cointegrated/rubert-tiny2' model2 = 'Babelscape/wikineural-multilingual-ner' tokenizer = BertTokenizer.from_pretrained(model1, model_max_length=50) >>> tokenizer.tokenize('so-called') ['so', '-', 'called'] >> tokenizer.tokenize('era il figlio di ghazi-ud-din haidar.') ['era', 'il', 'fi', '##gli', '##o', 'di', 'gh', '##azi', '-', 'ud', '-', 'din', 'hai', '##dar', '.'] ``` ### Expected behavior Expected something like that: ```python3 >>> tokenizer.tokenize('so-called') ['so', '##-', '##called'] >> tokenizer.tokenize('era il figlio di ghazi-ud-din haidar.') ['era', 'il', 'fi', '##gli', '##o', 'di', 'gh', '##azi', '##-', '##ud', '##-', '##din', 'hai', '##dar', '.'] ```
02-03-2023 09:03:27
02-03-2023 09:03:27
transformers
21,436
closed
exclude deleted files in the fixup script
# What does this PR do? Running `make fixup` after a file is deleted on a branch causes `black` to exit with an error. `Error: Invalid value for 'SRC ...': Path '/path/to/deleted/file' does not exist.` This PR resolves the error by setting the `git diff` flag `--diff-filter=d` to exclude deleted files from the list of modified files.
02-03-2023 07:36:12
02-03-2023 07:36:12
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,435
closed
Make beam sample more robust
# What does this PR do? Make beam sample more robust. The probability (more generally, the scores) passed to `torch.multinomial` must not contain `nan`. However, the computation
```python
probs = nn.functional.softmax(next_token_scores, dim=-1)
```
could have `next_token_scores` being all `-inf` (along some batch dim.) due to the processing in logit warpers and beam scorers, and this leads to `probs` being all `nan` along that batch dim, and we sometimes get the following error in the sampling-style generation methods
```
RuntimeError: probability tensor contains either inf, nan or element < 0
```
This PR makes sure the probability passed to `torch.multinomial` is a valid input while not affecting the existing generation process. Related PRs: #17972 #18053
02-03-2023 06:35:59
02-03-2023 06:35:59
_The documentation is not available anymore as the PR was closed or merged._<|||||>Uhmmm... if all tokens in a batch have `-inf` scores, something has gone very wrong. `-inf` can only happen when some tokens are forbidden (e.g. through `NoBadWordsLogitsProcessor`), and if they are all forbidden then it means `.generate` is incorrectly parameterized. I think the runtime error is adequate in that situation -- perhaps we could make it an informative exception explaining the problem?<|||||>@gante Nothing is wrong, at least in the situation I observed: - The `logits_warper` I checked contains: - TemperatureLogitsWarper - TopKLogitsWarper - TopPLogitsWarper Here, - `TopKLogitsWarper`: keeps top 10 logits, set others to `-inf` - `TopPLogitsWarper `: keeps top 1 ([`<eos>` token]) logits, which has probability > top_p (= 0.7), and set others to `-inf`. Then in beam scores, it can't choose `<eos>`, so other tokens are chosen with scores `-inf`. In the next iteration, the line ``` next_token_scores = next_token_scores_processed + beam_scores[:, None].expand_as(next_token_scores) ``` introduce the `all -inf` for some batch dimension, as `beam_scores` contains some `-inf`. The `all -inf` comes from the fact the scores go through `TopPLogitsWarper` while `eos` has a higher probability > `top_p` and `min_tokens_to_keep = 1`. WDYT? <|||||>@ydshieh I understand what causes it, but I still think we shouldn't change the code -- if this situation happens in production, it means that the user has selected a bad combination of model + processors/warpers. Allowing this behavior results in a silent failure, which is much worse than a crash :) The solution should be a fix that would be applicable in real usage, i.e. fixing parameterization. For instance, changing `TopPLogitsWarper` to have `min_tokens_to_keep=2` would fix this issue (and is a potential solution if this problem was happening in an actual use case)<|||||>Well, I do agree some arguments, but I also don't think this is a real problem: Given a set of parametrization, the algorithm is to give the result which makes its own sense. In this case, the result (at some batch dimension) are with scores `-inf`, which totally makes sense: the users can verify the scores and decide what to do on their own. **IMO, as long as we define clearly the behavior, and document it, it is fine.** # Examples Taking one example (not 100% relevant): when we want to sample without replacement, but the places with positive probabilities > 0 are fewer than the sample size request. ### PyTorch gives what you want, even those elements with probability 0 ```python import torch device = "cpu" probs = torch.tensor([[1, 1, 0, 0, 0, 0]], device=device, dtype=torch.float) o = torch.multinomial(probs, num_samples=4) print(o) # tensor([[0, 1, 5, 4]]) # The last 2 elements are meaningless # On CPU/GPU, the behavior seems different too! ``` While ### NumPy throws an error ```python import numpy as np o = np.random.choice(6, 4, replace=False, p=[1.0/3, 1.0/3, 1.0/3, 0, 0, 0]) # ValueError: Fewer non-zero entries in p than size ```<|||||>Also, thinking in batch mode, what if a user really want to use a fixed parametrization, but with it, some example gives error while all other examples could generate successfully? Force them to change the parametrization doesn't seem really good, and it is also not easy to determine beforehand which example will fail with this kind of error.<|||||>And yet another argument: > The solution should be a fix that would be applicable in real usage, i.e. 
fixing parameterization. For instance, changing TopPLogitsWarper to have min_tokens_to_keep=2 would fix this issue (and is a potential solution if this problem was happening in an actual use case) Well, how about a user have extra `NoBadWordsLogitsProcessor`, which have a bad word, and that one has the higher probability together with the `eos` token for some example(s) in the dataset? Then the generation will fail again, and to make it work, users have to change to `min_tokens_to_keep=2`, while all other examples (in validation/test datasets) all work with previous parametrization? Such failure in the middle of the process will be really annoying IMO.<|||||>@ydshieh The thing is, all the examples you pointed out won't happen unless the user has made a mistake. Beam search methods require at least two tokens per round to operate correctly, so `min_tokens_to_keep=2` should always be set in `beam_sample` (perhaps we can modify `.generate()` to set it by default when `num_beams>1`). Again, if we merge this behavior, we expose ourselves to many silent failure modes, where the user will say "the model is bad"/"HF's generate is wrong" instead of being pointed at the root cause. Here are a few examples: 1. logits processors that force tokens in a certain position with unfeasible constraints (e.g.`ForcedBOSTokenLogitsProcessor` + `ForceTokensLogitsProcessor`) 2. logits processors that prevent tokens in a certain position / all positions with unfeasible constraints (e.g. `NoBadWordsLogitsProcessor`, `WhisperTimeStampLogitsProcessor`) 3. A combination of the above I'm sorry, I'll be very stubborn against this change :) <|||||>> won't happen unless the user has made a mistake I don't agree with this. A parametrization may work very well for all examples in a dataset but will fail with a single one - it's really arguable what is right and wrong here - to reiterate again, this is much more annoying if one has to figure out what parametrization to change for a single/few examples, especially in the prod environment, where a crash should really be avoided - current implementation doesn't allow one to deal with such error with try/except(they can, but doing this in batch mode while the failing case may just be a single example in that batch is annoying) > Again, if we merge this behavior, we expose ourselves to many silent failure modes, where the user will say "the model is bad"/"HF's generate is wrong" instead of being pointed at the root cause. - For users that are not developers + with no motivation to dive into + just want complain: - I am not sure if a crush will make them to change their mind and motivate them to dive into - For other users who are willing to debug - Make sure we communicate well/clearly the returned scores should be checked (at least when debugging/analyze logs) should be already super good > Beam search methods require at least two tokens per round to operate correctly, so `min_tokens_to_keep=2` should always be set in `beam_sample` (perhaps we can modify `.generate()` to set it by default when `num_beams>1`). - Hmm, the test I checked has `logits_warper_kwargs, logits_warper = self._get_warper_and_kwargs(num_beams=1)`, in a test named `test_beam_sample_generate_dict_output` 😕 . I can re-work the test, but this is the least point in our discussion here. > I'm sorry, I'll be very stubborn against this change :) I understand, but nice to have a discussion anyway. 
Let's bring @sgugger and @patrickvonplaten into discussion and see what they think :-)<|||||>Would adding warning around the changes made in this PR will change your mind, @gante ?<|||||>> Would adding warning around the changes made in this PR will change your mind, @gante ? Haha no, users ignore warnings (and documentation) most of the time 🙃 <|||||>> > Would adding warning around the changes made in this PR will change your mind, @gante ? > > Haha no, users ignore warnings (and documentation) most of the time 🙃 Well, I do agree with you (with this) 🙃🙃<|||||>I'd also be more on the error side, with a clearer error message and actionable reactions printed to the user. If we get to an all nan line in the batch, there is nothing we can really generate.<|||||>I also think that a runtime error should be thrown here ideally. **However** checking all values for `-inf` can seriously slow down generation, so let's maybe make sure to first test for a potential slow down. Also to me it seems this PR is just to make the tests less flaky. Can we maybe instead try to relax parameters like `top_k` (instead of using default 50, we disable it) to minimize flakiness ? <|||||>> I also think that a runtime error should be thrown here ideally. **However** checking all values for `-inf` can seriously slow down generation, so let's maybe make sure to first test for a potential slow down. > OK, I understand better now what you mean, but I left a comment above. > Also to me it seems this PR is just to make the tests less flaky. Can we maybe instead try to relax parameters like `top_k` (instead of using default 50, we disable it) to minimize flakiness ? The flaky tests have been addressed in #21445 <|||||>As we all agree that an error should be thrown with clear message, I am going to close this PR. The work on make more clear message should be done in another PR :-)
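As a hedged illustration of the failure mode debated above (the score values are made up for the example; the warpers are the real `transformers` classes):

```python
import torch
from transformers import TopKLogitsWarper, TopPLogitsWarper

input_ids = torch.tensor([[0]])
scores = torch.tensor([[10.0, 1.0, 0.5, 0.1]])  # token 0 (say, eos) dominates the distribution

warped = TopPLogitsWarper(top_p=0.7)(input_ids, TopKLogitsWarper(top_k=10)(input_ids, scores))
print(warped)  # only token 0 survives; every other score is -inf

# If the beam scorer then refuses token 0, the surviving candidates all carry -inf,
# softmax turns the row into nan, and torch.multinomial raises the RuntimeError above.
probs = torch.softmax(torch.full((1, 4), float("-inf")), dim=-1)
print(probs)  # all nan
# torch.multinomial(probs, 2)  # RuntimeError: probability tensor contains either inf, nan or element < 0

# The parameterization fix suggested in the thread: always keep a second candidate.
print(TopPLogitsWarper(top_p=0.7, min_tokens_to_keep=2)(input_ids, scores))
```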
transformers
21,434
closed
add customizable ending learning rate arguments to warmup schedulers
# What does this PR do? I noticed that in the original implementation, the learning rate for the cosine and linear schedulers with warmup is always scheduled to 0. However, much recent research, such as Masked Autoencoder and BEiT, schedules the learning rate to some non-zero end learning rate (1e-6 in their case). Hence, I added this feature for these schedulers, so people who don't want to schedule the learning rate all the way to 0 can also use the Hugging Face schedulers. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @Narsil @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
02-03-2023 04:33:04
02-03-2023 04:33:04
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21434). All of your documentation changes will be reflected on that endpoint.<|||||>Not sure I can properly review this code, sorry I never touch that file.<|||||>Thank you for your PR, but the Transformers library is primarily a library of models. Those schedulers are just implemented for ease of use in our Trainer (which wouldn't be able to set that `end_lr` new argument), anything more involved should come from another library/custom user code :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
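Since the maintainers point to custom user code for this, here is a minimal sketch of such a scheduler built directly on `torch.optim.lr_scheduler.LambdaLR` (the function name and the `min_lr_ratio` argument are my own, not part of `transformers`):

```python
import math
from torch.optim.lr_scheduler import LambdaLR


def get_cosine_schedule_with_warmup_and_floor(optimizer, num_warmup_steps, num_training_steps, min_lr_ratio=0.0):
    """Cosine decay with linear warmup that bottoms out at min_lr_ratio * base_lr instead of 0."""

    def lr_lambda(step):
        if step < num_warmup_steps:
            return float(step) / float(max(1, num_warmup_steps))
        progress = float(step - num_warmup_steps) / float(max(1, num_training_steps - num_warmup_steps))
        cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
        return min_lr_ratio + (1.0 - min_lr_ratio) * cosine

    return LambdaLR(optimizer, lr_lambda)
```

For a given base learning rate `lr`, setting `min_lr_ratio=1e-6 / lr` gives the 1e-6 floor mentioned above.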
transformers
21,433
closed
Problem with tokenization using the 'distil-bert-uncased' tokenizer
### System Info ``` - `transformers` version: 4.24.0 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.10.8 - Huggingface_hub version: 0.11.0 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @ArthurZucker, @younesbelkada & @sgugger - since I am facing a tokenization issue with the trainer on `distil-bert-uncased` mostly using the official scripts. ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I was following official script titled [Fine-tune a pretrained model](https://huggingface.co/docs/transformers/training) with the only difference being that I am loading my dataset from a local `.csv` file with structure its described below. | Text | Label (integers 0-2) | | - | - | | string 1 | label 1 | | string 2 | label 2 | | string n | label n | My real dataset however, does not have a header row, It's just strings & integral labels ranging from 0-2. Header row was added for better readability of my structure using markdown. I am using the library-defined `load_dataset` function to load my csv file right into a Hugging Face dataset. My code is as follows: ```python from datasets import dataset_dict, load_dataset from transformers import DistilBertTokenizerFast, DistilBertModel, Trainer, TrainingArguments import torch DATA_PATH = "SOMEPATH" dataset = load_dataset('csv', data_files={'train': f"{DATA_PATH}\\train.csv", 'test': f"{DATA_PATH}\\test.csv", 'validation': f"{DATA_PATH}\\validation.csv"}, column_names=['text', 'label'], split=['train', 'test', 'validation']) dataset = dataset_dict.DatasetDict({'train':dataset[0], 'test':dataset[1], 'validation':dataset[2]}) tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased') def tokenize_function(examples): return tokenizer(examples["text"], padding=True, truncation=True, max_length=512) FINAL_DS = dataset.map(tokenize_function, batched=True) training_stuff = { "batch_size": 64, "epochs": 4, "learning_rate": 1e-5, "weight_decay": 0.01 } training_args = TrainingArguments( output_dir="/Models/DistilBert", per_device_train_batch_size=training_stuff["batch_size"], evaluation_strategy="steps", num_train_epochs=training_stuff["epochs"], fp16=True, save_steps=100, eval_steps=50, logging_steps=10, weight_decay=training_stuff["weight_decay"], learning_rate=training_stuff["learning_rate"], save_total_limit=64, remove_unused_columns=False, push_to_hub=False, report_to='tensorboard', load_best_model_at_end=True, ) model = DistilBertModel.from_pretrained( 'distilbert-base-uncased', num_labels=3, id2label={0: 'Biased', 1: 'Non-biased', 2: 'No agreemnt'}, label2id={'Biased': 0, 'Non-biased': 1, 'No agreement': 2}, ) trainer = Trainer( model=model, args=training_args, train_dataset=FINAL_DS['train'], eval_dataset=FINAL_DS['validation'], tokenizer=tokenizer, ) train_results = trainer.train() trainer.save_model() ``` However, when I run this script (at `trainer.train()`), I get the following error. 
<img width="745" alt="err1" src="https://user-images.githubusercontent.com/67118602/216493803-c8d640f0-d3e8-4116-9f75-c4379ce8a290.png"> ![err3](https://user-images.githubusercontent.com/67118602/216495983-64afbdd9-0a5e-40ef-b91d-dcf184b88913.png) The hugging face forum [link](https://discuss.huggingface.co/t/getting-a-value-error-unable-to-create-a-tensor-because-the-feature-text-has-excessive-nesting-and-it-expects-it-to-be-int-for-some-reason/30890?u=quantumstatic) to this issue, unfortunately with no responses. ### Expected behavior I would expect the model to start training.
02-03-2023 03:15:38
02-03-2023 03:15:38
You are not dropping the text columns of your dataset, so the Trainer is then unable to make tensors out of them. You need to either remove the problematic columns of the dataset or remove the `remove_unused_columns=False` argument.<|||||>Could you please explain how removing the text column will help? Wouldn't the transformer need text column for text2text generation. Even if am failing to grasp the idea of dropping the column and `remove_unused_columns=True` is the correct way to move forward. I get the following error: ![err4](https://user-images.githubusercontent.com/67118602/216628779-d4e9260f-862b-4d02-8d5d-4529b64b12be.png) Why would the model not generate outputs? I double checked that my `input_ids` & `attention_mask` they are sufficiently sized according to the input data.<|||||>You should really go on the [forums](https://discuss.huggingface.co/) to help debug your code as the wider community will be there to help. Your code fine-tunes the base model, there is no text2text class here and no `labels` to go with it. That's why you get this error message, there is no loss to optimize.<|||||>I am sorry, I don't want hugging face to debug my code. I completely understand your time is more valuable than debugging my code. However, I am a beginner and I did try to look up online and I did try the forums and I assure you this is my last option to get help. In addition, I thank you once again for helping me with such a trivial problem. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
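A short sketch of the two fixes pointed out above, reusing the checkpoint and label names from the issue (the model swap gives the Trainer a loss to optimize; leaving `remove_unused_columns` at its default lets it drop the raw `"text"` column):

```python
from transformers import DistilBertForSequenceClassification, TrainingArguments

model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=3,
    id2label={0: "Biased", 1: "Non-biased", 2: "No agreement"},
    label2id={"Biased": 0, "Non-biased": 1, "No agreement": 2},
)

training_args = TrainingArguments(
    output_dir="Models/DistilBert",
    # remove_unused_columns defaults to True, so the tokenized dataset's raw "text"
    # column is dropped and only input_ids / attention_mask / label reach the model.
)

# Pass `model` and `training_args` to the Trainer exactly as in the original script.
```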
transformers
21,432
closed
annotated TFvisionEncoderDecoder input type hints
Co-authored-by: JuheonChu <[email protected]> Co-authored-by: AdiaWu <[email protected]> # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes issue #[16059](https://github.com/huggingface/transformers/issues/16059) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @Rocketknight1 Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-03-2023 02:20:53
02-03-2023 02:20:53
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hello @Rocketknight1, I have one failing CircleCI test yet to resolve [here](https://app.circleci.com/pipelines/github/huggingface/transformers/56839/workflows/c6593d32-bae5-46fa-96cb-d0d05bdee084/jobs/687513?invite=true#step-110-7710) that I need some help with. I tried searching for solutions but couldn't find anything that fixes it. I'm assuming I'm either missing or need to upgrade some package but can't quite pinpoint the issue.<|||||>Hi @miyu386 - a couple of the issues were caused by a difference between two copied functions. Running `make fix-copies` fixed that! The other issues are in our CI - they're caused by a version mismatch in TF vs. TF Probability. This can be fixed by rebasing your PR, but these issues will also be fixed when we merge the PR. If you're happy for me to merge now, I can do that!<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@Rocketknight1 Thank you for the follow-up! I rebased with upstream and 1 test remained to fail. I don't know if this will be addressed when the PR gets merged, if that's the case the PR is ready to be merged now<|||||>@miyu386 Thanks for doing the rebase! The remaining issue is just a flaky test on our end in one of the PyTorch examples. It has nothing to do with your PR here, so I'm happy to merge now. Thanks for your contribution!
transformers
21,431
closed
🚨🚨🚨 Enforce single model initialization
# What does this PR do? There are currently three problems with the model inits: **Problem 1:** When not using the fast init (so in practice when using the model constructor or `AutoXxx.from_config` instead of `from_pretrained`), weights are initialized multiple times. @stas00 showed the example of `OPTForCausalLM` where we have a call to `post_init()` three times: in `OPTForCausalLM`, `OptModel` and `OptDecoder`. Each of those calls launches a recursive call of `_init_weights` on all submodules of the model, so this makes three inits. **Problem 2:** The fast init (of the random weights of the head in `from_pretrained`) and the non-fast init (as above) are not always equivalent. This is because in `from_pretrained`, init is done by calling `_init_weights` only on leaf modules whose weights are not present in the checkpoint, but sometimes `_init_weights` contains class checks for bigger modules ([here](https://github.com/huggingface/transformers/blob/77db257e2a67d4b043cf03bf390947fcd71a9f53/src/transformers/models/oneformer/modeling_oneformer.py#L2801) is one example in OneFormer). **Problem 3:** Some models have an `_init_weights` function that will initialize the same weights in two different ways. We can take again [this example](https://github.com/huggingface/transformers/blob/77db257e2a67d4b043cf03bf390947fcd71a9f53/src/transformers/models/oneformer/modeling_oneformer.py#L2801) in OneFormer, which initializes a weight that is a Conv2D; but `_init_weights` is applied recursively, so that Conv2D will also be initialized [here](https://github.com/huggingface/transformers/blob/77db257e2a67d4b043cf03bf390947fcd71a9f53/src/transformers/models/oneformer/modeling_oneformer.py#L2891) with a different rule. This PR should solve these three problems with one stone by slightly changing the `_init_weights` logic to look for a private `_is_hf_initialized` attribute on the module and skip the init if it's there and `True`. Of course, when initializing a module, this private attribute is set to `True` after the initialization is done. This PR gets the 🚨🚨🚨 sign because it might break users' code if they were relying on the (buggy) init of composite models: if a model has an encoder or backbone that is initialized differently from the rest, the init of the encoder/backbone was previously erased by the bigger model init.
02-02-2023 20:36:28
02-02-2023 20:36:28
_The documentation is not available anymore as the PR was closed or merged._<|||||>@stas00 In initial discussions with @LysandreJik , he mentioned he preferred not having a wrapper. Though the argument about init weights code in the wild is a sound one, so showed how it could look like with the last two commits.<|||||>Thanks for the PR, and for showing the two options! I feel like the wrapper is a little bit magical, but would make contributions simpler while reducing the complexity of the code. I would go with the wrapper, if possible.<|||||>Thank you for making it simpler for the end user, Sylvain - I will test this today on m4 and get back to you.<|||||>Thank you for doing a massive adjustment work and the explanations, Sylvain! This is hard work and very awesome for everybody to benefit from!<|||||>Last failing test is flaky so this is good for final review!<|||||>so it didn't make it into https://github.com/huggingface/transformers/releases/tag/v4.26.1, right? do you know if you plan another hotfix release in the future or plan to wait for 4.27.0? Asking as I'm needing to anchor requirements on this fix for m4 where I found this bug.<|||||>This won't be until 4.27.0 as it could come with bugs we need to fix (and it's not a regression fix so won't go in a patch).<|||||>Thank you for the clarity, Sylvain. 4.27.0 it is.
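The guard described above can be pictured with a short sketch (simplified pseudocode of the idea, not the exact diff in this PR):

```python
# Each module is initialized at most once: submodules that were already handled by
# their own post_init() / from_pretrained pass are skipped on later recursive calls.
def _initialize_weights(self, module):
    if getattr(module, "_is_hf_initialized", False):
        return
    self._init_weights(module)
    module._is_hf_initialized = True
```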
transformers
21,430
closed
Add `inputs_embeds` support for `.generate()` with BLOOM models
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Adds accepting `.generate()` calls with `inputs_embeds` on BLOOM models (following GPT2 example in #21405). ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @gante
02-02-2023 20:08:59
02-02-2023 20:08:59
_The documentation is not available anymore as the PR was closed or merged._<|||||>There seems to be a CircleCI issue when triggering the tests 🤔 @akreal could you try following [these instructions](https://support.circleci.com/hc/en-us/articles/360056873811-Your-access-to-a-project-from-CircleCI-was-revoked-by-GitHub)? I'm not sure whether they will help, but they were the closest match I found based on CircleCI's error message.
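A hedged sketch of the usage this PR enables (the small `bigscience/bloom-560m` checkpoint is chosen arbitrarily for illustration):

```python
from transformers import AutoTokenizer, BloomForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = BloomForCausalLM.from_pretrained("bigscience/bloom-560m")

input_ids = tokenizer("Hello, my name is", return_tensors="pt").input_ids
inputs_embeds = model.get_input_embeddings()(input_ids)

# With this change, generation can start from embeddings instead of token ids; the
# returned sequence then contains only the newly generated tokens.
generated = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=10)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```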
transformers
21,429
closed
Add tutorial doc for TF + TPU
This is the sidebar tutorial for training with TF + TPU, to go with [the code notebook](https://github.com/huggingface/notebooks/pull/313). Note that the Markdown is exported straight from Notion, so some formatting will probably look very wrong - I'm working on cleanup!
02-02-2023 19:39:40
02-02-2023 19:39:40
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sayakpaul The spaces have been removed because the extra content after them got moved to a whole other doc!
transformers
21,428
closed
do not scale gradient in bf16 mode
# What does this PR do? Turn off gradient scaling in the trainer when bf16 mode is selected. Only use gradient scaling in float16 mode. ## Who can review? @sgugger and @stas00
02-02-2023 17:55:30
02-02-2023 17:55:30
_The documentation is not available anymore as the PR was closed or merged._
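The rationale in plain `torch.cuda.amp` terms, outside the Trainer (a sketch that assumes an Ampere-or-newer GPU): bf16 keeps float32's exponent range, so gradients don't underflow and no loss scaling is needed, whereas fp16 still requires a `GradScaler`.

```python
import torch

use_fp16 = False  # flip to True for fp16
amp_dtype = torch.float16 if use_fp16 else torch.bfloat16
# When disabled, scale() is a pass-through and step() simply calls optimizer.step().
scaler = torch.cuda.amp.GradScaler(enabled=use_fp16)

model = torch.nn.Linear(8, 8).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(4, 8, device="cuda")

with torch.autocast(device_type="cuda", dtype=amp_dtype):
    loss = model(x).pow(2).mean()

scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```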
transformers
21,427
closed
Refactor whisper asr pipeline to include language too.
# What does this PR do?
## Why a refactor of this magnitude?
- Earlier iterations tried to keep in line with `pipeline`, which started a cascade of `if ... else` just for `whisper`.
- That cascade of functions was losing the `language` information, which would make it tough to include `language` in the return.
- Since we have chunking, we potentially have several `language` codes within a single file, making this a segmentation problem, not a classification problem. (So `detect_language(..) -> str` is not really applicable to the pipeline.)

Given this problem space, it was simpler to reimplement the whole thing within the `tokenizer`, akin to `_build_conversational_inputs`, which allows lots of specificities within each model. Here the new `_decode_asr` doesn't contain any information outside the tokenizer.

## How does it work?
Hopefully the inline comments are the most comprehensive guide. The tokenizer needs `stride`, expressed in `seconds`, and `time_precision` to convert from token space to seconds space. We could do this outside of the `tokenizer` if necessary (everything kept in tokenizer space), but since `time_precision` is already used as an arg of some methods, we can use it, so the output of this function doesn't need to be converted back.

=========== Overview ============
- iterate over all outputs
- all tokens within an output
- Each token can be
  - a language token
  - a special token
  - a timestamp token
  - a text token
- We accumulate the text tokens.
- We split on end timestamps
- Lots of complexity comes from stride and timestamps

Most importantly, we need to handle strides, and timestamps within strides. In order to handle those, we're simply not splitting on them, and handling merges later. The merging is relatively simple: we find the maximum overlapping sequence. Small optimization: the overlap sequence might contain errors/conflicts. We choose from the previous sequence on the left side of the overlap, and from the next sequence on the right side of the overlap. Since those tokens should correspond to the same audio, splitting midway should be correct most of the time.
## Benchmark
Courtesy of @ArthurZucker
```python
# Whisper HF captioning
from evaluate import load
from datasets import load_dataset
import numpy as np
from transformers import pipeline
import time
import whisper

libri = load_dataset("librispeech_asr", "clean", split="test")
batch = len(libri)
start_dataset = 0
# batch = 10
# start = 0 * batch
models = ["tiny", "tiny.en", "base", "base.en", "small", "small.en", "medium", "medium.en", "large", "large-v2"]
# models = ["tiny.en"]
# models = ["large-v2"]
for model_name in models:
    speech_recognizer = pipeline(
        task="automatic-speech-recognition", model=f"openai/whisper-{model_name}", framework="pt", device=1
    )
    model = whisper.load_model(f"{model_name}" + "-v1" if model_name == "large" else f"{model_name}").to(
        device="cuda:2"
    )
    for offset in range(start_dataset, len(libri), batch):
        iterator = range(offset, offset + batch)
        speech_file = np.concatenate([libri[i]["audio"]["array"] for i in iterator])
        labels = " ".join([libri[i]["text"] for i in iterator])
        start = time.time()
        hf = speech_recognizer(
            [speech_file],
            return_timestamps=True,
            chunk_length_s=30,
            stride_length_s=[4, 4],
            batch_size=32,
            ignore_warning=True,
            num_workers=1,
        )[0]
        end = time.time()
        # print(res)
        print(f"model : {model_name}\nhf time : ", end - start)
        # print(res["text"])
        with open("hf.txt", "w") as f:
            f.write(hf["text"])
        norm_labels = speech_recognizer.tokenizer._normalize(labels)
        norm_res = speech_recognizer.tokenizer._normalize(hf["text"])
        wer = load("wer")
        hf_wer = wer.compute(predictions=[norm_res], references=[norm_labels])
        print("hf wer :", hf_wer)
```
<!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-02-2023 16:23:38
02-02-2023 16:23:38
_The documentation is not available anymore as the PR was closed or merged._<|||||>Still not finished (I'm seeing weird drops in WER when changing parameter combinations) but as a starter @sgugger if you have early feedback I'd take it.<|||||>Ok, I found that whisper doesn't play well **at all** without `return_timestamps: ```python # Whisper HF captioning from evaluate import load from datasets import load_dataset import numpy as np from transformers import pipeline import time import whisper libri = load_dataset("librispeech_asr", "clean", split="test") model_name = "openai/whisper-large-v2" speech_recognizer = pipeline(task="automatic-speech-recognition", model=model_name, framework="pt", device=2) name = model_name.split("/")[-1][len("whisper") + 1 :] model = whisper.load_model(f"{name}-v1" if name == "large" else name).to(device="cuda:3") # Faulty example start = 10 end = 15 iterator = range(start, end) speech_file = np.concatenate([libri[i]["audio"]["array"] for i in iterator]) labels = " ".join(libri[i]["text"] for i in iterator) for return_timestamps in [True, False]: print(f"========Timestamps ({return_timestamps})=========") start = time.time() hf = speech_recognizer( [speech_file], return_timestamps=return_timestamps, chunk_length_s=30, stride_length_s=[4, 4], batch_size=32, num_workers=1, )[0] end = time.time() print(f"model : {model_name}\nhf time : ", end - start) norm_labels = speech_recognizer.tokenizer._normalize(labels) norm_res = speech_recognizer.tokenizer._normalize(hf["text"]) print("HF TEXT:", hf["text"]) wer = load("wer") hf_wer = wer.compute(predictions=[norm_res], references=[norm_labels]) print("hf wer :", hf_wer) start = time.time() openai = model.transcribe(np.asarray(speech_file, dtype=np.float32), without_timestamps=not return_timestamps) end = time.time() norm_open = speech_recognizer.tokenizer._normalize(openai["text"]) print("openai time : ", end - start) openai_wer = wer.compute(predictions=[norm_open], references=[norm_labels]) print("openai wer :", openai_wer) print("Openai TEXT:", openai["text"]) # # if hf_wer > 1.5 * openai_wer: # # import ipdb # # ipdb.set_trace() ``` If you check, the output is really different in both cases. Whisper seems to stop outputting tokens when not using timestamps instead of continuing.<|||||>> Can we do the changes in the tests in a followup PR so that the diffs only show potential new tests, but no changes otherwise? There's less change than the diff leads to believe. I only removed the tests linked to a function I had removed, and there's a few tiny changes in the actual timestamps being outputted. Happy to make the modifications, it will leave a function unused here but would be the tests modifications easier to spot.
transformers
21,426
closed
Allow to add more information in `is_flaky`
# What does this PR do? As mentioned once offline, I think it's better for us to make a bit more effort to describe the situation for tests decorated with `is_flaky`. We don't always know the exact reasons (and for known cases, we don't always have a good way to fix them - at least not within a few months sometimes). A `description` is always good IMO. For future PRs, if new tests are being decorated with `is_flaky`, let's keep 👀 on whether a `description` is provided - even though it's an optional parameter 🙏 .
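As an illustration of the retry-with-description pattern being added here, below is a rough standalone sketch; it is not the exact `transformers.testing_utils.is_flaky` implementation, whose signature and retry behavior may differ.

```python
# Rough standalone sketch of a flaky-test decorator with an optional `description`.
# The real `is_flaky` in `transformers.testing_utils` may differ in details.
import functools

def is_flaky(max_attempts: int = 5, description: str = None):
    def decorator(test_func):
        @functools.wraps(test_func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts - 1):
                try:
                    return test_func(*args, **kwargs)
                except Exception as err:
                    reason = f" ({description})" if description else ""
                    print(f"Flaky test failed on attempt {attempt + 1}{reason}: {err}")
            # Final attempt: let any failure propagate so the test is reported normally.
            return test_func(*args, **kwargs)
        return wrapper
    return decorator

@is_flaky(description="randomly initialized weights occasionally produce NaNs on CPU")
def test_something():
    ...
```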
02-02-2023 15:57:26
02-02-2023 15:57:26
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,425
closed
[`ImageProcessor`] Refactor default `mean` & `std` to `OPENAI_CLIP_MEAN` & `OPENAI_CLIP_STD`
# What does this PR do? Initially this PR was intended to fix a small nit for BLIP feature extractors: the default normalization `mean` & `std` are currently incorrect, as pointed out by @NielsRogge - BLIP uses mean and std values identical to CLIP's: https://github.com/salesforce/LAVIS/blob/5ddd9b4e5149dbc514e81110e03d28458a754c5d/lavis/processors/blip_processors.py#L21 - this has no effect on the current models, as the values were already correct on the Hub. Therefore this PR adds new variables in `constants.py`, `OPENAI_CLIP_MEAN` & `OPENAI_CLIP_STD`, as these values are used by other models as well: `CLIP`, `CLIPSeg`, `OWL-ViT`, etc. cc @NielsRogge @sgugger
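For reference, the CLIP normalization statistics in question are, to the best of my knowledge, the values below; treat this as an approximation and check `transformers.utils.constants` for the authoritative definitions.

```python
# Approximate contents of the proposed constants (values taken from OpenAI CLIP preprocessing);
# the definitions in `transformers` are the source of truth.
OPENAI_CLIP_MEAN = [0.48145466, 0.4578275, 0.40821073]
OPENAI_CLIP_STD = [0.26862954, 0.26130258, 0.27577711]
```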
02-02-2023 15:52:50
02-02-2023 15:52:50
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,424
closed
Add tips for generation with Int8 models
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adds some tips to avoid gotchas with text generation and Int8 models ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> cc @gante @younesbelkada
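As a rough illustration of the kind of setup those tips apply to, here is a minimal 8-bit generation sketch; the checkpoint name is only a placeholder, and `bitsandbytes` plus `accelerate` need to be installed.

```python
# Minimal sketch of generating text with an 8-bit model; adjust the checkpoint to your needs.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "facebook/opt-350m"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)

# Put the inputs on the same device as the (dispatched) model before generating.
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
# Prefer `max_new_tokens` over `max_length` so the prompt length doesn't eat into the budget.
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```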
02-02-2023 15:17:35
02-02-2023 15:17:35
_The documentation is not available anymore as the PR was closed or merged._<|||||><3
transformers
21,423
closed
Fixes bug in the creation of ExponentialDecayLengthPenalty
input_ids_seq_length doesn't exist in the GenerationConfig; it only exists as a local variable in the function. Setting exponential_decay_length_penalty therefore results in an error: `AttributeError: 'GenerationConfig' object has no attribute 'input_ids_seq_length'` This simple change fixes the issue, and exponential_decay_length_penalty works as expected. @gante
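For context, here is roughly how the argument is used once the fix is in; a sketch with a small placeholder checkpoint, assuming the documented tuple form `(start_index, decay_factor)`.

```python
# Sketch: gently push generation toward ending after ~20 newly generated tokens.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The meaning of life is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    # (start_index, decay_factor): after 20 generated tokens, the EOS score gets an exponential boost.
    exponential_decay_length_penalty=(20, 1.05),
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```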
02-02-2023 15:03:07
02-02-2023 15:03:07
_The documentation is not available anymore as the PR was closed or merged._<|||||>merging as the failing test (`tests/models/cvt/test_modeling_cvt.py::CvtModelTest::test_save_load_fast_init_to_base`) is a known flaky test
transformers
21,422
closed
Add distinct section names for PyTorch and TF
Super-small fix here - the section names for PyTorch and TF were identical in the notebooks doc, which meant that when you clicked on one of the TF categories in the TOC you got sent to the PyTorch one instead (because it came first). This PR adds separate section names so this doesn't happen! (Thanks @mishig25 for telling me how to do that!)
02-02-2023 13:38:53
02-02-2023 13:38:53
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for fixing this!
transformers
21,421
closed
Use torch `1.13.1` in push/scheduled CI
# What does this PR do? Well, I should have updated these much earlier.
02-02-2023 13:14:52
02-02-2023 13:14:52
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,420
closed
[examples/research_projects/onnx/summarization] is outdated
### System Info - `transformers` version: 4.25.0 - Platform: Linux-4.19.91-24.1.al7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.10 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? @fatcat-z ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction With the above env set, just clone the repo and run ``` python run_onnx_exporter.py --model_name_or_path facebook/bart-base ``` ### Expected behavior **Expected**: BART with BeamSearch exported with no error. BTW, I notice that projects in `research_projects` are not maintained actively. I want to export an `XXXModelForConditionalGeneration` with BeamSearch. And this demo is the best reference I have found so far. So I truly hope someone can fix it. **Actual**: ```pytb /home/root/.local/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:232: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): /home/root/.local/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:239: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attention_mask.size() != (bsz, 1, tgt_len, src_len): /home/root/.local/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:271: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): /home/root/.local/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:915: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if input_shape[-1] > 1: /home/root/.local/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:96: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min)) /home/root/.local/lib/python3.8/site-packages/torch/jit/_trace.py:976: TracerWarning: Encountering a list at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. 
Consider using a constant container instead (e.g. for `list`, use a `tuple` instead. for `dict`, use a `NamedTuple` instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior. module._c._create_method_from_trace( /home/root/.local/lib/python3.8/site-packages/torch/jit/_trace.py:154: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten/src/ATen/core/TensorBody.h:480.) if a.grad is not None: /home/root/.local/lib/python3.8/site-packages/torch/jit/annotations.py:309: UserWarning: TorchScript will treat type annotations of Tensor dtype-specific subtypes as if they are normal Tensors. dtype constraints are not enforced in compilation either. warnings.warn("TorchScript will treat type annotations of Tensor " Traceback (most recent call last): File "run_onnx_exporter.py", line 207, in <module> main() File "run_onnx_exporter.py", line 203, in main export_and_validate_model(model, tokenizer, output_name, num_beams, max_length) File "run_onnx_exporter.py", line 123, in export_and_validate_model torch.onnx.export( File "/home/root/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 504, in export _export( File "/home/root/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 1529, in _export graph, params_dict, torch_out = _model_to_graph( File "/home/root/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 1131, in _model_to_graph example_outputs = _get_example_outputs(model, args) File "/home/root/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 1017, in _get_example_outputs example_outputs = model(*input_args, **input_kwargs) File "/home/root/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) RuntimeError: forward() Expected a value of type 'Tensor (inferred)' for argument 'num_beams' but instead found type 'int'. Inferred 'num_beams' to be of type 'Tensor' because it was not annotated with an explicit type. Position: 3 Value: 4 Declaration: forward(__torch__.bart_onnx.generation_onnx.BARTBeamSearchGenerator self, Tensor input_ids, Tensor attention_mask, Tensor num_beams, Tensor max_length, Tensor decoder_start_token_id) -> Tensor Cast error details: Unable to cast 4 to Tensor ```
02-02-2023 11:29:43
02-02-2023 11:29:43
This is an unmaintained example, it won't work without using the transformers version corresponding to the time it was written.<|||||>@sgugger Thanks for your quick reply. For some reason, I have to use transformer==4.25.0. I wonder if @fatcat-z has any suggestion on how to adapt the code?<|||||>Hi, For converting summarization models to ONNX, we now have a lot of classes implemented in [HuggingFace Optimum](https://huggingface.co/docs/optimum/index). Generation with ONNX models is also implemented there (greedy, beam search, etc.). Check the [guides](https://huggingface.co/docs/optimum/onnxruntime/overview) for more info.<|||||>Hi @NielsRogge Optimum does look promising. But my model is a GPT-like decoder, with only a `*ModelForConditionalGeneration` interface. You can find details [here](https://huggingface.co/BAAI/glm-large/blob/main/modeling_glm.py). The signature of the generation interface differs a bit from what Optimum has officially supported. As far as I can tell, even if I managed to convert the model into ONNX following this [guide](https://huggingface.co/docs/transformers/serialization), using Optimum will run into another issue.<|||||>Hi @un-certainty, not sure if I understood your issue well. Would the [`ORTModelForCustomTasks`](https://github.com/huggingface/optimum/blob/e8f5a955bc40eea8c1382ab29be8f8ac99601817/optimum/onnxruntime/modeling_ort.py#L1585) help you to achieve this? Otherwise, don't hesitate to open an issue in [Optimum](https://github.com/huggingface/optimum/issues) to see how we can improve the current implementation.<|||||>Hi @un-certainty , Could you please try to update the code [here](https://github.com/huggingface/transformers/blob/197e7ce911d91d85eb2f91858720957c2d979cd2/examples/research_projects/onnx/summarization/bart_onnx/generation_onnx.py#L709)? Please set the type explicitly, like "num_beams: int".<|||||>Hi @fatcat-z Thanks for your suggestion. I found that you created two traced decoders [here](https://github.com/huggingface/transformers/blob/197e7ce911d91d85eb2f91858720957c2d979cd2/examples/research_projects/onnx/summarization/bart_onnx/generation_onnx.py#L222). So will there be two decoders in the converted ONNX graph?<|||||>> Hi @fatcat-z > > Thanks for your suggestion. > > I found that you created two traced decoders [here](https://github.com/huggingface/transformers/blob/197e7ce911d91d85eb2f91858720957c2d979cd2/examples/research_projects/onnx/summarization/bart_onnx/generation_onnx.py#L222). So will there be two decoders in the converted ONNX graph? In the final ONNX graph, there will be no decoder ONNX op so there won't be 2 decoders there. The point is: each traced decoder in the model will be converted to a set of ONNX ops in the final ONNX graph.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
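To illustrate the fix suggested earlier in this thread, the idea is simply to annotate the scalar arguments explicitly so TorchScript doesn't infer them as `Tensor`. The sketch below is hypothetical (class and body are only for illustration), not the actual `generation_onnx.py` code.

```python
# Hypothetical sketch of the suggested fix: annotate non-tensor arguments so
# TorchScript no longer infers them as Tensor (which caused the cast error above).
import torch

class BeamSearchWrapper(torch.nn.Module):
    def forward(
        self,
        input_ids: torch.Tensor,
        attention_mask: torch.Tensor,
        num_beams: int,            # annotated, so passing `4` no longer fails
        max_length: int,
        decoder_start_token_id: int,
    ) -> torch.Tensor:
        # ... beam search logic would go here ...
        return input_ids
```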
transformers
21,419
closed
Fix Graphormer test suite
# What does this PR do? Fixes the comment in #21367, @ydshieh. Updated the shape of the model instantiation when calling `from_pretrained` in the test suite, and updated the values in the integration test.
02-02-2023 11:03:15
02-02-2023 11:03:15
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,418
closed
add new model of MGP-STR
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Add new model of MGP-STR. Fixes https://github.com/huggingface/transformers/issues/18828 ## Before submitting - [√] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [√] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [√] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [√] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [√] Did you write any new necessary tests? ## Who can review? @amyeroberts and @NielsRogge <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-02-2023 09:56:47
02-02-2023 09:56:47
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,417
closed
(maybe) redundant code in transformers/src/transformers/models/bert/
Hi, I was looking at the BERT code and noticed that `BertOutput` and `BertSelfOutput` are almost the same, except for the `in_features` argument to `nn.Linear`? https://github.com/huggingface/transformers/blob/92ce53aab859012f7714dae6d6fce7a7d701e75f/src/transformers/models/bert/modeling_bert.py#L376-L387 https://github.com/huggingface/transformers/blob/92ce53aab859012f7714dae6d6fce7a7d701e75f/src/transformers/models/bert/modeling_bert.py#L454-L465 I was wondering if there is a reason to have it in this way, instead of just one `Output` class accepting an `in_hidden_size` for when it needs to change. Something like this: ``` class BertNewOutput(nn.Module): def __init__(self, config, in_hidden_size): super().__init__() self.dense = nn.Linear(in_hidden_size, config.hidden_size) ... ``` The pattern with `BertOutput` and `BertSelfOutput` is repeated in similar parts of the code such as in `src/transformers/models/bert/modeling_tf_bert.py` where https://github.com/huggingface/transformers/blob/92ce53aab859012f7714dae6d6fce7a7d701e75f/src/transformers/models/bert/modeling_tf_bert.py#L349 is the same code as in https://github.com/huggingface/transformers/blob/92ce53aab859012f7714dae6d6fce7a7d701e75f/src/transformers/models/bert/modeling_tf_bert.py#L427
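For reference, a completed version of the proposed merged class could look like the sketch below, mirroring the layers the existing `BertSelfOutput`/`BertOutput` classes use (dense → dropout → residual add → LayerNorm). This is only an illustration of the idea, not a proposed patch.

```python
# Sketch of the merged class suggested above, following the layer structure of
# BertSelfOutput/BertOutput in modeling_bert.py.
import torch.nn as nn

class BertNewOutput(nn.Module):
    def __init__(self, config, in_hidden_size):
        super().__init__()
        self.dense = nn.Linear(in_hidden_size, config.hidden_size)
        self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)

    def forward(self, hidden_states, input_tensor):
        hidden_states = self.dense(hidden_states)
        hidden_states = self.dropout(hidden_states)
        hidden_states = self.LayerNorm(hidden_states + input_tensor)
        return hidden_states
```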
02-02-2023 08:06:59
02-02-2023 08:06:59
Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only.
transformers
21,416
closed
Fixed RAG script which was failing on dummy example
# What does this PR do? Fixed the RAG script, which was failing in the dummy example: the dummy example fails when `test_epoch_end` is called. The `prefix="test"` should also be dynamic in the log metrics. Also, `test_finetune.sh` was failing when the test file was not present. @shamanez would be ideal to review. Let me know if more information is needed.
02-02-2023 07:40:02
02-02-2023 07:40:02
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi there. This is an unmaintained research project, so we normally don't accept PRs on it. You can try pinging the original authors and see if they accept your suggestion :-)<|||||>@sgugger looks all good to me.<|||||>Thanks for having a look @shamanez and thanks for your contribution @kaustubhdhole !
transformers
21,415
closed
layoutlmv3-base-chinese convert onnx
### Feature request When converting layoutlmv3-base-chinese to ONNX, the command line is as follows: `python3.7 -m transformers.onnx --model=<model path> --feature token-classification`. We find it lacks the vocab_file and merges_file. Can you provide these two files? Thank you very much. If not, can you give us some method to solve the above problem? ### Motivation No. ### Your contribution No.
02-02-2023 07:12:09
02-02-2023 07:12:09
ONNX export will only convert the model, not the tokenizer. ONNX conversion has now moved to the Optimum package; no need to pass `--feature` anymore, as it will be inferred automatically based on the checkpoint. Docs here: https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model.<|||||>Closing this as the issue seems resolved.
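For reference, an Optimum-based export could look roughly like the sketch below. The model id is a hypothetical placeholder, and the on-the-fly export keyword has changed across Optimum versions (`from_transformers=True` in older releases, `export=True` in newer ones), so check the Optimum docs for your version.

```python
# Sketch: export a fine-tuned token-classification checkpoint to ONNX with Optimum
# and save it next to its tokenizer. Keyword names may differ per Optimum version.
from optimum.onnxruntime import ORTModelForTokenClassification
from transformers import AutoTokenizer

model_id = "your-org/your-finetuned-checkpoint"  # hypothetical placeholder
model = ORTModelForTokenClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

model.save_pretrained("onnx_model/")
tokenizer.save_pretrained("onnx_model/")  # the export handles only the model; tokenizer files are saved separately
```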
transformers
21,414
closed
add support to MPNetForCausalLM
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes [#21379](https://github.com/huggingface/transformers/issues/21379#issue-1563845759) Goal: Add support to MPNetForCausalLM Changes: Modified [modeling_mpnet.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/mpnet/modeling_mpnet.py): add cross-attention and accept arguments encoder_hidden_states & encoder_attention_mask ; and added the new class MPNetForCausalLM; ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @VictorSanh @thomwolf @patil-suraj
02-02-2023 07:11:45
02-02-2023 07:11:45
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi there! MPNet is an encoder model, so there are no checkpoints available that will work for a causal LM objective. Why are you interested in adding this model?<|||||>Hi @sgugger! I've recently been using the sentence-transformers library, which relies on transformers for sentence embeddings, and certain features (https://www.sbert.net/docs/package_reference/losses.html#denoisingautoencoderloss) require the decoder part of a model. MPNet shows good performance in the sentence-transformers library, and I tried to write a decoder part to further improve this. There was a similar issue before: [#14737](https://github.com/huggingface/transformers/issues/14737)<|||||>There is no `DistilBertForCausalLM` either in the library, so the issue you link to doesn't really have a connection here. Like I said, there are no pretrained checkpoints for MPNet and causal language modeling, so even if we add this architecture, you won't be able to use it in sentence-transformers since you will get garbage outputs.
transformers
21,413
closed
Error: Both `max_new_tokens` and `max_length` have been set but they serve the same purpose
### System Info - `transformers` version: 4.26.0 - Platform: Linux-5.10.147+-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.1+cu116 (False) - Tensorflow version (GPU?): 2.9.2 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? @gante @ArthurZucker @younesbelkada @sgugger @stevhliu @MKhalusova ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I was setting up and using Galactica language model. Was facing this error: **ValueError: Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a limit to the generated output length. Remove one of those arguments. Please refer to the documentation for more information.** ``` !pip install galai import galai as gal from galai.notebook_utils import * ``` `model = gal.load_model("base") ` `model.generate("The Transformer architecture [START_REF]") ` ----- here the error came up. Same error came up while running this ---- ``` prompt = f"Question: A bat and a ball cost $\\$1.10$ in total. The bat costs $\\$1.00$ more than the ball. How much does the ball cost?\n\nAnswer:" display_markdown(model.generate(prompt, new_doc=True, max_length=250)) ``` ### Expected behavior <img width="1179" alt="Screenshot 2023-02-02 at 9 37 50 AM" src="https://user-images.githubusercontent.com/76526750/216229408-a68fb60b-0c50-4fe9-877a-d0cac693ca46.png"> <img width="1167" alt="Screenshot 2023-02-02 at 9 38 00 AM" src="https://user-images.githubusercontent.com/76526750/216229415-f8d4f67f-8e23-4937-a57e-967c3f37357f.png"> these are the expected results. Please help.
02-02-2023 04:09:16
02-02-2023 04:09:16
Hi @arastumudgal If I am not mistaken, the fix should have been addressed in https://github.com/huggingface/transformers/pull/21347 If you install `transformers` from the `main` branch `pip install git+https://github.com/huggingface/transformers`, your script should work <|||||>Hi @younesbelkada ` ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html` I am facing this error on jupyter notebook when loading the models. <img width="582" alt="Screenshot 2023-02-07 at 9 04 04 AM" src="https://user-images.githubusercontent.com/76526750/217141786-2dc0da11-5a16-42c0-9924-e029fe56abe1.png"> ``` !pip install -U ipywidgets !pip install ipywidgets --upgrade !jupyter nbextension enable --py widgetsnbextension !pip install -U jupyter ``` Did this too but still it isnt working, showing the same error. Any fix? It is working on google colab but not on jupyter notebook. <|||||>Hi @arastumudgal Thanks for the issue but this looks like an issue that is independent from `transformers` <|||||>> Hi @arastumudgal If I am not mistaken, the fix should have been addressed in #21347 If you install `transformers` from the `main` branch `pip install git+https://github.com/huggingface/transformers`, your script should work Just tried this, facing the same error though. @younesbelkada <|||||>@arastumudgal Maybe an update flag was missing, i.e. `pip install -U git+https://github.com/huggingface/transformers` :) Please note that `galai` [has a pinned transformers version](https://github.com/paperswithcode/galai/blob/e3e34481aefeceff9f239f2121988566382a72b2/requirements.txt#L2), and you might run into other issues.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
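For anyone hitting the same error outside of `galai`, the general rule on recent `transformers` versions is to pass only one output-length limit to `generate`; a minimal sketch:

```python
# Sketch: set only one output-length limit; passing both `max_length` and
# `max_new_tokens` triggers the ValueError described above.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The Transformer architecture", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)  # or max_length=..., but not both
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```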
transformers
21,412
closed
[Whisper] Word level and character level timestamps
### Feature request `output = pipe(audio_file, chunk_length_s=30, return_timestamps=True)` Getting word-level and character-level timestamps with the Whisper model ASR pipeline when using `return_timestamps=True`. ### Motivation The timestamps currently returned are at stride level. For our use case, we want to get accurate timestamps for each word, or possibly each character. ### Your contribution With guidance, happy to submit the PR.
02-02-2023 01:48:06
02-02-2023 01:48:06
cc @ArthurZucker and @Narsil <|||||>Hi @Rishabh-Choudhry . This is impossible to do with `whisper`. Whisper simply doesn't work in such a way, it output "timestamp" tokens, roughly when it feels like. And that's all we can do with them. I've seen hybrid approaches where you use `wav2vec2` (and similar) to get those accurate timestamps and solve the potential conflicts. This is however outside of scope for the pipelines in my opinion. (Too complex, and requires running 2 different models, and impossible to align in the general case). https://github.com/m-bain/whisperX Would that work for you ? <|||||>This approach with DTW is more memory efficient and scalable: https://github.com/linto-ai/whisper-timestamped<|||||>Just going to bump this. There are several solutions out there and this is a pretty key missing feature from the transformer implementation of Whisper. E.g. https://github.com/jianfch/stable-ts/blob/main/stable_whisper/whisper_word_level.py<|||||>There's a PR opened for it: https://github.com/huggingface/transformers/pull/21427 If you look at it, it actually uncovered some issues with Whisper itself (in non timestamp mode, the default in `transformers`, not the default in `openai`.)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>NB: word level timestamps were added to openai/whisper last week. Tried it out, it seems to work. https://github.com/openai/whisper/commit/500d0fe9668fae5fe2af2b6a3c4950f8a29aa145<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I've investigated adding word-level timestamps to Transformers using the OpenAI approach of using the cross-attention weights. Preliminary results can be found in this Colab: https://colab.research.google.com/drive/1VWbAgzKWQsStdAA1hcumBU2uyFQX7zAB?usp=sharing<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closed by https://github.com/huggingface/transformers/pull/23205<|||||>@hollance Thanks for adding a nice feature. I know that using the cross attention weight to get the token level timestamp. Then, I think there is no dependence between doing additional finetuning and getting token level timestamp. What do you think? If I want to get token level timestamp from my finetuned model, is there anything I need to be careful about? Timestamp tokens in sentence units will be attached and trained.<|||||>@upskyy You may need to use different `attention_heads` on the fine-tuned model. See also: https://gist.github.com/hollance/42e32852f24243b748ae6bc1f985b13a
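For completeness, with the support referenced above merged, word-level timestamps can be requested from the ASR pipeline roughly like this; a sketch assuming a recent `transformers` release that includes the cross-attention word-timestamp support.

```python
# Sketch: word-level timestamps with the ASR pipeline on a recent transformers release.
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
result = pipe("audio_file.wav", chunk_length_s=30, return_timestamps="word")

print(result["text"])
for chunk in result["chunks"]:
    # each chunk carries a (start, end) tuple in seconds and the corresponding word
    print(chunk["timestamp"], chunk["text"])
```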
transformers
21,411
closed
UnboundLocalError: local variable 'image_processor_class' referenced before assignment
### System Info - `transformers` version: 4.26.0.dev0 - Platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.9.16 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.1 (False) - Tensorflow version (GPU?): 2.11.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 - Using GPU in script?: False - Using distributed or parallel set-up in script?: False ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When I try to add a new model as per the tutorial [here](https://huggingface.co/docs/transformers/add_new_model), I get the following error with the given set of inputs: ``` $ transformers-cli add-new-model-like What is the model you would like to duplicate? Please provide the lowercase `model_type` (e.g. roberta): roberta What is the name (with no special casing) for your new model in the paper (e.g. RoBERTa)? NewTransformer What identifier would you like to use for the `model_type` of this model? [newtransformer] What lowercase name would you like to use for the module (folder) of this model? [newtransformer] What prefix (camel-cased) would you like to use for the model classes of this model (e.g. Roberta)? [NewTransformer] What prefix (upper-cased) would you like to use for the constants relative to this model? [NEWTRANSFORMER] What will be the name of the config class for this model? [NewTransformerConfig] Please give a checkpoint identifier (on the model Hub) for this new model (e.g. facebook/roberta-base): Will your new model use the same processing class as roberta (RobertaTokenizer) (yes/no)? no What will be the name of the tokenizer class for this model? [NewTransformerTokenizer] Traceback (most recent call last): File "/home/stuli/.conda/envs/bin/transformers-cli", line 8, in <module> sys.exit(main()) File "/scratch/gpfs/stuli/transformers/src/transformers/commands/transformers_cli.py", line 54, in main service = args.func(args) File "/scratch/gpfs/stuli/transformers/src/transformers/commands/add_new_model_like.py", line 1351, in add_new_model_like_command_factory return AddNewModelLikeCommand(config_file=args.config_file, path_to_repo=args.path_to_repo) File "/scratch/gpfs/stuli/transformers/src/transformers/commands/add_new_model_like.py", line 1382, in __init__ ) = get_user_input() File "/scratch/gpfs/stuli/transformers/src/transformers/commands/add_new_model_like.py", line 1583, in get_user_input image_processor_class=image_processor_class, UnboundLocalError: local variable 'image_processor_class' referenced before assignment ``` ### Expected behavior There should be no error with the given sequence of inputs when creating a new model.
02-02-2023 00:52:10
02-02-2023 00:52:10
transformers
21,410
closed
Fix image_processor_class bug
# What does this PR do? This PR fixes the image_processor_class bug. Fixes #21411 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger
02-02-2023 00:42:21
02-02-2023 00:42:21
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,409
closed
Fix task guide formatting
Fixes formatting for some of the task guides where links to `AutoModelForX` in the Train sections aren't properly rendered because there wasn't a blank line after the `<Tip>` block preceding the text.
02-02-2023 00:36:46
02-02-2023 00:36:46
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,408
closed
Refactor model summary
This PR refactors the [model summary](https://huggingface.co/docs/transformers/model_summary): - updated with speech/audio, computer vision, and multimodal models (picked based on the ones with the most doc views, this can be refined to show or hide other models) - embeds a timeline of when models are released to provide a visual reference - provides structure and narrative - instead of a list - to discuss the high-level differences between models (users can compare the models, see trends and progression in the larger modelscape) - removed the [attention section](https://huggingface.co/docs/transformers/model_summary#more-technical-aspects), which will get its own page (and possibly be expanded with more attention types) in the conceptual guide section Would love to hear what you think about the direction of this doc please! @sgugger @MKhalusova @LysandreJik
02-01-2023 20:26:32
02-01-2023 20:26:32
_The documentation is not available anymore as the PR was closed or merged._<|||||>I don't think a model summary with sections per modality can work. I'm okay with removing the specific of each model currently in the model summary (make sure they are present in the corresponding model pages however as we don't want to lose anything) but I think we need to have a better structure with h2 sections for modalities than h3 sections for different kinds of models (in NLP encoder/decoder/encoder-decoder, in CV transformer/convnet etc.).<|||||>> but I think we need to have a better structure with h2 sections for modalities than h3 sections for different kinds of models Good point, this’ll work better and allow me to include convnets more naturally! Great questions @MKhalusova, let me try and clarify (and also refine the purpose of the doc while doing so)! 🙂 > My main issue is that it is not entirely clear to me what audience these documents target and what they aim to achieve. The audience is a beginner or someone who is coming from a different modality (say like, from NLP to CV), and the goal is to provide a high-level conceptual overview of the different model types available in each modality. > If I know what model I am interested in, its model doc is much more useful, as it has all of the information. For sure, the model docs fulfill the role of providing all the nitty-gritty information. But sometimes, this can be too much detail, and you can't really make connections between models or understand why you should use one over the other because you're lacking context. The model summary doc tries to go up a level and give users an introductory overview instead of all the technical details. If they’re interested in learning more, they can follow the links to the specific model doc page. > If I want to learn about the difference between encoders and decoders, the information is in the course. The course only has very general information about encoders and decoders. For example, it doesn’t tell you how BERT and DeBERTa are different. > If I want to compare two different models for the same task, I have to jump up and down in the doc and may learn some differences in how they work internally, but what if I’m interested in other aspects such as benchmarks, size of the model, how recent it is, etc.? Yeah the structure I have now is not the best! 😅 But I think @sgugger's suggestion will improve this quite a bit, where it’ll be more readable, and related sections will be more localized, so you don’t have to jump around as much. The goal though is not to give users all the technical details about a model (size, performance, etc.). > So my question is, what are we aiming to achieve with this doc? It can be difficult to approach Transformers when there are so many X-former variants. This doc hopes to provide users with a beginner-friendly guide to them so they can make connections and be like oh wait, this CV model is just like an NLP model, and it's just the input that's different. I think we also want to give more context about the models in terms of design decisions and constraints (e.g., Swin does _x_, unlike ViT because _y_). In a nutshell, I suppose it's to give users the bigger picture of the Transformer model landscape and give them a mental framework to categorize and think about Transformer models. > but what if I’m interested in other aspects such as benchmarks > Are we creating a place where one can compare models on several aspects? 
I think we can boost the impact of this doc even more by addressing those issues you raise above. An embedded Space at the top of the doc that lets users discover models based on certain parameters (longer sequences, tasks, memory-efficiency, multilinguality, etc.) would be very useful and guide users toward selecting a model for their use-case. I can look into this as a next step! 🙂<|||||>Updated the structure to be: ``` ## Computer vision ### Encoder ### ConvNet ## NLP ### Encoder ### Decoder ... ``` If this looks good to everyone, I'll go ahead and fill out the rest of the sections!<|||||>Ok I think this is ready for review now, time to call in the experts! The goal of the doc is to provide a high-level overview of the model types in each modality, so users have more context and can start making connections. @sayakpaul, would you mind reviewing the computer vision and maybe the multimodal sections? @sanchit-gandhi, if you could take a look at the audio section please (I promise it's way shorter this time 😅 )? Thank you both so much! 👏<|||||>I kept most of the model summary infos that wasn't redundant with what was already on the model doc pages (for example, the original GPT); these have been added under the **Tips** section. I've also split out the attention mechanisms onto their own page, which I'll expand on later in a separate PR with additional attention types.
transformers
21,407
closed
multi gpu training.
### Feature request What can we do to train with multiple GPUs when using run_clm.py? ### Motivation Multi-GPU training for faster training. ### Your contribution Can we add this in the script? model = nn.DataParallel(model, device_ids=[i for i in range(torch.cuda.device_count())])
02-01-2023 19:10:24
02-01-2023 19:10:24
If you have multiple GPUs, it will run on multiple GPUs.<|||||>![image](https://user-images.githubusercontent.com/41872440/216619913-99568f8c-8a82-42ab-ae7a-dd56846dd395.png) I have 8 GPUs in this machine. ![image](https://user-images.githubusercontent.com/41872440/216619996-7af22496-7fb8-4b6d-90f2-3606846b3cdc.png) I think it's not using all 8 GPUs. Already tried changing batch sizes and multiples of 8.
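For reference, how the GPUs get used depends on how the example script is launched; a rough sketch (paths and values are placeholders):

```python
# The Trainer-based example scripts pick up all visible GPUs automatically:
#   - `python run_clm.py ...`                       -> single process, naive nn.DataParallel
#   - `torchrun --nproc_per_node=8 run_clm.py ...`  -> one process per GPU (DistributedDataParallel),
#     which is the recommended and usually much faster option.
# The effective batch size is per_device_train_batch_size * number_of_GPUs, e.g.:
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",               # placeholder
    per_device_train_batch_size=4,  # with 8 GPUs this is a global batch of 32
)
```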
transformers
21,406
closed
Enable PyTorch/XLA Fully Sharded Data Parallel (FSDP)
# What does this PR do? This PR enables the user to make use of the [PyTorch/XLA implementation of FSDP](https://github.com/pytorch/xla/tree/master/torch_xla/distributed/fsdp), including the newly added [auto-wrap feature](https://github.com/pytorch/xla/pull/4318). Four arguments have been added to `training_args.py` to facilitate this functionality: - `xla_fsdp`: this flag is a string containing the location of a `.json` file which specifies the FSDP arguments the user wants to use when wrapping their model. - `xla_fsdp_min_num_params`: this flag is an int which will set a size-based automatic wrapping policy which automatically FSDP wraps any module with at least `xla_fsdp_min_num_params` many parameters. - `xla_fsdp_transformer_layer_cls_to_wrap`: this flag is a list of (case-sensitive) strings which will set a layer-class-based automatic wrapping policy which automatically FSDP wraps any module whose name matches one of the listed strings. - `xla_fsdp_grad_ckpt`: this flag is a bool which determines whether gradient checkpointing is enabled for the automatically wrapped layers. # Design notes and future work 1) This PR is an updated version of [this closed PR](https://github.com/huggingface/transformers/pull/20774), which enabled FSDP for a more restricted class of models. This PR now enables nested FSDP wrapping via two auto-wrap policies, avoiding the restrictions of the previous PR. 2) For very large model sizes (greater than, say, 128B parameters), users may see host-side OOMs on TPUs during initialization. This can be mitigated by initializing layer weights immediately after construction, wrapping with FSDP, and moving onto the XLA device, as can be seen in [this branch](https://github.com/AlexWertheim/transformers/blob/einsum/src/transformers/models/gpt2/modeling_gpt2.py#L690-L723). We opted to enable FSDP wrapping at the trainer level, since it does not necessitate model-specific changes and does not disrupt the existing architecture for model construction and initialization. 3) Checkpointing support for XLA FSDP is not included as part of this PR. We hope to add it soon via another PR. 4) We have not included testing for XLA FSDP as part of this PR. We would like to add this in a future PR. Thanks to @ronghanghu for his assistance in the preparation of this PR. Among many other contributions, the observations that one must copy the model's forward method and replace the optimizer step are his. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). 
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? --> ## Who can review? @sgugger @JackCaoG <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger and @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
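To make the intended usage more concrete, here is a hedged sketch of how the configuration described above could be wired up. The exact key names were still being discussed during review (see the comments below), so treat the names and values as illustrative rather than final.

```python
# Illustrative sketch only: the key names follow the design discussed in this PR's review
# and may not match the final merged implementation exactly.
import json
from transformers import TrainingArguments

fsdp_config = {
    "fsdp_transformer_layer_cls_to_wrap": ["GPT2Block"],  # auto-wrap policy by layer class name
    "xla": True,                                          # use the PyTorch/XLA FSDP implementation
    "xla_fsdp_settings": {                                # kwargs forwarded to XLA's FSDP wrapper
        "compute_dtype": "bfloat16",
        "buffer_dtype": "bfloat16",
    },
    "xla_fsdp_grad_ckpt": True,                           # gradient checkpointing on wrapped layers
}

with open("fsdp_config.json", "w") as f:
    json.dump(fsdp_config, f)

args = TrainingArguments(
    output_dir="out",                 # placeholder
    fsdp="full_shard",
    fsdp_config="fsdp_config.json",
)
```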
02-01-2023 18:33:24
02-01-2023 18:33:24
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks so much for the feedback! > I think this can all be added via the new `fsdp_config` training arguments instead of adding four new training arguments in a class that already has too many of them (and for which users have complained a lot about). Adding an `xla` boolean inside and relying on the existing `fsdp_min_num_params`, then adding new keys for the two other arguments you want to add would work better. The new `fsdp_config` training argument is great. I envision the strategy would be something like adding four keys to the `fsdp_config` dictionary: - An `xla` boolean which indicates whether the user is using Fairscale FSDP or XLA FSDP - An `xla_config` string which points to the location of a JSON file which stores the XLA FSDP configuration parameters - The arguments `xla_fsdp_transformer_layer_class_to_wrap` and `xla_grad_ckpt` as before Does this make sense to you? One thing I am wondering about is why `fsdp_transformer_layer_class_to_wrap` hasn't been absorbed into `fsdp_config` the same way `fsdp_min_num_params` has. It would be a bit strange to have `xla_fsdp_transformer_layer_class_to_wrap` as part of the `fsdp_config` dictionary and not `fsdp_transformer_layer_class_to_wrap`. > The other thing to change is that `self.model` should always be the original model, not the wrapped one, which is one you need the hack on the signature. That attribute should be left as is. Just to clarify, do you mean that we should set `self.model = model` before we modify `model`'s forward signature, or that we shouldn't set `self.model = model` at all? The former is no problem, but I think we need to do need to set `self.model = model`; among other things, without it, the program hits a segfault between training and evaluation. <|||||>Your plan for the config sounds sound, cc @pacman100 on the question around `fsdp_transformer_layer_class_to_wrap`. Regarding the `self.model` question, it looks like the traditional FSDP also changes `self.model` to use the FSDP model, though it doesn't look like it updates the signature. Is it missing there as well?<|||||>> Your plan for the config sounds sound, cc @pacman100 on the question around `fsdp_transformer_layer_class_to_wrap`. Excellent, thanks. If it helps, I'm happy to add `fsdp_transformer_layer_class_to_wrap` to the`fsdp_config` training argument as part of my PR, and correct a few small typos I noticed. > Regarding the `self.model` question, it looks like the traditional FSDP also changes `self.model` to use the FSDP model, though it doesn't look like it updates the signature. Is it missing there as well? Great question. I wasn't sure, so I asked @ronghanghu about it. He pointed out to me that we likely do not need to update the signature for either case anymore, now that XLA FSDP wrapping is occurring in `_wrap_model`. Indeed, `_remove_unused_columns` is called by `get_train_dataloader` before `_wrap_model` is called in`inner_training_loop`, and so it will use the original unwrapped model's signature. Moreover, while the functions `get_eval_dataloader` and `get_test_dataloader` call `_remove_unused_columns` after the model is wrapped, this should be ok because `self.self._signature_columns` is already set in the previous call to `get_train_dataloader`. I still need to test to verify this, but once I do, I can remove the update to the wrapped model signature. In the event that it's necessary for XLA FSDP, it will also be necessary for traditional FSDP. 
<|||||>> I'm happy to add fsdp_transformer_layer_class_to_wrap to the fsdp_config training argument as part of my PR, and correct a few small typos I noticed. Hello @AlexWertheim, adding `fsdp_transformer_layer_class_to_wrap` to `fsdp_config` would be great. I had this change in my backlog but it would be great if this PR does that alongside the conterpart XLA changes. I feel only boolean `xla` would be enough along with `grad_ckpt`, `compute_dtype` and `buffer_dtype` to the config. The other args `fsdp_min_num_params ` and `fsdp_transformer_layer_class_to_wrap` can be used in either cases and doc mentioning that `grad_ckpt`, `compute_dtype` and `buffer_dtype` is only applicable with `xla` should be enough along with a warning when fsdp is being used without xla with this any of these fields passed. I feel `xla_config` as a path to file inside `fsdp_config` which itself is a path to file would be lot of load on user <|||||>> Hello @AlexWertheim, adding `fsdp_transformer_layer_class_to_wrap` to `fsdp_config` would be great. I had this change in my backlog but it would be great if this PR does that alongside the conterpart XLA changes. Thanks @pacman100, I'd be happy to do this. One question I had was how we should handle the (unusual) situation when both the deprecated flag `fsdp_transformer_layer_cls_to_wrap` and the option within `fsdp_config` are both specified. With `fsdp_min_num_params`, there is the reasonably simple approach to take the max of the two specified numbers. In the case of `fsdp_transformer_layer_cls_to_wrap`, there are a few approaches that come to mind: - Raise an error - Raise a warning and use the `fsdp_config` specified string - Use both layer classes What do you think? Also, XLA FSDP supports automatic wrapping based on a set of layer names. I believe Fairscale FSDP also does as well, but currently, `fsdp_transformer_layer_cls_to_wrap` only accepts a single string as input. What do you think about expanding `fsdp_transformer_layer_cls_to_wrap` to accept a list of strings? The modifications to the existing Fairscale FSDP logic will be quite small - right now, I think it just passes a singleton set to the auto wrap policy. > I feel only boolean `xla` would be enough along with `grad_ckpt`, `compute_dtype` and `buffer_dtype` to the config. The other args `fsdp_min_num_params ` and `fsdp_transformer_layer_class_to_wrap` can be used in either cases and doc mentioning that `grad_ckpt`, `compute_dtype` and `buffer_dtype` is only applicable with `xla` should be enough along with a warning when fsdp is being used without xla with this any of these fields passed. > > I feel `xla_config` as a path to file inside `fsdp_config` which itself is a path to file would be lot of load on user With appreciation for the confusion that this might cause the user, I think it's still important to allow the user to specify XLA FSDP specific configuration parameters via an `xla_fsdp_settings` flag. The [XLA FSDP parameters](https://github.com/pytorch/xla/blob/master/torch_xla/distributed/fsdp/xla_fully_sharded_data_parallel.py#L122-L240) actually differ quite substantially from [Fairscale FSDP parameters](https://github.com/facebookresearch/fairscale/blob/v0.4.13/fairscale/nn/data_parallel/fully_sharded_data_parallel.py#L170-L291), and so I think it would be valuable for the user to be able to specify the XLA FSDP arguments that they want. Adding a separate flag for each non-redundant XLA FSDP parameter seems like it might bloat the `fsdp_config` file with too many XLA specific arguments. 
If they don't supply an XLA config file, then no harm done - the XLA FSDP default parameters will be used. Compute dtype and buffer dtype are actually already Fairscale FSDP arguments, and so these could be inferred from existing flags if you'd like, though I think it's best for the user to specify them through `xla_fsdp_settings`. (Right now, the trainer sets a mixed precision policy based on whether the `bf16` or `fp16` training args are specified). <|||||>Still happy to discuss what the final changes look like, but as a proof of concept, I've pushed some changes which (among other things) implement some of the items discussed in prior comments: - `fsdp_transformer_layer_cls_to_wrap` is now moved inside `fsdp_config`, and is treated as a list of strings instead of a single string. In the event that the user enters `fsdp_transformer_layer_cls_to_wrap` as a string in the JSON file, the program converts this to a list. - If the user uses the deprecated version of `fsdp_transformer_layer_cls_to_wrap` outside `fsdp_config`, the program issues a warning and combines it (as a list of strings) with whatever `fsdp_transformer_layer_cls_to_wrap` is set to. - The PyTorch FSDP logic has been modified to iterate through the elements of `fsdp_transformer_layer_cls_to_wrap` instead of passing a singleton set to the auto-wrap policy. - I noted that `fsdp_min_num_params` wasn't getting set correctly. This is because `getattr(dict, key, default)` seems to always return the default. I've replaced instances of `getattr(dict, key, default)` with `dict.get(key, default)`, including with `fsdp_min_num_params`. - XLA FSDP parameters are still loaded into their own separate dictionary from the JSON file specified by `xla_fsdp_settings`, though I am happy to continue discussion over whether this should be modified - Defaults are now correctly set for the flags `xla` and `xla_fsdp_grad_ckpt`<|||||>Hello @AlexWertheim, thanks a lot for all the changes and detailed notes: At overall level, what if the fsdp_config looked like below. i.e., xla_fsdp_settings was a dict instead of path to another json. Also, `xla_fsdp_grad_ckpt` could be pushed into `xla_fsdp_settings` as `grad_ckpt` param or something like that as it is unique to xla fsdp. This way user would need to give path to only one json `fsdp_config` while all xla params are in `xla_fsdp_settings` when `xla` is True. ```json { "fsdp_transformer_layer_cls_to_wrap": "T5Block", "xla": true, "xla_fsdp_settings": { "buffer_dtype": "bfloat16", "compute_dtype": "bfloat16", "grad_ckpt": true, } } ``` <|||||>> At overall level, what if the fsdp_config looked like below. i.e., xla_fsdp_settings was a dict instead of path to another json. Also, `xla_fsdp_grad_ckpt` could be pushed into `xla_fsdp_settings` as `grad_ckpt` param or something like that as it is unique to xla fsdp. This way user would need to give path to only one json `fsdp_config` while all xla params are in `xla_fsdp_settings` when `xla` is True. > > ```json > { > "fsdp_transformer_layer_cls_to_wrap": "T5Block", > "xla": true, > "xla_fsdp_settings": { > "buffer_dtype": "bfloat16", > "compute_dtype": "bfloat16", > "grad_ckpt": true, > } > } > ``` @pacman100 Thanks very much for the feedback. I think the suggestion to turn `xla_fsdp_settings` into a dictionary is a great one, and a good way to resolve the issue of having a separate configuration path within `fsdp_config`. 
As far as absorbing `xla_fsdp_grad_ckpt` into `xla_fsdp_settings` goes, I agree that it is nice to have all of the XLA FSDP related configuration information in one place from a logical point of view. That being said, I do have a concern, namely that `xla_fsdp_settings` was supposed to keep track of the XLA FSDP wrapping parameters, and `xla_fsdp_grad_ckpt` is not part of the XLA FSDP wrapping arguments. I worry that the user will get confused about what this flag is for, and in particular, that users will not realize that they have to add this gradient checkpointing flag (or, for that matter, any future XLA FSDP related flags that are not themselves XLA FSDP wrapping params). What do you think? Would also love to hear your thoughts, @ronghanghu <|||||>@sgugger @pacman100 Thanks for taking time to review this pr. As Pytorch 2.0 branch cut just happened and will most likely be release in ~ a month, it will be great if we can enable the FSDP for HF on the master(I believe we will also need to make some other changes, but that can happen in the subsequent pr). This way we can share our benchmark using FSDP with HF more broadly when we doing the announcement/blog post for the 2.0 release.<|||||>@JackCaoG This should be merged in the next few days :-) @AlexWertheim could you rebase your branch on main to fix the tests, and run `make style` on your branch to fix the quality jobs? @pacman100 let me know when you are happy with all the changes.<|||||>> @AlexWertheim could you rebase your branch on main to fix the tests, and run `make style` on your branch to fix the quality jobs? Done! Please note that all commits including and after [77e99c8](https://github.com/huggingface/transformers/pull/21406/commits/77e99c8452aa9fc7b7ca5b03c0d69b2a28e71b0e) are just concerned with rebasing and formatting/style changes. Apologies for the many commits; I had some difficulties running all of the automatic style checks, but I think I've run all of them now, and the CircleCI checks are now passing. Commit [e65dc0b](https://github.com/huggingface/transformers/pull/21406/commits/e65dc0b65a21417a15e30498fa3fd6447334d7ed) modified a check which would have prevented XLA FSDP wrapping when `local_rank = -1`, which is not correct for TPUs. As discussed in prior comments, commit [632318a](https://github.com/huggingface/transformers/pull/21406/commits/632318ac88e3d313843023cf0eff8bb9ac45716f) modified the argument `xla_fsdp_settings` to be a dictionary instead of a path to a JSON file; note that `xla_fsdp_grad_ckpt` was not absorbed into `xla_fsdp_settings`, for the reasons mentioned earlier. Please let me know if there are any additional questions or concerns upon review. Thanks!<|||||>Hi, I was able to do parallel training flawlessly without FSDP on xlm-roberta-**base** on the 8 cores of the TPU-v3 VM (because the model and batch fit properly within each core) by following the [run_glue example](https://github.com/huggingface/transformers/tree/main/examples/pytorch#running-on-tpus) for TPUs. Now I'm trying to get the XL version (facebook/xlm-roberta-**xl**) to work with XLA FSDP with the Trainer integration (set as "full_shard") but I'm getting these memory errors when running the second or first batch: `0%| | 2/939 [03:05<28:08:11, 108.10s/it]Exception in device=TPU:0: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0: 2 root error(s) found. (0) RESOURCE_EXHAUSTED: Attempting to reserve 13.20G at the bottom of memory. That was not possible. 
There are 10.59G free, 0B reserved, and 10.59G reservable.` From my understanding, the model was supposed to be split and loaded across the TPU cores, along with whatever Zero-3 entails, but that doesn't seem to be happening. I tried setting `per_device_train_batch_size=1`, limited the tokenizer to `max_length=200`, and played with a bunch of `fsdp_config` parameters, but none of that worked. I'm also unsure whether I'm still supposed to use the xla_spawn.py script to run with FSDP or not; I tried both and got the same error either way. Additionally, while trying to debug this myself I found that this [line](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L617) condition is never True because the conditions on line 601 prevent it. Maybe we could have an example of how to use this feature with the Trainer? Thanks in advance!<|||||>@AlexWertheim This seems to be a device-side OOM, any advice you can give?
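For readers looking for the example requested above, here is an untested sketch of how the options discussed in this thread could be passed to the Trainer; the key names follow the proposal in this thread and may not match the final merged API exactly.

```python
from transformers import TrainingArguments

fsdp_config = {
    "fsdp_transformer_layer_cls_to_wrap": ["T5Block"],  # list of layer classes to auto-wrap
    "xla": True,                                        # use the torch_xla FSDP implementation
    "xla_fsdp_settings": {                              # forwarded to torch_xla's FSDP wrapper
        "compute_dtype": "bfloat16",
        "buffer_dtype": "bfloat16",
    },
    "xla_fsdp_grad_ckpt": True,                         # gradient checkpointing on wrapped layers
}

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    fsdp="full_shard",
    fsdp_config=fsdp_config,
)
```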
transformers
21,405
closed
Generate: decoder-only models can generate with `inputs_embeds`
# What does this PR do? 2-in-1 PR 🔥 ### 1 - Experimenting with input embeddings Accepting `.generate()` calls with `inputs_embeds` on decoder-only models is a long-standing request (#6535) -- see [this recent comment ](https://github.com/huggingface/transformers/issues/6535#issuecomment-1353658984) in particular and its reacts. It has to be added on a per-model basis, and this PR adds the necessary changes for GPT2. Other models will throw an informative exception if the user passes `inputs_embeds`, asking them to check this PR and implement the same pattern on the model they want to use it with 🤗 Please note that it is still expected that the user passes `input_ids`, i.e. ``` outputs = model.generate(input_ids, inputs_embeds=inputs_embeds) ``` This is because decoder-only models expect the prompt to be present in the output, and this is the only way to preserve it! `input_ids` can also be omitted and, in that case, the output won't contain the prompt. ### 2 - BLIP 2 (cc @NielsRogge) This change is a soft-requirement for BLIP 2. The alternatives to add BLIP 2 are: 1. Support `.generate()` with `inputs_embeds` on decoder-only models 2. Change OPT's reference implementation to accept a new kwarg (`query_embeds`, that gets appended to the embeddings of the `input_ids`) Option 1, this PR, seems like a better option according to our library philosophy :)
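A rough, runnable sketch of the usage pattern above, assuming a decoder-only checkpoint such as GPT-2 (the only model wired up in this PR); exact behaviour may differ slightly from the final merge:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The quick brown fox", return_tensors="pt").input_ids
# Build the embeddings yourself, e.g. to perturb them or prepend soft prompts.
inputs_embeds = model.get_input_embeddings()(input_ids)

# With input_ids: the prompt is preserved in the returned sequence.
with_prompt = model.generate(input_ids, inputs_embeds=inputs_embeds, max_new_tokens=10)

# Without input_ids: only the newly generated tokens are returned.
new_only = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=10)

print(tokenizer.decode(with_prompt[0], skip_special_tokens=True))
```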
02-01-2023 16:15:03
02-01-2023 16:15:03
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,404
closed
Added DagshubCallback
# What does this PR do? Adds a Trainer Integration with [DagsHub](https://dagshub.com/). It extends the MLFlow integration that already currently exists to also track and push artifacts using DVC, into DagsHub repositories. The idea is to allow users to integrate an MLOps stack to their current setups with minimal effort. Here's a colab notebook with a sample integration: https://colab.research.google.com/drive/1KEeQYp3jD8kmkMMhGKR1R1C6cEjnbJLZ ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
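As a hypothetical usage sketch, enabling the callback from the Trainer side might look like the following; the `"dagshub"` string is assumed here, so check the integration for the exact `report_to` name:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    report_to=["dagshub"],  # assumed integration name; requires the dagshub client and MLflow installed
)
```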
02-01-2023 14:29:33
02-01-2023 14:29:33
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you so much for the lightning review, @sgugger! I've made the relevant changes. I also added a check within `get_available_reporting_integrations` to ensure available integrations are reported accurately. Since `DagsHubCallback` subclasses `MLFlowCallback`, it's still separate. If both integrations are requested, it does not execute twice. This is what the `get_available_reporting_integrations` looks like when moving packages around: ![2023-02-01-095908_1920x606_scrot](https://user-images.githubusercontent.com/52078103/216080956-e57170a6-9a77-49e2-adff-e36e3b885909.png) Hope this was helpful! :)<|||||>Thanks again for your contribution!<|||||>Thank you for your rapid responses!! Have an awesome rest of day 🙂
transformers
21,403
open
How to sending my request about parameters in inference API?
### Model description <img width="999" alt="image" src="https://user-images.githubusercontent.com/61076726/216067290-7969e29b-8c83-41fa-80a6-6c71bc4bac16.png"> I can't find where I can modify these parameters, and does the inference generation not include beam search? ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No response_
02-01-2023 14:19:20
02-01-2023 14:19:20
Hi there, this is the Transformers repository. You can address your questions about the inference API [here](https://github.com/huggingface/api-inference-community)<|||||>Can I modify the parameters of the Hosted inference API? For example, I want multiple outputs from this summarization task rather than just one: <img width="459" alt="image" src="https://user-images.githubusercontent.com/61076726/216101165-e26760d7-2163-42b3-bb53-44136ab032da.png"> How can I achieve this? Many thanks!<|||||>> Hi there, this is the Transformers repository. You can address your questions about the inference API [here](https://github.com/huggingface/api-inference-community) Thanks a lot! I'd like to know whether I can modify the parameters of the **Hosted inference API** so that the demo on the webpage outputs multiple results.
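One hedged possibility is to call the hosted Inference API directly with a `parameters` payload instead of relying on the widget; the model id below is just an example, and whether a given generation parameter (e.g. `num_return_sequences`) is honoured depends on the task behind the model.

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/facebook/bart-large-cnn"  # example model id
headers = {"Authorization": "Bearer hf_xxx"}  # your API token

payload = {
    "inputs": "Long article text to summarize ...",
    "parameters": {"num_beams": 4, "num_return_sequences": 3},  # support varies per task
}
print(requests.post(API_URL, headers=headers, json=payload).json())
```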
transformers
21,402
closed
ModelError while deploying FlanT5-xl
### System Info transformers_version==4.17.0 Plaform = Sagemaker Notebook python==3.9.0 ### Who can help? @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Amazon Sagemaker deployment script in AWS for flant5-xl ```python from sagemaker.huggingface import HuggingFaceModel import sagemaker role = sagemaker.get_execution_role() # Hub Model configuration. https://huggingface.co/models hub = { 'HF_MODEL_ID':'google/flan-t5-xl', 'HF_TASK':'text2text-generation' } # create Hugging Face Model Class huggingface_model = HuggingFaceModel( transformers_version='4.17.0', pytorch_version='1.10.2', py_version='py38', env=hub, role=role, ) # deploy model to SageMaker Inference predictor = huggingface_model.deploy( initial_instance_count=1, # number of instances instance_type='ml.m5.xlarge' # ec2 instance type ) predictor.predict({ 'inputs': "The answer to the universe is" }) ``` Results in ```bash --------------------------------------------------------------------------- ModelError Traceback (most recent call last) /tmp/ipykernel_20116/1338286066.py in <cell line: 26>() 24 ) 25 ---> 26 predictor.predict({ 27 'inputs': "The answer to the universe is" 28 }) ~/anaconda3/envs/python3/lib/python3.10/site-packages/sagemaker/predictor.py in predict(self, data, initial_args, target_model, target_variant, inference_id) 159 data, initial_args, target_model, target_variant, inference_id 160 ) --> 161 response = self.sagemaker_session.sagemaker_runtime_client.invoke_endpoint(**request_args) 162 return self._handle_response(response) 163 ~/anaconda3/envs/python3/lib/python3.10/site-packages/botocore/client.py in _api_call(self, *args, **kwargs) 528 ) 529 # The "self" in this scope is referring to the BaseClient. --> 530 return self._make_api_call(operation_name, kwargs) 531 532 _api_call.__name__ = str(py_operation_name) ~/anaconda3/envs/python3/lib/python3.10/site-packages/botocore/client.py in _make_api_call(self, operation_name, api_params) 958 error_code = parsed_response.get("Error", {}).get("Code") 959 error_class = self.exceptions.from_code(error_code) --> 960 raise error_class(parsed_response, operation_name) 961 else: 962 return parsed_response ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{ "code": 400, "type": "InternalServerException", "message": "Could not load model /.sagemaker/mms/models/google__flan-t5-xl with any of the following classes: (\u003cclass \u0027transformers.models.auto.modeling_auto.AutoModelForSeq2SeqLM\u0027\u003e, \u003cclass \u0027transformers.models.t5.modeling_t5.T5ForConditionalGeneration\u0027\u003e)." } " ``` From [an existing issue](https://github.com/huggingface/transformers/issues/20038), I suspected this might be due to the use of `transformers==4.17.0`, however, when I use the exact same script to deploy flant5-large model, it works without any issues. ### Expected behavior The model should get deployed on AWS Sagemaker without any issues.
02-01-2023 14:15:22
02-01-2023 14:15:22
Hello @RonLek Thanks for the issue! Note that starting from `flan-t5-xl`, the weights of the model are sharded. Sharded weights loading has been supported after the release of `transformers==4.17.0` (precisely in `transformers==4.18.0`: https://github.com/huggingface/transformers/releases/tag/v4.18.0 ), so I think the fix should be updating the `transformers` version to a more recent one, e.g. `4.26.0` or `4.25.0`.<|||||>Hi @younesbelkada and @RonLek ! I have the same issue deploying `google/flan-t5-xxl` on SageMaker. I've tried to update to `transformers==4.26.0` by providing `code/requirements.txt` through `s3://sagemaker-eu-north-1-***/model.tar.gz`: ```python # Hub Model configuration. https://huggingface.co/models hub: dict = {"HF_MODEL_ID": "google/flan-t5-xxl", "HF_TASK": "text2text-generation"} # Create Hugging Face Model Class huggingface_model = HuggingFaceModel( transformers_version="4.17.0", pytorch_version="1.10.2", py_version="py38", model_data="s3://sagemaker-eu-north-1-***/model.tar.gz", env=hub, role=role, ) ``` Observing the AWS logs I can see that `transformers==4.26.0` was installed: ``` This is an experimental beta features, which allows downloading model from the Hugging Face Hub on start up. It loads the model defined in the env var `HF_MODEL_ID` /opt/conda/lib/python3.8/site-packages/huggingface_hub/file_download.py:588: FutureWarning: `cached_download` is the legacy way to download files from the HF hub, please consider upgrading to `hf_hub_download` warnings.warn( #015Downloading: 0%\| \| 0.00/11.0k [00:00<?, ?B/s]#015Downloading: 100%\|██████████\| 11.0k/11.0k [00:00<00:00, 5.49MB/s] #015Downloading: 0%\| \| 0.00/674 [00:00<?, ?B/s]#015Downloading: 100%\|██████████\| 674/674 [00:00<00:00, 663kB/s] #015Downloading: 0%\| \| 0.00/2.20k [00:00<?, ?B/s]#015Downloading: 100%\|██████████\| 2.20k/2.20k [00:00<00:00, 2.24MB/s] #015Downloading: 0%\| \| 0.00/792k [00:00<?, ?B/s]#015Downloading: 100%\|██████████\| 792k/792k [00:00<00:00, 43.5MB/s] #015Downloading: 0%\| \| 0.00/2.42M [00:00<?, ?B/s]#015Downloading: 0%\| \| 4.10k/2.42M [00:00<01:04, 37.5kB/s]#015Downloading: 1%\| \| 28.7k/2.42M [00:00<00:16, 147kB/s] #015Downloading: 4%\|▎ \| 86.0k/2.42M [00:00<00:07, 318kB/s]#015Downloading: 9%\|▊ \| 209k/2.42M [00:00<00:03, 633kB/s] #015Downloading: 18%\|█▊ \| 438k/2.42M [00:00<00:01, 1.16MB/s]#015Downloading: 37%\|███▋ \| 897k/2.42M [00:00<00:00, 2.18MB/s]#015Downloading: 76%\|███████▌ \| 1.83M/2.42M [00:00<00:00, 4.24MB/s]#015Downloading: 100%\|██████████\| 2.42M/2.42M [00:00<00:00, 3.12MB/s] #015Downloading: 0%\| \| 0.00/2.54k [00:00<?, ?B/s]#015Downloading: 100%\|██████████\| 2.54k/2.54k [00:00<00:00, 2.62MB/s] WARNING - Overwriting /.sagemaker/mms/models/google__flan-t5-xxl ... 
Collecting transformers==4.26.0 Downloading transformers-4.26.0-py3-none-any.whl (6.3 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.3/6.3 MB 65.9 MB/s eta 0:00:00 Requirement already satisfied: requests in /opt/conda/lib/python3.8/site-packages (from transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (2.28.1) Collecting huggingface-hub<1.0,>=0.11.0 Downloading huggingface_hub-0.12.0-py3-none-any.whl (190 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 190.3/190.3 kB 46.0 MB/s eta 0:00:00 Requirement already satisfied: numpy>=1.17 in /opt/conda/lib/python3.8/site-packages (from transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (1.23.3) Requirement already satisfied: tokenizers!=0.11.3,<0.14,>=0.11.1 in /opt/conda/lib/python3.8/site-packages (from transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (0.13.0) Requirement already satisfied: packaging>=20.0 in /opt/conda/lib/python3.8/site-packages (from transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (21.3) Requirement already satisfied: tqdm>=4.27 in /opt/conda/lib/python3.8/site-packages (from transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (4.64.1) Requirement already satisfied: pyyaml>=5.1 in /opt/conda/lib/python3.8/site-packages (from transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (6.0) Requirement already satisfied: filelock in /opt/conda/lib/python3.8/site-packages (from transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (3.8.0) Requirement already satisfied: regex!=2019.12.17 in /opt/conda/lib/python3.8/site-packages (from transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (2022.9.13) Requirement already satisfied: typing-extensions>=3.7.4.3 in /opt/conda/lib/python3.8/site-packages (from huggingface-hub<1.0,>=0.11.0->transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (4.3.0) Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /opt/conda/lib/python3.8/site-packages (from packaging>=20.0->transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (3.0.9) Requirement already satisfied: charset-normalizer<3,>=2 in /opt/conda/lib/python3.8/site-packages (from requests->transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (2.0.12) Requirement already satisfied: idna<4,>=2.5 in /opt/conda/lib/python3.8/site-packages (from requests->transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (3.4) Requirement already satisfied: urllib3<1.27,>=1.21.1 in /opt/conda/lib/python3.8/site-packages (from requests->transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (1.26.11) Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.8/site-packages (from requests->transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (2022.9.24) Installing collected packages: huggingface-hub, transformers Attempting uninstall: huggingface-hub Found existing installation: huggingface-hub 0.10.0 Uninstalling huggingface-hub-0.10.0: Successfully uninstalled huggingface-hub-0.10.0 Attempting uninstall: transformers Found existing installation: transformers 4.17.0 Uninstalling transformers-4.17.0: Successfully uninstalled transformers-4.17.0 Successfully installed huggingface-hub-0.12.0 transformers-4.26.0 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. 
It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv [notice] A new release of pip available: 22.2.2 -> 23.0 [notice] To update, run: pip install --upgrade pip Warning: MMS is using non-default JVM parameters: -XX:-UseContainerSupport 2023-02-01T15:46:06,090 [INFO ] main com.amazonaws.ml.mms.ModelServer - MMS Home: /opt/conda/lib/python3.8/site-packages Current directory: / Temp directory: /home/model-server/tmp Number of GPUs: 0 Number of CPUs: 4 Max heap size: 3461 M Python executable: /opt/conda/bin/python3.8 Config file: /etc/sagemaker-mms.properties Inference address: http://0.0.0.0:8080 Management address: http://0.0.0.0:8080 Model Store: /.sagemaker/mms/models Initial Models: ALL Log dir: null Metrics dir: null Netty threads: 0 Netty client threads: 0 Default workers per model: 4 Blacklist Regex: N/A Maximum Response Size: 6553500 Maximum Request Size: 6553500 Preload model: false Prefer direct buffer: false 2023-02-01T15:46:06,140 [WARN ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-9000-google__flan-t5-xxl 2023-02-01T15:46:06,204 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - model_service_worker started with args: --sock-type unix --sock-name /home/model-server/tmp/.mms.sock.9000 --handler sagemaker_huggingface_inference_toolkit.handler_service --model-path /.sagemaker/mms/models/google__flan-t5-xxl --model-name google__flan-t5-xxl --preload-model false --tmp-dir /home/model-server/tmp 2023-02-01T15:46:06,205 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Listening on port: /home/model-server/tmp/.mms.sock.9000 2023-02-01T15:46:06,205 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - [PID] 47 2023-02-01T15:46:06,206 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - MMS worker started. 2023-02-01T15:46:06,206 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Python runtime: 3.8.10 2023-02-01T15:46:06,206 [INFO ] main com.amazonaws.ml.mms.wlm.ModelManager - Model google__flan-t5-xxl loaded. 2023-02-01T15:46:06,210 [INFO ] main com.amazonaws.ml.mms.ModelServer - Initialize Inference server with: EpollServerSocketChannel. 2023-02-01T15:46:06,218 [INFO ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000 2023-02-01T15:46:06,218 [INFO ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000 2023-02-01T15:46:06,219 [INFO ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000 2023-02-01T15:46:06,226 [INFO ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000 2023-02-01T15:46:06,278 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000. 2023-02-01T15:46:06,281 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000. 2023-02-01T15:46:06,284 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000. 
2023-02-01T15:46:06,290 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000. 2023-02-01T15:46:06,298 [INFO ] main com.amazonaws.ml.mms.ModelServer - Inference API bind to: http://0.0.0.0:8080 Model server started. 2023-02-01T15:46:06,302 [WARN ] pool-3-thread-1 com.amazonaws.ml.mms.metrics.MetricCollector - worker pid is not available yet. 2023-02-01T15:46:08,478 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model google__flan-t5-xxl loaded io_fd=3abd6afffe6261f4-0000001d-00000000-084f36d4c5a81b10-639dfd41 2023-02-01T15:46:08,491 [INFO ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 2081 2023-02-01T15:46:08,493 [WARN ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-google__flan-t5-xxl-1 2023-02-01T15:46:08,499 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model google__flan-t5-xxl loaded io_fd=3abd6afffe6261f4-0000001d-00000001-c96df6d4c5a81b10-276a10eb 2023-02-01T15:46:08,500 [INFO ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 2089 2023-02-01T15:46:08,500 [WARN ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-google__flan-t5-xxl-3 2023-02-01T15:46:08,512 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model google__flan-t5-xxl loaded io_fd=3abd6afffe6261f4-0000001d-00000004-12e7f154c5a81b12-fe262c46 2023-02-01T15:46:08,512 [INFO ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 2101 2023-02-01T15:46:08,513 [WARN ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-google__flan-t5-xxl-4 2023-02-01T15:46:08,561 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model google__flan-t5-xxl loaded io_fd=3abd6afffe6261f4-0000001d-00000003-6582f154c5a81b12-273338b8 2023-02-01T15:46:08,561 [INFO ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 2150 2023-02-01T15:46:08,561 [WARN ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-google__flan-t5-xxl-2 2023-02-01T15:46:10,450 [INFO ] pool-2-thread-6 ACCESS_LOG - /169.254.178.2:59002 "GET /ping HTTP/1.1" 200 7 2023-02-01T15:46:15,412 [INFO ] pool-2-thread-6 ACCESS_LOG - /169.254.178.2:59002 "GET /ping HTTP/1.1" 200 0 2023-02-01T15:46:20,411 [INFO ] pool-2-thread-6 ACCESS_LOG - /169.254.178.2:59002 "GET /ping HTTP/1.1" 200 0 ``` But I got the same error when trying to do an inference: ``` botocore.errorfactory.ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{ "code": 400, "type": "InternalServerException", "message": "Could not load model /.sagemaker/mms/models/google__flan-t5-xxl with any of the following classes: (\u003cclass \u0027transformers.models.auto.modeling_auto.AutoModelForSeq2SeqLM\u0027\u003e, \u003cclass \u0027transformers.models.t5.modeling_t5.T5ForConditionalGeneration\u0027\u003e)." 
} ``` AWS logs: ``` 2023-02-01T15:49:59,831 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Prediction error 2023-02-01T15:49:59,832 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Traceback (most recent call last): 2023-02-01T15:49:59,832 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 219, in handle 2023-02-01T15:49:59,832 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - self.initialize(context) 2023-02-01T15:49:59,832 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 77, in initialize 2023-02-01T15:49:59,832 [INFO ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 1 2023-02-01T15:49:59,833 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - self.model = self.load(self.model_dir) 2023-02-01T15:49:59,833 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 104, in load 2023-02-01T15:49:59,833 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - hf_pipeline = get_pipeline(task=os.environ["HF_TASK"], model_dir=model_dir, device=self.device) 2023-02-01T15:49:59,833 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/sagemaker_huggingface_inference_toolkit/transformers_utils.py", line 272, in get_pipeline 2023-02-01T15:49:59,833 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - hf_pipeline = pipeline(task=task, model=model_dir, device=device, **kwargs) 2023-02-01T15:49:59,834 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 754, in pipeline 2023-02-01T15:49:59,834 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - framework, model = infer_framework_load_model( 2023-02-01T15:49:59,834 [INFO ] W-9000-google__flan-t5-xxl ACCESS_LOG - /169.254.178.2:59002 "POST /invocations HTTP/1.1" 400 13 2023-02-01T15:49:59,834 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/transformers/pipelines/base.py", line 266, in infer_framework_load_model 2023-02-01T15:49:59,834 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - raise ValueError(f"Could not load model {model} with any of the following classes: {class_tuple}.") 2023-02-01T15:49:59,835 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - ValueError: Could not load model /.sagemaker/mms/models/google__flan-t5-xxl with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForSeq2SeqLM'>, <class 'transformers.models.t5.modeling_t5.T5ForConditionalGeneration'>). 
2023-02-01T15:49:59,835 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - 2023-02-01T15:49:59,835 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - During handling of the above exception, another exception occurred: 2023-02-01T15:49:59,835 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - 2023-02-01T15:49:59,836 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Traceback (most recent call last): 2023-02-01T15:49:59,836 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/mms/service.py", line 108, in predict 2023-02-01T15:49:59,836 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - ret = self._entry_point(input_batch, self.context) 2023-02-01T15:49:59,836 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 243, in handle 2023-02-01T15:49:59,836 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - raise PredictionException(str(e), 400) 2023-02-01T15:49:59,837 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - mms.service.PredictionException: Could not load model /.sagemaker/mms/models/google__flan-t5-xxl with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForSeq2SeqLM'>, <class 'transformers.models.t5.modeling_t5.T5ForConditionalGeneration'>). : 400 ```<|||||>Hello @valentinboyanov I can see in your script that: ```python HuggingFaceModel( transformers_version="4.17.0", pytorch_version="1.10.2", py_version="py38", model_data="s3://sagemaker-eu-north-1-***/model.tar.gz", env=hub, role=role, ) ``` can you update `transformers_version` with the correct value? I suspect this is causing the issue<|||||>@younesbelkada if I change it, I'm unable to deploy at all: ``` raise ValueError( ValueError: Unsupported huggingface version: 4.26.0. You may need to upgrade your SDK version (pip install -U sagemaker) for newer huggingface versions. Supported huggingface version(s): 4.6.1, 4.10.2, 4.11.0, 4.12.3, 4.17.0, 4.6, 4.10, 4.11, 4.12, 4.17. ``` This is why I've followed the instructions by [Heiko Hotz (marshmellow77) in this comment](https://discuss.huggingface.co/t/deploying-open-ais-whisper-on-sagemaker/24761/5) to provide a `requirements.txt` file that will let me specify dependencies I want to be installed in the container.<|||||>@valentinboyanov what is the content for your ` model_data="s3://sagemaker-eu-north-1-***/model.tar.gz"`? Could you please share the folder structure. <|||||>@philschmid yes, here it goes: ``` ➜ model tree . . └── code └── requirements.txt 1 directory, 1 file ``` ``` ➜ model cat code/requirements.txt transformers==4.26.0% ```<|||||>When you provide a `model_data` key word you also have to include the `inference.py` and the model weights. <|||||>@philschmid what should be the contents of the `inference.py` in case of the flan-t5-xl model? Can this be an empty file if I don't intend to change anything from the hub model? There doesn't seem to be such a file included within the [Hugging Face repository](https://huggingface.co/google/flan-t5-xl/tree/main). @valentinboyanov I confirm getting the same as well. 
From the CW logs it seems that `4.17.0` is un-installed and replaced with the latest version specified in the `requirements.txt` file. > @younesbelkada if I change it, I'm unable to deploy at all: > > ``` > raise ValueError( > ValueError: Unsupported huggingface version: 4.26.0. You may need to upgrade your SDK version (pip install -U sagemaker) for newer huggingface versions. Supported huggingface version(s): 4.6.1, 4.10.2, 4.11.0, 4.12.3, 4.17.0, 4.6, 4.10, 4.11, 4.12, 4.17. > ``` > > This is why I've followed the instructions by [Heiko Hotz (marshmellow77) in this comment](https://discuss.huggingface.co/t/deploying-open-ais-whisper-on-sagemaker/24761/5) to provide a `requirements.txt` file that will let me specify dependencies I want to be installed in the container. <|||||>I'm having the same `Could not load model error with any of the following classes: AutoModelForSeq2SeqLM and T5ForConditionalGeneration` when using a docker for inference of a `flan-t5-xxl-sharded-fp16` model: Code works without Docker, but If I build and run `docker run --gpus all -p 7080:7080 flan-t5-xxl-sharded-fp16:latest`, error is the following: ``` [2023-02-05 21:33:53 +0000] [1] [INFO] Starting gunicorn 20.1.0 [2023-02-05 21:33:53 +0000] [1] [INFO] Listening at: http://0.0.0.0:7080 (1) [2023-02-05 21:33:53 +0000] [1] [INFO] Using worker: uvicorn.workers.UvicornWorker [2023-02-05 21:33:53 +0000] [7] [INFO] Booting worker with pid: 7 [2023-02-05 21:34:01 +0000] [7] [INFO] Is CUDA available: True [2023-02-05 21:34:01 +0000] [7] [INFO] CUDA device: NVIDIA A100-SXM4-40GB [2023-02-05 21:34:01 +0000] [7] [INFO] Loading model [2023-02-05 21:34:02 +0000] [7] [ERROR] Exception in worker process Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker worker.init_process() File "/usr/local/lib/python3.9/site-packages/uvicorn/workers.py", line 66, in init_process super(UvicornWorker, self).init_process() File "/usr/local/lib/python3.9/site-packages/gunicorn/workers/base.py", line 134, in init_process self.load_wsgi() File "/usr/local/lib/python3.9/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi self.wsgi = self.app.wsgi() File "/usr/local/lib/python3.9/site-packages/gunicorn/app/base.py", line 67, in wsgi self.callable = self.load() File "/usr/local/lib/python3.9/site-packages/gunicorn/app/wsgiapp.py", line 58, in load return self.load_wsgiapp() File "/usr/local/lib/python3.9/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp return util.import_app(self.app_uri) File "/usr/local/lib/python3.9/site-packages/gunicorn/util.py", line 359, in import_app mod = importlib.import_module(module) File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 850, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/app/main.py", line 29, in <module> pipe_flan = pipeline("text2text-generation", model="../flan-t5-xxl-sharded-fp16", model_kwargs={"load_in_8bit":True, "device_map": "auto"}) File "/usr/local/lib/python3.9/site-packages/transformers/pipelines/__init__.py", line 754, in pipeline 
framework, model = infer_framework_load_model( File "/usr/local/lib/python3.9/site-packages/transformers/pipelines/base.py", line 266, in infer_framework_load_model raise ValueError(f"Could not load model {model} with any of the following classes: {class_tuple}.") ValueError: Could not load model ../flan-t5-xxl-sharded-fp16 with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForSeq2SeqLM'>, <class 'transformers.models.t5.modeling_t5.T5ForConditionalGeneration'>). [2023-02-05 21:34:02 +0000] [7] [INFO] Worker exiting (pid: 7) [2023-02-05 21:34:04 +0000] [1] [INFO] Shutting down: Master [2023-02-05 21:34:04 +0000] [1] [INFO] Reason: Worker failed to boot. ``` Dockerfile is the following: ``` FROM tiangolo/uvicorn-gunicorn-fastapi:python3.9 # install dependencies RUN python3 -m pip install --upgrade pip RUN pip3 install torch==1.13.0 transformers==4.26.0 sentencepiece torchvision torchaudio accelerate==0.15.0 bitsandbytes-cuda113 COPY ./app /app COPY ./flan-t5-xxl-sharded-fp16/ /flan-t5-xxl-sharded-fp16 EXPOSE 7080 # Start the app CMD ["gunicorn", "-b", "0.0.0.0:7080", "main:app","--workers","1","--timeout","180","-k","uvicorn.workers.UvicornWorker"] ``` The code of `app/main.py` is the following: ```py from fastapi import FastAPI, Request from fastapi.logger import logger from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, T5ForConditionalGeneration import json import logging import numpy as np import os import torch from transformers import pipeline app = FastAPI() gunicorn_logger = logging.getLogger('gunicorn.error') logger.handlers = gunicorn_logger.handlers if __name__ != "main": logger.setLevel(gunicorn_logger.level) else: logger.setLevel(logging.INFO) logger.info(f"Is CUDA available: {torch.cuda.is_available()}") logger.info(f"CUDA device: {torch.cuda.get_device_name(torch.cuda.current_device())}") logger.info("Loading model") # error is in this line pipe_flan = pipeline("text2text-generation", model="../flan-t5-xxl-sharded-fp16", model_kwargs={"load_in_8bit":True, "device_map": "auto"}) # extra code removed ```<|||||>@philschmid @younesbelkada just wanted to follow up on this. > @philschmid what should be the contents of the `inference.py` in case of the flan-t5-xl model? There doesn't seem to be such a file included within the [Hugging Face repository](https://huggingface.co/google/flan-t5-xl/tree/main). > > @valentinboyanov I confirm getting the same as well. From the CW logs it seems that `4.17.0` is un-installed and replaced with the latest version specified in the `requirements.txt` file. > > > @younesbelkada if I change it, I'm unable to deploy at all: > > ``` > > raise ValueError( > > ValueError: Unsupported huggingface version: 4.26.0. You may need to upgrade your SDK version (pip install -U sagemaker) for newer huggingface versions. Supported huggingface version(s): 4.6.1, 4.10.2, 4.11.0, 4.12.3, 4.17.0, 4.6, 4.10, 4.11, 4.12, 4.17. > > ``` > > > > > > > > > > > > > > > > > > > > > > > > This is why I've followed the instructions by [Heiko Hotz (marshmellow77) in this comment](https://discuss.huggingface.co/t/deploying-open-ais-whisper-on-sagemaker/24761/5) to provide a `requirements.txt` file that will let me specify dependencies I want to be installed in the container. <|||||>@RonLek i am planning to create an example. I ll post it here once it is ready. <|||||>@RonLek done: https://www.philschmid.de/deploy-flan-t5-sagemaker<|||||>This works! 
Thanks a ton @philschmid for the prompt response :rocket: <|||||>@philschmid just curious. Would there be a similar sharded model repo for flan-t5-xl?<|||||>If you check this blog post: https://www.philschmid.de/deploy-t5-11b There is a code snippet on how to do this, for `t5-11b` https://www.philschmid.de/deploy-t5-11b ```python import torch from transformers import AutoModelWithLMHead from huggingface_hub import HfApi # load model as float16 model = AutoModelWithLMHead.from_pretrained("t5-11b", torch_dtype=torch.float16, low_cpu_mem_usage=True) # shard model an push to hub model.save_pretrained("sharded", max_shard_size="2000MB") ```<|||||>Thanks! This worked :fire: <|||||>@philschmid thanks for the guidance here. While deploying your solution on SageMaker i noticed that it works great on g5 instances but not on p3 instances( p3.8xlarge). Also, do we know when the the direct deploy from HF hub would work out of the box? Error below - ``` Model fails to load, the reason being that the library bitsandbytes that is required "The installed version of bitsandbytes was compiled without GPU support. " on p3 instance and that leads to the below error when you invoke the model- 2023-02-25T01:24:28,714 [INFO ] W-model-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - mms.service.PredictionException: 'NoneType' object has no attribute 'cget_col_row_stats' : 400 ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
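For reference, a hypothetical minimal `code/inference.py` for the `model_data` route discussed above, following the SageMaker Hugging Face toolkit's `model_fn`/`predict_fn` overrides; this is an untested sketch and the generation arguments are placeholders.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer


def model_fn(model_dir):
    # Loads whatever weights were packed into model.tar.gz alongside code/.
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_dir)
    return model, tokenizer


def predict_fn(data, model_and_tokenizer):
    model, tokenizer = model_and_tokenizer
    inputs = tokenizer(data["inputs"], return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)  # placeholder generation settings
    return [{"generated_text": tokenizer.decode(output_ids[0], skip_special_tokens=True)}]
```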
transformers
21,401
closed
Fix some pipeline tests
# What does this PR do? - change `feature_extractor=...` to `image_processor=...` in `test_pipeline_XXX` files, if `XXX` is a vision task. - this doesn't affect backward compatibility - these are just test files - change `self.feature_extractor(...)` to `self.image_processor(...)` in some vision pipeline files - backward compatibility is ensured by the change in `base.py` (parent class `Pipeline.__init__`): ```python if self.image_processor is None and self.feature_extractor is not None: # Backward compatible change, if users called # ImageSegmentationPipeline(.., feature_extractor=MyFeatureExtractor()) # then we should keep working self.image_processor = self.feature_extractor ``` - A few other fixes, see review comments
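As a hedged illustration of the call pattern this keeps working (the checkpoint name is only an example):

```python
from transformers import pipeline, AutoImageProcessor

checkpoint = "facebook/detr-resnet-50-panoptic"  # example vision checkpoint
image_processor = AutoImageProcessor.from_pretrained(checkpoint)

# New-style argument on vision pipelines.
segmenter = pipeline("image-segmentation", model=checkpoint, image_processor=image_processor)
```

Old-style calls that pass `feature_extractor=` keep working because of the fallback in `Pipeline.__init__` shown above.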
02-01-2023 12:49:10
02-01-2023 12:49:10
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Let's stop adding tests of new models in the pipelines until the metaclass is removed and we have a mixin class that all model testers inherit from (I think that is the plan, right?) Yes, that's the plan! For now, no new models will be added in pipeline testing (because we need to create and upload the tiny model on the Hub, and I guess no one knows how to do it except me, and I definitely will keep my life easier :-) > as we can't add a succession of tests in the pipeline common tests to change behavior for this or that model. Sure! <|||||>Failing test is irrelevant to this PR.<|||||>For transparancy: I need to add back the following block in `src/transformers/pipelines/__init__.py`: ```python # If `model` (instance of `PretrainedModel` instead of `str`) is passed (and/or same for config), while # `image_processor` or `feature_extractor` is `None`, the loading will fail. This happens particularly for some # vision tasks when calling `pipeline()` with `model` and only one of the `image_processor` and `feature_extractor`. # TODO: we need to make `NO_IMAGE_PROCESSOR_TASKS` and `NO_FEATURE_EXTRACTOR_TASKS` more robust to avoid such issue. # This block is only temporarily to make CI green. if load_image_processor and load_feature_extractor: load_feature_extractor = False ``` The issue comes from the fact of calling `pipeline()` with: - a model (not string) - one of `image_processor` or `feature_extractor` being specified, but another one is `None` - tasks involving vision models, so both `load_image_processor` and `load_feature_extractor` are `True` then it will fail around https://github.com/huggingface/transformers/blob/c2f623cf53a9a8b2e192135b03ae211ba1ce3695/src/transformers/pipelines/__init__.py#L863 Without this change, the following 2 tests just fails - DocumentQuestionAnsweringPipelineTests::test_pt_LayoutLMv2Config_LayoutLMv2ForQuestionAnswering_LayoutLMv2TokenizerFast_LayoutLMv2ImageProcessor (not tested before) - tests/pipelines/test_pipelines_image_segmentation.py::ImageSegmentationPipelineTests::test_maskformer (already failed since the PR #20851 one week ago) **We need to improve this `pipeline.__init__.py` to make it more robust regarding the feature_extractor/image_processor while we want to keep backward compatibility** This should go in a separate PR though, I will merge this PR as it is, unless you strongly against the changes in the last 2 commits.
transformers
21,400
closed
Add Pix2Struct
# What does this PR do? Fixes #20663 Paper: https://arxiv.org/pdf/2210.03347.pdf Code: https://github.com/google-research/pix2struct `Pix2Struct` is a series of image-text models that have been fine-tuned on various datasets and tasks. ![Screenshot 2023-03-10 at 09 42 19](https://user-images.githubusercontent.com/49240599/224267136-389b8300-6497-4321-a6cc-d85b4f8e55d7.png) This integration will offer users a variety of models and potential use cases. `Pix2Struct` is a model that combines a vision encoder and a text decoder, similar to T5. The method relies heavily on its image processing procedure. The image pre-processing differs from classic Vision Transformers in that it can handle images of variable resolution and therefore keep the aspect ratio of the original image, which seems to be crucial for image understanding. ![Screenshot 2023-03-10 at 09 47 12](https://user-images.githubusercontent.com/49240599/224268204-f97d76b4-54e4-470a-aa44-9b6c086c8369.png) Therefore I decided to change the current paradigm for obtaining `pixel_values`. The pixel values should now be seen as tokens that are directly processed by the `ImageProcessor`. Hence, I decided to change `pixel_values` to `pixel_embeds`, as they in fact correspond to the image embeddings. We now obtain the patch embeddings directly from the processor, which is also responsible for computing the pixel embeds attention mask. I will update all the weights (18 in total) after I get one approval. ### TODO - Fine-tuning notebook
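A rough usage sketch of the intended API; the checkpoint name is assumed to be the final upload location and the exact input name changed during review, so treat this as illustrative only.

```python
from PIL import Image
from transformers import Pix2StructProcessor, Pix2StructForConditionalGeneration

repo = "google/pix2struct-textcaps-base"  # assumed final location of the converted weights
processor = Pix2StructProcessor.from_pretrained(repo)
model = Pix2StructForConditionalGeneration.from_pretrained(repo)

image = Image.open("example.png")  # any local image

# The processor flattens variable-resolution patches and builds their attention mask.
inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(generated_ids[0], skip_special_tokens=True))
```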
02-01-2023 11:21:54
02-01-2023 11:21:54
_The documentation is not available anymore as the PR was closed or merged._<|||||>@younesbelkada @ArthurZucker 👋 how is this PR going? Do you need some help to get it over the finish line? Happy to collab if helpful.<|||||>Hi @ankrgyl Thanks so much for proposing your help on this PR! I fixed now few tests related to batched generation and addressed most of @ArthurZucker 's comments. The architecture is completely ready to use if someone wants to perform conditional and unconditional image captionning! I wanted to work on a fine-tuning notebook similar as this one: https://colab.research.google.com/drive/1lbqiSiA0sDF7JDWPeS0tccrM85LloVha?usp=sharing as it boosts quite a lot the usage of the model ! IMO the things that are left are: 1- Making a notebook for Pix2Struct using the base model (that is currently pushed here: https://huggingface.co/ybelkada/pix2struct-textcaps-base 2- Address the last comments 3- Push the correct conversion script 4- Push the remaining weights (I can do that only after one approval) If you want, you can help me on 1, if you have some doubts about your modification you can just run the integration tests: ```bash RUN_SLOW=1 pytest tests/models/pix2struct/test_modeling_pix2struct.py::Pix2StructIntegrationTest ``` and make sure they pass! I am aiming to merge this at most by beginning of next week ! Let me know if you want to help on those, otherwise happy to continue the PR 💪 <|||||>It looks like you've got it under control so I'll bow out, but happy to test!<|||||>I think I have addressed most of the comments! I also updated the PR description and would love to have a round of review! cc @amyeroberts @ArthurZucker <|||||>Thanks @amyeroberts for the extensive review! Should have addressed most of them and left some open questions Regarding the new naming `patches`, I am not 100% convinced about that, users needs to see these input as a new paradigm that is equivalent to text tokens (as there are also attention masks in this new input) but applied to images, and I am afraid `patches` will confuse users as the shape of this input would be hard to interpret `bs x seq_len x hidden_dim` (with `hidden_dim=num_channels x patch_width x patch_height`.)<|||||>As disccused offline, let's stick for `flattened_patches` ! I should have fixed your comments by now and added support for `vqa` models in `Pix2struct` as they require a specific format / way of inferring<|||||>Thanks a mile for the extensive review! 🚀 So from what I have got from your comment: https://github.com/huggingface/transformers/pull/21400#discussion_r1137690655 I removed the `data_format` argument Would love to have a last round of review 💪
transformers
21,399
closed
How to resume training with a different learning rate or other TrainingArguments?
I tried the following settings, but the new value is overridden by the learning rate stored in the checkpoint: ``` training_args = TrainingArguments( learning_rate=5e-4, # larger than before (5e-5) ) ... trainer.train(resume_from_checkpoint=True) ```
02-01-2023 09:48:19
02-01-2023 09:48:19
Please use the [forums](https://discuss.huggingface.co/) for such questions, as we keep issues for bugs and feature requests only. The resume-training functionality is intended for recovering from an instance crash, and you should use the same hyperparameters.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
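For completeness, a minimal sketch of the alternative the answer implies: load the checkpoint weights into a fresh model and start a new training run with the new learning rate, instead of resuming the optimizer/scheduler state. The checkpoint path and `my_dataset` below are placeholders.

```python
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

# Placeholder: point this at the checkpoint folder written by the previous run.
model = AutoModelForCausalLM.from_pretrained("output/checkpoint-500")

training_args = TrainingArguments(
    output_dir="output-new-lr",
    learning_rate=5e-4,  # the new, larger learning rate
    num_train_epochs=1,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=my_dataset,  # placeholder: reuse the same tokenized dataset as before
)
trainer.train()  # fresh run, no resume_from_checkpoint
```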
transformers
21,398
closed
Fix the issue of using only inputs_embeds in convbert model
Fixes #21395 @sgugger
02-01-2023 08:47:49
02-01-2023 08:47:49
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,397
closed
Parallelize LongT5
# What does this PR do? Adds parallelization to LongT5 Fixes # (issue) [21396](https://github.com/huggingface/transformers/issues/21396) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker and @younesbelkada
02-01-2023 01:14:56
02-01-2023 01:14:56
_The documentation is not available anymore as the PR was closed or merged._<|||||>Yes, just pass along `device_map="auto"` or `device_map="balanced"` in your call to `from_pretrained` to have the model be parallelized. It will work for training and inference.<|||||>Oh sweet! Didn't know about this. I just tried training with the longt5 3b model using accelerate and it didn't work well; GPU0 got most of the workload and the training run crashed quickly. I tried both "auto" and "balanced". If I use my code it works. I realize that I could specify my own device map, but that's pretty tedious. Is there a better way to debug this? Thanks!
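A minimal sketch of the `from_pretrained`-based parallelism described above. The checkpoint name is just an example, and `accelerate` needs to be installed for `device_map` to work.

```python
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

checkpoint = "google/long-t5-tglobal-xl"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# device_map="auto" lets accelerate shard the weights across all visible GPUs.
model = LongT5ForConditionalGeneration.from_pretrained(checkpoint, device_map="auto")

inputs = tokenizer("summarize: " + "a very long document " * 500, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```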
transformers
21,396
closed
Parallelize LongT5
### Feature request Similar to regular T5, it'd be nice if LongT5 had parallelization support ### Motivation This allows for training larger models / input sizes by using several gpus ### Your contribution I have a pull request ready for this
02-01-2023 01:07:36
02-01-2023 01:07:36
Hi, We've just deprecated the parallelize API as it can now be done using the `from_pretrained` method. See https://github.com/huggingface/transformers/pull/21448
transformers
21,395
closed
UnboundLocalError: local variable 'seq_length' referenced before assignment
### System Info Hi, I am using the `ConvBertForTokenClassification` model in models.convbert and encountered a bug when passing only `inputs_embeds` to `forward()`. The traceback points at line 833 in modeling_convbert.py: ``` if token_type_ids is None: if hasattr(self.embeddings, "token_type_ids"): buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length] ``` where `seq_length` is unassigned. I noticed that just above this, in the following piece of code: ``` elif input_ids is not None: input_shape = input_ids.size() batch_size, seq_length = input_shape elif inputs_embeds is not None: input_shape = inputs_embeds.size()[:-1] ``` `seq_length` is not assigned if the program enters `elif inputs_embeds is not None`. I am not sure whether the `batch_size, seq_length = input_shape` line is simply missing in the `inputs_embeds` branch, or whether I am not using the model correctly. ### Who can help? text models: @ArthurZucker and @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Passing only `inputs_embeds` and `attention_mask` to the `ConvBertForTokenClassification` model. ### Expected behavior There should be no error.
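For reference, the fix being suggested is essentially a one-line addition in the `inputs_embeds` branch. A sketch of the corrected block, mirroring the snippet quoted above (not necessarily the exact diff from the linked PR):

```python
# Inside ConvBertModel.forward (sketch of the corrected input-shape handling):
if input_ids is not None and inputs_embeds is not None:
    raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
elif input_ids is not None:
    input_shape = input_ids.size()
    batch_size, seq_length = input_shape
elif inputs_embeds is not None:
    input_shape = inputs_embeds.size()[:-1]
    batch_size, seq_length = input_shape  # previously missing, leaving seq_length unbound
else:
    raise ValueError("You have to specify either input_ids or inputs_embeds")
```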
01-31-2023 23:21:27
01-31-2023 23:21:27
I think it might be a bug, request you to post a sample script of your usage to reproduce the error.<|||||>> I think it might be a bug, request you to post a sample script of your usage to reproduce the error. ``` import torch from transformers import ConvBertConfig, ConvBertForTokenClassification embeddings = torch.tensor([1]) mask = torch.tensor([1]) convbert_model_config = ConvBertConfig() convbert_model = ConvBertForTokenClassification(convbert_model_config) outputs = convbert_model(inputs_embeds=embeddings, attention_mask=mask) ```<|||||>Thanks for providing a reproducing script and thanks @raghavanone for jumping on this. Your fix looks good!
transformers
21,394
closed
[WHISPER] - ValueError: Malformed soundfile
### System Info I had a few files that failed with this error. The same files worked in @openai/whisper, so I figured this was a bug. Digging a little, it seems to come down to some ffmpeg parameters inside: `File "/usr/local/lib/python3.9/site-packages/transformers/pipelines/audio_utils.py", line 41, in ffmpeg_read` Instead of ``` file_name = "audio.mp3" output = pipe(file_name, generate_kwargs=props, return_timestamps=True, chunk_length_s=30, stride_length_s=[6, 0], batch_size=32, ignore_warning=True) ``` the workaround I used is to load the audio with whisper.load_audio instead, and pass that into the pipeline (until this is fixed): ``` audio = whisper.load_audio(source_file) output = pipe(audio, generate_kwargs=props, return_timestamps=True, chunk_length_s=30, stride_length_s=[6, 0], batch_size=32, ignore_warning=True) ``` This does require a whisper dependency, but it doesn't load its model; it just uses a more up-to-date load_audio than the one in transformers. ### Who can help? @ArthurZucker @sanchit-gandhi ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The only file I can reproduce this on is a video of my kid, so I can share via DM, don't want to paste the link here. ### Expected behavior File is valid, and whisper runs properly
01-31-2023 22:41:02
01-31-2023 22:41:02
Hey @altryne! Indeed, it should be possible to load up any audio file using transformers `pipeline` alone. Could you share the full traceback of the error? This will help in pinpointing its exact nature! It would be great if you are able to share the `.mp3` file! My email is `[email protected]`. Thanks!<|||||>Thanks Sanchit! Shared! (it's a video file, but whisper doesn't mind) <|||||>any news on this case ?<|||||>> (it's a video file, but whisper doesn't mind) It was an mp4 video file that needed to be converted to an mp3 or wav audio file first. HF's audio pipeline works with any audio file format, but not video.<|||||>Thanks for the clarification<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
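Since the resolution was that the `.mp4` container simply needed to be converted to an audio file first, here is a rough sketch of doing that from Python before calling the pipeline. It shells out to `ffmpeg`, and the file names and checkpoint are placeholders.

```python
import subprocess
from transformers import pipeline

# Extract a 16 kHz mono WAV track from the video; -vn drops the video stream.
subprocess.run(
    ["ffmpeg", "-y", "-i", "video.mp4", "-vn", "-ac", "1", "-ar", "16000", "audio.wav"],
    check=True,
)

pipe = pipeline("automatic-speech-recognition", model="openai/whisper-small")
output = pipe("audio.wav", return_timestamps=True, chunk_length_s=30)
print(output["text"])
```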
transformers
21,393
closed
Moved LiLT under multimodal models in TOC
LiLT was listed under text models; however, it should be listed under multimodal models. This PR fixes that.
01-31-2023 18:46:12
01-31-2023 18:46:12
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,392
closed
Remove more unused attributes in config classes
# What does this PR do? Remove another set of unused attributes in config classes. There are still 20-30 items to check; I will probably open the PR for the new test and merge it first (skipping some failing cases), then continue the cleanup later.
01-31-2023 18:24:44
01-31-2023 18:24:44
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,391
closed
T5/Flan-T5 text generation with `load_in_8bit=True` gives error `expected scalar type Float but found Half`
### System Info - `transformers` version: 4.27.0.dev0 - Platform: Linux-5.10.147+-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.14.0a0+410ce96 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Start a container with the latest [NVIDIA PyTorch Docker Image](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-22-12.html#rel-22-12) and an A100 GPU 2. Install the latest `transformers` from this github repo 3. Run the snippet from [the official example](https://huggingface.co/google/flan-t5-base) ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base", device_map="auto", load_in_8bit=True) input_text = "translate English to German: How old are you?" input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` Throws ``` RuntimeError Traceback (most recent call last) Cell In[23], line 9 6 input_text = "translate English to German: How old are you?" 7 input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") ----> 9 outputs = model.generate(input_ids) 10 print(tokenizer.decode(outputs[0])) File /usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs) 24 @functools.wraps(func) 25 def decorate_context(*args, **kwargs): 26 with self.clone(): ---> 27 return func(*args, **kwargs) File /usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py:1255, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, **kwargs) 1247 logger.warning( 1248 "A decoder-only architecture is being used, but right-padding was detected! For correct " 1249 "generation results, please set `padding_side='left'` when initializing the tokenizer." 1250 ) 1252 if self.config.is_encoder_decoder and "encoder_outputs" not in model_kwargs: 1253 # if model is encoder decoder encoder_outputs are created 1254 # and added to `model_kwargs` -> 1255 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation( 1256 inputs_tensor, model_kwargs, model_input_name 1257 ) 1259 # 5. 
Prepare `input_ids` which will be used for auto-regressive generation 1260 if self.config.is_encoder_decoder: File /usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py:617, in GenerationMixin._prepare_encoder_decoder_kwargs_for_generation(self, inputs_tensor, model_kwargs, model_input_name) 615 encoder_kwargs["return_dict"] = True 616 encoder_kwargs[model_input_name] = inputs_tensor --> 617 model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs) 619 return model_kwargs File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1423, in Module._call_impl(self, *input, **kwargs) 1418 # If we don't have any hooks, we want to skip the rest of the logic in 1419 # this function, and just call forward. 1420 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1421 or _global_backward_pre_hooks or _global_backward_hooks 1422 or _global_forward_hooks or _global_forward_pre_hooks): -> 1423 return forward_call(*input, **kwargs) 1424 # Do not call functions when jit is used 1425 full_backward_hooks, non_full_backward_hooks = [], [] File /usr/local/lib/python3.8/dist-packages/accelerate/hooks.py:158, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 156 output = old_forward(*args, **kwargs) 157 else: --> 158 output = old_forward(*args, **kwargs) 159 return module._hf_hook.post_forward(module, output) File /usr/local/lib/python3.8/dist-packages/transformers/models/t5/modeling_t5.py:1055, in T5Stack.forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 1042 layer_outputs = checkpoint( 1043 create_custom_forward(layer_module), 1044 hidden_states, (...) 1052 None, # past_key_value is always None with gradient checkpointing 1053 ) 1054 else: -> 1055 layer_outputs = layer_module( 1056 hidden_states, 1057 attention_mask=extended_attention_mask, 1058 position_bias=position_bias, 1059 encoder_hidden_states=encoder_hidden_states, 1060 encoder_attention_mask=encoder_extended_attention_mask, 1061 encoder_decoder_position_bias=encoder_decoder_position_bias, 1062 layer_head_mask=layer_head_mask, 1063 cross_attn_layer_head_mask=cross_attn_layer_head_mask, 1064 past_key_value=past_key_value, 1065 use_cache=use_cache, 1066 output_attentions=output_attentions, 1067 ) 1069 # layer_outputs is a tuple with: 1070 # hidden-states, key-value-states, (self-attention position bias), (self-attention weights), (cross-attention position bias), (cross-attention weights) 1071 if use_cache is False: File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1423, in Module._call_impl(self, *input, **kwargs) 1418 # If we don't have any hooks, we want to skip the rest of the logic in 1419 # this function, and just call forward. 
1420 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1421 or _global_backward_pre_hooks or _global_backward_hooks 1422 or _global_forward_hooks or _global_forward_pre_hooks): -> 1423 return forward_call(*input, **kwargs) 1424 # Do not call functions when jit is used 1425 full_backward_hooks, non_full_backward_hooks = [], [] File /usr/local/lib/python3.8/dist-packages/accelerate/hooks.py:158, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 156 output = old_forward(*args, **kwargs) 157 else: --> 158 output = old_forward(*args, **kwargs) 159 return module._hf_hook.post_forward(module, output) File /usr/local/lib/python3.8/dist-packages/transformers/models/t5/modeling_t5.py:687, in T5Block.forward(self, hidden_states, attention_mask, position_bias, encoder_hidden_states, encoder_attention_mask, encoder_decoder_position_bias, layer_head_mask, cross_attn_layer_head_mask, past_key_value, use_cache, output_attentions, return_dict) 684 else: 685 self_attn_past_key_value, cross_attn_past_key_value = None, None --> 687 self_attention_outputs = self.layer[0]( 688 hidden_states, 689 attention_mask=attention_mask, 690 position_bias=position_bias, 691 layer_head_mask=layer_head_mask, 692 past_key_value=self_attn_past_key_value, 693 use_cache=use_cache, 694 output_attentions=output_attentions, 695 ) 696 hidden_states, present_key_value_state = self_attention_outputs[:2] 697 attention_outputs = self_attention_outputs[2:] # Keep self-attention outputs and relative position weights File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1423, in Module._call_impl(self, *input, **kwargs) 1418 # If we don't have any hooks, we want to skip the rest of the logic in 1419 # this function, and just call forward. 1420 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1421 or _global_backward_pre_hooks or _global_backward_hooks 1422 or _global_forward_hooks or _global_forward_pre_hooks): -> 1423 return forward_call(*input, **kwargs) 1424 # Do not call functions when jit is used 1425 full_backward_hooks, non_full_backward_hooks = [], [] File /usr/local/lib/python3.8/dist-packages/accelerate/hooks.py:158, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 156 output = old_forward(*args, **kwargs) 157 else: --> 158 output = old_forward(*args, **kwargs) 159 return module._hf_hook.post_forward(module, output) File /usr/local/lib/python3.8/dist-packages/transformers/models/t5/modeling_t5.py:592, in T5LayerSelfAttention.forward(self, hidden_states, attention_mask, position_bias, layer_head_mask, past_key_value, use_cache, output_attentions) 582 def forward( 583 self, 584 hidden_states, (...) 590 output_attentions=False, 591 ): --> 592 normed_hidden_states = self.layer_norm(hidden_states) 593 attention_output = self.SelfAttention( 594 normed_hidden_states, 595 mask=attention_mask, (...) 600 output_attentions=output_attentions, 601 ) 602 hidden_states = hidden_states + self.dropout(attention_output[0]) File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1423, in Module._call_impl(self, *input, **kwargs) 1418 # If we don't have any hooks, we want to skip the rest of the logic in 1419 # this function, and just call forward. 
1420 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1421 or _global_backward_pre_hooks or _global_backward_hooks 1422 or _global_forward_hooks or _global_forward_pre_hooks): -> 1423 return forward_call(*input, **kwargs) 1424 # Do not call functions when jit is used 1425 full_backward_hooks, non_full_backward_hooks = [], [] File /usr/local/lib/python3.8/dist-packages/accelerate/hooks.py:158, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 156 output = old_forward(*args, **kwargs) 157 else: --> 158 output = old_forward(*args, **kwargs) 159 return module._hf_hook.post_forward(module, output) File /usr/local/lib/python3.8/dist-packages/apex/normalization/fused_layer_norm.py:386, in FusedRMSNorm.forward(self, input) 383 return manual_rms_norm(input, self.normalized_shape, self.weight, self.eps) 385 if self.elementwise_affine: --> 386 return fused_rms_norm_affine(input, self.weight, self.normalized_shape, self.eps) 387 else: 388 return fused_rms_norm(input, self.normalized_shape, self.eps) File /usr/local/lib/python3.8/dist-packages/apex/normalization/fused_layer_norm.py:189, in fused_rms_norm_affine(input, weight, normalized_shape, eps) 187 args = _cast_if_autocast_enabled(input, weight, normalized_shape, eps) 188 with torch.cuda.amp.autocast(enabled=False): --> 189 return FusedRMSNormAffineFunction.apply(*args) File /usr/local/lib/python3.8/dist-packages/apex/normalization/fused_layer_norm.py:69, in FusedRMSNormAffineFunction.forward(ctx, input, weight, normalized_shape, eps) 67 input_ = input.contiguous() 68 weight_ = weight.contiguous() ---> 69 output, invvar = fused_layer_norm_cuda.rms_forward_affine( 70 input_, ctx.normalized_shape, weight_, ctx.eps) 71 ctx.save_for_backward(input_, weight_, invvar) 72 return output RuntimeError: expected scalar type Float but found Half ``` ### Expected behavior The model to generate a translation of the input
01-31-2023 17:43:38
01-31-2023 17:43:38
Hi @steve-marmalade Thanks for the issue and your interest in 8bit models This issue has been flagged in https://github.com/huggingface/transformers/pull/21281 and fixed :-) Please use the `main` branch of `transformers` - I ran your script on the `main` branch and it worked fine `pip install git+https://github.com/huggingface/transformers.git` Maybe worth it to make a patch release @sgugger as this issue has been also flagged internally? <|||||>Will include the fix in the next patch release (probably tomorrow).<|||||>Sounds good thank you! <|||||>Thanks very much for the quick response @younesbelkada ! I just tested again to make sure, and am still seeing the issue even on the `main` branch of `transformers` (I see the fix referenced in that issue in the `modeling_t5.py` file in my environment). I will double check my environment to ensure I haven't made a mistake somewhere, but wanted to note that I also see `apex` and `accelerate` in the `Traceback` -- could there be any interaction there?<|||||>You are right, the issue is be related to `apex` I just installed `apex` from source and encountered the issue you are describing However I get the same issue even without 8-bit: ```python import torch from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base", device_map="auto", torch_dtype=torch.float16) input_text = "translate English to German: How old are you?" input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` This is because the LayerNorm is replace by apex's LayerNorm in case you have `apex` installed. Is having `apex` crucial in your case? I can investigate this a bit more meanwhile! Alternatively, can you try the snippet below : ```python import torch from transformers import T5Tokenizer, T5ForConditionalGeneration T5ForConditionalGeneration._keep_in_fp32_modules = None tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base", device_map="auto", torch_dtype=torch.float16) input_text = "translate English to German: How old are you?" input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ```<|||||>Ok, I was able to reproduce the error once again by running the NVIDIA container `nvcr.io/nvidia/pytorch:22.12-py3` (which includes apex) and then the following: ```bash pip install sentencepiece accelerate bitsandbytes pip install git+https://github.com/huggingface/transformers.git ``` And then the above python snippet. Uninstalling apex resolves the crash. Trying to build it from source now to see whether that helps.<|||||>To answer your questions: 1. I can confirm that `model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base", device_map="auto", torch_dtype=torch.float16)` fails with the same error. I had previously tried `model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base", device_map="auto")`, which works _and_ translates the input more or less correctly (but I assume uses more GPU memory than the other approaches). 2. Running the second snippet you shared with `T5ForConditionalGeneration._keep_in_fp32_modules = None` does not crash, but the input is not translated (it just repeats back "How old are you?"). > Is having apex crucial in your case? 
No, not crucial. I am not an expert here, but thought that running the NVIDIA images (with apex) would improve inference efficiency on the A100, which is definitely nice to have if true. > I can investigate this a bit more meanwhile! Thank you! <|||||>Actually, @younesbelkada I'd be curious to get your opinion on `apex` -- is it your impression that it speeds up training and/or inference significantly? From a quick scan of the [README](https://github.com/NVIDIA/apex) it looks like many of the features (aside from the fused layers that are causing the problem in this Issue) are already integrated into PyTorch so maybe it's not worth the hassle to get it working?<|||||>@steve-marmalade I did a bit of testing with flan-t5-xl with Apex and without (with float32) and observed a small approx ~5% inference speed improvement with Apex.<|||||>I'm working in the Docker image `nvcr.io/nvidia/pytorch:22.10-py3` and encountered this error. As suggested by @steve-marmalade, the error disappeared after `pip uninstall apex`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,390
closed
Skip batches fast with accelerate
# What does this PR do? This PR uses the latest from `Accelerate` to quickly skip batches when resuming training (it's only going to be quicker for a regular dataset; iterable datasets will still require a manual pass through the first batches). Note that the RNG seeds can't be reloaded before we have started iterating over the data loader, because the random sampler used by the data loader needs the seed at that stage, so there is a small code path that loads them after the iteration has started. cc @stas00 if you want to experiment (needs Accelerate main until v0.16 is out).
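A small standalone sketch of the Accelerate utility this PR builds on, assuming the helper is exposed as `accelerate.skip_first_batches` (the dataset here is a toy placeholder):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import skip_first_batches

dataset = TensorDataset(torch.arange(100, dtype=torch.float32).unsqueeze(1))
dataloader = DataLoader(dataset, batch_size=10, shuffle=True)

# Wrap the dataloader so iteration resumes at batch 3 without materializing batches 0-2.
resumed_dataloader = skip_first_batches(dataloader, num_batches=3)

for step, (batch,) in enumerate(resumed_dataloader, start=3):
    print(step, batch.shape)
```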
01-31-2023 16:30:37
01-31-2023 16:30:37
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for implementing this, Sylvain, Is this PR going to sit for a bit until the new Accelerate comes out? Currently I have to work on some other urgent things and SLURM makes it hard to quickly test things, but would be happy to test once I get the opportunity. <|||||>The new release is just out actually, so this should be merged as soon as positively reviewed :-)<|||||>ok, great! congrats! let me then just do a quick test on my desktop.<|||||>Hmm, on a small test it worked fine, but I see it fails on the real training. It appears to be restarting on resume rather than fast-forwarding. Perhaps when I did the small test I only checked that it printed the fast-forwarding message and thought it was doing the right thing. Here is the TB: ![snapshot_88](https://user-images.githubusercontent.com/10676103/216860918-00645132-12b9-471f-abaf-385ae03d1725.png) The straight line is a similar training before this change, the restarting one is after. So as you can see it restarts the iteration counter rather than continuing (it's hard to tell what was done to the data). Both trainings were run in 2 parts each with resume due to the slurm environment. I was training with `--log_level warning` so didn't get the info log. Any thoughts to what might have gone wrong? The example `run_clm.py` script that was running with this config: https://github.com/huggingface/m4/pull/922/files#diff-27452e2e8d112cbfd59f7900ef3b39dc35a4a0faf2d967fd86e399fe7ccb1ba2R192-R217 During the problematic run I used `accelerate==0.16.0` and `transformers@main` (from Feb 1) The good one was `accelerate==0.15.0` and some recent `transformers`<|||||>On the other hand it finished training at the same iteration as before, so the dataloader issues `StopIteration` at the same iteration as before. which means some accounting was off - that is epoch isn't calculated correctly.<|||||>The training should finish at the same loss (this is tested with and without randomness in the model for small training), so normally you're good with the data. I think the problem lies with [this line](https://github.com/huggingface/transformers/blob/182afb7dc6f40aea5f5bb41710cb5207d187b022/src/transformers/trainer.py#L1909) which uses the `step` variable which now goes from 0 to len(data_loader) - num_batches_skipped (instead of num_batch_skipped to len(data_loader) before my PR). Will push a fix today or tomorrow!<|||||>super! thank you, Sylvain.<|||||>Should be fixed by the PR mentioned above.
transformers
21,389
closed
Generate: fix TF XLA tests on models with `max_position_embeddings` or `max_target_positions`
# What does this PR do? Extracted from #20901 -- the lines added in this PR were incorrectly removed [here](https://github.com/huggingface/transformers/commit/0f78529f982eceb79c5855d0466c287ec8a18df1), causing some XLA tests to fail. These changes fix 8 slow TF tests on `test_xla_generate_slow`. They were also approved in the PR linked above, which was redone as part of the discussion (and closed).
01-31-2023 15:15:21
01-31-2023 15:15:21
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,388
closed
Added: links from model docs to respective model checkpoints on Hub
To aid navigation and discoverability, this PR adds links from the model docs to the respective model checkpoints on the Hub (one link per model, in the format `https://huggingface.co/models?sort=downloads&search=YOSO`). A couple of maintenance changes are also included (copyright update and removal of obsolete disclaimers). UPD: reverted the copyright to the original date
01-31-2023 14:34:09
01-31-2023 14:34:09
> Note that we're not changing the copyright years of the files: they are copyrighted from the year they were created. I can revert them back, but I'd love to know why?<|||||>Because there are many many many files in the library ;-) I also think it is bad practice to remove the year of the creation of the file, so if there were an update, it would need to be something like 2020-2023 (for a file created in 2020), but I don't think the update is necessary at all. @CarlosMFerr might have more insight on this!<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21388). All of your documentation changes will be reflected on that endpoint.<|||||>Since the model summary rework, this PR is no longer relevant.
transformers
21,387
closed
OOM when running causal language modelling sample
The example at: https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling ...mentions taking half an hour on a K80. I'm using an M40 24GB, which I would reasonably believe has sufficient VRAM, but it dies with OOM at startup. I used the exact command line shown on the page. Dropping --per_device_train_batch_size and --per_device_eval_batch_size from 8 to 4 succeeds, but it still uses ~15GB of VRAM. Unsure whether the WikiText-2 dataset has changed, or transformers has changed, or the model has changed, or there's something different about VRAM usage (e.g. default data type size) between the K80 and the M40 24GB. I'm just beginning with transformers fine-tuning, which is why I was running the example command line. Apologies if I'm missing something obvious. Ubuntu 22.04 LTS Python 3.10.6 running under venv transformers 4.27.0.dev0-py3.10.egg
01-31-2023 13:51:57
01-31-2023 13:51:57
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,386
closed
Getting hidden_states in a causal manner
### Feature request I want to use the RoBERTa model in the following way: given a list of N tokens, I want the model to compute a hidden_state for each of the N tokens in a causal way, meaning the first token's hidden_state is computed based only on the first token, the second hidden_state is computed based on the first two tokens, the third hidden_state is computed based on the first three tokens, and so on. Additionally, I want a CLS token whose hidden_state is computed based on all the input tokens. It seems there is no flag or input that enables this. Or is there?
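One possible way to approximate the first part of this with the existing API (a sketch, not a confirmed answer from this thread): configuring the model as a decoder switches its self-attention to a causal, left-to-right mask. The CLS-attends-to-everything behaviour would still need a custom attention mask on top of this.

```python
import torch
from transformers import AutoConfig, AutoTokenizer, RobertaModel

config = AutoConfig.from_pretrained("roberta-base")
config.is_decoder = True  # self-attention now uses a causal mask

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = RobertaModel.from_pretrained("roberta-base", config=config)

inputs = tokenizer("a short example sentence", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state

# hidden_states[:, i] now only depends on tokens 0..i
print(hidden_states.shape)
```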
01-31-2023 13:31:47
01-31-2023 13:31:47
You should use the [forums](https://discuss.huggingface.co/) for a question like this, as we keep issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,385
closed
Do not log the generation config for each prediction step in TrainerSeq2Seq
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> `generation_config` is currently initialized every time `generate` is called with `generation_config=None` (see [here](https://github.com/huggingface/transformers/blob/98d88b23f54e5a23e741833f1e973fdf600cc2c5/src/transformers/generation/utils.py#L1183)). This will also log the generation config (see [here](https://github.com/huggingface/transformers/blob/98d88b23f54e5a23e741833f1e973fdf600cc2c5/src/transformers/generation/configuration_utils.py#L557)). Therefore, it will be logged for each iteration of the evaluation loop of `TrainerSeq2Seq`. To avoid this behavior, this PR introduces a hack that sets `self.model.generation_config._from_model_config` to `False` after the first call to `generate` in `TrainerSeq2Seq` to ensure that 1) the right generation config has been initialized, 2) it will not be initialized in the following interations. Internal discussion [here](https://huggingface.slack.com/archives/C01N44FJDHT/p1675159917133549). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
01-31-2023 11:31:35
01-31-2023 11:31:35
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,384
closed
[torch] remove deprecated uint8 in favor of bool
# What does this PR do? This should fix #21013. I have not run the tests yet, so leaving this as a draft.
01-31-2023 10:53:03
01-31-2023 10:53:03
_The documentation is not available anymore as the PR was closed or merged._<|||||>Waiting for the CI to work again. <|||||>OK, now the real clean-up is starting: I have to identify all the attention masks vs. the causal masks. I think it is a good thing, it will help understanding. <|||||>The way we handle attention masks is not normalised throughout the library. Diving deeper might not be the best idea, as there can be some backward incompatibilities: - if we modify all the methods that handle attention masks, which are defined at the beginning of most of the modeling files, it is going to break things for potential users. - if we modify the output of the tokenizer, it is the same. In conclusion, the simplest fix is to dig where `uint8` masks are used, and otherwise ignore. Whenever a uint8 mask is converted to `torch.bool`, the rest of the code that depends on it should also be updated.<|||||>Last check: I need to make sure these new masks don't go through a `1.0 - mask` afterwards, and then this will be good to go. EDIT: looks good, everything goes through a torch.where<|||||>The failing test is unrelated, merging
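A small illustration of the `1.0 - mask` point above: once a mask is `torch.bool`, that arithmetic no longer applies, so masking has to go through `torch.where` / `masked_fill` instead (toy tensors, not the library's actual helpers):

```python
import torch

scores = torch.randn(1, 1, 4, 4)  # toy attention scores
bool_mask = torch.tril(torch.ones(4, 4, dtype=torch.bool))  # causal mask, True = keep

# Old uint8-style pattern was (1.0 - mask) * large_negative, which breaks for bool masks.
# Bool-friendly pattern: select between the score and a large negative value directly.
neg_inf = torch.tensor(torch.finfo(scores.dtype).min)
masked_scores = torch.where(bool_mask, scores, neg_inf)

# Equivalent variant with masked_fill on the inverted mask.
masked_scores_2 = scores.masked_fill(~bool_mask, torch.finfo(scores.dtype).min)

assert torch.equal(masked_scores, masked_scores_2)
```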
transformers
21,383
closed
[Docs] Minor fixes
# What does this PR do? This PR: - moves TAPAS and LayoutLM to the "multimodal" section rather than "text", as these models leverage more modalities than just text (TAPAS leverages row and column information, LayoutLM leverages 2D coordinates). - adds a figure for DETA and fixes the one for UPerNet
01-31-2023 10:36:28
01-31-2023 10:36:28
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,382
closed
Simplify column_names in run_clm/mlm
Following https://github.com/huggingface/transformers/pull/21343 Just a minor change to simplify the code, and fix a small bug (`column_names` needs to be a list to be able to call `column_names[0]`) cc @stas00
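For context, a sketch of the distinction being fixed: iterating a `Features` mapping directly is not the same as having a list of column names, and only the latter supports positional access like `column_names[0]`. The dataset name is an example, and the exact change in the PR may differ slightly.

```python
from datasets import load_dataset

raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1")

# .features is a dict-like Features object; indexing it with 0 would fail.
features = raw_datasets["train"].features

# Casting to a list of column names makes positional access safe.
column_names = list(features)
text_column_name = "text" if "text" in column_names else column_names[0]
print(text_column_name)
```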
01-31-2023 10:35:39
01-31-2023 10:35:39
_The documentation is not available anymore as the PR was closed or merged._<|||||>This is much better - thank you, @lhoestq!
transformers
21,381
closed
gradient checkpointing disables requires_grad when freezing part of models (fix with use_reentrant=False)
### System Info - `transformers` version: 4.22.2 - Platform: Linux-4.18.0-372.36.1.el8_6.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.4 - Huggingface_hub version: 0.10.0 - PyTorch version (GPU?): 1.12.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes but not relevant - Using distributed or parallel set-up in script?: no but not relevant ### Who can help? trainer/PyTorch: @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction When freezing the first layers of a model (e.g. including embeddings) *and* using gradient-checkpointing, **all** gradient calculation will be disabled, i.e. the output will have `requires_grad==False`. See https://discuss.pytorch.org/t/checkpoint-with-no-grad-requiring-inputs-problem/19117/7 for explanations. I believe this is the issue encountered by @entslscheia in https://github.com/huggingface/transformers/issues/16276 (apparently unsolved). Here is a sample code to reproduce with `BertModel`, but I think that gradient-checkpointing is implemented like this everywhere in the library: ```py In [1]: from transformers import BertTokenizer, BertModel In [2]: model = BertModel.from_pretrained('../models/bert-base-uncased/') In [3]: tokenizer = BertTokenizer.from_pretrained('../models/bert-base-uncased/') In [4]: inputs = tokenizer(['foo', 'bar'], return_tensors='pt') # enable gradient checkpointing In [5]: model.encoder.gradient_checkpointing = True # freezing the first input layers (here the embeddings) In [8]: for p in model.embeddings.parameters(): ...: p.requires_grad=False In [9]: output = model(**inputs) # expected True In [12]: output.last_hidden_state.requires_grad Out[12]: False # note that all weights of the model have requires_grad==True except for the embeddings In [15]: model.encoder.layer[0].output.dense.weight.requires_grad Out[15]: True ``` As myself, you might find out about this while training a model, because obviously if you pass `None` gradients to an optimizer, it will not be happy. You might encounter the following warning: `/gpfswork/rech/fih/usl47jg/miniconda3/envs/datasets/lib/python3.10/site-packages/torch/utils/checkpoint.py:25: UserWarning: None of the inputs have requires_grad=True. Gradients will be None` and then the following error: `RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn`. ### Expected behavior A quick fix for me was to set `use_reentrant=False` when calling `torch.utils.checkpoint.checkpoint` (e.g. in https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py#L600). Note that this will be the future default in "future versions of PyTorch" ([according to the doc](https://pytorch.org/docs/stable/checkpoint.html) without further precision).
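To make the failure mode and the proposed fix concrete, here is a small self-contained repro with toy linear layers standing in for the frozen embeddings and the trainable encoder (illustrative code, not the library's internals):

```python
import torch
from torch.utils.checkpoint import checkpoint

frozen = torch.nn.Linear(4, 4)
for p in frozen.parameters():
    p.requires_grad = False
trainable = torch.nn.Linear(4, 4)

x = torch.randn(2, 4)  # plain activations, requires_grad=False, like frozen embedding outputs

# Default (reentrant) checkpointing loses the graph when no *input* requires grad:
out_reentrant = checkpoint(lambda t: trainable(frozen(t)), x, use_reentrant=True)
print(out_reentrant.requires_grad)  # False -> backward() would fail downstream

# Non-reentrant checkpointing keeps requires_grad via the trainable parameters:
out_non_reentrant = checkpoint(lambda t: trainable(frozen(t)), x, use_reentrant=False)
print(out_non_reentrant.requires_grad)  # True
```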
01-31-2023 09:54:59
01-31-2023 09:54:59
Thanks for flagging the issue. We don't offer APIs to freeze the model in the Trainer (as it has never shown better results for fine-tuning, quite the opposite), so we can leave this issue to show how to solve the problem, but we won't really incorporate it in Transformers.<|||||>I think a lot of people use `transformers` models without using the Trainer (so with their own training script or, e.g. pytorch lightning that *does* provide an API to freeze models), but it’s your call :)
transformers
21,380
closed
Update `Graphormer` and fix its `torchscript` test failures
# What does this PR do? Update `Graphormer` and fix its `torchscript` test failures. cc @clefourrier for reference
01-31-2023 09:54:59
01-31-2023 09:54:59
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,379
closed
Add support to MPNetForCausalLM
### Feature request Add MPNetForCausalLM for various use cases. ### Motivation The pre-trained MPNet model provides the best performance in [sentence embedding](https://www.sbert.net/docs/pretrained_models.html), and it can achieve additional performance improvements through [TSDAE (Transformer-based Denoising AutoEncoder)](https://www.sbert.net/examples/unsupervised_learning/README.html#tsdae). Using TSDAE requires a decoder implementation (MPNetForCausalLM), but the current MPNet model does not provide one. With support for MPNetForCausalLM, all the decoding functions would become possible for MPNet models, e.g. TSDAE learning, seq2seq tasks, etc. Similar issue: [Add support to DistilBertLMHeadModel](https://github.com/huggingface/transformers/issues/14737) ### Your contribution I have been working on the details on a fork of the recent master branch, and if there is no problem, I would like to proceed.
01-31-2023 08:08:35
01-31-2023 08:08:35
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,378
closed
UL2 Training with HF Trainer + DeepSpeed Zero3 Results in CUDA Illegal Memory Exception
### System Info transformers version==4.26.0 torch==1.13.1 deepspeed==0.8 hardware: 8x A100-80GB Fine-tuning UL2 with the Huggingface Trainer and DeepSpeed Zero2 or Zero3 results in a CUDA Illegal Memory Exception. This is true with any Huggingface Trainer script, PyTorch version (1.12 and 1.13), DeepSpeed version (0.6.7, 0.7.7, 0.8), and CUDA version (11.3 and 11.8) that I've tried. The same scripts work just fine with flan-t5-xxl. ``` [W CUDAGuardImpl.h:124] Warning: CUDA warning: an illegal memory access was encountered (function destroyEvent) terminate called after throwing an instance of 'c10::Error' what(): CUDA error: an illegal memory access was encountered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. ``` Any thoughts @stas00? Your help would be appreciated. ### Who can help? @stas00 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Try fine-tuning UL2 on any task/dataset using DeepSpeed Zero2/Zero3. You should encounter the error. ### Expected behavior Training proceeds normally.
01-31-2023 00:47:53
01-31-2023 00:47:53
I have never tried running UL2 - please help me to reproduce it, and of course for the future do follow the instructions from the error message and re-run with `CUDA_LAUNCH_BLOCKING=1` (except this feature is broken in recent NCCL (pt-1.13) and it'll hang https://github.com/NVIDIA/nccl/issues/750). The async nature often makes it impossible to get a real traceback and `CUDA_LAUNCH_BLOCKING=1` turns async mode off and gives you a normal traceback.<|||||>Thank you, @stas00. This is the error with `CUDA_LAUNCH_BLOCKING=1`: ``` [2023-01-31 01:03:02,046] [INFO] [utils.py:827:see_memory_usage] DeepSpeedZeRoOffload initialize [begin] [2023-01-31 01:03:02,047] [INFO] [utils.py:832:see_memory_usage] MA 4.56 GB Max_MA 4.56 GB CA 5.48 GB Max_CA 5 GB [2023-01-31 01:03:02,048] [INFO] [utils.py:837:see_memory_usage] CPU Virtual Memory: used = 30.74 GB, percent = 2.3% Parameter Offload: Total persistent parameters: 664576 in 164 params [2023-01-31 01:03:02,287] [INFO] [utils.py:827:see_memory_usage] DeepSpeedZeRoOffload initialize [end] [2023-01-31 01:03:02,289] [INFO] [utils.py:832:see_memory_usage] MA 4.56 GB Max_MA 4.56 GB CA 5.48 GB Max_CA 5 GB [2023-01-31 01:03:02,289] [INFO] [utils.py:837:see_memory_usage] CPU Virtual Memory: used = 30.59 GB, percent = 2.3% terminate called after throwing an instance of 'std::runtime_error' what(): NCCL Error 1: unhandled cuda error terminate called after throwing an instance of 'std::runtime_error' what(): NCCL Error 1: unhandled cuda error terminate called after throwing an instance of 'std::runtime_error' what(): NCCL Error 1: unhandled cuda error terminate called after throwing an instance of 'std::runtime_error' what(): NCCL Error 1: unhandled cuda error terminate called after throwing an instance of 'std::runtime_error' what(): NCCL Error 1: unhandled cuda error terminate called after throwing an instance of 'std::runtime_error' what(): NCCL Error 1: unhandled cuda error terminate called after throwing an instance of 'std::runtime_error' what(): NCCL Error 1: unhandled cuda error terminate called after throwing an instance of 'std::runtime_error' what(): NCCL Error 1: unhandled cuda error [2023-01-31 01:03:08,861] [INFO] [launch.py:286:sigkill_handler] Killing subprocess 26370 [2023-01-31 01:03:08,879] [INFO] [launch.py:286:sigkill_handler] Killing subprocess 26371 [2023-01-31 01:03:08,879] [INFO] [launch.py:286:sigkill_handler] Killing subprocess 26372 [2023-01-31 01:03:08,894] [INFO] [launch.py:286:sigkill_handler] Killing subprocess 26373 [2023-01-31 01:03:08,908] [INFO] [launch.py:286:sigkill_handler] Killing subprocess 26374 [2023-01-31 01:03:09,454] [INFO] [launch.py:286:sigkill_handler] Killing subprocess 26375 [2023-01-31 01:03:09,471] [INFO] [launch.py:286:sigkill_handler] Killing subprocess 26376 [2023-01-31 01:03:09,485] [INFO] [launch.py:286:sigkill_handler] Killing subprocess 26377 ```<|||||>Hmm, I have no idea based on the log. Thank you for sharing it, Michael. How do I reproduce the problem? Is it possible that you're running out of cpu memory? sometimes you get a cpu-oom event and the program gets culled in the middle of the run, but usually the OS should log this event in the console or syslog.<|||||>You can reproduce the problem by trying to fine-tune UL2 in BF16 using DeepSpeed Zero2/Zero3 and the HF Trainer. Dataset doesn't seem to matter, I think any Seq2Seq fine-tuning script should reproduce it. I doubt it's a resource issue. It's GCP's a2-ultragpu instance with 1.3TB of CPU mem. GPU memory also seems to be fine. 
I remember training a UL2 model back in September with DeepSpeed successfully, but now I can't seem to. Do you have access to an A100 node to try this out? <|||||>Sounds good. But why is it so difficult to copy-n-paste the commands and configs that fail for you and not have me figure everything out from scratch? Please meet me half way.<|||||>Okay, my bad. It's just all custom, but here goes. Train: ``` import functools import json import argparse from datetime import datetime import os from utils.dataset_formats import Seq2SeqDataset import numpy as np import nltk import wandb import torch from datasets import load_metric from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer, AutoTokenizer, AutoModelForSeq2SeqLM, AddedToken class Trainer: def __init__(self, args) -> None: self.train_dataset = None self.val_dataset = None self.args = args self.metric = load_metric("rouge") self.trainer = None # Get a Seq2SeqDataset from a json file def prepare_datsets_for_training(self) -> None: train_data_json = json.load(open(self.args.train)) val_data_json = json.load(open(self.args.val)) self.train_dataset = Seq2SeqDataset(train_data_json) self.val_dataset = Seq2SeqDataset(val_data_json) self.tokenizer = None # Train and save a Seq2Seq model def train_model(self) -> AutoModelForSeq2SeqLM: training_args = Seq2SeqTrainingArguments(output_dir=self.args.save_dir, num_train_epochs=self.args.num_epochs, logging_steps=1, save_steps=self.args.save_steps or self.args.eval_steps, per_device_train_batch_size=self.args.per_device_train_batch_size, per_device_eval_batch_size=self.args.per_device_eval_batch_size, logging_dir=args.save_dir, bf16=self.args.bf16, bf16_full_eval=self.args.bf16, fp16=False, gradient_accumulation_steps=self.args.gradient_accumulation_steps, overwrite_output_dir=True, evaluation_strategy="steps", eval_steps=self.args.eval_steps, predict_with_generate=True, report_to="wandb", learning_rate=args.learning_rate, lr_scheduler_type="cosine", gradient_checkpointing=self.args.gradient_checkpointing, deepspeed=args.deepspeed, log_level="error", log_level_replica="error") tokenizer = AutoTokenizer.from_pretrained(self.args.model) added_tokens = [AddedToken("<"), AddedToken("<SOURCE>"), AddedToken("{"), AddedToken("}"), AddedToken("\n"), AddedToken("\t"), AddedToken(" "), AddedToken(" "), AddedToken(" "), AddedToken("`")] tokenizer.add_special_tokens({"additional_special_tokens": added_tokens}) tokenizer.save_pretrained(self.args.save_dir + "/tokenizer") self.tokenizer = tokenizer model = AutoModelForSeq2SeqLM.from_pretrained(self.args.model) os.environ["WANDB_PROJECT"] = self.args.name if torch.distributed.get_rank() == 0: run_name = datetime.now().strftime('%b-%d-%I%M%p-%G') wandb.tensorboard.patch(root_logdir=self.args.save_dir) wandb.init(name=run_name, entity="hellocognition") nltk.download('punkt') # Barrier for distributed training print("Rank {} reached barrier 1".format(torch.distributed.get_rank())) torch.distributed.barrier() model_collate_fn = functools.partial( self.make_batch, tokenizer=tokenizer, max_input_len=self.args.max_input_len, max_target_len=self.args.max_target_len, ) assert self.train_dataset and self.val_dataset self.trainer = Seq2SeqTrainer(model=model, args=training_args, train_dataset=self.train_dataset, eval_dataset=self.val_dataset, data_collator=model_collate_fn, compute_metrics=self.compute_metrics) # Barrier for distributed training print("Rank {} reached barrier 2".format(torch.distributed.get_rank())) torch.distributed.barrier() self.trainer.train() if 
torch.distributed.get_rank() == 0: trainer.save(self.args.save_dir + '/final_model') return model # Truncate examples to max input lengths and make a torch.Tensor input/output batch def make_batch(self, example_list: list, tokenizer: AutoTokenizer, max_input_len: int, max_target_len: int): model_input_list = [model_input for model_input, _ in example_list] gold_answer_list = [gold_answer for _, gold_answer in example_list] model_input_tokens = tokenizer.batch_encode_plus(model_input_list, max_length=max_input_len, padding=True, truncation=True) model_input_ids, model_input_mask = ( torch.tensor(model_input_tokens["input_ids"]), torch.tensor(model_input_tokens["attention_mask"]) ) gold_answer_tokens = tokenizer.batch_encode_plus(gold_answer_list, max_length=max_target_len, padding=True, truncation=True) gold_answer_ids, gold_answer_mask = ( torch.tensor(gold_answer_tokens["input_ids"]), torch.tensor(gold_answer_tokens["attention_mask"]) ) lm_labels = gold_answer_ids[:, :].contiguous().clone() # Set pad tokens to -100 to be ignored by cross entropy loss lm_labels[gold_answer_mask[:, :].contiguous() == 0] = -100 model_inputs = { "input_ids": model_input_ids, "attention_mask": model_input_mask, "labels": lm_labels, } return model_inputs # Compute ROUGE metrics def compute_metrics(self, eval_pred: list): predictions, labels = eval_pred decoded_preds = self.tokenizer.batch_decode(predictions, skip_special_tokens=False) # Replace -100 in the labels as we can't decode them. labels = np.where(labels != -100, labels, self.tokenizer.pad_token_id) decoded_labels = self.tokenizer.batch_decode(labels, skip_special_tokens=False) # Rouge expects a newline after each sentence decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds] decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels] result = self.metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True) # Extract a few results result = {key: value.mid.fmeasure * 100 for key, value in result.items()} # Add mean generated length prediction_lens = [np.count_nonzero(pred != self.tokenizer.pad_token_id) for pred in predictions] result["gen_len"] = np.mean(prediction_lens) return {k: round(v, 4) for k, v in result.items()} if __name__ == "__main__": # parse args parser = argparse.ArgumentParser(description='Train Argument Parser') parser.add_argument('--name', help='name of the model to be trained using the modeltype-datasetname convention, e.g. flan-t5-3B-gpt3', required=True) parser.add_argument('--model', help='name or path of the model to train, e.g. 
google/flan-t5-xl', required=True) parser.add_argument('--train', help='path to the json train dataset', required=True) parser.add_argument('--val', help='path to the json val dataset', required=True) parser.add_argument('--max_input_len', type=int, help='maximum number of tokens allowed in training input', required=True) parser.add_argument('--max_target_len', type=int, help='maximum number of tokens allowed in training target output', required=True) parser.add_argument('--save_dir', help='save directory after training', required=True) parser.add_argument('--num_epochs', type=int, help='number of epochs to train', required=True) parser.add_argument('--learning_rate', type=float, help='learning rate', required=True) parser.add_argument('--eval_steps', type=int, help='how many steps to eval after', required=True) parser.add_argument('--save_steps', type=int, help='how many steps to save after', required=False) parser.add_argument('--gradient_accumulation_steps', type=int, help='how many steps to accumulate gradient for (increases effective batch size)', required=True) parser.add_argument('--per_device_train_batch_size', type=int, help='train batch size', required=True) parser.add_argument('--per_device_eval_batch_size', type=int, help='eval batch size', required=True) parser.add_argument('--bf16', help='enable bfloat16 training and eval', default=False, action="store_true") parser.add_argument('--gradient_checkpointing', help='allow larger sequence lengths to fit in memory', default=False, action="store_true") parser.add_argument('--deepspeed', help='path of the deepspeed config', required=True) parser.add_argument('--local_rank') args = parser.parse_args() # log into wandb os.environ['WANDB_API_KEY'] = "WANDB-KEY" # make trainer trainer = Trainer(args) # prepare dataset trainer.prepare_datsets_for_training() # perform training trained_model = trainer.train_model() ``` Seq2Seq Dataset: ``` from torch.utils.data import Dataset class Seq2SeqDataset(Dataset): def __init__(self, examples): self.examples = examples def __len__(self): return len(self.examples) def make_example(self, i): prompt = self.examples[i]["prompt"] example_input = self.examples[i]["example_input"] gold_answer = self.examples[i]["gold_answer"] model_input = "{}\n{}".format(prompt, example_input) return (model_input, gold_answer) def __getitem__(self, i): return self.make_example(i) ``` ds_config: ``` { "train_micro_batch_size_per_gpu": "auto", "gradient_accumulation_steps": "auto", "bf16": { "enabled": "auto" }, "zero_optimization": { "stage": 3, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e12, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 2e9, "stage3_max_reuse_distance": 1e9, "gather_16bit_weights_on_model_save": true }, "optimizer": { "type": "AdamW", "params": { "lr": "auto" } } } ``` This works great with flan-t5, but fails on UL2. Here is the detailed error that I get without `CUDA_LAUNCH_BLOCKING=1`: ``` [W CUDAGuardImpl.h:124] Warning: CUDA warning: an illegal memory access was encountered (function destroyEvent) terminate called after throwing an instance of 'c10::Error' what(): CUDA error: an illegal memory access was encountered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. 
Exception raised from c10_cuda_check_implementation at ../c10/cuda/CUDAException.cpp:31 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f49bd1a6457 in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so) frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7f49bd1703ec in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so) frame #2: c10::cuda::c10_cuda_check_implementation(std::string const&, std::string const&, int, bool) + 0xb4 (0x7f49bd246c64 in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10_cuda.so) frame #3: <unknown function> + 0x1e0dc (0x7f49bd21e0dc in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10_cuda.so) frame #4: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x244 (0x7f49bd221054 in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10_cuda.so) frame #5: <unknown function> + 0x4f6823 (0x7f49aa4ab823 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #6: c10::TensorImpl::~TensorImpl() + 0x1a0 (0x7f49bd1869e0 in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so) frame #7: c10::TensorImpl::~TensorImpl() + 0x9 (0x7f49bd186af9 in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so) frame #8: std::vector<at::Tensor, std::allocator<at::Tensor> >::~vector() + 0x8b (0x7f49aa4add1b in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #9: c10d::ProcessGroupNCCL::WorkNCCL::~WorkNCCL() + 0x8c (0x7f491bf3ae8c in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so) frame #10: c10d::ProcessGroupNCCL::WorkNCCL::~WorkNCCL() + 0x9 (0x7f491bf3b349 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so) frame #11: <unknown function> + 0xbe302c (0x7f49aab9802c in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #12: <unknown function> + 0x3e4272 (0x7f49aa399272 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #13: <unknown function> + 0x3e51af (0x7f49aa39a1af in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #14: <unknown function> + 0xe5698 (0x56347c18d698 in /opt/conda/bin/python3.7) frame #15: <unknown function> + 0x1f7b89 (0x56347c29fb89 in /opt/conda/bin/python3.7) frame #16: <unknown function> + 0x191de8 (0x56347c239de8 in /opt/conda/bin/python3.7) frame #17: _PyEval_EvalFrameDefault + 0x4c8a (0x56347c26426a in /opt/conda/bin/python3.7) frame #18: _PyFunction_FastCallKeywords + 0x184 (0x56347c1d73d4 in /opt/conda/bin/python3.7) frame #19: <unknown function> + 0x191de8 (0x56347c239de8 in /opt/conda/bin/python3.7) frame #20: _PyEval_EvalFrameDefault + 0xb2a (0x56347c26010a in /opt/conda/bin/python3.7) frame #21: <unknown function> + 0x1f7b66 (0x56347c29fb66 in /opt/conda/bin/python3.7) frame #22: _PyFunction_FastCallDict + 0xaef (0x56347c1a78cf in /opt/conda/bin/python3.7) frame #23: _PyEval_EvalFrameDefault + 0x1f86 (0x56347c261566 in /opt/conda/bin/python3.7) frame #24: _PyEval_EvalCodeWithName + 0x33d (0x56347c1a5ccd in /opt/conda/bin/python3.7) frame #25: _PyFunction_FastCallKeywords + 0x320 (0x56347c1d7570 in /opt/conda/bin/python3.7) frame #26: <unknown function> + 0x191de8 (0x56347c239de8 in /opt/conda/bin/python3.7) frame #27: _PyEval_EvalFrameDefault + 0x4c8a (0x56347c26426a in /opt/conda/bin/python3.7) frame #28: _PyFunction_FastCallKeywords + 0x184 (0x56347c1d73d4 in /opt/conda/bin/python3.7) frame #29: <unknown function> + 0x191de8 (0x56347c239de8 in 
/opt/conda/bin/python3.7) frame #30: _PyEval_EvalFrameDefault + 0xb2a (0x56347c26010a in /opt/conda/bin/python3.7) frame #31: _PyEval_EvalCodeWithName + 0x33d (0x56347c1a5ccd in /opt/conda/bin/python3.7) frame #32: _PyFunction_FastCallDict + 0x6a0 (0x56347c1a7480 in /opt/conda/bin/python3.7) frame #33: <unknown function> + 0x185ea4 (0x56347c22dea4 in /opt/conda/bin/python3.7) frame #34: _PyObject_FastCallKeywords + 0x18c (0x56347c238b8c in /opt/conda/bin/python3.7) frame #35: <unknown function> + 0x191f79 (0x56347c239f79 in /opt/conda/bin/python3.7) frame #36: _PyEval_EvalFrameDefault + 0x16bb (0x56347c260c9b in /opt/conda/bin/python3.7) frame #37: _PyFunction_FastCallKeywords + 0x184 (0x56347c1d73d4 in /opt/conda/bin/python3.7) frame #38: <unknown function> + 0x191de8 (0x56347c239de8 in /opt/conda/bin/python3.7) frame #39: _PyEval_EvalFrameDefault + 0x4c8a (0x56347c26426a in /opt/conda/bin/python3.7) frame #40: _PyFunction_FastCallKeywords + 0x184 (0x56347c1d73d4 in /opt/conda/bin/python3.7) frame #41: <unknown function> + 0x191de8 (0x56347c239de8 in /opt/conda/bin/python3.7) frame #42: _PyEval_EvalFrameDefault + 0x4c8a (0x56347c26426a in /opt/conda/bin/python3.7) frame #43: _PyEval_EvalCodeWithName + 0x33d (0x56347c1a5ccd in /opt/conda/bin/python3.7) frame #44: _PyFunction_FastCallDict + 0x6a0 (0x56347c1a7480 in /opt/conda/bin/python3.7) frame #45: <unknown function> + 0x185ea4 (0x56347c22dea4 in /opt/conda/bin/python3.7) frame #46: _PyObject_FastCallKeywords + 0x18c (0x56347c238b8c in /opt/conda/bin/python3.7) frame #47: <unknown function> + 0x191f79 (0x56347c239f79 in /opt/conda/bin/python3.7) frame #48: _PyEval_EvalFrameDefault + 0x16bb (0x56347c260c9b in /opt/conda/bin/python3.7) frame #49: _PyEval_EvalCodeWithName + 0x33d (0x56347c1a5ccd in /opt/conda/bin/python3.7) frame #50: _PyFunction_FastCallDict + 0x6a0 (0x56347c1a7480 in /opt/conda/bin/python3.7) frame #51: _PyEval_EvalFrameDefault + 0x1f86 (0x56347c261566 in /opt/conda/bin/python3.7) frame #52: _PyEval_EvalCodeWithName + 0x33d (0x56347c1a5ccd in /opt/conda/bin/python3.7) frame #53: _PyFunction_FastCallKeywords + 0x320 (0x56347c1d7570 in /opt/conda/bin/python3.7) frame #54: <unknown function> + 0x191de8 (0x56347c239de8 in /opt/conda/bin/python3.7) frame #55: _PyEval_EvalFrameDefault + 0x16bb (0x56347c260c9b in /opt/conda/bin/python3.7) frame #56: _PyEval_EvalCodeWithName + 0x33d (0x56347c1a5ccd in /opt/conda/bin/python3.7) frame #57: _PyFunction_FastCallDict + 0x6a0 (0x56347c1a7480 in /opt/conda/bin/python3.7) frame #58: <unknown function> + 0x185a63 (0x56347c22da63 in /opt/conda/bin/python3.7) frame #59: PyObject_Call + 0x6c (0x56347c1b09dc in /opt/conda/bin/python3.7) frame #60: <unknown function> + 0x21d3e7 (0x56347c2c53e7 in /opt/conda/bin/python3.7) frame #61: _PyObject_FastCallKeywords + 0x3cb (0x56347c238dcb in /opt/conda/bin/python3.7) frame #62: <unknown function> + 0x191f79 (0x56347c239f79 in /opt/conda/bin/python3.7) frame #63: _PyEval_EvalFrameDefault + 0x16bb (0x56347c260c9b in /opt/conda/bin/python3.7) ```<|||||>and cmd line?<|||||>``` deepspeed train.py \ --name ul2-test \ --model google/ul2 \ --train <your train file>.json \ --val <your val file>.json \ --max_input_len 128 \ --max_target_len 512 \ --save_dir <your save directory path> \ --num_epochs 3 \ --learning_rate 2e-4 \ --eval_steps 3000 \ --gradient_accumulation_steps 8 \ --per_device_train_batch_size 1 \ --per_device_eval_batch_size 1 \ --deepspeed utils/ds_config_zero3.json \ --bf16 ``` I can't share the train files, unfortunately, but as per the 
Seq2SeqDataset schema, the train/val files are a list of ``` { "prompt": "prompt", "example_input": "input", "gold_answer": "gold_answer" } ``` objects dumped to a JSON file.<|||||>ok, then I can't support you, Michael. Once you provide a way for me to reproduce the problem I'd be happy to try to understand and come up with a solution. <|||||>Okay, my apologies again. Here are some dummy files that can be used to reproduce the issue. https://phind-demo.s3.amazonaws.com/demo_train.json https://phind-demo.s3.amazonaws.com/demo_val.json So the train script would be ``` deepspeed train.py \ --name ul2-test \ --model google/ul2 \ --train demo_train.json \ --val demo_val.json \ --max_input_len 128 \ --max_target_len 512 \ --save_dir <your save directory path> \ --num_epochs 3 \ --learning_rate 2e-4 \ --eval_steps 3000 \ --gradient_accumulation_steps 8 \ --per_device_train_batch_size 1 \ --per_device_eval_batch_size 1 \ --deepspeed ds_config.json \ --bf16 ``` <|||||>Now please test that the code you shared works. As it fails here: ``` File "train.py", line 183, in <module> trained_model = trainer.train_model() File "train.py", line 96, in train_model self.trainer.train() File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py", line 1557, in train return inner_training_loop( File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py", line 1569, in _inner_training_loop train_dataloader = self.get_train_dataloader() File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py", line 835, in get_train_dataloader train_dataset = self._remove_unused_columns(train_dataset, description="training") File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py", line 711, in _remove_unused_columns ignored_columns = list(set(dataset.column_names) - set(signature_columns)) File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 1673, in column_names return self._data.column_names AttributeError: 'Seq2SeqDataset' object has no attribute '_data' ``` I dumped your `Seq2SeqDataset` into your main script (`trainer.py`) so the line numbers won't match with your original main script. Also does the problem still occur if you use a much smaller ul2 model? e.g. I'm trying with `yhavinga/ul2-small-dutch-english` - at this point we don't care for outcome, just to reproduce your problem. I'm trying on 1 gpu first: ``` rm -rf save_dir; CUDA_VISIBLE_DEVICES=0 deepspeed train.py \ --name ul2-test \ --model yhavinga/ul2-small-dutch-english \ --train demo_train.json \ --val demo_val.json \ --max_input_len 128 \ --max_target_len 512 \ --save_dir save_dir \ --num_epochs 3 \ --learning_rate 2e-4 \ --eval_steps 3000 \ --gradient_accumulation_steps 8 \ --per_device_train_batch_size 1 \ --per_device_eval_batch_size 1 \ --deepspeed ds_config.json \ --bf16 ``` Let's try to come up with the smallest possible set up that reproduces the issue, then it'll be easy to debug.<|||||>I've just tested the scripts with flan-t5-small (ul2-small-dutch-english had an odd CUDA error, a different one than the one described above). Additionally, ul2-small-dutch-english is not a representative example as it uses a different activation function from google's UL2 (gated gelu vs gated silu). Please refer to my S3 bucket for my scripts that I've confirmed run on my machine and their corresponding directory structure. 
- train.py (https://phind-demo.s3.amazonaws.com/train.py) - ds_config.json (https://phind-demo.s3.amazonaws.com/ds_config.json) - utils folder containing dataset_formats.py, which has the Seq2SeqDataset class (https://phind-demo.s3.amazonaws.com/utils/dataset_formats.py) With these exact files and the latest version of transformers/datasets, I've just been able to run: ``` deepspeed train.py \ --name ul2-test \ --model google/flan-t5-small \ --train demo_train.json \ --val demo_val.json \ --max_input_len 128 \ --max_target_len 512 \ --save_dir save_dir \ --num_epochs 3 \ --learning_rate 2e-4 \ --eval_steps 3000 \ --gradient_accumulation_steps 8 \ --per_device_train_batch_size 1 \ --per_device_eval_batch_size 1 \ --deepspeed ds_config.json \ --bf16 ``` But any of the UL2 models get CUDA errors. Would appreciate your help. Thanks! <|||||>Thank you, Michael. With this last version of your code I can run the example you shared. OK, so what is the smallest UL2 model do you still see the problem with? https://huggingface.co/models?sort=downloads&search=ul2 I run the above code on ` --model Finnish-NLP/ul2-small-nl24-finnish` on 1 and 2 gpus and had no problem. Additionally, once you try a smaller ul2 model, do you get the same problem with a. 1 gpu b. 2 gpus?<|||||>Running with `--model Finnish-NLP/ul2-small-nl24-finnish` works for me as well with any number of gpus (from 1 to 8). But I don't think it's representative because it uses a different activation function than google/ul2. Unfortunately there are no "real" smaller UL2 models, unlike the flan-t5 series where everything is the same except for scale. UPDATE: I take that back. yhavinga/ul2-base-en-nl also uses gated-silu. Running that experiment now.<|||||>Running ``` deepspeed train.py \ --name ul2-test \ --model yhavinga/ul2-base-en-nl \ --train demo_train.json \ --val demo_val.json \ --max_input_len 128 \ --max_target_len 512 \ --save_dir save_dir \ --num_epochs 3 \ --learning_rate 2e-4 \ --eval_steps 3000 \ --gradient_accumulation_steps 8 \ --per_device_train_batch_size 1 \ --per_device_eval_batch_size 1 \ --deepspeed ds_config.json \ --bf16 ``` on 8 gpus, I got ``` RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)` ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ /home/michael/train.py:164 in <module> │ │ │ │ 161 │ trainer.prepare_datsets_for_training() │ │ 162 │ │ │ 163 │ # perform training │ │ ❱ 164 │ trained_model = trainer.train_model() │ │ 165 │ │ │ │ /home/michael/train.py:77 in train_model │ │ │ │ 74 │ │ print("Rank {} reached barrier 2".format(torch.distributed.get_rank())) │ │ 75 │ │ torch.distributed.barrier() │ │ 76 │ │ │ │ ❱ 77 │ │ self.trainer.train() │ │ 78 │ │ │ │ 79 │ │ if torch.distributed.get_rank() == 0: │ │ 80 │ │ │ trainer.save(self.args.save_dir + '/final_model') │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/trainer.py:1531 in train │ │ │ │ 1528 │ │ │ args=args, │ │ 1529 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │ │ 1530 │ │ │ trial=trial, │ │ ❱ 1531 │ │ │ ignore_keys_for_eval=ignore_keys_for_eval, │ │ 1532 │ │ ) │ │ 1533 │ │ │ 1534 │ def _inner_training_loop( │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/trainer.py:1775 in _inner_training_loop │ │ │ │ 1772 │ │ │ │ │ with model.no_sync(): │ │ 1773 │ │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │ │ 1774 │ │ │ │ else: │ │ ❱ 1775 │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │ │ 1776 │ │ │ │ │ │ 
1777 │ │ │ │ if ( │ │ 1778 │ │ │ │ │ args.logging_nan_inf_filter │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/trainer.py:2523 in training_step │ │ │ │ 2520 │ │ │ return loss_mb.reduce_mean().detach().to(self.args.device) │ │ 2521 │ │ │ │ 2522 │ │ with self.compute_loss_context_manager(): │ │ ❱ 2523 │ │ │ loss = self.compute_loss(model, inputs) │ │ 2524 │ │ │ │ 2525 │ │ if self.args.n_gpu > 1: │ │ 2526 │ │ │ loss = loss.mean() # mean() to average on multi-gpu parallel training │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/trainer.py:2555 in compute_loss │ │ │ │ 2552 │ │ │ labels = inputs.pop("labels") │ │ 2553 │ │ else: │ │ 2554 │ │ │ labels = None │ │ ❱ 2555 │ │ outputs = model(**inputs) │ │ 2556 │ │ # Save past state if it exists │ │ 2557 │ │ # TODO: this needs to be fixed and made cleaner later. │ │ 2558 │ │ if self.args.past_index >= 0: │ │ │ │ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1194 in _call_impl │ │ │ │ 1191 │ │ # this function, and just call forward. │ │ 1192 │ │ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o │ │ 1193 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │ │ ❱ 1194 │ │ │ return forward_call(*input, **kwargs) │ │ 1195 │ │ # Do not call functions when jit is used │ │ 1196 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │ │ 1197 │ │ if self._backward_hooks or _global_backward_hooks: │ │ │ │ /opt/conda/lib/python3.7/site-packages/deepspeed/utils/nvtx.py:11 in wrapped_fn │ │ │ │ 8 │ │ │ │ 9 │ │ def wrapped_fn(*args, **kwargs): │ │ 10 │ │ │ with torch.cuda.nvtx.range(func.__qualname__): │ │ ❱ 11 │ │ │ │ return func(*args, **kwargs) │ │ 12 │ │ │ │ 13 │ │ return wrapped_fn │ │ 14 │ else: │ │ │ │ /opt/conda/lib/python3.7/site-packages/deepspeed/runtime/engine.py:1727 in forward │ │ │ │ 1724 │ │ if self.fp16_auto_cast(): │ │ 1725 │ │ │ inputs = self._cast_inputs_half(inputs) │ │ 1726 │ │ │ │ ❱ 1727 │ │ loss = self.module(*inputs, **kwargs) │ │ 1728 │ │ │ │ 1729 │ │ if self.zero_optimization_partition_weights(): │ │ 1730 │ │ │ # Disable automated discovery of external parameters │ │ │ │ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │ │ │ │ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │ │ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │ │ 1211 │ │ │ │ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │ │ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │ │ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │ │ 1215 │ │ │ │ hook_result = hook(self, input, result) │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:1618 in forward │ │ │ │ 1615 │ │ │ │ head_mask=head_mask, │ │ 1616 │ │ │ │ output_attentions=output_attentions, │ │ 1617 │ │ │ │ output_hidden_states=output_hidden_states, │ │ ❱ 1618 │ │ │ │ return_dict=return_dict, │ │ 1619 │ │ │ ) │ │ 1620 │ │ elif return_dict and not isinstance(encoder_outputs, BaseModelOutput): │ │ 1621 │ │ │ encoder_outputs = BaseModelOutput( │ │ │ │ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │ │ │ │ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │ │ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │ │ 1211 │ │ │ │ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │ │ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │ │ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │ │ 1215 │ │ │ 
│ hook_result = hook(self, input, result) │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:1051 in forward │ │ │ │ 1048 │ │ │ │ │ cross_attn_layer_head_mask=cross_attn_layer_head_mask, │ │ 1049 │ │ │ │ │ past_key_value=past_key_value, │ │ 1050 │ │ │ │ │ use_cache=use_cache, │ │ ❱ 1051 │ │ │ │ │ output_attentions=output_attentions, │ │ 1052 │ │ │ │ ) │ │ 1053 │ │ │ │ │ 1054 │ │ │ # layer_outputs is a tuple with: │ │ │ │ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │ │ │ │ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │ │ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │ │ 1211 │ │ │ │ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │ │ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │ │ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │ │ 1215 │ │ │ │ hook_result = hook(self, input, result) │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:680 in forward │ │ │ │ 677 │ │ │ layer_head_mask=layer_head_mask, │ │ 678 │ │ │ past_key_value=self_attn_past_key_value, │ │ 679 │ │ │ use_cache=use_cache, │ │ ❱ 680 │ │ │ output_attentions=output_attentions, │ │ 681 │ │ ) │ │ 682 │ │ hidden_states, present_key_value_state = self_attention_outputs[:2] │ │ 683 │ │ attention_outputs = self_attention_outputs[2:] # Keep self-attention outputs an │ │ │ │ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │ │ │ │ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │ │ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │ │ 1211 │ │ │ │ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │ │ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │ │ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │ │ 1215 │ │ │ │ hook_result = hook(self, input, result) │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:586 in forward │ │ │ │ 583 │ │ │ layer_head_mask=layer_head_mask, │ │ 584 │ │ │ past_key_value=past_key_value, │ │ 585 │ │ │ use_cache=use_cache, │ │ ❱ 586 │ │ │ output_attentions=output_attentions, │ │ 587 │ │ ) │ │ 588 │ │ hidden_states = hidden_states + self.dropout(attention_output[0]) │ │ 589 │ │ outputs = (hidden_states,) + attention_output[1:] # add attentions if we output │ │ │ │ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │ │ │ │ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │ │ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │ │ 1211 │ │ │ │ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │ │ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │ │ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │ │ 1215 │ │ │ │ hook_result = hook(self, input, result) │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:498 in forward │ │ │ │ 495 │ │ │ return hidden_states │ │ 496 │ │ │ │ 497 │ │ # get query states │ │ ❱ 498 │ │ query_states = shape(self.q(hidden_states)) # (batch_size, n_heads, seq_length, │ │ 499 │ │ │ │ 500 │ │ # get key/value states │ │ 501 │ │ key_states = project( │ │ │ │ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │ │ │ │ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │ │ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │ │ 1211 │ │ │ │ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │ 
│ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │ │ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │ │ 1215 │ │ │ │ hook_result = hook(self, input, result) │ │ │ │ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/linear.py:114 in forward │ │ │ │ 111 │ │ │ init.uniform_(self.bias, -bound, bound) │ │ 112 │ │ │ 113 │ def forward(self, input: Tensor) -> Tensor: │ │ ❱ 114 │ │ return F.linear(input, self.weight, self.bias) │ │ 115 │ │ │ 116 │ def extra_repr(self) -> str: │ │ 117 │ │ return 'in_features={}, out_features={}, bias={}'.format( │ │ │ │ /opt/conda/lib/python3.7/site-packages/deepspeed/runtime/zero/linear.py:116 in zero3_linear_wrap │ │ │ │ 113 │ │ 114 def zero3_linear_wrap(input, weight, bias=None): │ │ 115 │ if bias is None: │ │ ❱ 116 │ │ return LinearFunctionForZeroStage3.apply(input, weight) │ │ 117 │ else: │ │ 118 │ │ return LinearFunctionForZeroStage3.apply(input, weight, bias) │ │ 119 │ │ │ │ /opt/conda/lib/python3.7/site-packages/torch/cuda/amp/autocast_mode.py:97 in decorate_fwd │ │ │ │ 94 │ def decorate_fwd(*args, **kwargs): │ │ 95 │ │ if cast_inputs is None: │ │ 96 │ │ │ args[0]._fwd_used_autocast = torch.is_autocast_enabled() │ │ ❱ 97 │ │ │ return fwd(*args, **kwargs) │ │ 98 │ │ else: │ │ 99 │ │ │ autocast_context = torch.is_autocast_enabled() │ │ 100 │ │ │ args[0]._fwd_used_autocast = False │ │ │ │ /opt/conda/lib/python3.7/site-packages/deepspeed/runtime/zero/linear.py:61 in forward │ │ │ │ 58 │ │ │ # fused op is marginally faster │ │ 59 │ │ │ ret = torch.addmm(bias, input, weight.t()) │ │ 60 │ │ else: │ │ ❱ 61 │ │ │ output = input.matmul(weight.t()) │ │ 62 │ │ │ if bias is not None: │ │ 63 │ │ │ │ output += bias │ │ 64 │ │ │ ret = output │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)` ``` Running with CUDA_VISIBLE_DEVICES=0, I get a slightly different error: ``` ─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ /home/michael/train.py:164 in <module> │ │ │ │ 161 │ trainer.prepare_datsets_for_training() │ │ 162 │ │ │ 163 │ # perform training │ │ ❱ 164 │ trained_model = trainer.train_model() │ │ 165 │ │ │ │ /home/michael/train.py:77 in train_model │ │ │ │ 74 │ │ print("Rank {} reached barrier 2".format(torch.distributed.get_rank())) │ │ 75 │ │ torch.distributed.barrier() │ │ 76 │ │ │ │ ❱ 77 │ │ self.trainer.train() │ │ 78 │ │ │ │ 79 │ │ if torch.distributed.get_rank() == 0: │ │ 80 │ │ │ trainer.save(self.args.save_dir + '/final_model') │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/trainer.py:1531 in train │ │ │ │ 1528 │ │ │ args=args, │ │ 1529 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │ │ 1530 │ │ │ trial=trial, │ │ ❱ 1531 │ │ │ ignore_keys_for_eval=ignore_keys_for_eval, │ │ 1532 │ │ ) │ │ 1533 │ │ │ 1534 │ def _inner_training_loop( │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/trainer.py:1775 in _inner_training_loop │ │ │ │ 1772 │ │ │ │ │ with model.no_sync(): │ │ 1773 │ │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │ │ 1774 │ │ │ │ else: │ │ ❱ 1775 │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │ │ 1776 │ │ │ │ │ │ 1777 │ │ │ │ if ( │ │ 1778 │ │ │ │ │ args.logging_nan_inf_filter │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/trainer.py:2523 in training_step │ │ │ │ 2520 │ │ │ return loss_mb.reduce_mean().detach().to(self.args.device) │ │ 2521 
│ │ │ │ 2522 │ │ with self.compute_loss_context_manager(): │ │ ❱ 2523 │ │ │ loss = self.compute_loss(model, inputs) │ │ 2524 │ │ │ │ 2525 │ │ if self.args.n_gpu > 1: │ │ 2526 │ │ │ loss = loss.mean() # mean() to average on multi-gpu parallel training │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/trainer.py:2555 in compute_loss │ │ │ │ 2552 │ │ │ labels = inputs.pop("labels") │ │ 2553 │ │ else: │ │ 2554 │ │ │ labels = None │ │ ❱ 2555 │ │ outputs = model(**inputs) │ │ 2556 │ │ # Save past state if it exists │ │ 2557 │ │ # TODO: this needs to be fixed and made cleaner later. │ │ 2558 │ │ if self.args.past_index >= 0: │ │ │ │ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1194 in _call_impl │ │ │ │ 1191 │ │ # this function, and just call forward. │ │ 1192 │ │ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o │ │ 1193 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │ │ ❱ 1194 │ │ │ return forward_call(*input, **kwargs) │ │ 1195 │ │ # Do not call functions when jit is used │ │ 1196 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │ │ 1197 │ │ if self._backward_hooks or _global_backward_hooks: │ │ │ │ /opt/conda/lib/python3.7/site-packages/deepspeed/utils/nvtx.py:11 in wrapped_fn │ │ │ │ 8 │ │ │ │ 9 │ │ def wrapped_fn(*args, **kwargs): │ │ 10 │ │ │ with torch.cuda.nvtx.range(func.__qualname__): │ │ ❱ 11 │ │ │ │ return func(*args, **kwargs) │ │ 12 │ │ │ │ 13 │ │ return wrapped_fn │ │ 14 │ else: │ │ │ │ /opt/conda/lib/python3.7/site-packages/deepspeed/runtime/engine.py:1727 in forward │ │ │ │ 1724 │ │ if self.fp16_auto_cast(): │ │ 1725 │ │ │ inputs = self._cast_inputs_half(inputs) │ │ 1726 │ │ │ │ ❱ 1727 │ │ loss = self.module(*inputs, **kwargs) │ │ 1728 │ │ │ │ 1729 │ │ if self.zero_optimization_partition_weights(): │ │ 1730 │ │ │ # Disable automated discovery of external parameters │ │ │ │ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │ │ │ │ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │ │ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │ │ 1211 │ │ │ │ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │ │ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │ │ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │ │ 1215 │ │ │ │ hook_result = hook(self, input, result) │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:1618 in forward │ │ │ │ 1615 │ │ │ │ head_mask=head_mask, │ │ 1616 │ │ │ │ output_attentions=output_attentions, │ │ 1617 │ │ │ │ output_hidden_states=output_hidden_states, │ │ ❱ 1618 │ │ │ │ return_dict=return_dict, │ │ 1619 │ │ │ ) │ │ 1620 │ │ elif return_dict and not isinstance(encoder_outputs, BaseModelOutput): │ │ 1621 │ │ │ encoder_outputs = BaseModelOutput( │ │ │ │ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │ │ │ │ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │ │ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │ │ 1211 │ │ │ │ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │ │ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │ │ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │ │ 1215 │ │ │ │ hook_result = hook(self, input, result) │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:1051 in forward │ │ │ │ 1048 │ │ │ │ │ cross_attn_layer_head_mask=cross_attn_layer_head_mask, │ │ 1049 │ │ │ │ │ 
past_key_value=past_key_value, │ │ 1050 │ │ │ │ │ use_cache=use_cache, │ │ ❱ 1051 │ │ │ │ │ output_attentions=output_attentions, │ │ 1052 │ │ │ │ ) │ │ 1053 │ │ │ │ │ 1054 │ │ │ # layer_outputs is a tuple with: │ │ │ │ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │ │ │ │ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │ │ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │ │ 1211 │ │ │ │ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │ │ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │ │ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │ │ 1215 │ │ │ │ hook_result = hook(self, input, result) │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:680 in forward │ │ │ │ 677 │ │ │ layer_head_mask=layer_head_mask, │ │ 678 │ │ │ past_key_value=self_attn_past_key_value, │ │ 679 │ │ │ use_cache=use_cache, │ │ ❱ 680 │ │ │ output_attentions=output_attentions, │ │ 681 │ │ ) │ │ 682 │ │ hidden_states, present_key_value_state = self_attention_outputs[:2] │ │ 683 │ │ attention_outputs = self_attention_outputs[2:] # Keep self-attention outputs an │ │ │ │ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │ │ │ │ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │ │ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │ │ 1211 │ │ │ │ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │ │ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │ │ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │ │ 1215 │ │ │ │ hook_result = hook(self, input, result) │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:586 in forward │ │ │ │ 583 │ │ │ layer_head_mask=layer_head_mask, │ │ 584 │ │ │ past_key_value=past_key_value, │ │ 585 │ │ │ use_cache=use_cache, │ │ ❱ 586 │ │ │ output_attentions=output_attentions, │ │ 587 │ │ ) │ │ 588 │ │ hidden_states = hidden_states + self.dropout(attention_output[0]) │ │ 589 │ │ outputs = (hidden_states,) + attention_output[1:] # add attentions if we output │ │ │ │ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │ │ │ │ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │ │ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │ │ 1211 │ │ │ │ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │ │ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │ │ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │ │ 1215 │ │ │ │ hook_result = hook(self, input, result) │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:498 in forward │ │ │ │ 495 │ │ │ return hidden_states │ │ 496 │ │ │ │ 497 │ │ # get query states │ │ ❱ 498 │ │ query_states = shape(self.q(hidden_states)) # (batch_size, n_heads, seq_length, │ │ 499 │ │ │ │ 500 │ │ # get key/value states │ │ 501 │ │ key_states = project( │ │ │ │ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │ │ │ │ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │ │ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │ │ 1211 │ │ │ │ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │ │ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │ │ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │ │ 1215 │ │ │ │ hook_result = hook(self, input, result) │ │ │ │ 
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/linear.py:114 in forward │ │ │ │ 111 │ │ │ init.uniform_(self.bias, -bound, bound) │ │ 112 │ │ │ 113 │ def forward(self, input: Tensor) -> Tensor: │ │ ❱ 114 │ │ return F.linear(input, self.weight, self.bias) │ │ 115 │ │ │ 116 │ def extra_repr(self) -> str: │ │ 117 │ │ return 'in_features={}, out_features={}, bias={}'.format( │ │ │ │ /opt/conda/lib/python3.7/site-packages/deepspeed/runtime/zero/linear.py:116 in zero3_linear_wrap │ │ │ │ 113 │ │ 114 def zero3_linear_wrap(input, weight, bias=None): │ │ 115 │ if bias is None: │ │ ❱ 116 │ │ return LinearFunctionForZeroStage3.apply(input, weight) │ │ 117 │ else: │ │ 118 │ │ return LinearFunctionForZeroStage3.apply(input, weight, bias) │ │ 119 │ │ │ │ /opt/conda/lib/python3.7/site-packages/torch/cuda/amp/autocast_mode.py:97 in decorate_fwd │ │ │ │ 94 │ def decorate_fwd(*args, **kwargs): │ │ 95 │ │ if cast_inputs is None: │ │ 96 │ │ │ args[0]._fwd_used_autocast = torch.is_autocast_enabled() │ │ ❱ 97 │ │ │ return fwd(*args, **kwargs) │ │ 98 │ │ else: │ │ 99 │ │ │ autocast_context = torch.is_autocast_enabled() │ │ 100 │ │ │ args[0]._fwd_used_autocast = False │ │ │ │ /opt/conda/lib/python3.7/site-packages/deepspeed/runtime/zero/linear.py:61 in forward │ │ │ │ 58 │ │ │ # fused op is marginally faster │ │ 59 │ │ │ ret = torch.addmm(bias, input, weight.t()) │ │ 60 │ │ else: │ │ ❱ 61 │ │ │ output = input.matmul(weight.t()) │ │ 62 │ │ │ if bias is not None: │ │ 63 │ │ │ │ output += bias │ │ 64 │ │ │ ret = output │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)` ```<|||||>super! I'm able to reproduce this on a single gpu and without deepspeed, so deepspeed is not at fault here. So drop deepspeed, switch to a single gpu and step through with debugger through the first training step. now using a single gpu and removing deepspeed completely and you will get the same problem. The problem is indicated by multiple lines of: ``` ../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [3,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ``` and that usually indicates a bug in the code wrt to tensor indices. Either in your custom code or the trainer. 
So after removing deepspeed congif, run otherwise the same cmd line (you can continue using the `deepspeed` launcher - it has nothing to do with the deepspeed integration) ``` rm -rf save_dir; CUDA_LAUNCH_BLOCKING=1 CUDA_VISIBLE_DEVICES=0 deepspeed train.py --name ul2-test --model yhavinga/ul2-base-en-nl --train demo_train.json --val demo_val.json --max_input_len 128 --max_target_len 512 --save_dir save_dir --num_epochs 3 --learning_rate 2e-4 --eval_steps 3000 --gradient_accumulation_steps 8 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --bf16 ``` and you start getting a usable traceback: ``` Traceback (most recent call last): File "train.py", line 167, in <module> trained_model = trainer.train_model() File "train.py", line 80, in train_model self.trainer.train() File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py", line 1557, in train return inner_training_loop( File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py", line 1808, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py", line 2561, in training_step loss = self.compute_loss(model, inputs) File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py", line 2593, in compute_loss outputs = model(**inputs) File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1040, in forward output = self._run_ddp_forward(*inputs, **kwargs) File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1000, in _run_ddp_forward return module_to_run(*inputs[0], **kwargs[0]) File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/models/t5/modeling_t5.py", line 1623, in forward encoder_outputs = self.encoder( File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/models/t5/modeling_t5.py", line 1000, in forward hidden_states = self.dropout(inputs_embeds) File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/modules/dropout.py", line 59, in forward return F.dropout(input, self.p, self.training, self.inplace) File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/functional.py", line 1252, in dropout return _VF.dropout_(input, p, training) if inplace else _VF.dropout(input, p, training) RuntimeError: philox_cuda_state for an unexpected CUDA generator used during capture. In regions captured by CUDA graphs, you may only use the default CUDA RNG generator on the device that's current when capture begins. If you need a non-default (user-supplied) generator, or a generator on another device, please file an issue. 
``` So the failure appears to be inside `dropout` - Unless you'd like to spend some time with debugger and get to the root of it, it's probably the best to close this issue and start a new one now devoid of deepspeed, and providing all the repro details in the OP and ask the t5 maintainers to figure it out. Most likely it has something to do with the shapes of the tensors or shape manipulation - it's hard to tell w/o a closer look. I'm currently working on another project, so always happy to jump in on a deepspeed issue which are very rare, but won't have time at the moment to work on other issues.<|||||>I found one report with the same error, but I'm not sure if it's related: https://github.com/pytorch/pytorch/issues/91950 I also was able to reproduce this issue with pt-1.10 and 1.11 - so it's unlikely to be a recent pytorch issue. almost certain something is off in the code. <|||||>Thank you @stas00 <|||||>I'm a sucker for a difficult problem, here you go, I stepped with debugger. Have a look at the snapshot - your input_ids are a way too too big: ![snapshot_81](https://user-images.githubusercontent.com/10676103/215912725-725a6476-1c80-4688-b41b-66184772a24f.png) <|||||>Thank you. I see -- how is that possible? Do you think it's a bf16 issue? Update: the inputs seem to be fine on my end: ``` {'input_ids': tensor([[ 1150, 268, 2522, 267, 1231, 3634, 263, 32132, 3634, 334, 3113, 264, 314, 279, 321, 316, 1]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]), 'labels': tensor([[430 6, 264, 314, 279, 321, 316, 1]])} ../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ../aten/src/ATen/native/cuda/Indexing ```<|||||>@younesbelkada Could you take a look please? UL2 is broken by a script that works for flan-t5 and other seq2seq models.<|||||>yes, they are ok at the `outputs = model(**inputs)` frame and then are borked at the point of dropout, but this happens much sooner,. I will have a look. It breaks somewhere inside `T5Stack.forward`<|||||>ok, it has to do with the size of the embedding matrix. In this case it's `32128x768` but your `input_ids` contain higher numbers than `32128-1`: ``` print(max(input_ids.flatten())) ``` gives `32132` if I hack your code to do: ``` input_ids = input_ids % 32127 ``` then everything works. Now that you understand what the problem is I trust you can unravel the rest? Most likely your tokenizer vocab isn't matching the vocab dimension of the embedding matrix. It's sad that pytorch doesn't give a user friendly error. edit: actually it does on cpu, but not on cuda. p.s. and the corrupt huge `input_ids` happened because pytorch blew its head off, but due to the default async nature the body was still thinking it owned a head. That `indexSelectLargeIndex` cuda error is where things broke first and not where the traceback was showing. 
The blowup happened here: https://github.com/huggingface/transformers/blob/bc44e947f371924db854a460484ec46c95e50a35/src/transformers/models/t5/modeling_t5.py#L954-L956<|||||>The other debug technique is to make gpus disappear and run on cpu, using `CUDA_VISIBLE_DEVICES=""` env var setting. Usually then you get much better errors. But not all programs will transparently be able to handle this transition. in the case of your program it doesn't work due to hardcoded gpu code. and some custom gpu kernels will of course not run on cpu. <|||||>Thank you so much, Stas!<|||||>Funny enough, there still is an issue with google/ul2 (the 20B param model) even though the smaller one runs fine now. ``` [W CUDAGuardImpl.h:124] Warning: CUDA warning: an illegal memory access was encountered (function destroyEvent) terminate called after throwing an instance of 'c10::Error' what(): CUDA error: an illegal memory access was encountered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. ``` Could you please take another look?<|||||>but what did you change to fix the smaller one? I hope you didn't use my `%` hack - it was just to show you what the problem was - it of course wasn't meant to be a solution - apologies if it wasn't obvious. the larger model is most likely has a different vocab size, so you really need to figure out your setup to read the config correctly and get the tokenizer set up right - usually this is mostly done for you, but this is where you'd check since you wrote your custom code. First make this small model work correctly w/o hardcoding any numbers - then move onto the large one and most likely it'll just work.<|||||>I'm requesting to make this recurring experience of embedding lookup explosion on cuda to be less painful for the users here: https://github.com/pytorch/pytorch/issues/93880 <|||||>I called `model.resize_token_embeddings(len(tokenizer))` (which I think is a more general solution than the % hack) and it worked on the smaller model. It doesn't work on the larger model, which has the same vocabulary size of 32128. The `CUDA error: an illegal memory access was encountered` on the larger model was always different than the one seen on the smaller model. I think something else is going on here.<|||||>It's very possible that you have a multitude of errors. Please ensure that you use the fixed version that you validated working with the smaller model. I think I have already asked you to show me the full traceback with `CUDA_LAUNCH_BLOCKING=1` and it wasn't telling anything useful. this feature is also broken in the recent NCCL versions. can you share the fixed code? <|||||>Yes, the CUDA traceback is completely useless. I've updated train.py in s3://phind-demo with the latest version. 
Here are all the files for your reference (only train.py has been modified): - train.py (https://phind-demo.s3.amazonaws.com/train.py) - ds_config.json (https://phind-demo.s3.amazonaws.com/ds_config.json) - utils folder containing dataset_formats.py, which has the Seq2SeqDataset class (https://phind-demo.s3.amazonaws.com/utils/dataset_formats.py) I am attempting to run ``` deepspeed train.py \ --name ul2-test \ --model google/ul2 \ --train demo_train.json \ --val demo_val.json \ --max_input_len 128 \ --max_target_len 512 \ --save_dir <my save dir> \ --num_epochs 3 \ --learning_rate 2e-4 \ --eval_steps 3000 \ --gradient_accumulation_steps 1 \ --per_device_train_batch_size 1 \ --per_device_eval_batch_size 1 \ --deepspeed ds_config.json ``` but using `--model yhavinga/ul2-base-en-nl` now works just fine. Thanks again.<|||||>ok, I was able to reproduce the problem and figured out the cause and the fix. This time deepspeed was at fault (not the integration). The cause is this setting in `ds_config.json`: ``` "sub_group_size": 1e12, ``` Set it to `1e9` as it's recommended in the docs and everything will work. It's probably a bug in some deepspeed or pytorch cuda kernel that doesn't check the memory allocations and clearly this one is too big. surely users shouldn't go through such hell because they made an uninformed choice about some obscure optimization settings. (I personally don't understand all of these and thus never touch those, they probably shouldn't even be in the default config file) To help future users please file a bug report at https://github.com/microsoft/DeepSpeed/issues And say that when you use ``` "sub_group_size": 1e12, ``` on an 8x 80GB A100 gpu node, deepspeed segfaults, with: ``` [W CUDAGuardImpl.h:124] Warning: CUDA warning: an illegal memory access was encountered (function destroyEvent) terminate called after throwing an instance of 'c10::Error' what(): CUDA error: an illegal memory access was encountered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. 
Exception raised from c10_cuda_check_implementation at ../c10/cuda/CUDAException.cpp:31 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f49bd1a6457 in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so) frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7f49bd1703ec in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so) frame #2: c10::cuda::c10_cuda_check_implementation(std::string const&, std::string const&, int, bool) + 0xb4 (0x7f49bd246c64 in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10_cuda.so) frame #3: <unknown function> + 0x1e0dc (0x7f49bd21e0dc in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10_cuda.so) frame #4: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x244 (0x7f49bd221054 in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10_cuda.so) frame #5: <unknown function> + 0x4f6823 (0x7f49aa4ab823 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #6: c10::TensorImpl::~TensorImpl() + 0x1a0 (0x7f49bd1869e0 in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so) frame #7: c10::TensorImpl::~TensorImpl() + 0x9 (0x7f49bd186af9 in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so) frame #8: std::vector<at::Tensor, std::allocator<at::Tensor> >::~vector() + 0x8b (0x7f49aa4add1b in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #9: c10d::ProcessGroupNCCL::WorkNCCL::~WorkNCCL() + 0x8c (0x7f491bf3ae8c in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so) frame #10: c10d::ProcessGroupNCCL::WorkNCCL::~WorkNCCL() + 0x9 (0x7f491bf3b349 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so) frame #11: <unknown function> + 0xbe302c (0x7f49aab9802c in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #12: <unknown function> + 0x3e4272 (0x7f49aa399272 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #13: <unknown function> + 0x3e51af (0x7f49aa39a1af in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #14: <unknown function> + 0xe5698 (0x56347c18d698 in /opt/conda/bin/python3.7) frame #15: <unknown function> + 0x1f7b89 (0x56347c29fb89 in /opt/conda/bin/python3.7) frame #16: <unknown function> + 0x191de8 (0x56347c239de8 in /opt/conda/bin/python3.7) frame #17: _PyEval_EvalFrameDefault + 0x4c8a (0x56347c26426a in /opt/conda/bin/python3.7) frame #18: _PyFunction_FastCallKeywords + 0x184 (0x56347c1d73d4 in /opt/conda/bin/python3.7) frame #19: <unknown function> + 0x191de8 (0x56347c239de8 in /opt/conda/bin/python3.7) frame #20: _PyEval_EvalFrameDefault + 0xb2a (0x56347c26010a in /opt/conda/bin/python3.7) frame #21: <unknown function> + 0x1f7b66 (0x56347c29fb66 in /opt/conda/bin/python3.7) frame #22: _PyFunction_FastCallDict + 0xaef (0x56347c1a78cf in /opt/conda/bin/python3.7) frame #23: _PyEval_EvalFrameDefault + 0x1f86 (0x56347c261566 in /opt/conda/bin/python3.7) frame #24: _PyEval_EvalCodeWithName + 0x33d (0x56347c1a5ccd in /opt/conda/bin/python3.7) frame #25: _PyFunction_FastCallKeywords + 0x320 (0x56347c1d7570 in /opt/conda/bin/python3.7) frame #26: <unknown function> + 0x191de8 (0x56347c239de8 in /opt/conda/bin/python3.7) frame #27: _PyEval_EvalFrameDefault + 0x4c8a (0x56347c26426a in /opt/conda/bin/python3.7) frame #28: _PyFunction_FastCallKeywords + 0x184 (0x56347c1d73d4 in /opt/conda/bin/python3.7) frame #29: <unknown function> + 0x191de8 (0x56347c239de8 in 
/opt/conda/bin/python3.7) frame #30: _PyEval_EvalFrameDefault + 0xb2a (0x56347c26010a in /opt/conda/bin/python3.7) frame #31: _PyEval_EvalCodeWithName + 0x33d (0x56347c1a5ccd in /opt/conda/bin/python3.7) frame #32: _PyFunction_FastCallDict + 0x6a0 (0x56347c1a7480 in /opt/conda/bin/python3.7) frame #33: <unknown function> + 0x185ea4 (0x56347c22dea4 in /opt/conda/bin/python3.7) frame #34: _PyObject_FastCallKeywords + 0x18c (0x56347c238b8c in /opt/conda/bin/python3.7) frame #35: <unknown function> + 0x191f79 (0x56347c239f79 in /opt/conda/bin/python3.7) frame #36: _PyEval_EvalFrameDefault + 0x16bb (0x56347c260c9b in /opt/conda/bin/python3.7) frame #37: _PyFunction_FastCallKeywords + 0x184 (0x56347c1d73d4 in /opt/conda/bin/python3.7) frame #38: <unknown function> + 0x191de8 (0x56347c239de8 in /opt/conda/bin/python3.7) frame #39: _PyEval_EvalFrameDefault + 0x4c8a (0x56347c26426a in /opt/conda/bin/python3.7) frame #40: _PyFunction_FastCallKeywords + 0x184 (0x56347c1d73d4 in /opt/conda/bin/python3.7) frame #41: <unknown function> + 0x191de8 (0x56347c239de8 in /opt/conda/bin/python3.7) frame #42: _PyEval_EvalFrameDefault + 0x4c8a (0x56347c26426a in /opt/conda/bin/python3.7) frame #43: _PyEval_EvalCodeWithName + 0x33d (0x56347c1a5ccd in /opt/conda/bin/python3.7) frame #44: _PyFunction_FastCallDict + 0x6a0 (0x56347c1a7480 in /opt/conda/bin/python3.7) frame #45: <unknown function> + 0x185ea4 (0x56347c22dea4 in /opt/conda/bin/python3.7) frame #46: _PyObject_FastCallKeywords + 0x18c (0x56347c238b8c in /opt/conda/bin/python3.7) frame #47: <unknown function> + 0x191f79 (0x56347c239f79 in /opt/conda/bin/python3.7) frame #48: _PyEval_EvalFrameDefault + 0x16bb (0x56347c260c9b in /opt/conda/bin/python3.7) frame #49: _PyEval_EvalCodeWithName + 0x33d (0x56347c1a5ccd in /opt/conda/bin/python3.7) frame #50: _PyFunction_FastCallDict + 0x6a0 (0x56347c1a7480 in /opt/conda/bin/python3.7) frame #51: _PyEval_EvalFrameDefault + 0x1f86 (0x56347c261566 in /opt/conda/bin/python3.7) frame #52: _PyEval_EvalCodeWithName + 0x33d (0x56347c1a5ccd in /opt/conda/bin/python3.7) frame #53: _PyFunction_FastCallKeywords + 0x320 (0x56347c1d7570 in /opt/conda/bin/python3.7) frame #54: <unknown function> + 0x191de8 (0x56347c239de8 in /opt/conda/bin/python3.7) frame #55: _PyEval_EvalFrameDefault + 0x16bb (0x56347c260c9b in /opt/conda/bin/python3.7) frame #56: _PyEval_EvalCodeWithName + 0x33d (0x56347c1a5ccd in /opt/conda/bin/python3.7) frame #57: _PyFunction_FastCallDict + 0x6a0 (0x56347c1a7480 in /opt/conda/bin/python3.7) frame #58: <unknown function> + 0x185a63 (0x56347c22da63 in /opt/conda/bin/python3.7) frame #59: PyObject_Call + 0x6c (0x56347c1b09dc in /opt/conda/bin/python3.7) frame #60: <unknown function> + 0x21d3e7 (0x56347c2c53e7 in /opt/conda/bin/python3.7) frame #61: _PyObject_FastCallKeywords + 0x3cb (0x56347c238dcb in /opt/conda/bin/python3.7) frame #62: <unknown function> + 0x191f79 (0x56347c239f79 in /opt/conda/bin/python3.7) frame #63: _PyEval_EvalFrameDefault + 0x16bb (0x56347c260c9b in /opt/conda/bin/python3.7) ``` and when you use use `"sub_group_size": 1e12,` all works. Which most likely means the segfault happens when memory gets allocated or immediately after. and some protection against segfault is needed. And point to this thread for more details. Offer to provide an easy to run repro details. But perhaps they don't need it and this segfault is a sufficient info for someone who wrote it. <|||||>Thank you so much, Stas. 
You're right that `sub_group_size` is 1e9 in the HF DeepSpeed integration docs, but there's a sample config with 1e12 on the DeepSpeed ZeRO doc page (https://www.deepspeed.ai/docs/config-json/#zero-optimizations-for-fp16-training) and I think that's where I got it from. I'll open up an issue in DeepSpeed. Thanks again for going above and beyond.<|||||>Wonderful. And please report that doc issue too. Thank you, Michael.<|||||>For posterity @ngimel kindly shared that pt-1.13.0 and 1.13.1 are buggy wrt to disappearing error messages in cuda, this has been fixed in https://github.com/pytorch/pytorch/issues/91758 - and the fix is already available in nightlies. So if you run into situations like this Issue and the cuda error is incomprehensible - please use either `torch<1.13` which doesn't have this problem or whatever the next version will be: pt-2.0.0 probably (or nightly if you're brave).
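For readers who reach the end of this thread, the two root causes above (token ids larger than the embedding matrix, and the oversized `sub_group_size`) can be caught with a short pre-flight check run on CPU before launching DeepSpeed. This is an illustrative sketch, not part of the original train.py; the checkpoint name and sample string are placeholders:

```python
# Hypothetical pre-flight check distilled from this thread. Run it on CPU so an
# out-of-range token id shows up as a readable Python condition instead of an
# asynchronous CUDA "illegal memory access" deep inside the forward pass.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "yhavinga/ul2-base-en-nl"  # placeholder: any checkpoint discussed above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

embedding_rows = model.get_input_embeddings().num_embeddings
sample_ids = tokenizer("a short sample sentence")["input_ids"]

if max(sample_ids) >= embedding_rows:
    # The tokenizer emits ids the embedding matrix cannot index; grow the
    # embedding to match the tokenizer instead of remapping the ids.
    model.resize_token_embeddings(len(tokenizer))

# Separately, keep "sub_group_size" at the documented 1e9 in ds_config.json;
# the 1e12 setting is what triggered the segfault reported above.
```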
transformers
21,377
closed
Added model resources for LayoutLM Issue#19848
Added resources to the documentation of the LayoutLM model as per Issue #19848. @stevhliu
01-30-2023 22:05:23
01-30-2023 22:05:23
_The documentation is not available anymore as the PR was closed or merged._<|||||>It looks like you might've only updated your branch with the latest changes on the `main` Hugging Face repo; I don't see any of the changes! 🙈 <|||||>Hi @stevhliu, you were right, I only updated my branch (sorry, my bad). I think I have committed the changes now. Sorry for this.<|||||>Awesome, thank you so much for your contribution! 🤗 Everything looks good to me, pinging @sgugger for a final look!
transformers
21,376
closed
Lazy loading models on systems with more VRAM than RAM
### Feature request I would like the ability to lazy load models to the GPU using `AutoModelForCausalLM.from_pretrained`. At the moment, it is possible to reduce the RAM usage using the `low_cpu_mem_usage=True` option, but on systems with more VRAM than RAM (like Google Colab with 12GB RAM and 16GB VRAM), it is not possible to load certain models due to a RAM bottleneck. ### Motivation See above ### Your contribution --
01-30-2023 18:42:04
01-30-2023 18:42:04
Could you please share a snippet of code that fails on such an env with `device_map="auto"` sent to `from_pretrained`? This loads the model directly on the GPU (as long as there is enough space) so this should work for your use case.<|||||>Surely, here is a snippet that causes an out of memory error on Google Colab (the free instance with 12.7GB RAM and 15GB VRAM): ``` !pip install -U accelerate transformers from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("PygmalionAI/pygmalion-6b", device_map='auto') ``` I have tried every possible combination of `.cuda()` and `low_cpu_mem_usage=True`: ``` model = AutoModelForCausalLM.from_pretrained("PygmalionAI/pygmalion-6b", device_map='auto') model = AutoModelForCausalLM.from_pretrained("PygmalionAI/pygmalion-6b", device_map='auto').cuda() model = AutoModelForCausalLM.from_pretrained("PygmalionAI/pygmalion-6b", low_cpu_mem_usage=True, device_map='auto') model = AutoModelForCausalLM.from_pretrained("PygmalionAI/pygmalion-6b", low_cpu_mem_usage=True, device_map='auto').cuda() ``` In all cases, the RAM usage steadily increases until it passes the 12GB mark and the Colab session crashes. On my machine, this model uses 11653.7 GiB VRAM and 2605.79 GiB RAM once fully loaded to the GPU, so in principle it should be possible to load it on Colab.<|||||>I think you are missing a `torch_dtype=torch.float16` or `torch_dtype=torch.bfloat16` to get to 12GB of use. Otherwise the model will need 24GB of memory if it has 6b parameters (the default torch dtype in PyTorch being float32).<|||||>You are correct, both of these allow me to load the model successfully: ``` model = AutoModelForCausalLM.from_pretrained("PygmalionAI/pygmalion-6b", low_cpu_mem_usage=True, device_map='auto', torch_dtype=torch.float16) model = AutoModelForCausalLM.from_pretrained("PygmalionAI/pygmalion-6b", device_map='auto', torch_dtype=torch.float16) ``` But with these, the RAM usage *after the model is loaded* is very high: 12.2GB out of a total of 12.7GB. This makes the session very unstable and prone to crashing if other libraries are imported. Is this high RAM usage normal? Can it be avoided?<|||||>Can you try to see if adding a layer of garbage collector helps? ```py import gc gc.collect() ``` There is no reason for the CPU RAM to be used once the model is fully loaded on the GPU.<|||||>I did try `gc.collect()` earlier today and that didn't release the CPU RAM memory. Now I tried to repeat the experiment just to make sure, and I couldn't even load the model because the `model = AutoModelForCausalLM.from_pretrained("PygmalionAI/pygmalion-6b", low_cpu_mem_usage=True, device_map='auto', torch_dtype=torch.float16)` call made the Colab session crash after running out of RAM.<|||||>After loading the model with the command above, doing this releases the VRAM but not the RAM: ``` import gc model = None gc.collect() torch.cuda.empty_cache() ``` This looks exactly like https://github.com/huggingface/transformers/issues/21094. 
Are these two bugs related?<|||||>I've recreated it, report as follows: (`available_memory` returns the `%` of memory available) Working as expected (w/o big model inference, hooks, etc) ```python >>> import psutil, torch >>> from transformers import AutoModelForCausalLM >>> available_memory = lambda: psutil.virtual_memory().available * 100 / psutil.virtual_memory().total >>> available_memory() 97.8753999829287 >>> model = AutoModelForCausalLM.from_pretrained("PygmalionAI/pygmalion-6b", low_cpu_mem_usage=True) >>> available_memory() 69.87882027448968 >>> model = None() >>> import gc >>> gc.collect() >>> available_memory() 97.28031713868933 ``` Issue: ```python >>> available_memory() 97.28031713868933 >>> model = AutoModelForCausalLM.from_pretrained( ... "PygmalionAI/pygmalion-6b", ... low_cpu_mem_usage=True, ... device_map='auto', ... torch_dtype=torch.float16 ... ) >>> available_memory() 95.77584944795181 >>> model = None >>> gc.collect() >>> torch.cuda.empty_cache() >>> available_memory() 95.73520915357973 ``` Note the fact that basically no memory was released here (on multiple repeated checks the RAM hit 95.77%)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I think that lazy loading models would be an important addition to `transformers` in the context of loading models to Google Colab, but I am not sure how doable it is. A workaround for now is to reshard the models.<|||||>Mmm, diving into the reproducer @muellerzr, it looks like memory is not released by PyTorch when moving the model to a device: ``` import psutil, torch from transformers import AutoModelForCausalLM available_memory = lambda: psutil.virtual_memory().available * 100 / psutil.virtual_memory().total print(available_memory()) model = AutoModelForCausalLM.from_pretrained("PygmalionAI/pygmalion-6b", low_cpu_mem_usage=True) model = model.to(0) print(available_memory()) del model import gc gc.collect() print(available_memory()) ``` shows no memory is released. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>From the discussion, it seems to me that lazy loading is not the only issue. One also wants to garbage collect parts of the state dict that are no longer in use. For the use-case of apply model deltas, this requires streaming out the updated model weights rather than waiting for all the deltas to be applied.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
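To make the resharding workaround mentioned above concrete, here is a minimal sketch (an editor's illustration; it assumes the resharding step is done once on a machine with enough CPU RAM, and the output path and shard size are arbitrary example values):

```python
# Editor's sketch of resharding: smaller shards keep the peak CPU RAM lower when the
# resharded copy is later loaded with device_map="auto".
import torch
from transformers import AutoModelForCausalLM

# Step 1 (on a machine with enough RAM): load once and save with small shards.
model = AutoModelForCausalLM.from_pretrained(
    "PygmalionAI/pygmalion-6b", low_cpu_mem_usage=True, torch_dtype=torch.float16
)
model.save_pretrained("pygmalion-6b-resharded", max_shard_size="1GB")

# Step 2 (on the low-RAM machine, e.g. Colab): load the resharded copy shard by shard,
# sending weights straight to the GPU.
model = AutoModelForCausalLM.from_pretrained(
    "pygmalion-6b-resharded", device_map="auto", torch_dtype=torch.float16
)
```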
transformers
21,375
closed
Mismatch of tensor shapes in CrossEntropyLoss for custom head layer in BART
Hi, so far I've been working with the BartForConditionalGeneration. Now I want to use a custom head layer instead. In a linear layer after the base models decoder, I want to input the output of the base bart model and additional some numerical data similar to the code [here](https://colab.research.google.com/drive/1ZLfcB16Et9U2V-udrw8zwrfChFCIhomz?usp=sharing#scrollTo=m-TTyOMJOGBD). Following this I came up with the following forward function: ``` def forward(self, input_ids, tokens, **kwargs): labels = kwargs.get('labels') attn_mask = kwargs.get('attention_mask') out = self.model_base(input_ids, attention_mask=attn_mask) token_features = tokens.unsqueeze(1) concat= torch.concat((out[0][:, 0, :], token_features), dim=-1) out_lin = self.custom_layer(concat) loss_fct = torch.nn.CrossEntropyLoss() masked_lm_loss = loss_fct(out_lin.view(-1, self.model.config.vocab_size), labels.view(-1)) ``` where out_lin is the following linear layer: ``` self.custom_layer = torch.nn.Linear(in_features = self.hidden_dim + self.token_dim, out_features = self.model.config.vocab_size) ``` For the loss function I took orientation from the original code for the [BartForConditionalGeneration](https://huggingface.co/transformers/v2.11.0/_modules/transformers/modeling_bart.html#BartForConditionalGeneration): ``` outputs = self.model( input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_outputs, decoder_attention_mask=decoder_attention_mask, decoder_cached_states=decoder_cached_states, use_cache=use_cache, ) lm_logits = F.linear(outputs[0], self.model.shared.weight, bias=self.final_logits_bias) outputs = (lm_logits,) + outputs[1:] # Add cache, hidden states and attention if they are here if lm_labels is not None: loss_fct = nn.CrossEntropyLoss() # TODO(SS): do we need to ignore pad tokens in lm_labels? masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), lm_labels.view(-1)) outputs = (masked_lm_loss,) + outputs return outputs ``` The error I obtain is ``` --------------------------------------------------------------------------- File c:\Users\M\Anaconda\envs\simp_env\lib\site-packages\pytorch_lightning\trainer\trainer.py:579, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path) 577 raise TypeError(f"`Trainer.fit()` requires a `LightningModule`, got: {model.__class__.__qualname__}") 578 self.strategy._lightning_module = model --> 579 call._call_and_handle_interrupt( 580 self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path 581 ) File c:\Users\M\Anaconda\envs\simp_env\lib\site-packages\pytorch_lightning\trainer\call.py:38, in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs) 36 return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs) 37 else: ---> 38 return trainer_fn(*args, **kwargs) 40 except _TunerExitException: ... 3024 if size_average is not None or reduce is not None: 3025 reduction = _Reduction.legacy_get_string(size_average, reduce) -> 3026 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing) ValueError: Expected input batch_size (6) to match target batch_size (1536). ``` So far I understand that 6 is the batch size of my data output from the custom linear layer (which is torch.Size([6, 50267]) where 50267 is the self.final_logits_bias/vocab_size). My labels have the shape torch.Size([6, 256]) which when flattened leads to the 1536. 
Since my labels have the same shape as before and my custom layer looks equivalent to the one in BartForConditionalGeneration that I used previously, I am unsure why I now get this size incompatibility when I did not before. Furthermore, I am unsure why the first code referenced here uses only hidden_states.last_hidden_state[:, 0, :], i.e. only batch_size and hidden_size but not the sequence length; without that slicing my data has the shape torch.Size([6, 256, 768]). I would be thankful for any guidance on how to make the tensors compatible and get the custom layer working. Did I misunderstand the examples mentioned above?
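For illustration only, here is a minimal sketch of one way the shapes could be made compatible (an editor's sketch, not from the thread; it reuses the attribute names `model_base` and `custom_layer` from the snippets above, assumes `tokens` has shape `[batch, token_dim]`, and assumes the decoder sequence length matches the label length, as in BartForConditionalGeneration where `decoder_input_ids` are the shifted labels):

```python
# Editor's sketch: CrossEntropyLoss needs one logit vector per target token, so the custom
# head must be applied to every decoder position, not only to position 0.
import torch

def forward(self, input_ids, tokens, **kwargs):
    labels = kwargs.get("labels")                      # assumed shape [batch, seq_len]
    attn_mask = kwargs.get("attention_mask")
    out = self.model_base(input_ids, attention_mask=attn_mask)
    hidden = out[0]                                    # [batch, seq_len, hidden_dim]

    # broadcast the numerical features to every decoder position: [batch, seq_len, token_dim]
    token_features = tokens.unsqueeze(1).expand(-1, hidden.size(1), -1)
    concat = torch.cat((hidden, token_features), dim=-1)

    lm_logits = self.custom_layer(concat)              # [batch, seq_len, vocab_size]
    loss_fct = torch.nn.CrossEntropyLoss()
    loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))
    return loss, lm_logits
```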
01-30-2023 18:00:17
01-30-2023 18:00:17
You should try the [forums](https://discuss.huggingface.co/) for questions like this, as we keep issues for bugs and feature requests only.<|||||>Oh, I am very sorry. I moved the question to the forum. Thank you for the hint!
transformers
21,374
closed
decoder_hidden_states output inconsistent when generating with SpeechEncoderDecoder models
### System Info - `transformers` version: 4.23.1 - Platform: Linux-4.18.0-372.36.1.el8_6.x86_64-x86_64-with-glibc2.28 - Python version: 3.9.12 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes (a100) - Using distributed or parallel set-up in script?: yes, but script below produces the same thing with one GPU ### Who can help? @sanchit-gandhi @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When decoding a `SpeechEncoderDecoderModel`, we can output the decoder hidden states with the args `output_hidden_states` and `return_dict_in_generate` to `True`. However, the output lengths are not consistent between the beam search decoding and greedy decoding, and between the output sequences and decoder hidden states. The following example is using `wav2vec2-xls-r-300m-en-to-15`, and shows this inconsistency: ```python from transformers import Wav2Vec2Processor, SpeechEncoderDecoderModel import numpy as np folder = "" # custom folder where the model are stored since we don't have internet on the compute nodes device = 'cuda:0' model = SpeechEncoderDecoderModel.from_pretrained(folder + "facebook/wav2vec2-xls-r-300m-en-to-15").to(device) processor = Wav2Vec2Processor.from_pretrained(folder + "facebook/wav2vec2-xls-r-300m-en-to-15") # generating dummy data with variable lengths audios = [ np.random.random((17_000 + i,)) for i in range(10) # bsz of 10 ] input_values = processor( audios, return_tensors="pt", padding=True, sampling_rate=16_000, ).input_values.to(device) # common parameters to both greedy and beam search decoding common_params = { 'max_new_tokens': 200, 'output_hidden_states': True, 'output_scores': True, 'return_dict_in_generate': True } #### print("Greedy decoding:") generated_greedy = model.generate( input_values, num_beams=1, **common_params ) print(" sequences shape: ", generated_greedy['sequences'].shape) print(" decoder_hidden_states len: ", len(generated_greedy['decoder_hidden_states'])) #### print("Beam search decoding:") generated_beam_search = model.generate( input_values, num_beams=2, **common_params ) print(" sequences shape: ", generated_beam_search['sequences'].shape) print(" decoder_hidden_states len: ", len(generated_beam_search['decoder_hidden_states'])) ``` The output of that script is: ``` Greedy decoding: sequences shape: torch.Size([10, 3]) decoder_hidden_states len: 2 Beam search decoding: sequences shape: torch.Size([10, 3]) decoder_hidden_states len: 39 ``` Following the documentation for [GreedySearchEncoderDecoderOutput](https://huggingface.co/docs/transformers/main/en/internal/generation_utils#transformers.generation.GreedySearchEncoderDecoderOutput.decoder_hidden_states) and [BeamSearchEncoderDecoderOutput](https://huggingface.co/docs/transformers/main/en/internal/generation_utils#transformers.generation.BeamSearchEncoderDecoderOutput.decoder_hidden_states): - **greedy**: decoder_hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of 
torch.FloatTensor of shape (batch_size, generated_length, hidden_size). - **beam search**: decoder_hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams*num_return_sequences, generated_length, hidden_size). I think the length of the tuple (one element for each generated token) should be the same as the `sequence_length`. *PS: it seems that when the audio is long enough so that the output is capped at `max_new_tokens`, the sequences length is `max_new_tokens + 1`, and the hidden states `max_new_tokens`.* ### Expected behavior The sequences output should be the same length as the decoder_hidden_states length, like: ``` Greedy decoding: sequences shape: torch.Size([10, 3]) decoder_hidden_states len: 3 Beam search decoding: sequences shape: torch.Size([10, 3]) decoder_hidden_states len: 3 ``` or ``` Greedy decoding: sequences shape: torch.Size([10, 2]) decoder_hidden_states len: 2 Beam search decoding: sequences shape: torch.Size([10, 39]) decoder_hidden_states len: 39 ```
01-30-2023 16:57:40
01-30-2023 16:57:40
Hey @valentinp72 👋 I don't see a cause for a bug, but I am aware that our docstrings are in need of improvements :) Allow me to elaborate: 1. Greedy Search: your output (`sequences`) will be `[batch_size, generated_tokens + 1]`, and the `decoder_hidden_states` will have length `generated_tokens`. `sequences` has the `+1` because the output sequence contains the BOS token, which is set before the first forward pass of the model, so there are no `decoder_hidden_states` for that token. 2. Beam Search: Here it's trickier. In essence, beam search looks for candidate outputs until it hits a stopping condition. The candidate outputs can have fewer tokens than the total number of generation steps -- for instance, in an encoder-decoder text model, if your input is `How much is 2 + 2?` and the model generates as candidates `<BOS>4<EOS>` (3 tokens) and `<BOS>The answer is potato<EOS>` (for argument's sake, 6 tokens) before deciding to stop, you should see `sequences` with shape `[1, 3]` and `decoder_hidden_states` with length `5`, because `5` tokens were generated internally before settling on the 1st candidate. Does it make more sense now? 🤗 <|||||>Oh I see! Indeed, the docs could be improved :) So, if I'd want to have the *real* hidden states associated with the first candidate (in beam search), eg. `<BOS>4<EOS>`, I would need to extract only the first 3 hidden states from `decoder_hidden_states`? For example, that should be correct, assuming the `decoder_hidden_states` are sorted the same way the candidates are ordered? (ie. `[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]` for a batch size of `10` and a beam width of `2`.) ```python bsz = generated_beam_search['sequences'].shape[0] seqlen = generated_beam_search['sequences'].shape[1] featdim = 1024 decoder_hidden_states = torch.zeros((bsz, seqlen, featdim)).to(device) for i in range(seqlen): all_beams = generated_beam_search['decoder_hidden_states'][i][-1][:,0,:] # filtering the hidden states to only the first candidate for each sample only_first = all_beams.index_select( dim=0, index=torch.tensor([x for x in range(0, bsz * beam_width, beam_width)]).to(device) ) decoder_hidden_states[:,i] = only_first ``` If I'm correct, I think an option to return the hidden states in a tensor format (instead of tuples of tensors) according the output candidates could be nice, for both greedy and beam search decoding. <|||||>@valentinp72 > So, if I'd want to have the real hidden states associated with the first candidate (in beam search), eg. `<BOS>4<EOS>`, I would need to extract only the first 3 hidden states from `decoder_hidden_states`? Almost correct! The first 2 hidden states (one for `4`, another for `<EOS>`. `<BOS>` has no corresponding hidden states) As for the exact methodology to extract the right values from `decoder_hidden_states` with beam search, the plot thickens 😅 It is the same problem as extracting the token-level scores from the `scores` output in beam search -- see [this function](https://github.com/huggingface/transformers/blob/42b60f8b02941b0c40c42e150a101eb372c3856e/src/transformers/generation/utils.py#L927) and its examples. If you replace `scores` by `decoder_hidden_states`, it should be very close to what you want. In a nutshell, the index of the n-th output sequence in beam search changes over the course of its execution. The output sequence with index 0 may correspond to the sequence with index 1 at the 1st generation step, the sequence with index 5 at the 2nd generation step, and so on. 
`beam_indices` contains the index of each output sequence at each beam search step, from which you can de-scramble `decoder_hidden_states`.<|||||>Thank you. I've adapted the `compute_transition_scores` to what I want, but I still have one doubt about the contents of `beam_indices`. Here is an example generated by the beam search: ``` tensor([[ 0, 0, 0, 3, 3, 1, 0, 2, 1, 0, 3, 0, 0, -1], [ 5, 5, 5, 8, 6, 5, 5, 7, 6, 6, 6, 5, 5, -1], [10, 10, 10, 13, 12, 10, 10, 10, 10, 10, 12, 10, 10, -1], [15, 15, 15, 18, 17, 15, 15, 16, 16, 15, 17, 15, 15, -1], [20, 20, 20, 23, 21, 20, 22, 21, 21, 20, 20, 20, 0, -1], [25, 25, 25, 28, 27, 26, 26, 27, 27, 26, 29, 26, 25, -1], [30, 30, 30, 33, 32, 31, 30, 32, 32, 32, 32, 30, 30, -1], [35, 35, 35, 38, 38, 36, 35, 37, 37, 36, 38, 35, 35, -1], [40, 40, 40, 43, 42, 40, 40, 40, 40, 40, 42, 40, 40, -1], [45, 45, 45, 48, 49, 46, 46, 47, 46, 45, 47, 45, 45, -1]], device='cuda:0') ``` If we take the first row (first sequence), does it means that the correct hidden states for that sequence can be found at indexes `0, 0, 0, 3, 3, 1, 0, 2, 1, 0, 3, 0, 0`? If so, why, for the 5th sequence, we have the index `0` for the last hidden states? Shouldn't each beam be independent?<|||||>@valentinp72 > If so, why, for the 5th sequence, we have the index 0 for the last hidden states? Shouldn't each beam be independent? They should 👀 can you share the snippet that leads to those `beam_indices`? That may be a bug. <|||||>Now I'm no longer able to reproduce this error. I think it was due to a bug on my own, while implementing my function I might have executed it twice, leading to (some?) -1 being replaced by 0. I'm closing this issue as it seems my function that extracts the hidden representations works. I'm sharing it below if others needs it: ```python def extract_decoder_hidden_states( generate_output_dict, hidden_layer_idx=-1, ): """ Extracts the decoder hidden states representation from GreedySearchEncoderDecoderOutput and BeamSearchEncoderDecoderOutput, associated with the `sequences` output. - generate_output_dict: output dict from the model.generate() method you should add the following arguments to generate: - output_hidden_states=True - output_scores=True - return_dict_in_generate=True - hidden_layer_idx: index of the layer to extract the representation from (-1 == last one) """ greedy = isinstance(generate_output_dict, GreedySearchEncoderDecoderOutput) beamy = isinstance(generate_output_dict, BeamSearchEncoderDecoderOutput) if greedy: # in greedy decoding, the beam_indices is not present, so we create one # where the first beam is always selected scores = generate_output_dict['scores'] device = generate_output_dict['sequences'].device beam_indices = torch.arange(scores[0].shape[0]).view(-1, 1) beam_indices = beam_indices.expand(-1, len(scores)).to(device) elif beamy: if 'beam_indices' not in generate_output_dict: raise RuntimeError( "You should export the scores with output_scores=True when " \ "calling extract_decoder_hidden_states with " \ "BeamSearchEncoderDecoderOutput" ) beam_indices = generate_output_dict['beam_indices'].clone() else: raise NotImplementedError( "extract_decoder_hidden_states only works with " \ "GreedySearchEncoderDecoderOutput and BeamSearchEncoderDecoderOutput " \ "output types." 
) # handling of the target length and preparing the masking for tokens # outside of that length beam_indices_mask = beam_indices < 0 max_beam_length = (1 - beam_indices_mask.long()).sum(-1).max() beam_indices = beam_indices[:, :max_beam_length] beam_indices_mask = beam_indices_mask[:, :max_beam_length] beam_indices[beam_indices_mask] = 0 seqlen = generate_output_dict['sequences'].shape[1] - 1 # creating the output hidden_states representation in format: # [bsz * beam_width ; seqlen ; featdim] decoder_hidden_states = torch.stack([ generate_output_dict['decoder_hidden_states'][i][hidden_layer_idx][:,0,:].index_select( dim=0, index=beam_indices[:,i] # reordering using the beam_indices ) for i in range(seqlen) ]).transpose(0, 1) # setting to 0 the hidden_states were it doesn't make sense to have an output decoder_hidden_states[beam_indices_mask] = 0 return decoder_hidden_states ```
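A hypothetical usage example of the helper above (editor's sketch; `model` and `input_values` refer to the reproduction script earlier in this issue, and the shape comment describes the expected result rather than a verified output):

```python
# Editor's sketch: generate with the flags required by extract_decoder_hidden_states,
# then de-scramble the per-step hidden states into one tensor per output sequence.
generated = model.generate(
    input_values,
    num_beams=2,
    max_new_tokens=200,
    output_hidden_states=True,
    output_scores=True,            # needed so beam_indices is returned for beam search
    return_dict_in_generate=True,
)
hidden = extract_decoder_hidden_states(generated, hidden_layer_idx=-1)
print(hidden.shape)  # expected: [batch_size, sequence_length - 1, hidden_size]
```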
transformers
21,373
closed
Error Multi-Node Training with Deepspeed
### System Info - `transformers` version: 4.24.0.dev0 - Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.8.2+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @stas00 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction This error occurred when I was running run_clm.py script with deepspeed. 1. Configure the hostfile provided to deepspeed as follows node1 slots=1 node2 slots=2 ### Expected behavior The run gets stuck at the following for around 30 mins for the node I am running the scripts from node1: [2023-01-30 08:34:30,454] [INFO] [comm.py:633:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl After 30 mins or so, I get the following error node2: Traceback (most recent call last): node2: File "run_clm.py", line 659, in <module> node2: main() node2: File "run_clm.py", line 244, in main node2: model_args, data_args, training_args = parser.parse_args_into_dataclasses() node2: File "/home/transformers/src/transformers/hf_argparser.py", line 226, in parse_args_into_dataclasses node2: obj = dtype(**inputs) node2: File "<string>", line 104, in __init__ node2: File "/home/transformers/src/transformers/training_args.py", line 1118, in __post_init__ node2: and (self.device.type != "cuda") node2: File "/home/transformers/src/transformers/utils/import_utils.py", line 1000, in wrapper node2: return func(*args, **kwargs) node2: File "/home/transformers/src/transformers/training_args.py", line 1478, in device node2: return self._setup_devices node2: File "/home/transformers/src/transformers/utils/generic.py", line 57, in __get__ node2: cached = self.fget(obj) node2: File "/home/transformers/src/transformers/utils/import_utils.py", line 1000, in wrapper node2: return func(*args, **kwargs) node2: File "/home/transformers/src/transformers/training_args.py", line 1413, in _setup_devices node2: deepspeed.init_distributed() node2: File "/home/anaconda3/envs/deepspeed_hf/lib/python3.8/site-packages/deepspeed/comm/comm.py", line 637, in init_distributed node2: cdb = TorchBackend(dist_backend, timeout, init_method) node2: File "/home/anaconda3/envs/deepspeed_hf/lib/python3.8/site-packages/deepspeed/comm/torch.py", line 30, in __init__ node2: self.init_process_group(backend, timeout, init_method) node2: File "/home/anaconda3/envs/deepspeed_hf/lib/python3.8/site-packages/deepspeed/comm/torch.py", line 34, in init_process_group node2: torch.distributed.init_process_group(backend, node2: File "/home/anaconda3/envs/deepspeed_hf/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 500, in init_process_group node2: store, rank, world_size = next(rendezvous_iterator) node2: File "/home/anaconda3/envs/deepspeed_hf/lib/python3.8/site-packages/torch/distributed/rendezvous.py", line 190, in _env_rendezvous_handler node2: store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout) node2: RuntimeError: connect() timed out.
01-30-2023 16:48:08
01-30-2023 16:48:08
30min is the timeout for any NCCL operations. I assume it never started training? Was it doing anything during the 30min, like pre-processing a dataset? please run again and do ``` pip install py-spy py-spy dump --pid PID ``` PID of the process that is stuck. And of course your Issue doesn't tell us anything about how to reproduce it or even which program you had a problem with.<|||||>The code was stuck at this node1: [2023-01-30 08:34:30,454] [INFO] [comm.py:633:init_distributed] Initializing TorchBackend in D$ epSpeed with backend nccl The output for the py-spy command for node 1 is Thread 39975 (idle): "MainThread" _try_wait (subprocess.py:1764) _wait (subprocess.py:1806) wait (subprocess.py:1083) main (deepspeed/launcher/runner.py:522) <module> (deepspeed:6) To reproduce just run [run_clm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py) with deepspeed and in the hostfile add two nodes as follows node1 slots=1 node2 slots=2<|||||>thank you, yes, so you're not launching deepspeed properly. what is `node1`? it has to be an actual hostname, what is `slots=1` what is the actual setup - you have 2 nodes with 2 gpus each? then it'd be something like: ``` hostname1 slots=2 hostname2 slots=2 ``` <|||||>Yeah, two nodes with 1 GPU each hostname1 slots=1 hostname2 slots=1 <|||||>most likely you then have an ssh issue where it gets stuck in trying to connect to those nodes. As we have cleared out that this is not an integration issue - please reopen this question at https://github.com/microsoft/DeepSpeed/issues and when you report it probably start with a simple test script so it's easier to reproduce / isolate to the deepspeed launcher. e.g. you can use this script https://github.com/stas00/toolbox/blob/master/pytorch/torch-distributed-gpu-test.py but you will need to adapt your launching to the way you're trying to do (the instructions inside the script don't use `deepspeed` as the launcher).
transformers
21,372
closed
Add cPython files in build
# What does this PR do? As reported by @clefourrier the Graphormer model in the last release is not usable as is, as the cpython file containing the code for the collation of samples is not included in the built package. This PR fixes that by including the extensions (similar to what we did for custom CUDA kernels). This will be included in the next patch release.
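For readers unfamiliar with the packaging detail, here is a generic sketch of how non-Python extension sources can be shipped inside a built package via `package_data` (an illustration only, not the actual diff of this PR; the package name and file patterns are placeholders):

```python
# setup.py — generic sketch of including extension sources in the built distribution.
from setuptools import setup, find_packages

setup(
    name="mypackage",
    packages=find_packages("src"),
    package_dir={"": "src"},
    # ship Cython/C sources alongside the Python modules so they end up in the wheel
    package_data={"": ["*.pyx", "*.pxd", "*.c"]},
    include_package_data=True,
)
```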
01-30-2023 15:41:42
01-30-2023 15:41:42
_The documentation is not available anymore as the PR was closed or merged._