| column | type | min / values | max |
|---|---|---|---|
| repo | stringclasses | 1 value | |
| number | int64 | 1 | 25.3k |
| state | stringclasses | 2 values | |
| title | stringlengths | 1 | 487 |
| body | stringlengths (⌀ = may be null) | 0 | 234k |
| created_at | stringlengths | 19 | 19 |
| closed_at | stringlengths | 19 | 19 |
| comments | stringlengths | 0 | 293k |
transformers
23,178
closed
Update tokenization_llama.py
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/23175. The relevant behavior can be seen in https://github.com/facebookresearch/llama/blob/main/llama/generation.py at line 62 of generation.py.
05-06-2023 10:26:48
05-06-2023 10:26:48
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23178). All of your documentation changes will be reflected on that endpoint.<|||||>cc @gante since Arthur is on holiday.<|||||>Hey @sjm1992st -- the issue in #23175 is unrelated to the tokenizer (see my comment there). As such, without further context, I won't accept this PR :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,177
closed
Can you write code that trains BERT with MLM and next sentence prediction at the same time?
### Feature request Or is there training code already in place? ### Motivation I've learned that training with both is better than training with MLM alone, by which I mean the generated vector features are better ### Your contribution not yet
05-06-2023 09:29:25
05-06-2023 09:29:25
Hi @xiao12mm, I believe the NSP loss isn't available for model training directly in Hugging Face. This [thread](https://discuss.huggingface.co/t/continual-pre-training-from-an-initial-checkpoint-with-mlm-and-nsp/6869) provides a script for NSP training. About this - "training with both is better than training with mlm alone" - there has been specific research done to verify the claim. One of the conclusions from the "RoBERTa: A Robustly Optimized BERT Pretraining Approach" paper is that "removing the NSP loss matches or slightly improves downstream task performance". <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
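For reference, the combined objective the request asks about is exposed through `BertForPreTraining`, which returns a single loss summing the MLM and NSP terms when both `labels` and `next_sentence_label` are passed. A minimal sketch (checkpoint and sentences are placeholders; a real setup would mask tokens with a data collator rather than reusing the raw ids as labels):

```python
import torch
from transformers import BertTokenizer, BertForPreTraining

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")

# Encode a sentence pair; next_sentence_label is 0 when B really follows A, 1 otherwise.
encoding = tokenizer("The cat sat on the mat.", "Then it fell asleep.", return_tensors="pt")
mlm_labels = encoding["input_ids"].clone()        # placeholder: mask a subset of tokens in practice
next_sentence_label = torch.tensor([0])

outputs = model(**encoding, labels=mlm_labels, next_sentence_label=next_sentence_label)
outputs.loss.backward()                           # loss = MLM loss + NSP loss
```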
transformers
23,176
closed
I want to use `from_pretrained` to read the '.safetensors' model file. What should I do?
### System Info - `transformers` version: 4.29.0.dev0 - Platform: Linux-6.2.0-20-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction My code: llama_config = AutoConfig.from_pretrained(llama_path + '/config.json') llama = AutoModelForCausalLM.from_pretrained(model_bytes, config = llama_config) llama_path include: model.safetensors, config.json and other config files. ### Expected behavior I want to use 'from_ Pretrained' to read the '.safetensors' model file. What should I do?
05-06-2023 05:49:42
05-06-2023 05:49:42
`AutoModelForCausalLM.from_pretrained(llama_path)` is enough.<|||||>> `AutoModelForCausalLM.from_pretrained(llama_path)` is enough. I used your method and got an error: OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory pretrain_models/llama_7b.<|||||>Then your comment above was wrong: >llama_path include: >model.safetensors, config.json and other config files. If you have the `model.safetensors` file, `from_pretrained` will succeed. Unless you don't have `safetensors` installed, in which case you shouldn't have been able to get that file from the conversion script in the first place, but it's easily fixable with `pip install safetensors`.<|||||>> Then your comment above was wrong: > > > llama_path include: > > model.safetensors, config.json and other config files. > > If you have the `model.safetensors` file, `from_pretrained` will succeed. Unless you don't have `safetensors` installed, in which case you shouldn't have been able to get that file from the conversion script, but it's easily fixable with `pip install safetensors`. I installed safetensors and used the following code: AutoModelForCausalLM.from_pretrained(llama_path) and then I got a new error: AttributeError: 'NoneType' object has no attribute 'get'. Is it because of my Transformers version? I am installing with pip install git+https://github.com/huggingface/transformers rather than 'pip install transformers' directly, because when I 'pip install transformers' directly, I have problems with from transformers import LlamaForCausalLM, LlamaTokenizer. <|||||>> I'm sure the path contains the model.safetensors file <|||||>Same issue here. I want to use the model "wojtab/llava-7b-v0-4bit-128g" using from_pretrained()<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Got a solution! Check out AutoGPTQ.<|||||>@TheFaheem Sorry, may I know how to solve this problem?<|||||>> @TheFaheem Sorry, may I know how to solve this problem? Check it out here => https://github.com/PanQiWei/AutoGPTQ<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
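For reference, a minimal sketch of the loading path discussed above, assuming a local folder that contains `config.json` and `model.safetensors` (the directory name is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

llama_path = "./models/llama_7b"  # placeholder local directory

# Requires `pip install safetensors`; from_pretrained picks up model.safetensors
# automatically when no pytorch_model.bin is present in the folder.
model = AutoModelForCausalLM.from_pretrained(llama_path)
tokenizer = AutoTokenizer.from_pretrained(llama_path)
```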
transformers
23,175
closed
When using model.generate, it does not stop at eos_token, but instead continues until the maximum length.
### System Info ubuntu20.04 transformers==4.29.0 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Using my own model, inference is run with ```python output_texts = model.generate( input_ids=input_ids, attention_mask=attention_mask, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, max_new_tokens=500, do_sample=False, top_k=30, top_p=0.85, temperature=0.3, repetition_penalty=1.2) ``` The output looks like: ``` Therefore, additional optimization is needed.</s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s> ``` ### Expected behavior Generation should end at the eos token, but it does not.
05-05-2023 23:47:45
05-05-2023 23:47:45
Hey @bestpredicts 👋 I agree that `</s>` should not be generated. However, I need a stand-alone short reproducer to understand what's truly going on. We have many Llama checkpoints that are not compatible with `transformers`.<|||||>@bestpredicts I have the same question, did you solve it?<|||||>> Hey @bestpredicts 👋 > > I agree that `</s>` should not be generated. However, I need a stand-alone short reproducer to understand what's truly going on. We have many Llama checkpoints that are not compatible with `transformers`. Me too, same question.<|||||>Your problem can come from the `model.eos_token_id` not being the correct one (wild guess), but we need a minimal reproducer to help you. <|||||>Edit: I have deleted this comment, because I think the issue I was seeing was just that the EOS was output as very unlikely by the models for some reason. (Check the history for the code snippet.)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
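As an illustration of the settings under discussion, a hedged sketch (the checkpoint path is a placeholder) of where `eos_token_id` enters `generate`, and of hiding any trailing special tokens when decoding:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "path/to/llama-checkpoint"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=100,
    eos_token_id=tokenizer.eos_token_id,  # generation should stop once this id is produced
    pad_token_id=tokenizer.eos_token_id,
)
# skip_special_tokens drops any trailing </s> tokens from the decoded text
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```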
transformers
23,174
closed
MPT
### Model description New LLM from MosaicML, 7B parameters. See: https://www.mosaicml.com/blog/mpt-7b ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://huggingface.co/mosaicml/mpt-7b/tree/main The model is already implemented in a HF/T compatible way, but has multiple source files for its model implementation, some components that aren't used in this current model, and most importantly, dependencies that aren't normally included in HF (e.g. einops, flash_attn). Do the HF folks have a view on whether we would want to add those dependencies, or implement a vanilla version based only on existing requirements (in which case, it would arguably be easier to modify an existing LM implementation instead, rather than use MosaicML's implementation)?
05-05-2023 19:36:29
05-05-2023 19:36:29
Can this model be implemented without the need for executing custom python files? I believe it's a huge security risk to allow these models to run their own python code. Yes, it is up to the users to verify its safety and take responsibility for their own actions; yet in the real world, most people won't scan through the python files that come with the models, and the next majority might not understand code well enough to verify its safety. <|||||>Disregarding HF, does anyone have inference code to run MPT with FasterTransformer?<|||||>We are currently talking about this integration with MosaicML, hope to have updates on this soon! <|||||>> Disregarding HF, does anyone have inference code to run MPT with FasterTransformer? Here is a script for converting a HuggingFace MPT checkpoint to FasterTransformer: https://github.com/mosaicml/llm-foundry/blob/main/scripts/inference/convert_hf_mpt_to_ft.py<|||||>> We are currently talking about this integration with MosaicML, hope to have updates on this soon! @ArthurZucker Curious to know how the talks with MosaicML are going. :-)<|||||>Probably need to update the ticket description citing MPT-30B as well :-)<|||||>> We are currently talking about this integration with MosaicML, hope to have updates on this soon! @ArthurZucker Any updates on this issue?<|||||>Yes! We didn't receive a proper answer, so I'll be taking this over! Will open a PR by tomorrow! 😉
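Until native support lands, the checkpoint is typically loaded through the remote-code path; a sketch, assuming one accepts the security trade-off raised in the first comment:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code executes the modeling files shipped in the model repo,
# which is exactly the risk discussed above.
model = AutoModelForCausalLM.from_pretrained("mosaicml/mpt-7b", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("mosaicml/mpt-7b")
```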
transformers
23,173
closed
Add FlaxWhisperForAudioClassification model
Fixes #21779
05-05-2023 15:58:48
05-05-2023 15:58:48
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @sanchit-gandhi <|||||>The test failures are appearing on this one. Let's fix them and re-merge!<|||||>We need to make two changes following the updates in #22954! First, we need to assign the attribute `gradient_checkpointing` to the class `FlaxWhisperForAudioClassificationModule`, similar to what we do for `FlaxWhisperForConditionalGeneration`: https://github.com/huggingface/transformers/blob/a5741d7cb59f8a81d1f5fc7a6b106056d34f9969/src/transformers/models/whisper/modeling_flax_whisper.py#L1176 We then need to forward `self.gradient_checkpointing` to the encoder: ```diff - self.encoder = FlaxWhisperEncoder(config=self.config, dtype=self.dtype) + self.encoder = FlaxWhisperEncoder(config=self.config, dtype=self.dtype, gradient_checkpointing=self.gradient_checkpointing) ``` This will facilitate gradient checkpointing for the module!<|||||>@sgugger @sanchit-gandhi Done, all tests pass !
transformers
23,172
closed
Change summarization model
Change the summarization model to a better and smaller one. `philschmid/flan-t5-base-samsum` gets +6 ROUGE points on the SAMSum dataset.
05-05-2023 14:56:36
05-05-2023 14:56:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,171
closed
Support `logging_ratio`, `save_ratio`, and `eval_ratio` (like for `warmup_ratio`)
### Feature request I would love if `TrainingArguments` and the Huggingface `Trainer` would support `logging_ratio`, `save_ratio`, and `eval_ratio` arguments (complementing `logging_steps`, `save_steps`, and `eval_steps`). If the `*_ratio` argument is set to e.g. `0.1`, logging/saving/eval would be done every `0.1 * total_training_steps`. This is already done for `warmup_ratio` and `warmup_steps`. ### Motivation When dealing with many different tasks and datasets, it can be frustrating to have to calculate different appropriate `logging_steps` etc. for each individual dataset. This proposal would enable a unified, simple and concise way to solve this problem. ### Your contribution I realize this might not be trivial to fully integrate, but hopefully, we can take `warmup_steps` and `warmup_ratio` as a reference. Depending on how deep the required changes are, I can also submit a PR (with some pointers on what to look out for).
05-05-2023 14:50:01
05-05-2023 14:50:01
We already have 96 training arguments though, and that would make three more for all users to learn :-/<|||||>Another option would be to allow `float` inputs for `logging_steps`, `save_steps`, `eval_steps` and interpret all inputs `<1.0` as a ratio (and throw an error if inputs `>1.0` are not integers). But yes, there is a tradeoff with too much complexity.<|||||>I would prefer that solution actually, even if the naming is not perfect.<|||||>Going over the code a bit, it seems like we would have to wait until after this if statement in `_inner_training_loop` to set the correct values based on the `max_steps` that gets calculated there, plus some additional guards in the `__post_init__` of the `TrainingArguments`. https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/trainer.py#L1683-L1711 At first glance, it looks like the setting of when things get logged/saved/evaluated in `DefaultFlowCallback` should work out-of-the-box with this change. I'm willing to contribute the changes once I find some time; does the general plan sound reasonable?<|||||>Yes it does. I'll be looking forward to your PR!
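A rough sketch of the interpretation agreed on above, with floats below 1.0 treated as a fraction of the total number of steps; the helper name is illustrative, not the actual Trainer internals:

```python
import math

def resolve_interval(value: float, max_steps: int) -> int:
    """Turn a logging/save/eval setting into an absolute step count."""
    if 0 < value < 1:
        return max(1, math.ceil(value * max_steps))   # interpret as a ratio of max_steps
    if value >= 1 and float(value).is_integer():
        return int(value)                             # already an absolute step count
    raise ValueError(f"expected an int >= 1 or a float in (0, 1), got {value}")

# e.g. evaluating every 10% of a 1,200-step run: resolve_interval(0.1, 1200) -> 120
```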
transformers
23,170
closed
Gradient Checkpointing Fails with frozen parameters
### System Info The PEFT methods freeze the bulk of the transformer, apart from an external module. When I enable gradient checkpointing and train with these models, or even if I simply freeze an embedding layer of a normal model, training breaks. So this problem is not specific to PEFT: gradient_checkpointing + frozen first parameter = Error. But if I do ``` for n, param in model.named_parameters(): param.requires_grad = True break ``` it trains successfully. So it seems like there is a check on whether the first parameter has a gradient. Ideally, I would not have to set the first parameter (embedding) to True, as I want the whole model, including embeddings, frozen. ```
warnings.warn("None of the inputs have requires_grad=True. Gradients will be None")
/admin/miniconda3/envs/peft/lib/python3.10/site-packages/torch/utils/checkpoint.py:31: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
  warnings.warn("None of the inputs have requires_grad=True. Gradients will be None")
Traceback (most recent call last):
  /admin/peft/model/model_training/trainer.py:480 in <module>
  /admin/peft/model/model_training/trainer.py:474 in main
    ❱ 474 trainer.train(resume_from_checkpoint=training_conf.resume_from_checkpoint)
  /admin//miniconda3/envs/peft/lib/python3.10/site-packages/transformers/trainer.py:1639 in train
    ❱ 1639 return inner_training_loop(
  /admin//miniconda3/envs/peft/lib/python3.10/site-packages/transformers/trainer.py:1906 in _inner_training_loop
    ❱ 1906 tr_loss_step = self.training_step(model, inputs)
  /admin//miniconda3/envs/peft/lib/python3.10/site-packages/transformers/trainer.py:2668 in training_step
    ❱ 2668 loss = self.deepspeed.backward(loss)
  /admin//miniconda3/envs/peft/lib/python3.10/site-packages/deepspeed/utils/nvtx.py:11 in wrapped_fn
    ❱ 11 ret_val = func(*args, **kwargs)
  /admin//miniconda3/envs/peft/lib/python3.10/site-packages/deepspeed/runtime/engine.py:1974 in backward
    ❱ 1974 self.optimizer.backward(loss, retain_graph=retain_graph)
  /admin//miniconda3/envs/peft/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py:2028 in backward
    ❱ 2028 self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
  /admin//miniconda3/envs/peft/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py:54 in backward
    ❱ 54 scaled_loss.backward(retain_graph=retain_graph)
  /admin//miniconda3/envs/peft/lib/python3.10/site-packages/torch/_tensor.py:488 in backward
    ❱ 488 torch.autograd.backward(
  /admin//miniconda3/envs/peft/lib/python3.10/site-packages/torch/autograd/__init__.py:197 in backward
    ❱ 197 Variable._execution_engine.run_backward(
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below) ### Reproduction from transformers import AutoModel from peft import LoraConfig, get_peft_model model = AutoModel.from_pretrained('decapoda-research/llama-7b-hf') config = LoraConfig( r=16, lora_alpha=32, target_modules=["q_proj", "k_proj", "v_proj", "o_proj"], lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", ) model = get_peft_model(model, config) and then proceed to train with gradient_checkpointing enabled. ### Expected behavior Gradient checkpointing shouldn't affect whether a subset of parameters is frozen. As PEFT models are increasingly popular, as is gradient_checkpointing, it makes sense to get to the bottom of this bug.
05-05-2023 13:25:37
05-05-2023 13:25:37
cc @younesbelkada and @pacman100 <|||||>hi @jamesharrisivi For that you need to make sure that the input being passed to the peft model has `requires_grad` set to `True` This is a duplicate of https://discuss.huggingface.co/t/peft-lora-gpt-neox-backward-pass-failing/35641 Can you try to add: ```python if hasattr(model, "enable_input_require_grads"): model.enable_input_require_grads() else: def make_inputs_require_grad(module, input, output): output.requires_grad_(True) model.get_input_embeddings().register_forward_hook(make_inputs_require_grad) ``` somewhere in your training script, before the call to `get_peft_model` ? <|||||>I agree though we can do it directly when creating the peft model, I propose to fix it in https://github.com/huggingface/peft/pull/404<|||||>@younesbelkada Is this fix also intended to be compatible with Deepspeed?<|||||>I think it would work with DS out of the box but not sure, did you tried already on your end @bradfox2 ?<|||||>> I think it would work with DS out of the box but not sure, did you tried already on your end @bradfox2 ? From what I saw, get_input_embeddings() wasn't defined on the DS model object.
transformers
23,169
closed
Can't get DeepSpeed param
### System Info latest version as of 2023.5.5 ### Who can help? @stas00 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Using Accelerate with the DeepSpeed plugin and the transformers Trainer class. I get config.yaml from `accelerate config` as follows: ```yaml compute_environment: LOCAL_MACHINE deepspeed_config: deepspeed_config_file: /xxx/dp_zero2.json zero3_init_flag: false distributed_type: DEEPSPEED downcast_bf16: 'no' machine_rank: 0 main_training_function: main num_machines: 1 num_processes: 4 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false ``` `/xxx/dp_zero2.json` is the DeepSpeed config; its contents don't matter here. However, I get deepspeed=None from TrainingArguments when running `accelerate launch xxx.py`, which means the deepspeed param is not picked up. ### Expected behavior Expect the `Trainer` class to read the deepspeed param from the yaml file
05-05-2023 13:12:49
05-05-2023 13:12:49
This is a wrong repo. For Accelerate please file Issues under https://github.com/huggingface/accelerate/issues
transformers
23,168
closed
shift torch dynamo handling to accelerate
### What does this PR do? 1. Shifts the torch dynamo handling to accelerate 2. Should be merged after #23158 3. No user-facing change. Now, users can use `accelerate launch` for torch dynamo, e.g., ``` accelerate launch --dynamo_backend=inductor ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir ~/temp/$TASK_NAME/ --fp16 --overwrite_output_dir --pad_to_max_length --dataloader_drop_last ``` Current usage like below is unimpacted: ``` python ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir ~/temp/$TASK_NAME/ --fp16 --overwrite_output_dir --torch_compile --pad_to_max_length --dataloader_drop_last ```
05-05-2023 12:11:19
05-05-2023 12:11:19
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,167
closed
fixed whisper positional encoding
The Whisper positional encoding has incorrect behavior when passing inputs_embeds: - When we pass `input_ids` (batch_size x seq_len), it takes dimension -1, which is correct. - When we pass `inputs_embeds` (batch_size x seq_len x embedding_dim), this doesn't work: taking dimension -1 gives the embedding dimension instead of the sequence length. My fix is simply to take dimension 1, which is always correct.
05-05-2023 11:51:42
05-05-2023 11:51:42
_The documentation is not available anymore as the PR was closed or merged._<|||||>@clefourrier @gante<|||||>Looks like, there are no problems with tests. ![image](https://user-images.githubusercontent.com/43551010/236498753-3a084a00-873a-4c91-b9af-caf5a12481e2.png)
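The shape argument behind the fix can be illustrated with dummy tensors: for `input_ids` of shape `(batch, seq_len)` the last dimension is the sequence length, but for `inputs_embeds` of shape `(batch, seq_len, d_model)` the last dimension is the embedding size, so dimension 1 is the safe choice.

```python
import torch

batch, seq_len, d_model = 2, 10, 512
input_ids = torch.zeros(batch, seq_len, dtype=torch.long)
inputs_embeds = torch.zeros(batch, seq_len, d_model)

print(input_ids.shape[-1])      # 10  -> correct sequence length
print(inputs_embeds.shape[-1])  # 512 -> embedding dim, wrong for positional indices
print(inputs_embeds.shape[1])   # 10  -> what the fix selects
```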
transformers
23,166
closed
๐ŸŒ [i18n-KO] Translated `troubleshooting.mdx` to Korean
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" --> # What does this PR do? Translated the `troubleshooting.mdx` file of the documentation to Korean. Thank you in advance for your review 😄 Part of https://github.com/huggingface/transformers/issues/20179 <!-- A record is kept on the main issue! Please remove this when practicing with the PseudoLab repo! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations - [x] Grammar check - [x] Review or add new terms to the glossary - [x] Check inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (verified correct rendering in the live preview) ## Who can review? (Initial) <!-- 1. Only reveal the comment below, asking the PseudoLab team members for a review, after all of the checks above are complete! --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. Only reveal the comment below, asking the Hugging Face staff for a review, after the review with the PseudoLab team members is finished! --> May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
05-05-2023 11:32:31
05-05-2023 11:32:31
_The documentation is not available anymore as the PR was closed or merged._<|||||>May you please review this PR? 😃 @sgugger, @ArthurZucker, @eunseojo<|||||>@0525hhgus you need to put it out of draft mode if it's ready for review.<|||||>> @0525hhgus you need to put it out of draft mode if it's ready for review. I changed it to ready-for-review status! Thank you for your review.
transformers
23,165
closed
I got a Trainer error: Attempting to unscale FP16 gradients
### System Info - `transformers` version: 4.28.1 - Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.27 - Python version: 3.9.16 - Huggingface_hub version: 0.13.4 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> - device : Tesla T4*4 - CUDA-11.6 ### Who can help? @sgugger Now, when I add fp16=True, i get the error: ValueError: Attempting to unscale FP16 gradients. when running trainer.train() ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction from transformers import LlamaTokenizer, LlamaForCausalLM,AutoTokenizer,AutoModelForSeq2SeqLM, LlamaConfig from peft import prepare_model_for_int8_training, LoraConfig, get_peft_model, get_peft_model_state_dict merge_tokenizer = LlamaTokenizer.from_pretrained('/home/han/new_store/Llama/merged_tokenizer_hf',padding=True, truncation=True) print(len(merge_tokenizer)) n = merge_tokenizer.add_special_tokens({'pad_token': '[PAD]'}) len(merge_tokenizer) from datasets import load_dataset dataset = load_dataset("json", data_files="./data/alpaca_data_zh_51k.json") dataset = dataset.filter(lambda x: x["output"]!=None) dataset = dataset.filter(lambda x: x["input"] !=None) def preprocess_function(sample): l = "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.</s>Human:" for i in range(len(sample['instruction'])): if sample['input'][i]!='': sample['instruction'][i]=l+sample['instruction'][i]+'[PAD]'+sample['input'][i] # print(sample['input'][i]) output = ['Assistant:'+i for i in sample['output']] model_inputs = merge_tokenizer(sample['instruction'], truncation=True,padding=True,max_length=200) labels = merge_tokenizer(output, truncation=True, padding=True,max_length=200) model_inputs["labels"] = labels["input_ids"] # print(model_inputs) return model_inputs input_data = dataset['train'].map(preprocess_function,batched=True,remove_columns=['instruction','input','output']) import torch model = LlamaForCausalLM.from_pretrained('decapoda-research/llama-7b-hf',device_map='auto',cache_dir='./cache/',torch_dtype=torch.float16) model.resize_token_embeddings(len(merge_tokenizer)) from transformers import TrainingArguments, Trainer, DataCollatorForLanguageModeling trainArgs = TrainingArguments( output_dir= '../ckps_emb', do_train=True, # per_device_train_batch_size=4, auto_find_batch_size=True, fp16=True, gradient_accumulation_steps=4, evaluation_strategy="steps", save_strategy="steps", save_steps=1000, eval_steps=1000, logging_steps=20, warmup_steps=100, num_train_epochs=2, learning_rate=5e-4, load_best_model_at_end=True, report_to="wandb" ) for name, param in model.named_parameters(): param.requires_grad_(False) if name =='model.embed_tokens.weight': param.requires_grad_(True) print(name, "requires_grad:", param.requires_grad) trainer = Trainer( model=model, args=trainArgs, train_dataset=input_data, eval_dataset=input_data, data_collator=DataCollatorForLanguageModeling(merge_tokenizer, mlm=False), ) model.config.use_cache = True trainer.train() model.save_pretrained('../ckps/demo_llama71_full') ### 
Expected behavior I expect it not to give the error ValueError: Attempting to unscale FP16 gradients.
05-05-2023 11:03:11
05-05-2023 11:03:11
You can't train a model loaded in FP16: ``` model = LlamaForCausalLM.from_pretrained(xxx, torch_dtype=torch.float16) ``` is the culprit here. I don't know how PEFT initializes the layer to train afterwards, but some of them must be in the same dtype cc @younesbelkada <|||||>I second what @sgugger said, however I see that you're importing peft but doing nothing with it, also make sure to use the latest `peft` release as it contains some bug fixes. ```bash pip install --upgrade peft ``` In my opinion, to use PEFT at its best, you should load your model in 8bit as follows: ```python from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training from transformers import LlamaTokenizer, LlamaForCausalLM path_to_llama = xxx model = LlamaForCausalLM.from_pretrained( path_to_llama, device_map="auto", load_in_8bit=True ) tokenizer = LlamaTokenizer.from_pretrained(path_to_llama) config = LoraConfig( r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", ) model = prepare_model_for_int8_training(model) model = get_peft_model(model, config) ... # get your dataset etc here trainer = Trainer( model=model, ... ) ``` Also make sure to use `transformers` latest release as well: ```bash pip install --upgrade transformers ```<|||||>For reference, I would have a look at how the PEFT slow tests are designed, check here: https://github.com/huggingface/peft/blob/b1059b73aab9043b118ff19b0cf96263ea86248a/tests/test_gpu_examples.py#L114 <|||||>Thank you for your reply, when I update the latest PEFT and transformers, All problems are resolved. <|||||>> You can't train a model loaded in FP16: > > ``` > model = LlamaForCausalLM.from_pretrained(xxx, torch_dtype=torch.float16) > ``` > > is the culprit here. I don't know how PEFT initializes the layer to train afterwards, but some of them must be in the same dtype cc @younesbelkada Thanks for the answer, it saved me some time to test if it is possible to fine tune a model loaded in FP16. But what about models loaded in 8bit? Can I just fine tune the model with an 8-bit optimiser without using any PEFT techniques such as LoRA? If I can't tune a model loaded in 8bit, I wonder why we are allowed to use LoRA to fine tune the model?<|||||>> I second what @sgugger said, however I see that you're importing peft but doing nothing with it, also make sure to use the latest `peft` release as it contains some bug fixes. > > ```shell > pip install --upgrade peft > ``` > > In my opinion, to use PEFT at its best, you should load your model in 8bit as follows: > > ```python > from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training > from transformers import LlamaTokenizer, LlamaForCausalLM > > path_to_llama = xxx > model = LlamaForCausalLM.from_pretrained( > path_to_llama, > device_map="auto", > load_in_8bit=True > ) > > tokenizer = LlamaTokenizer.from_pretrained(path_to_llama) > > config = LoraConfig( > r=16, > lora_alpha=32, > target_modules=["q_proj", "v_proj"], > lora_dropout=0.05, > bias="none", > task_type="CAUSAL_LM", > ) > > model = prepare_model_for_int8_training(model) > model = get_peft_model(model, config) > > ... # get your dataset etc here > trainer = Trainer( > model=model, > ... > ) > ``` > > Also make sure to use `transformers` latest release as well: > > ```shell > pip install --upgrade transformers > ``` Hi Younes, thank you for your work on PEFT. Recently I read some papers measuring the performance difference between full fine-tuning and lora-based fine-tuning. 
There's actually a huge difference between the tuned models in terms of their benchmarks/metrics. Here're the links to the publications: https://arxiv.org/abs/2304.14454 ![image](https://user-images.githubusercontent.com/50556116/236809378-88376565-f41b-4a29-bb84-bd380dc44dfd.png) https://arxiv.org/abs/2304.08109 ![image](https://user-images.githubusercontent.com/50556116/236809531-29315632-33bd-43cc-8c01-c95cd5382d83.png) I am very grateful that we have these open-source fine-tuning techniques. But I am curious about your opinions on the performance trade-off between lora and full-tuing? Thanks for your concerns.<|||||>Hi @IwanVan Thanks for your reply and your interests, I will answer to your questions to the best of my knowlegde 1- Sadly it is not possible to do pure int8 training, (i.e. pass the full 8bit model to the optimizer state) as I believe this will result in a very unstable training as your weight matrix can be only represented in 8bit precision (256 possible values), so the model won't probably learn anything. Although it's not possible to train in pure fp16 (from my understanding), you can train your model in a precision called `bfloat16` (simply pass `torch_dtype=torch.bfloat16`), that has the same training dynamics as `float32`, and that is commonly used to train large scale models. We have made a detailed blogpost about that [here](https://huggingface.co/blog/hf-bitsandbytes-integration) that I invite your to have a look. 2- This seems to be a new paper so it's the first time I go through, and from my understanding it tries to fine-tune Llama to the medical paper domain. I agree the differences here sound quite large. Thinking about it loud, maybe the domain gap was too high for that model but I am not sure. Empirically it has been showed (from the original paper and from what I have seen so far) that you can get very comparable results (sometimes better results) than full finetuning when going for PEFT methods (and on all modalities, vision, text, RLHF, etc.), so I would say it would really depend on your usecase, dataset, etc.. Note that with PEFT you can fit into your devices the model + optimizer states of very large models! In [this blogpost](https://huggingface.co/blog/trl-peft) we show how to fit a 20B model into a 24GB GPU and train that model. This is totally not possible when going for full-finetuning. I would say this is the main (and big) advantage of PEFT methods. cc also @pacman100 that would probably have more insights here! Thanks!<|||||>> If I can't tune a model loaded in 8bit, I wonder why we are allowed to use LoRA to fine tune the model? Because in the case of tuning the LoRA layers, the base model will stay untouched, in 8bit, but the LoRA layers that we're going to train will be kept in full precision (float32)<|||||>> Hi @IwanVan Thanks for your reply and your interests, I will answer to your questions to the best of my knowlegde > > 1- Sadly it is not possible to do pure int8 training, (i.e. pass the full 8bit model to the optimizer state) as I believe this will result in a very unstable training as your weight matrix can be only represented in 8bit precision (256 possible values), so the model won't probably learn anything. Although it's not possible to train in pure fp16 (from my understanding), you can train your model in a precision called `bfloat16` (simply pass `torch_dtype=torch.bfloat16`), that has the same training dynamics as `float32`, and that is commonly used to train large scale models. 
We have made a detailed blogpost about that [here](https://huggingface.co/blog/hf-bitsandbytes-integration) that I invite your to have a look. > > 2- This seems to be a new paper so it's the first time I go through, and from my understanding it tries to fine-tune Llama to the medical paper domain. I agree the differences here sound quite large. Thinking about it loud, maybe the domain gap was too high for that model but I am not sure. Empirically it has been showed (from the original paper and from what I have seen so far) that you can get very comparable results (sometimes better results) than full finetuning when going for PEFT methods (and on all modalities, vision, text, RLHF, etc.), so I would say it would really depend on your usecase, dataset, etc.. Note that with PEFT you can fit into your devices the model + optimizer states of very large models! In [this blogpost](https://huggingface.co/blog/trl-peft) we show how to fit a 20B model into a 24GB GPU and train that model. This is totally not possible when going for full-finetuning. I would say this is the main (and big) advantage of PEFT methods. cc also @pacman100 that would probably have more insights here! > > Thanks! Hi @younesbelkada , thanks again for your quick response. 1. I actually have implemented a lot of your example codes from the [Peft lib](https://github.com/huggingface/peft/tree/main/examples) already. Also the `load_in_8bit` support backed by bnb is really impressive, and I've used it for zero-/ few-shot inference with LLM on a single 4090. For training, I have implemented almost every factors that were mention in [Efficient Training on a Single GPU](https://huggingface.co/docs/transformers/perf_train_gpu_one) by using the HF trainer. However, the largest model that I can tune in full precision is flan-t5-3B with very efficient setup and a new GPU-friendly optimizer called [Lion](https://github.com/lucidrains/lion-pytorch), but in [8bit version](https://github.com/TimDettmers/bitsandbytes/blob/main/bitsandbytes/optim/lion.py#L36). 2. Personally I am very excited about efficient fine-tuning techniques such as Lora, and I have carefully examined the code for AdaLoRA and a newer technique called [Ladder Side-Tuning (LST)](https://github.com/ylsung/Ladder-Side-Tuning), and I have [asked the authors](https://github.com/ylsung/Ladder-Side-Tuning/issues/6) if they intend to integrate this technique into the peft library. However, the reason I have been on the fence for the last two weeks with regard to peft techniques such as lora is that there is a growing number of papers appearing which fine-tune models using peft techniques based on some very new auto-regressive models. An increasing number of studies show that lora seems to have significant robustness problems for training of domain-specific ([medical](https://arxiv.org/abs/2304.14454)) and other language ([Chinese](https://arxiv.org/abs/2304.08109)) instructions. In these papers, lora lags behind full fine-tuning almost across the board in all metrics. Certainly I agree with your analysis of the causes above, and I am not in a hurry to draw conclusions about the results from these papers, as new technologies need to be viewed rationally. 
But I wonder if I could open a new issue in the peft repository to follow up on the current new research on peft/lora and see if I could find a reasonable explanation for the difference in performance across different fine-tuning techniques by documenting and analysing similar papers over time and get more developers involved in the discussion? Regards, Wang<|||||>@younesbelkada Hello, I load 7B llama for peft Lora finetune on a single v100 but got OOM, is that normal? am using default float(32). Does it have to be load in in8 for lora finetuning?<|||||>@younesbelkada after load in in8, I got error like this: ``` RuntimeError: expected scalar type Half but found Float ``` I swear i have no where set float16 in my code..... <|||||>hi @lucasjinreal Do you used `prepare_model_for_int8_training` on your script? You need to call that method before calling `get_peft_model`<|||||>@younesbelkada I noticed that LION merge into master, when will it update to pip btw? > Do you used prepare_model_for_int8_training on your script? Yes, I have used. after I set `fp16=False` it now works. But, do u know why 32GB unable to train with float32? Am have to using deepspeed to offload now, and int8 training seems slower than offload<|||||>hi @lucasjinreal It should be already in pip there should be an announcement soon about that :) > Yes, I have used. after I set fp16=False it now works. Awesome! > But, do u know why 32GB unable to train with float32? Am have to using deepspeed to offload now, and int8 training seems slower than offload Yes int8 can be slower in some cases, you might be interested in using FP4 quantization that should be much faster, it will be part of the announcement today as well. I will keep you posted Relevant links: https://github.com/artidoro/qlora & https://github.com/huggingface/transformers/pull/23479 <|||||>@younesbelkada Looking forward to it, do u mean fp4 training? Looks like only decent GPU like H100 support it. Will transformers new release incuding this as well?
transformers
23,164
closed
๐ŸŒ [i18n-KO] Translated object_detection.mdx to Korean
# What does this PR do? Translated the object_detection.mdx file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. [[lowercased-header]]) - [x] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? May you please review this PR? @sgugger, @ArthurZucker, @eunseojo <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-05-2023 10:53:43
05-05-2023 10:53:43
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,163
closed
Better check for packages availability
Following up huggingface/accelerate#1356 Refactor checks to avoid boilerplate and to ensure we are not picking up a folder that happens to be called as the package. I assume all _*_available are for caching purposes. But is that really needed?
05-05-2023 10:49:00
05-05-2023 10:49:00
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks a lot for the refactor, this is super nice! > > There are a couple of packages that do not properly implement metadata (I know for sure `opencv` does not since we are adding it in another PR), could you quickly check that all the packages for which you did this PR do implement the metadata? If they don't, we need to rely on the old way, which is fine as it should be an exceptional case. I have checked the packages and found issues only with sklearn (the import is `sklearn` but the package is `scikit-learn`) and decord. Could you please double check: smdistributed, tensorflow_text, torchdistx?<|||||>I have validated that `tensorflow_text` works. Installing `torchdistx` seems too painful for the time I have right now, but looking at the GitHub, everything should be fine. Checking internally for `smdistributed` since it only exists in SageMaker environments.<|||||>Not hearing anything back on `smdistributed`, so let's just merge this PR and see if anyone complains :sweat_smile: Can you just fix the conflict?<|||||>I don't think the failures are due to this PR. Also, main is failing.
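The check being discussed boils down to requiring distribution metadata in addition to an importable module, roughly the following pattern (a sketch, not the exact helper from the PR):

```python
import importlib.metadata
import importlib.util

def is_package_available(pkg_name: str) -> bool:
    # find_spec alone can be fooled by a local folder named like the package,
    # so additionally require that distribution metadata (a version) exists.
    if importlib.util.find_spec(pkg_name) is None:
        return False
    try:
        importlib.metadata.version(pkg_name)
        return True
    except importlib.metadata.PackageNotFoundError:
        return False

# Packages such as sklearn (imported as "sklearn", distributed as "scikit-learn")
# need special-casing, as noted above.
```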
transformers
23,162
closed
Unpin numba
# What does this PR do? Numba was pinned to <0.57.0 in #23118 - this is because it forced an update of the numpy package to >= 1.24. From numpy >= 1.24, converting a ragged list to a numpy array requires the user to **explicitly** set `dtype=object` (before this happened automatically, but threw a deprecation warning). This PR updates the feature extraction and tokenisation utils to explicitly specify `dtype=object` when converting ragged lists to numpy arrays.
05-05-2023 09:29:56
05-05-2023 09:29:56
_The documentation is not available anymore as the PR was closed or merged._<|||||>Failing test is unrelated ([tf compile test](https://circleci-tasks-prod.s3.us-east-1.amazonaws.com/forks/storage/artifacts/d5b57382-7f67-4274-9623-7f238ef4fb6f/457033993/0fb5a370-4bdb-43e1-a1a2-c242ca17c8d3/0/~/transformers/reports/tests_tf/failures_short.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQVFQINEONDE666BB%2F20230522%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230522T165404Z&X-Amz-Expires=60&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEJn%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMSJIMEYCIQCSbOdyE07pYZr8Irw7BBJgEnLQCNFnrj3zK2%2FcTGV9jwIhAM3cUGClvwCTfkqdiYJKR5orc0gzQPRB5KHXpXqX00XzKrQCCML%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEQAxoMMDQ1NDY2ODA2NTU2Igw2mZInyce2C0JoQbsqiALBk31dgOmIJwAV%2FwHQoioyIo8GaTDckrk0%2BW1lfxXFVCAB8YxcUGOvSBFIIijvkMpBa6jlob4IQ9dAZf%2FvMKH1aXfOEXU284URRx6VEYoapQELm8CUc35O3YEeF%2BOIyQojAI9e0oBTYlhqzn%2Fg2bzV3mvyzvKtHWN17wvOWHBAKoba%2Bt2FdxkJBKq%2BnmQHHyQYq7JxDJ1L0Jd6xKa6uT85hjmkPNFiZr8iAvicJtcSM4UVdY5o9yI2Wty8lc8IDykk9%2BS9HeJ2mONGAowSBJz33LKjZDnRe4oOWkKt%2F3kaRm%2BTVzchn4Hy6poGKG6wr%2FqAHv0Kyd%2FJ09FviDzMvppV4tFzoOXkgicwm7auowY6nAHbz06ploAc04Toucr8X%2BlicPoUNiKWQBpolxtbSGpfvxKsTlSge8HaMvGqx6EZSjDG0JlC3MCgKfLfg5WhwkX0MMBQnv4UhP65R%2B7aEUNv%2B9yZOC6NCvvu8Bv9Hj0ml4fRCcggNYtqwNQKZRmshd59IK0ZqBCdWIfbQ5x8uvmJubBnBR7kmfmexwxiUOZ%2BbVMZBqDpSuzXGhXlH%2B0%3D&X-Amz-SignedHeaders=host&x-id=GetObject&X-Amz-Signature=5c8e53587d9a3023543892b06988a69ef02ebd3d2ccff9bfdac9b8dbf7587786))
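For context, the NumPy change the pin worked around can be reproduced in a few lines: building an array from a ragged (unequal-length) list must now be explicit about `dtype=object`.

```python
import numpy as np

ragged = [[1, 2, 3], [4, 5]]

# NumPy >= 1.24 raises a ValueError for np.array(ragged);
# older versions only emitted a deprecation warning.

batch = np.array(ragged, dtype=object)  # explicit object dtype keeps the old behavior
```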
transformers
23,161
closed
mac m2 max data collator issue
### System Info - `transformers` version: 4.28.1 - Platform: macOS-13.2.1-arm64-arm-64bit - Python version: 3.9.6 - Huggingface_hub version: 0.13.4 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction data_collator = DataCollatorForSeq2Seq( tokenizer, model=model, label_pad_token_id=label_pad_token_id, pad_to_multiple_of=8, return_tensors='pt' ) ### Expected behavior I'd expect normal behaviour but get TypeError: can't convert mps:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
05-05-2023 09:01:24
05-05-2023 09:01:24
No one can help without a clear reproducer of the issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,160
closed
implement unlimiformer into transformers
### Feature request https://github.com/abertsch72/unlimiformer promises to support unlimited input length on any transformer based encoder/decoder model with sub-linear cost in time. ### Motivation Context lengths are fairly limited ### Your contribution Testing
05-05-2023 09:01:18
05-05-2023 09:01:18
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Bump > On 04.06.2023 at 17:01, github-actions[bot] ***@***.***> wrote: > > This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. > > Please note that issues that do not follow the contributing guidelines are likely to be ignored. > > — > Reply to this email directly, view it on GitHub, or unsubscribe. > You are receiving this because you authored the thread. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,159
closed
search buffers for dtype
# What does this PR do? This PR extends the logic of get_parameter_dtype to search buffers after parameters are searched. If a model is frozen such that all its parameters are turned into buffers, the current logic may not be able to find a dtype even if it tries to search module.\_\_dict\_\_. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-05-2023 08:02:50
05-05-2023 08:02:50
_The documentation is not available anymore as the PR was closed or merged._
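A minimal sketch of the fallback described in #23159 above, assuming a plain `nn.Module` whose floating-point state may live entirely in buffers; the helper name is illustrative and this is not the actual `get_parameter_dtype` implementation:
```python
import torch
import torch.nn as nn


def first_floating_dtype(module: nn.Module) -> torch.dtype:
    # Prefer parameters, as the existing logic does.
    for param in module.parameters():
        if param.is_floating_point():
            return param.dtype
    # Fall back to buffers, e.g. for models whose weights were frozen
    # and re-registered as buffers.
    for buf in module.buffers():
        if buf.is_floating_point():
            return buf.dtype
    # Nothing floating-point was found; default to the global default dtype.
    return torch.get_default_dtype()


class FrozenHead(nn.Module):
    def __init__(self):
        super().__init__()
        # All "weights" are buffers, so module.parameters() yields nothing.
        self.register_buffer("weight", torch.randn(4, 4, dtype=torch.float16))


print(first_floating_dtype(FrozenHead()))  # torch.float16
```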
transformers
23,158
closed
move fsdp handling to accelerate
### What does this PR do? 1. Moves PyTorch FSDP handling to Accelerate 2. Should be merged after #23151 3. No user-facing change. Now, users can use `accelerate launch` for fsdp in Trainer, e.g.: ``` accelerate launch --num_processes=2 --use_fsdp --mixed_precision=bf16 --fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP --fsdp_transformer_layer_cls_to_wrap="BertLayer" --fsdp_sharding_strategy=1 --fsdp_state_dict_type=FULL_STATE_DICT ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir ``` Continue to use torchrun with trainer args as usual. ``` torchrun --nnodes 1 --nproc-per-node 2 ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --fsdp "full_shard auto_wrap" --fsdp_transformer_layer_cls_to_wrap BertLayer --bf16 ```
05-05-2023 07:41:58
05-05-2023 07:41:58
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hello Sylvain, we can't do that as FSDP XLA integration uses it and that isn't supported yet in accelerate
transformers
23,207
closed
Better prompt error messages
I got an error when calling AutoTokenizer.from_pretrained: ``` huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './dist/models/vicuna-v1-7b'. Use `repo_type` argument if needed. ``` This was actually caused by the local path not existing, but the error message is confusing. Could a check be added so that, when the path does not exist, users are told to provide a valid local path instead of repeatedly being told to use a valid repo id?
05-05-2023 03:13:42
05-05-2023 03:13:42
Hi @lucasjinreal , I'm transferring this issue to `transformers` as this is not really related to `huggingface_hub` itself (hfh is the underlying library making calls to the HF Hub but is not responsible if a path is provided as repo_id when downloading a file). cc @sgugger I'm not an expert on how files are loaded in `transformers` but I think a "catch `HFValidationError`" statement in [this try/except](https://github.com/huggingface/transformers/blob/main/src/transformers/utils/hub.py#L423) (`from huggingface_hub.utils import HFValidationError`) would allow a better error message.<|||||>Mmm that's actually a bit tricky since this error can come from multiple causes.<|||||>Yeah, but I think the causes could be detected in priority order; some obvious patterns could be handled more gracefully.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
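A rough sketch of the kind of friendlier check suggested in the comments above; this is not the actual `transformers` hub-loading code, and the wrapper name and error messages are made up for illustration:
```python
import os

from huggingface_hub import hf_hub_download
from huggingface_hub.utils import HFValidationError


def load_config_file(path_or_repo_id: str, filename: str = "config.json") -> str:
    # A path-like string that doesn't exist deserves a clearer message than
    # "Repo id must be in the form 'repo_name' or 'namespace/repo_name'".
    if os.path.sep in path_or_repo_id and not os.path.isdir(path_or_repo_id):
        raise OSError(
            f"'{path_or_repo_id}' looks like a local path but does not exist. "
            "Double-check the directory, or pass a valid Hub repo id instead."
        )
    if os.path.isdir(path_or_repo_id):
        return os.path.join(path_or_repo_id, filename)
    try:
        return hf_hub_download(repo_id=path_or_repo_id, filename=filename)
    except HFValidationError as err:
        raise OSError(
            f"'{path_or_repo_id}' is neither an existing local directory nor a valid Hub repo id."
        ) from err
```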
transformers
23,157
closed
Fixing class embedding selection in owl-vit
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # For OWL-ViT image-guided object detection, there is a mistake in selecting the best embedding (the most distinct one with high IoU). Specifically; selected_inds is a [num_inds, 1] dimensional tensor, where the indexes indicate which queries had a high IoU with target object bbox. But, as selected_inds[0] was selected, only the first of all the possible queries is selected. Specifically, instead of selected_embeddings being a [num_inds, D] dimensional tensor, it is a [1, D] dimensional tensor. This led ultimately to the first query always being selected, not the most unique one as required. An error is not raised. To see this is the case, just add a print statement of 'torch.argmin(mean_sim)' here: https://github.com/huggingface/transformers/blob/01734dba842c29408c96caa5c345c9e415c7569b/src/transformers/models/owlvit/modeling_owlvit.py#L1505 & you will see it is always 0. - `transformers` version: 4.28.1 - Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.31 - Python version: 3.11.3 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @sgugger @NielsRogge @alaradirik @amyeroberts
05-05-2023 00:11:44
05-05-2023 00:11:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>@orrzohar thank you for opening the PR! I'll double-check the original code and the forward pass shortly.<|||||>After this fix I have a problem with predictions. I used the [Colab demo](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb) for OWL-ViT, and on the example photo with cats I get only one prediction. It looks suspicious and not the same as the original repo. <img width="945" alt="Screenshot 2023-05-09 at 16 13 13" src="https://github.com/huggingface/transformers/assets/55710648/02503c77-acb1-4d97-95a2-3d2f0ff6599f"> @alaradirik <|||||>> Hi @MaslikovEgor, Hub demos are deployed once and not updated unless triggered. I'm rebooting the demo to reflect the changes. Nice to meet you! Yeah, I understand this, but in the Google Colab demo we install fresh transformers from source: `!pip install git+https://github.com/huggingface/transformers.git` So the problem is with these changes. @alaradirik <|||||>I found when evaluating COCO that AP@0.5 increases from 6 to 37. This is still below the expected 44+, but closer to the reported/expected performance. I am still trying to figure out why.
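A standalone sketch of the selection logic the PR describes (keep every high-IoU query embedding, then pick the most distinct one with `torch.argmin(mean_sim)`); shapes and names mirror the description above, but this is not the actual `modeling_owlvit.py` code:
```python
import torch

num_queries, dim = 6, 8
class_embeds = torch.randn(num_queries, dim)                 # one embedding per query
iou_scores = torch.tensor([0.10, 0.80, 0.75, 0.20, 0.90, 0.05])

# Indices of every query whose predicted box overlaps the target box well.
selected_inds = (iou_scores > 0.7).nonzero()                 # shape [num_inds, 1]

# Keep *all* selected embeddings ([num_inds, D]), not just selected_inds[0].
selected_embeddings = class_embeds[selected_inds.squeeze(-1)]

# The most "distinct" query is the one least similar to the mean embedding.
mean_embedding = class_embeds.mean(dim=0)
mean_sim = selected_embeddings @ mean_embedding              # shape [num_inds]
best_embedding = selected_embeddings[torch.argmin(mean_sim)]
print(best_embedding.shape)                                  # torch.Size([8])
```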
transformers
23,156
closed
Add `no_trainer` scripts to pre-train Vision Transformers
# What does this PR do? Add scripts to pre-train Transformer-based Vision models without using the Trainer class. Fixes #20053 Fixes #20412 This PR completes the stalled PR #20412. ## Who can review? @amyeroberts @NielsRogge @sgugger
05-04-2023 18:21:05
05-04-2023 18:21:05
_The documentation is not available anymore as the PR was closed or merged._<|||||>@NielsRogge As per https://github.com/huggingface/transformers/pull/20412#issuecomment-1370792519, I have made a comparison notebook running both the trainer.py and no_trainer.py scripts on a small dataset. It can be viewed [here](https://colab.research.google.com/drive/1er4n0_AoQZU-OtXZPla3i-rJ38apA_zh?usp=sharing). Both the scripts progress similarly.<|||||>Thanks a lot @awinml! I'll assign core maintainers for a final review.
transformers
23,155
closed
TF port of Convnextv2
# What does this PR do? TF port of convnextv2 @amyeroberts
05-04-2023 18:15:48
05-04-2023 18:15:48
While converting pt weights to TensorFlow I am getting this error: how to solve this? ``` All PyTorch model weights were used when initializing TFConvNextV2ForImageClassification. All the weights of TFConvNextV2ForImageClassification were initialized from the PyTorch model. If your task is similar to the task the model of the checkpoint was trained on, you can already use TFConvNextV2ForImageClassification for predictions without further training. Traceback (most recent call last): File "/usr/local/bin/transformers-cli", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.10/dist-packages/transformers/commands/transformers_cli.py", line 55, in main service.run() File "/usr/local/lib/python3.10/dist-packages/transformers/commands/pt_to_tf.py", line 344, in run raise ValueError( ValueError: The cross-loaded TensorFlow model has different outputs, something went wrong! List of maximum output differences above the threshold (5e-05): logits: 3.871e+00 List of maximum hidden layer differences above the threshold (5e-05): hidden_states[1]: 3.463e-01 hidden_states[2]: 1.682e+00 hidden_states[3]: 2.259e+01 hidden_states[4]: 6.839e-01 ``` Code used: ``` !transformers-cli pt-to-tf --model-name facebook/convnextv2-nano-1k-224 --no-pr --local-dir /content/convnextv2-nano-1k-224 ```<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
23,154
closed
Revert "Add FlaxWhisperForAudioClassification model"
Reverts huggingface/transformers#22883
05-04-2023 17:46:57
05-04-2023 17:46:57
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23154). All of your documentation changes will be reflected on that endpoint.
transformers
23,153
closed
[`Blip`] Remove redundant shift right
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/23000 In fact `_shift_right` does not need to be called inside `BlipForQuestionAnswering`, as the right-shifting of the tokens is already done inside the text decoder, as mentioned by the user. With the extra shift, that class would be trained to perform next-next-token prediction instead of next-token prediction. The fix is to simply remove that shift method and the call to it. cc @sgugger
05-04-2023 16:50:10
05-04-2023 16:50:10
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23153). All of your documentation changes will be reflected on that endpoint.
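A toy illustration of why the extra shift is harmful, assuming a decoder that already right-shifts its inputs internally as described above; the `shift_right` helper here is illustrative rather than BLIP's actual method:
```python
import torch


def shift_right(input_ids: torch.Tensor, decoder_start_token_id: int = 0) -> torch.Tensor:
    shifted = input_ids.new_zeros(input_ids.shape)
    shifted[:, 1:] = input_ids[:, :-1]
    shifted[:, 0] = decoder_start_token_id
    return shifted


labels = torch.tensor([[11, 12, 13, 14]])

# Decoder shifts once internally: position t is trained to predict labels[t].
decoder_inputs = shift_right(labels)          # [[ 0, 11, 12, 13]]

# Shifting *again* before the decoder means position t now sees labels[t-2],
# so the model effectively learns next-next-token prediction.
double_shifted = shift_right(decoder_inputs)  # [[ 0,  0, 11, 12]]
print(decoder_inputs.tolist(), double_shifted.tolist())
```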
transformers
23,152
closed
resume checkpoint and continue training using deepspeed integration while changing the number of gpus
### System Info transformers version: 4.28.1 torch version: 1.13+cu116 Can the Trainer support resuming from a checkpoint while using a different number of GPUs? I saw that optimizer states are saved per rank when saving checkpoints, so how can I resume successfully while changing the number of GPUs? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction trainer.train(resume_from_checkpoint=checkpoint) ### Expected behavior resuming successfully
05-04-2023 13:30:22
05-04-2023 13:30:22
transformers
23,151
closed
accelerate DDP integrate
### What does this PR do? 1. Move DDP preparation to Accelerate. 2. This PR should be merged after #23148 3. No user-facing change. Now, users can use `accelerate launch` for DDP and MP, e.g., ``` accelerate launch --num_processes 2 --multi_gpu --mixed_precision "bf16" run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir ``` The previous way of using torchrun works as usual: ``` torchrun --nnodes 1 --nproc-per-node 2 run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --bf16 ``` Empirical nuances that I noticed: 1. As DDP uses Accelerate, the LR scheduler is stepped `num_processes` times per step. Previously, it was only stepped once per step. Because of this, the LR decreases more rapidly when using Accelerate's integration. In the above example, I had to increase the LR from 2e-5 to 5e-5 to account for this behaviour in order to maintain the performance.
05-04-2023 12:53:17
05-04-2023 12:53:17
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,150
closed
After completion of Trainer.hyperparameter_search() attribute trainer.state.best_model_checkpoint references the last trained model instead of the best one
### System Info - `transformers` version: 4.28.1 - Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35 - Python version: 3.11.3 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction https://colab.research.google.com/drive/1Ht14ntTQy96-_zO-iVlwvAgkyY8t6vKc?usp=sharing Call `Trainer.hyperparameter_search()`; when it completes, attribute `Trainer.state.best_model_checkpoint` and other `Trainer.state` attributes reference the last trained model, in the sequence of models trained by `Trainer.hyperparameter_search()`. Note: to speed-up reproduction of the issue, I have limited the training dataset size in the provided code, line #49; that's why the evaluation metrics at the end of the hyperparameters search are poor. ### Expected behavior After `Trainer.hyperparameter_search()` completes, attribute `Trainer.state.best_model_checkpoint` should contain the filename of the checkpoint with the **best** model among all the models trained during hyperparameters search, not the **last** model; that is, the model trained during the run indicated in the `BestRun` instance returned by `hyperparameter_search()` Likewise, other `Trainer.state` attributes should relate to the same model, e.g: `Trainer.state.best_metric` `Trainer.state.epoch` `Trainer.state.global_step`
05-04-2023 12:16:54
05-04-2023 12:16:54
Hyperparameter search does not play well with the best model indeed. That's not something in our roadmap for fixing, but we are happy to look at any PR!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,149
closed
gpt2 multi-gpu fix
# What does this PR do? Move tensors to same device. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? @younesbelkada @amyeroberts
05-04-2023 12:02:54
05-04-2023 12:02:54
_The documentation is not available anymore as the PR was closed or merged._
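The PR description is terse, so the following is only a generic, hypothetical sketch of the usual fix for this class of multi-GPU error (move intermediate tensors onto the device of the module that consumes them), not the actual GPT-2 patch:
```python
import torch
import torch.nn as nn


class TwoDeviceModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(16, 16)   # imagine this lives on cuda:0
        self.lm_head = nn.Linear(16, 100)   # ... and this on cuda:1

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        hidden_states = self.backbone(hidden_states)
        # Align devices before the head; a no-op on single-device setups.
        hidden_states = hidden_states.to(self.lm_head.weight.device)
        return self.lm_head(hidden_states)


model = TwoDeviceModel()
print(model(torch.randn(2, 16)).shape)  # torch.Size([2, 100])
```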
transformers
23,148
closed
accelerate mixed precision integrate
### What does this PR do? 1. Shift Trainer's mixed precision handling to accelerate. Having smaller PR for this instead of mixing this with DDP, FSDP and DeepSpeed changes. 2. Sharded DDP and Apex are cases not supported in Accelerate and because of this I'm unable to simplify further and delete chunks of code in Trainer. 3. No user-facing changes. User can now use `accelerate launch` for launching mixed precision training in Trainer. Example given below: ``` accelerate launch --mixed_precision="bf16" run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir ~/temp/$TASK_NAME/ --overwrite_output_dir ``` the previous usage via `python` or `torchrun` is same. ``` python run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir ~/temp/$TASK_NAME/ --fp16 --overwrite_output_dir ```
05-04-2023 10:58:44
05-04-2023 10:58:44
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,147
closed
[`GPT-J`] Fix causal mask dtype
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/23136 When going for `low_cpu_mem_usage`, each parameter is force-cast to the expected dtype, which is force-set to `torch.float16` for 8-bit models. Therefore, for 8-bit models (and also half-precision models) the causal mask is always force-cast to float16, since it is part of the model's state dict and is therefore loaded from the Hub if the mask is available in the state dict. The fix is to add `persistent=False` and a `_keys_to_ignore_on_unexpected` field (to remove the warnings) so that the causal mask is no longer loaded from the state dict and assigned to the buffer; all causal masks that are saved as buffers should do the same to avoid unexpected behaviors. cc @sgugger
05-04-2023 10:34:40
05-04-2023 10:34:40
_The documentation is not available anymore as the PR was closed or merged._
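A small sketch of the pattern the fix describes: register the causal mask as a non-persistent buffer so it never enters the state dict and can never be overwritten by a float16 copy loaded from a checkpoint. The module below is a toy, not the real GPT-J attention class:
```python
import torch
import torch.nn as nn


class ToyAttention(nn.Module):
    def __init__(self, max_positions: int = 8):
        super().__init__()
        causal_mask = torch.tril(torch.ones(max_positions, max_positions, dtype=torch.bool))
        # persistent=False keeps the mask out of state_dict(), so loading a
        # checkpoint can no longer replace it with a wrongly-typed tensor.
        self.register_buffer(
            "bias", causal_mask.view(1, 1, max_positions, max_positions), persistent=False
        )


module = ToyAttention()
print("bias" in dict(module.named_buffers()))  # True: still usable at runtime
print(list(module.state_dict()))               # []: not serialized
```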
transformers
23,146
closed
Loading quantization model video memory occupancy problem
### System Info The original model is loaded as an 8bit model and saved. When the saved quantization model is loaded again, the video memory occupancy is the same as that of the original model. - `transformers` version: 4.29.0.dev0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.9.12 - Huggingface_hub version: 0.13.3 - Safetensors version: not installed - PyTorch version (GPU?): 1.12.0+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger @younesbelkada @Arthur ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction code: ``` from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained('D:/DL/pretrain_model/bloom-560m') model_q = AutoModelForCausalLM.from_pretrained('D:/DL/pretrain_model/bloom-560m', load_in_8bit=True, device_map='auto') print(model.get_memory_footprint()) print(model_q.get_memory_footprint()) model_q.save_pretrained('D:/DL/model_result/bloom-560m-8bit') model_q_again = AutoModelForCausalLM.from_pretrained('D:/DL/model_result/bloom-560m-8bit') print(model_q_again.get_memory_footprint()) ``` output: ``` C:\ProgramData\Anaconda3\envs\torch\lib\site-packages\tqdm\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html from .autonotebook import tqdm as notebook_tqdm Overriding torch_dtype=None with `torch_dtype=torch.float16` due to requirements of `bitsandbytes` to enable model loading in mixed int8. Either pass torch_dtype=torch.float16 or don't pass this argument at all to remove this warning. ===================================BUG REPORT=================================== Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues ================================================================================ CUDA SETUP: CUDA runtime path found: C:\ProgramData\Anaconda3\envs\torch\bin\cudart64_110.dll CUDA SETUP: Highest compute capability among GPUs detected: 8.6 CUDA SETUP: Detected CUDA version 116 CUDA SETUP: Loading binary C:\ProgramData\Anaconda3\envs\torch\lib\site-packages\bitsandbytes\libbitsandbytes_cuda116.dll... C:\ProgramData\Anaconda3\envs\torch\lib\site-packages\bitsandbytes\cuda_setup\main.py:141: UserWarning: C:\ProgramData\Anaconda3\envs\torch did not contain cudart64_110.dll as expected! Searching further paths... warn(msg) C:\ProgramData\Anaconda3\envs\torch\lib\site-packages\bitsandbytes\cuda_setup\main.py:141: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {WindowsPath('C:/ProgramData/Anaconda3/envs/torch/Library/mingw-w64/bin'), WindowsPath('C:/Program Files/MongoDB/Server/6.0/bin'), WindowsPath('C:/ProgramData/Anaconda3/envs/torch/Library/usr/bin')} warn(msg) 2236858368 816439296 Detected the presence of a `quantization_config` attribute in the model's configuration but you don't have the correct `bitsandbytes` version to support int8 serialization. Please install the latest version of `bitsandbytes` with `pip install --upgrade bitsandbytes`. 
2236858368 ``` ### Expected behavior I hope that when I load the quantized saved model again, the video memory usage can correspond to the actual size of the quantized model
05-04-2023 07:30:57
05-04-2023 07:30:57
Hi @Doraemon20190612 Thanks for your interest in using this feature! As stated on the warning: ```bash Detected the presence of a `quantization_config` attribute in the model's configuration but you don't have the correct `bitsandbytes` version to support int8 serialization. Please install the latest version of `bitsandbytes` with `pip install --upgrade bitsandbytes`. ``` Therefore you need to upgrade `bitsandbytes` as stated on the warning. Can you try: ```bash pip install --upgrade bitsandbytes ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,145
open
Detr Models cannot be loaded with `device_map="auto"`
### System Info - `transformers` version: 4.28.1 - Platform: macOS-13.1-x86_64-i386-64bit - Python version: 3.9.2 - Huggingface_hub version: 0.12.1 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import pipeline p = pipeline( "object-detection", model="facebook/detr-resnet-50", image_processor="facebook/detr-resnet-50", device_map="auto" ) ``` ### Expected behavior This does not work because the `transformers.models.detr.modeling_detr.DetrConvEncoder` model init involves copy weights from `nn.BatchNorm2d` to `DetrFrozenBatchNorm2d` which is not allowed when on a meta device. ``` File "/Users/chiragjn/venv39/lib/python3.9/site-packages/transformers/pipelines/__init__.py", line 779, in pipeline framework, model = infer_framework_load_model( File "/Users/chiragjn/venv39/lib/python3.9/site-packages/transformers/pipelines/base.py", line 262, in infer_framework_load_model model = model_class.from_pretrained(model, **kwargs) File "/Users/chiragjn/venv39/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 471, in from_pretrained return model_class.from_pretrained( File "/Users/chiragjn/venv39/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2629, in from_pretrained model = cls(config, *model_args, **model_kwargs) File "/Users/chiragjn/venv39/lib/python3.9/site-packages/transformers/models/detr/modeling_detr.py", line 1373, in __init__ self.model = DetrModel(config) File "/Users/chiragjn/venv39/lib/python3.9/site-packages/transformers/models/detr/modeling_detr.py", line 1205, in __init__ backbone = DetrConvEncoder(config) File "/Users/chiragjn/venv39/lib/python3.9/site-packages/transformers/models/detr/modeling_detr.py", line 354, in __init__ replace_batch_norm(backbone) File "/Users/chiragjn/venv39/lib/python3.9/site-packages/transformers/models/detr/modeling_detr.py", line 314, in replace_batch_norm frozen.weight.data.copy_(bn.weight) NotImplementedError: Cannot copy out of meta tensor; no data! ``` The model loads fine with a specific device with `device` argument.
05-04-2023 03:49:40
05-04-2023 03:49:40
cc @alaradirik and @amyeroberts <|||||>Hi @chiragjn, I was able to replicate the error on my machine (also macOS-13.1-x86_64-i386-64bit) and I'm looking into the issue.<|||||>A quick update - I tracked down the issue to the accelerate library: setting `device_map` sets `low_cpu_mem_usage` to True. This causes the model parameters to be initialized as meta tensors, which cannot be copied to CPU or GPU without tensor conversion. This issue also affects DETA, Conditional DETR, Deformable DETR and Table Transformers, as they have identical frozen modules that are initialized by copying the parameters of their respective backbone models. We will be opening a fix PR shortly!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey, is there any progress with this issue?<|||||>Hi @AlonZolfi, @alaradirik has now left Hugging Face, so I'm picking this up. As @alaradirik mentions, this arises as a consequence of the replacement of the batch norm in the backbone of these models. I'll be digging into it properly next week when I have a bit more time. Re-opening the issue as it's not yet solved and will keep you posted! <|||||>It was closed again; has there been any progress on the issue?<|||||>@amyeroberts the problem is indeed annoying, and I have a similar problem fine-tuning models like LLaMA. Is anyone working on solving it?
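A minimal reproduction of the underlying failure mode mentioned in the comments (copying data out of a tensor initialized on the `meta` device), independent of DETR itself and assuming a PyTorch version that accepts the `device=` factory argument:
```python
import torch
import torch.nn as nn

# Under device_map="auto", low_cpu_mem_usage=True initializes modules on the "meta" device.
bn = nn.BatchNorm2d(4, device="meta")

frozen_weight = torch.ones(4)
try:
    # Same kind of call as the frozen batch norm replacement in the traceback above.
    frozen_weight.data.copy_(bn.weight)
except NotImplementedError as err:
    print(err)  # Cannot copy out of meta tensor; no data!
```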
transformers
23,144
closed
Remove typo in perf_train_gpu_many.mdx
Simple typo in the documentation (excess `w` in the word `bottom`) # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? (N/A) ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-04-2023 03:45:17
05-04-2023 03:45:17
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,143
closed
fix spelling error
# What does this PR do? fix spelling error change referrred to referred ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-04-2023 03:40:00
05-04-2023 03:40:00
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,142
closed
Add TrOCR resources
# What does this PR do? Adds TrOCR resources according to https://github.com/huggingface/transformers/issues/20055 Fixes #20055 (partially) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @stevhliu
05-04-2023 02:09:22
05-04-2023 02:09:22
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,141
closed
fix: Passing language as acronym to Whisper generate
# What does this PR do? Fixes #23140 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @hollance @gante
05-03-2023 22:47:37
05-03-2023 22:47:37
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @sanchit-gandhi <|||||>Thank you @sanchit-gandhi, I made the requested changes and just have two callouts. One: I also realized we could use `TO_LANGUAGE_CODE.values()` instead of `LANGUAGES.keys()` for checking the acronym, so I made that edit. Two: to keep that test fast, I did a hacky setattr on the `generation_config` so it has some properties that `model.generate` expects.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23141). All of your documentation changes will be reflected on that endpoint.
transformers
23,140
closed
Whisper generation support for passing acronym to language arg
### System Info - `transformers` version: 4.29.0.dev0 - Platform: macOS-13.0-arm64-arm-64bit - Python version: 3.9.16 - Huggingface_hub version: 0.12.0 - Safetensors version: 0.2.8 - PyTorch version (GPU?): 1.13.1 (False) - Tensorflow version (GPU?): 2.11.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @hollance @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```py processor = WhisperProcessor.from_pretrained("openai/whisper-tiny") model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny") ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") sample = ds[0]["audio"]["array"] input_features = processor.feature_extractor(sample, return_tensors="pt").input_features pred_ids = model.generate(input_features, language="de") ``` Throws this error: <img width="778" alt="Screenshot 2023-05-03 at 6 29 38 PM" src="https://user-images.githubusercontent.com/78612354/236067028-ee7ab371-e9a2-44eb-9895-b5c8f3a2fcdd.png"> Then this error when that's fixed: <img width="1198" alt="Screenshot 2023-05-03 at 6 30 34 PM" src="https://user-images.githubusercontent.com/78612354/236067052-8f1ae574-db51-44e4-800c-aa4f38b0200e.png"> ### Expected behavior Should recognize and use language passed in acronym format as per the docstring
05-03-2023 22:45:13
05-03-2023 22:45:13
cc @ArthurZucker <|||||>Yeah I'm not sure why it was decided the language token had to be passed in there, and at the very least the current error message is misleading. Arthur is probably the best person to look at this.
transformers
23,139
closed
Generate: text generation pipeline no longer emits `max_length` warning when it is not set
# What does this PR do? Fixes #22636 In the `text-generation` pipeline, `max_length` is updated to take into account the prefix (which defaults to the BOS token). When `max_new_tokens` was set, it meant that `.generate` received both parameters, triggering the warning. This PR defers the `max_length` update to right before generation, in case `max_new_tokens` is set at call time, and only updates it if `max_new_tokens` is not set -- avoiding triggering the warning if the user has not set `max_length` while keeping the same behavior. ____________________________________ Test script, which was triggering the warning before this change: ```py from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, GenerationConfig device = "cuda:0" model_name = "facebook/opt-1.3b" # tokenizer, model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, low_cpu_mem_usage=True, pad_token_id=tokenizer.eos_token_id ).to(device) # pipeline pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=device) # generate text text = "Hello " result = pipe( text, generation_config=GenerationConfig( max_new_tokens=70, num_beams=1, do_sample=False ) ) # print result print(result) ```
05-03-2023 20:52:17
05-03-2023 20:52:17
_The documentation is not available anymore as the PR was closed or merged._<|||||>@Narsil added test ๐Ÿ‘ (precisely as suggested, using a small model checks that the warning is raised only when it should)
transformers
23,138
closed
BatchEncoding breaks duck-typing, either document or auto-cast to dict
### Feature request I would like to propose that somewhere in the typical model pipeline, such as the data collators or the Trainer loop, instances of BatchEncoding should be converted to dicts to avoid breaking DataParallel models. Alternatively, I could add to the documentation to state that BatchEncodings should not be passed to PyTorch models. Maintainers, please let me know which solution you would prefer. ### Motivation I am currently trying to integrate the sentence-transformers library with the nice Trainer API. One of the peculiarities of sentence-transformers is that rather than unpacking the model parameters like so `outputs = model(**inputs)`, the model expects a dict as input, which is internally unpacked. No big deal, I just override the `prediction_step` and remove the unpacking. However, this strangely failed when using a DataParallel setup. I realized this is because the following code in DataParallel could not handle BatchEncodings: ``` def scatter(inputs, target_gpus, dim=0): r""" Slices tensors into approximately equal chunks and distributes them across given GPUs. Duplicates references to objects that are not tensors. """ def scatter_map(obj): if isinstance(obj, torch.Tensor): return Scatter.apply(target_gpus, None, dim, obj) if _is_namedtuple(obj): return [type(obj)(*args) for args in zip(*map(scatter_map, obj))] if isinstance(obj, tuple) and len(obj) > 0: return list(zip(*map(scatter_map, obj))) if isinstance(obj, list) and len(obj) > 0: return [list(i) for i in zip(*map(scatter_map, obj))] if isinstance(obj, dict) and len(obj) > 0: return [type(obj)(i) for i in zip(*map(scatter_map, obj.items()))] return [obj for targets in target_gpus] ``` BatchEncoding walks like a dict, talks like a dict, prints out exactly like a dict, but is not a subclass of dict, and therefore breaks this code. The tensors are not distributed across GPUs, and errors result. In my opinion, this violates the idea of duck typing, but I realize that technically it's PyTorch that's breaking duck typing here. Normally, this isn't a problem since Trainer unpacks the model args (BatchEncoding) in its predict_step, but I can't imagine sentence-transformers is the only library that does not adopt this convention. ### Your contribution I would like to do one of three things: 1. Change DataCollators to output dicts or have Trainer check for BatchEncodings and convert them to dicts 2. Document that BatchEncodings should not be passed to models 3. Submit a PR to PyTorch that modifies this function to check for UserDicts like BatchEncodings
05-03-2023 20:34:06
05-03-2023 20:34:06
This was additionally made harder to debug because the typing for prediction_step indicates a Dict when it is actually a BatchEncoding. Since they print out the same, the only way I identified the cause of the bug was by stepping through and inspecting the types. ``` def prediction_step( self, model: nn.Module, inputs: Dict[str, Union[torch.Tensor, Any]], prediction_loss_only: bool, ignore_keys: Optional[List[str]] = None, ) -> Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]: ```<|||||>1 and 2 are not viable options on our side. We do rely on properties of `BatchEncoding` in our inputs. You can try 3 for PyTorch, the check that is needed is ``` from collections.abc import Mapping if isinstance(obj, Mapping) and len(obj) > 0: return [type(obj)(i) for i in zip(*map(scatter_map, obj.items()))] ``` (which is the one we use in all our tooling), but I don't guarantee they will accept it. Or you can do the conversion from `BatchEncoding` to `dict` in your own subclass of the Trainer. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
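One possible way to apply the suggested workaround without touching PyTorch is to wrap the data collator so the model receives a plain `dict`, which `DataParallel`'s scatter handles; `my_collator` below stands in for whatever collator is already in use, so this is a sketch rather than an official recipe:
```python
from transformers import BatchEncoding


def as_plain_dict(collate_fn):
    """Wrap a collator so DataParallel's scatter sees a real dict."""

    def wrapped(features):
        batch = collate_fn(features)
        return dict(batch) if isinstance(batch, BatchEncoding) else batch

    return wrapped


# trainer = Trainer(..., data_collator=as_plain_dict(my_collator))
```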
transformers
23,137
closed
TypeError: is_accelerate_available() got an unexpected keyword argument 'check_partial_state'.
### System Info - `transformers` version: 4.29.0.dev0 - Platform: Linux-5.10.147+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.8 - JaxLib version: 0.4.7 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction trainer = CustomTrainer( model=model, train_dataset=train_data_loader, eval_dataset=validation_data_loader, args=transformers.TrainingArguments( per_device_train_batch_size=4, gradient_accumulation_steps=4, warmup_steps=30, max_steps=50, learning_rate=2e-4, fp16=True, logging_steps=1, evaluation_strategy="steps", output_dir="outputs", weight_decay=0.01, # L2 regularization ), data_collator=transformers.DataCollatorForLanguageModeling( tokenizer, mlm=False ), ) # # Add dropout to the model (if not already present) # model.config.dropout = 0.1 model.config.use_cache = False # silence the warning, Please re-enable for inference! trainer.train() ### Expected behavior @ArthurZucker and @younesbelkada This should ideally start a traning loop but instead I am getting unexpected keyword argument check_partial_state TypeError Traceback (most recent call last) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ in <cell line: 1>:5 โ”‚ โ”‚ in __init__:111 โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/training_args.py:1279 in __post_init__ โ”‚ โ”‚ โ”‚ โ”‚ 1276 โ”‚ โ”‚ if ( โ”‚ โ”‚ 1277 โ”‚ โ”‚ โ”‚ self.framework == "pt" โ”‚ โ”‚ 1278 โ”‚ โ”‚ โ”‚ and is_torch_available() โ”‚ โ”‚ โฑ 1279 โ”‚ โ”‚ โ”‚ and (self.device.type != "cuda") โ”‚ โ”‚ 1280 โ”‚ โ”‚ โ”‚ and (get_xla_device_type(self.device) != "GPU") โ”‚ โ”‚ 1281 โ”‚ โ”‚ โ”‚ and (self.fp16 or self.fp16_full_eval) โ”‚ โ”‚ 1282 โ”‚ โ”‚ ): โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/training_args.py:1643 in device โ”‚ โ”‚ โ”‚ โ”‚ 1640 โ”‚ โ”‚ The device used by this process. 
โ”‚ โ”‚ 1641 โ”‚ โ”‚ """ โ”‚ โ”‚ 1642 โ”‚ โ”‚ requires_backends(self, ["torch"]) โ”‚ โ”‚ โฑ 1643 โ”‚ โ”‚ return self._setup_devices โ”‚ โ”‚ 1644 โ”‚ โ”‚ โ”‚ 1645 โ”‚ @property โ”‚ โ”‚ 1646 โ”‚ def n_gpu(self): โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py:54 in __get__ โ”‚ โ”‚ โ”‚ โ”‚ 51 โ”‚ โ”‚ attr = "__cached_" + self.fget.__name__ โ”‚ โ”‚ 52 โ”‚ โ”‚ cached = getattr(obj, attr, None) โ”‚ โ”‚ 53 โ”‚ โ”‚ if cached is None: โ”‚ โ”‚ โฑ 54 โ”‚ โ”‚ โ”‚ cached = self.fget(obj) โ”‚ โ”‚ 55 โ”‚ โ”‚ โ”‚ setattr(obj, attr, cached) โ”‚ โ”‚ 56 โ”‚ โ”‚ return cached โ”‚ โ”‚ 57 โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/training_args.py:1558 in _setup_devices โ”‚ โ”‚ โ”‚ โ”‚ 1555 โ”‚ def _setup_devices(self) -> "torch.device": โ”‚ โ”‚ 1556 โ”‚ โ”‚ requires_backends(self, ["torch"]) โ”‚ โ”‚ 1557 โ”‚ โ”‚ logger.info("PyTorch: setting up devices") โ”‚ โ”‚ โฑ 1558 โ”‚ โ”‚ if not is_sagemaker_mp_enabled() and not is_accelerate_available(check_partial_s โ”‚ โ”‚ 1559 โ”‚ โ”‚ โ”‚ raise ImportError( โ”‚ โ”‚ 1560 โ”‚ โ”‚ โ”‚ โ”‚ "Using the `Trainer` with `PyTorch` requires `accelerate`: Run `pip inst โ”‚ โ”‚ 1561 โ”‚ โ”‚ โ”‚ ) โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ TypeError: is_accelerate_available() got an unexpected keyword argument 'check_partial_state'
05-03-2023 20:22:36
05-03-2023 20:22:36
Looks like you may have a borked install of Transformers? If installing from source, that function does accept `check_partial_state` as can be seen [here](https://github.com/huggingface/transformers/blob/78b7debf56efb907c6af767882162050d4fbb294/src/transformers/utils/import_utils.py#L582).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,136
closed
[GPT-J] where expected condition to be a boolean tensor, but got a tensor with dtype Half
### System Info - `transformers` version: 4.28.1 - Platform: Linux-5.10.147+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.8 - JaxLib version: 0.4.7 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Collab example - [Link](https://colab.research.google.com/drive/1D689vHxZk5Bov5piIXOQ62t3M_cim4dr?usp=sharing) ### Expected behavior I expected some output generated from the model
05-03-2023 19:46:23
05-03-2023 19:46:23
Hi @Praful932 Thanks for the issue, I have managed to reproduce it and fix it with https://github.com/huggingface/transformers/pull/23147 Can you try to uninstall `transformers` and install it again from source? ```bash pip install git+https://github.com/huggingface/transformers ```<|||||>This is working, Thanks for the quick fix!
transformers
23,135
closed
loss 0.0 or NaN when training T5 or Flan-T5 models with bf16 on multiple GPUs
### System Info - `transformers` version: 4.28.0 - Platform: Linux-5.19.0-1022-gcp-x86_64-with-Ubuntu-22.04-jammy - Python version: 3.7.16 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @stas00 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce: 1. Launch the example summarization training pipeline with `--bf16` and one of the Flan-T5 or T5 models such as `google/flan-t5-base` and `t5-large`. ``` python3.7 examples/pytorch/summarization/run_summarization.py \ --report_to none \ --bf16 \ --model_name_or_path google/flan-t5-base \ --evaluation_strategy steps \ --logging_strategy steps \ --logging_steps 10 \ --save_strategy steps \ --save_steps 30000 \ --num_train_epochs 3 \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --max_train_samples=10000 \ --max_eval_samples=100 \ --overwrite_output_dir \ --overwrite_cache ``` Then you will see the following logs: ``` {'loss': 0.0, 'learning_rate': 4.893503727369542e-05, 'epoch': 0.06} {'eval_loss': nan, 'eval_runtime': 1.6413, 'eval_samples_per_second': 60.927, 'eval_steps_per_second': 2.437, 'epoch': 0.06} ``` ### Expected behavior Loss and eval_loss are wrong during training. Only T5 model works properly is `t5-small`, same as what was mentioned here: https://discuss.huggingface.co/t/t5-fp16-issue-is-fixed/3139, but other `T5` or `Flan-T5` models (including `flan-t5-small`) still suffer from this issue. I can train with FP32 without this problem, but would like to know if the fix (maybe not relevant here? since bf16 is different from fp16) mentioned has been incorporated.
05-03-2023 19:27:34
05-03-2023 19:27:34
BTW, if I use deepspeed to launch the script then `loss` and `eval_loss` are normal. In addition, the 0.0 or NaN only happens when it is trained on multiple GPUs in the DataParallel mode, which is aligned with this issue: https://github.com/huggingface/transformers/issues/18899<|||||>Thank you for the good report and the link to the other similar issue, @cchen-dialpad I think hardly anybody uses DP since DDP was introduced. Is there a reason to use DP when DDP is by far more superior? Switching to DDP was the resolution of the ticket you linked to: https://github.com/huggingface/transformers/issues/18899#issuecomment-1249262873 <|||||>@stas00 Oh, I guess I just got curious why that is the case, DDP works but DP doesn't :)<|||||>You're more than welcome to try to figure it out, @cchen-dialpad - I haven't used DP in many years, perhaps it's not being well maintained because it's rarely used? It'd be an optimizer issue most likely if you want a place to start.<|||||>lol I see, thanks for the pointers!
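Since the bad loss only shows up under `DataParallel`, the practical workaround from the linked issue is to launch the same example with DDP instead, for example something like the following on a 2-GPU machine (argument list shortened, flags taken from the report above):
```bash
torchrun --nproc_per_node=2 examples/pytorch/summarization/run_summarization.py \
  --bf16 \
  --model_name_or_path google/flan-t5-base \
  --dataset_name cnn_dailymail --dataset_config "3.0.0" \
  --source_prefix "summarize: " \
  --do_train --do_eval \
  --per_device_train_batch_size 4 --per_device_eval_batch_size 4 \
  --output_dir /tmp/tst-summarization --overwrite_output_dir
```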
transformers
23,134
closed
Tidy Pytorch GLUE benchmark example
Migration to Evaluate for metric is not quite complete # What does this PR do? #18369 left the Pytorch GLUE Benchmark example a bit rough, still hand implementing some metrics, and leaving metric objects constructed but unused in some cases. This work completes the migration to Evaluate. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger @atturaioe
05-03-2023 18:07:18
05-03-2023 18:07:18
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,133
closed
Remove redundant print statements
# What does this PR do? Removes leftover comments / print lines from `test_backbone_common.py`.
05-03-2023 16:43:22
05-03-2023 16:43:22
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,132
closed
Add Unlimiformer to ๐Ÿค— transformers
### Model description I want to add the recently released Unlimiformer (Long-Range Transformers with Unlimited Length Input) model to ๐Ÿค— transformers. Will this be a good addition? ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Paper: https://arxiv.org/abs/2305.01625 Code: https://github.com/abertsch72/unlimiformer Model weights: https://github.com/abertsch72/unlimiformer#trained-models Authors: @abertsch72 cc @LysandreJik
05-03-2023 15:35:12
05-03-2023 15:35:12
transformers
23,131
closed
Handle padding warning in generation when using `inputs_embeds`
# What does this PR do? Fixes: #23042 if `input_ids` was given, check if the last id in any sequence is `pad_token_id` if `inputs_embeds` was given, check if the last embed in any sequence is *all* zeros - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? [Please add a link to it if that's the case.](https://github.com/huggingface/transformers/issues/23042) - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
05-03-2023 14:43:00
05-03-2023 14:43:00
cc @gante<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> Why not simply add `and len(inputs_tensor.shape) == 2` to the `if`? Short, clear code is easier to maintain ๐Ÿ™Œ Just adding `and len(inputs_tensor.shape) == 2` wouldn't print the warning if using `inputs_embeds`. Would you prefer that behaviour? Also, this logic would fail if the pad token for some embedding is **not** a tensor full of the `pad_token_id`.<|||||>> Just adding and len(inputs_tensor.shape) == 2 wouldn't print the warning if using inputs_embeds. Would you prefer that behaviour? Also, this logic would fail if the pad token for some embedding is not a tensor full of the pad_token_id. Yeah, I'd prefer the shorter version with the `len(inputs_tensor.shape) == 2` check only. The reason being that although it is less precise, using the embeddings as an input is an advanced use case, for which we tend to be more hands-off. It also makes the code shorter and, therefore, more readable :) (if we were to make complete checks at all points, the code would quickly become unmaintainable)<|||||>Alright, I've changed the logic to be very simple now. It just doesn't check this condition if `inputs_embeds` was passed.<|||||>@zrthxn this PR needs to be rebased with `main` -- we fixed a dependency issue that's showing up on the CI down here ๐Ÿ˜ฌ apologies for the extra work!<|||||>@gante no problem! Just did that.
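For readers skimming this thread, a minimal sketch of the check that was discussed (warn only for 2D token-id inputs whose sequences end with the pad token, and skip the check for `inputs_embeds`, as agreed in the review) might look like the following. It is illustrative only and not the exact code merged into `generate`:

```python
import torch


def should_warn_about_right_padding(inputs_tensor: torch.Tensor, pad_token_id: int) -> bool:
    # Only token-id inputs (shape [batch, seq_len]) are inspected; for
    # `inputs_embeds` (3D) the check is skipped entirely.
    is_token_ids = inputs_tensor.dim() == 2
    return is_token_ids and bool((inputs_tensor[:, -1] == pad_token_id).any())


# Hypothetical usage: 0 stands in for the pad token id.
batch = torch.tensor([[5, 6, 7, 0], [1, 2, 3, 4]])
print(should_warn_about_right_padding(batch, pad_token_id=0))  # True -> warn about right padding
```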
transformers
23,130
closed
<wip> Early draft of crossformer model
Fixes #22852
05-03-2023 14:40:15
05-03-2023 14:40:15
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23130). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @raghavanone, thanks for opening this PR! The easiest, fastest and preferred way to add a new model is directly onto the hub: https://huggingface.co/docs/transformers/model_sharing The bar for adding models into the transformers repo through a PR is a lot higher and will require all passing tests and approval from a maintainer. As such, adding this way will take a lot longer. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,129
closed
KeyError: 'llama' on using any variant of OpenAssistant LLaMa models
### System Info - `transformers` version: 4.29.0.dev0 - Platform: Linux-5.15.0-1033-aws-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction 1. Download any OpenAsssitant LLaMa model with `transformers.AutoModelForCausalLM.` & `transformers.AutoTokenizer` (e.g. `TheBloke/OpenAssistant-SFT-7-Llama-30B-HF`) 2. Try to generate anything with it. logs: ``` open-assistant-inference-worker-1 | File "/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/text_generation_server/server.py", line 99, in serve_inner open-assistant-inference-worker-1 | model = get_model(model_id, revision, sharded, quantize) open-assistant-inference-worker-1 | open-assistant-inference-worker-1 | File "/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/text_generation_server/models/__init__.py", line 52, in get_model open-assistant-inference-worker-1 | config = AutoConfig.from_pretrained(model_id, revision=revision) open-assistant-inference-worker-1 | open-assistant-inference-worker-1 | File "/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/transformers-4.27.0.dev0-py3.9.egg/transformers/models/auto/configuration_auto.py", line 882, in from_pretrained open-assistant-inference-worker-1 | config_class = CONFIG_MAPPING[config_dict["model_type"]] open-assistant-inference-worker-1 | open-assistant-inference-worker-1 | File "/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/transformers-4.27.0.dev0-py3.9.egg/transformers/models/auto/configuration_auto.py", line 588, in __getitem__ open-assistant-inference-worker-1 | raise KeyError(key) open-assistant-inference-worker-1 | open-assistant-inference-worker-1 |. KeyError: 'llama' ``` ### Expected behavior The model inference is working correctly without any issues
05-03-2023 14:23:04
05-03-2023 14:23:04
The error message shows an older version of Transformers (4.27.0). Are you sure you are executing this in the right Python environment?<|||||>That's right, upgrading to transformers >= 4.28.1 solved the issue. Thank you 🙌<|||||>I have the same problem. The environment where I am installing the new version of transformers is here: `RUN /opt/miniconda/envs/worker/bin/pip install -r requirements.txt`, but the line where download_model.py is being executed uses another environment: /opt/miniconda/envs/text-generation/bin/python /worker/download_model.py. I want to use distributed inferencing and GPUs; should I install the requirements from the worker env in the text-generation env? Thank you.
transformers
23,128
closed
Generate: better warnings with pipelines
# What does this PR do? Addresses comments in https://github.com/huggingface/transformers/issues/23054 This PR adds the following enhancements to generate-related pipeline warnings: 1. Clarifies the `max_length` reduction suggestion in the summarization pipeline 2. Also pipes task-specific configuration to `generation_config` (when applicable), which fixes the warning about relying on `model.config` to parameterize `generate`
05-03-2023 12:17:06
05-03-2023 12:17:06
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,127
closed
Generate: correct beam search length on score calculation for multi batch generation
# What does this PR do? Fixes #23084 When computing the score with length penalty, the length was (incorrectly) incremented once per batch member. It should only be incremented once -- the length here is `cur_len` (the length of the generated tokens) + `1` (the token being added at the current iteration). Slow tests were run for BART, T5 and GPT2 -- no regressions.
05-03-2023 11:31:31
05-03-2023 11:31:31
_The documentation is not available anymore as the PR was closed or merged._
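As a rough illustration of the fix described in this PR (not the exact `BeamSearchScorer` code), the length-penalised score uses `cur_len + 1`, i.e. the tokens generated so far plus the token being added at this step, and that length is the same for every batch member:

```python
def length_penalized_score(sum_logprobs: float, cur_len: int, length_penalty: float) -> float:
    # cur_len + 1: generated tokens so far plus the token added this step.
    return sum_logprobs / ((cur_len + 1) ** length_penalty)


# Hypothetical numbers: with length_penalty=0 the raw log-prob sum is kept,
# with length_penalty=1 it is normalised by the hypothesis length.
print(length_penalized_score(-6.0, cur_len=9, length_penalty=0.0))  # -6.0
print(length_penalized_score(-6.0, cur_len=9, length_penalty=1.0))  # -0.6
```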
transformers
23,126
closed
Support union types `X | Y` syntax for `HfArgumentParser` for Python 3.10+
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Support union types `X | Y` syntax for `HfArgumentParser` for Python 3.10+. Allow users using Python 3.10+ to opt in new typing futures, such as [union types `X | Y` (PEP 604)](https://peps.python.org/pep-0604). Note that `typing.get_type_hints` does not work for union types on Python 3.7-3.9. <!-- Remove if not applicable --> Fixes #20249 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? Testing union types `X | Y` for Python 3.7-3.9 needs to add `from __future__ import annotations` at the top of the test script. I'm not sure should we need to create a separate test script or add new test cases directly in `tests/utils/test_hf_argparser.py`. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-03-2023 10:49:29
05-03-2023 10:49:29
_The documentation is not available anymore as the PR was closed or merged._
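For context, here is a minimal sketch of the PEP 604 syntax this PR enables on Python 3.10+; the dataclass and field names are made up for illustration, and on Python 3.7-3.9 the equivalent spelling would be `Optional[...]`, optionally combined with `from __future__ import annotations`:

```python
from dataclasses import dataclass

from transformers import HfArgumentParser


@dataclass
class ExampleArguments:
    learning_rate: float = 3e-5
    max_steps: int | None = None   # PEP 604 spelling of Optional[int]
    run_name: str | None = None


parser = HfArgumentParser(ExampleArguments)
(args,) = parser.parse_args_into_dataclasses(["--learning_rate", "1e-4", "--max_steps", "100"])
print(args.learning_rate, args.max_steps, args.run_name)  # 0.0001 100 None
```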
transformers
23,125
closed
Generate: slow assisted generation test
# What does this PR do? `test_assisted_decoding_matches_greedy_search` fails once in a while, which blocks development. This PR removes the blocker by moving it to a slow test. Why a slow test (and not a redesign of the test or the flaky decorator)? 1. It is impossible to remove 100% of the non-determinism in this test. Some form of masking has to be used by design, which means that there is always a chance for the generations to diverge. When the generated sequences do diverge, the scores should be very similar at the step they diverge, as they are caused by the very small values within the numerical attention masks. 2. I've tried to add the check above (when the sequences diverge, the scores should be similar)... but some models still failed that check quite hard when the sequences didn't match. Some well-established models can run it without observed failures (e.g. 10k runs on GPT2 = 0 sequence mismatches). Others, especially roberta-based models, fail at a high rate. This means I should explore this further. 3. Since there is something to be explored, I believe the slow decorator is more appropriate: we can track failures without risking red CI on PRs.
05-03-2023 10:26:21
05-03-2023 10:26:21
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Maybe the test will be less flaky if done on a pretrained checkpoint Definitely! However, I think there is a deeper problem here: the logits diverge way more than I'd expect on some models, and it's odd that those models rely on the same base code (roberta). After I finish preparing the release for assisted generation, I'll get back to sorting out the related bugs.
transformers
23,124
closed
MarianMT architecture and onnx format
**MarianMT architecture** I found an interesting detail about MarianMT implementation in huggingface. There is no "Softmax" layer after "Linear" at the end of the model, despite the default architecture of transformer. ``` MarianMTModel( (model): MarianModel( (shared): Embedding(58930, 512, padding_idx=58929) (encoder): MarianEncoder( (embed_tokens): Embedding(58930, 512, padding_idx=58929) (embed_positions): MarianSinusoidalPositionalEmbedding(512, 512) (layers): ModuleList( (0-5): 6 x MarianEncoderLayer( (self_attn): MarianAttention( (k_proj): Linear(in_features=512, out_features=512, bias=True) (v_proj): Linear(in_features=512, out_features=512, bias=True) (q_proj): Linear(in_features=512, out_features=512, bias=True) (out_proj): Linear(in_features=512, out_features=512, bias=True) ) (self_attn_layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True) (activation_fn): SiLUActivation() (fc1): Linear(in_features=512, out_features=2048, bias=True) (fc2): Linear(in_features=2048, out_features=512, bias=True) (final_layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True) ) ) ) (decoder): MarianDecoder( (embed_tokens): Embedding(58930, 512, padding_idx=58929) (embed_positions): MarianSinusoidalPositionalEmbedding(512, 512) (layers): ModuleList( (0-5): 6 x MarianDecoderLayer( (self_attn): MarianAttention( (k_proj): Linear(in_features=512, out_features=512, bias=True) (v_proj): Linear(in_features=512, out_features=512, bias=True) (q_proj): Linear(in_features=512, out_features=512, bias=True) (out_proj): Linear(in_features=512, out_features=512, bias=True) ) (activation_fn): SiLUActivation() (self_attn_layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True) (encoder_attn): MarianAttention( (k_proj): Linear(in_features=512, out_features=512, bias=True) (v_proj): Linear(in_features=512, out_features=512, bias=True) (q_proj): Linear(in_features=512, out_features=512, bias=True) (out_proj): Linear(in_features=512, out_features=512, bias=True) ) (encoder_attn_layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True) (fc1): Linear(in_features=512, out_features=2048, bias=True) (fc2): Linear(in_features=2048, out_features=512, bias=True) (final_layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True) ) ) ) ) (lm_head): Linear(in_features=512, out_features=58930, bias=False) ) ``` There is no problem when loading this model via "MarianMTModel.from_pretrained" and calling ".generate()" method, everything works fine, returning output shaped (batch_size, max_seq_len). **MarianMT onnx format** However, when I tried to convert MarianMT huggingface model into onnx format via "torch.onnx.export" and use it with "onnxruntime.InferenceSession" calling "run()" method, I got raw embedding batches as outputs shaped (batch_size, max_seq_len, 58930), which I can't decode into text using MarianTokenizer. I suppose, it is caused by the absence of that Softmax layer at the end. **Regarding this, I have two questions:** - Is it normal that MarianMT in huggingface transformers has no Softmax layer at the end? - Is there a way to decode output embeddings shaped (batch_size, max_seq_len, 58930) into text? @gante @amyeroberts @sgugger, I'm not sure whether to consider this a bug or just a slight misunderstanding, so I'd be really grateful for some advice.
05-03-2023 10:18:03
05-03-2023 10:18:03
No model in Transformers implements the softmax at the end, they return logits, or if labels are provided, the loss directly.<|||||>Assuming your output from the ONNX model really is the logits and is in numpy format you could probably use a code snippet like this to decode it into text: ``` import numpy as np from transformers import MarianTokenizer from scipy.special import softmax tokenizer = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-es', cache_dir='./estokenizer') bsz = 1 seq_len = 42 vocab_size = 20000 logits = np.random.rand(bsz, seq_len, vocab_size) # Output from the model. token_probs = softmax(logits, axis=-1) token_ids = np.argsort(token_probs, axis=2)[:, :, -1] # Get top token ID. tokenizer.batch_decode(token_ids) ``` But are you sure about manually converting it to ONNX? Huggingface's Optimum package should support converting Marian to ONNX off the bat, with beam search support and all the whistles.<|||||>@sgugger, thanks a lot, now I see how it works)<|||||>@SmartWashingMachine, yes output was logits indeed. Thanks for the code snipped, it works well, but it seems that the model should be converted in some other way. I used Huggingface's Optimum for MarianMT and it works perfect with conversion and inference. However, i couldn't use it for m2m100 and mbart, google collab crashes because of lack of memory during conversion. I'll look for some other way of speed up for these two.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,123
closed
improve unclear documentation
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) Unclear documentation in EarlyStoppingCallback meachanism. ## Before submitting - [ x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-03-2023 08:59:24
05-03-2023 08:59:24
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,122
closed
Fix ConvNext V2 parameter naming issue
# What does this PR do? Renames gamma and beta parameters of the `ConvNextV2GRN` module, which caused the `save_pretrained` method to rename these parameters to weight and bias. Existing checkpoints on the hub can be loaded without any warnings once the PR is merged. Fixes #23090 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
05-03-2023 08:45:03
05-03-2023 08:45:03
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,121
closed
[`Doctest`]ย Fix pix2struct doctest
# What does this PR do? Link to failing job: https://github.com/huggingface/transformers/actions/runs/4867713745/jobs/8680544136 This PR fixes the current failing doctest for pix2struct. https://github.com/huggingface/transformers/pull/23051 fixed the issues related with pix2struct and training by changing the way attention masks are computed. Naturally this has changed the value of the expected loss function in the docstring cc @ydshieh
05-03-2023 08:16:47
05-03-2023 08:16:47
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,120
closed
Trainer.hyperparameter_search() should give the option to reload the model from the best run before completing
### Feature request After calling `Trainer.hyperparameter_search()`, the instance of `Trainer` contains the last trained model, among the multiple models trained for optimization of the hyperparameters. There should be an option, perhaps among those of `TrainingArguments`, to have the instance of `Trainer` reload the model from the best run (the best model) before `Trainer.hyperparameter_search()` returns. ### Motivation After optimizing hyperparameters we are typically interested in the trained model that optimizes them, not the model that, accidentally, was trained in the last run of optimization. We are interested in the *best* model, not the *last* model. Currently, accessing the best model is tricky, as one has to reload it from checkpoints on disk, after figuring out what the path to the wanted checkpoint is. The latter is further complicated by the fact that there may be multiple checkpoints on disk for the run that produced the best model, and we specifically want the checkpoint at the end of the epoch where the objective function was maximized (or minimized), which is not necessarily the last epoch of the run. ### Your contribution N/A
05-03-2023 07:07:24
05-03-2023 07:07:24
I have found that the path to the saved checkpoint with the best model, after hyperparameter optimization has completed, is in `Trainer.state.best_model_checkpoint`; the model can then be easily loaded from that checkpoint. I leave this here in case it may help someone else find where that information is tucked away.<|||||>Hi, I have found that the proposed solution does not work. It seems like `Trainer.state.best_model_checkpoint` will always hold the best checkpoint for the latest trial run. I worked around this by creating my own `TrainerCallback` that tracks the best and last checkpoints for all trials in a sweep; in the end I just retrieve the best checkpoint from that callback.
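For anyone landing here, a minimal sketch of the workaround from the first comment is below. It assumes `trainer` is the `Trainer` instance on which `hyperparameter_search()` was run, that checkpoints were kept on disk, and that a sequence-classification head is just an example choice. As the second comment points out, this only reflects the latest trial, so it is a partial workaround rather than a full solution:

```python
from transformers import AutoModelForSequenceClassification

# `trainer` is assumed to already exist (see note above).
best_ckpt = trainer.state.best_model_checkpoint  # may be None if nothing was saved
if best_ckpt is not None:
    best_model = AutoModelForSequenceClassification.from_pretrained(best_ckpt)
```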
transformers
23,119
closed
added farsi lang
# What does this PR do? Added Farsi (fa) to the docs. I have translated `_toctree.yml` and `accelerate.mdx` and `add_new_pipeline.mdx`. the rest also will be translated soon. I will make this pull request so others also can contribute and make this process faster. <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-03-2023 04:34:31
05-03-2023 04:34:31
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Please only add the translated file in the new folder. @sgugger only the translated files are now in the fa folder; the rest have been removed.<|||||>> Thanks a lot! One last comment on the `add_new_model` file. @sgugger I'm sorry, I didn't quite get what you mean. Can you please tell me more so I can do it right away? I appreciate it.<|||||>You have only translated half the file. Maybe leave it out of this PR and add it in a new PR when you're fully done?
transformers
23,118
closed
Pin numba for now
# What does this PR do? Today's release of `numba` broke the audio feature extractors. Not sure if it's because of numba by itself or because it forces an update to Numpy 1.24. Will be investigated later by the audio team but in the meantime pinning `numba` to make `main` green. cc @ydshieh @sanchit-gandhi for information, will merge this as soon as the CI is green.
05-03-2023 01:25:08
05-03-2023 01:25:08
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,117
closed
Provide a different API solution instead of offline mode
### Feature request First off, thanks for the stellar lib and for all the work to get state-of-the-art models in a consumable and documented state! I'm somewhat new to transformers, so this feedback is coming from a place of heavily using the library for a week. I find offline mode unintuitive. I don't see it referenced in a variety of tutorials, so I spent a day trying to figure out how to load local files with transformers, see https://github.com/huggingface/transformers/issues/23116. The docs are somewhat buried in a section I wouldn't expect: Installation https://huggingface.co/docs/transformers/installation#offline-mode ### Motivation Offline mode makes it more difficult to work with local files. Instead, methods like `from_pretrained` could pay attention to whether a file path is locally sourced (starts with / ), a URL (https), or a huggingface repo (no / or URL prefix, has a username/repo pattern). ### Your contribution I'm happy to provide feedback.
05-03-2023 00:00:03
05-03-2023 00:00:03
Hi there. `from_pretrained` already accepts a path to a folder; I'm not sure what it is you are requesting.<|||||>The relevant docs for loading from local data can be found here: https://huggingface.co/docs/transformers/v4.28.1/en/main_classes/model#transformers.PreTrainedModel.from_pretrained. The `from_pretrained` method accepts either a `repo_id` of a repo on the 🤗 hub or a local path to a folder.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
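To make the maintainers' answer above concrete, here is a minimal sketch of loading from a local folder. The path and model class below are placeholders; the folder is assumed to contain the files produced by `save_pretrained()`, e.g. `config.json` plus the weight and tokenizer files:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

local_dir = "/path/to/my-model"  # hypothetical local folder, not a Hub repo id
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModelForCausalLM.from_pretrained(local_dir)
```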
transformers
23,116
closed
OneFormerImageProcessor does not support passing local config file, always tries to download from repo
### System Info - `transformers` version: 4.29.0.dev0 - Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): 2.11.1 (False) - Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @amyeroberts this forum post I put up seems like a bug: https://discuss.huggingface.co/t/how-to-load-local-config-json-for-oneformerimageprocessor-without-invoking-huggingfacehub-downloader/38372 The OneFormerImageProcessor should accept local config files without trying to download them from a repo_path https://github.com/huggingface/transformers/blob/v4.28.1/src/transformers/models/oneformer/image_processing_oneformer.py#L323 ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` from transformers import OneFormerProcessor config_path = "/local/config/path" OneFormerProcessor.from_pretrained(config_path, ignore_mismatched_sizes=True)ignore_mismatched_sizes=True) ``` ### Expected behavior the processor gets initialized and doesn't error with ``` + f"Repository Not Found for url: {response.url}." + "\nPlease make sure you specified the correct `repo_id` and" " `repo_type`.\nIf you are trying to access a private or gated repo," " make sure you are authenticated." ```
05-02-2023 21:27:41
05-02-2023 21:27:41
@rbavery Thanks for raising this issue. I'm able to load a processor locally on the development branch without issue: ```python from transformers import OneFormerProcessor processor = OneFormerProcessor.from_pretrained('shi-labs/oneformer_ade20k_swin_tiny') processor.save_pretrained('foo') new_processor = OneFormerProcessor.from_pretrained('foo') ``` Note, the processor combines two processing objects - the image processor and a tokenizer - and so configurations + additional files are necessary for to successfully load both to create the processor. Could you share the files in the folder you're trying to load from? In the `foo` folder created, I see the following files: ``` merges.txt special_tokens_map.json tokenizer_config.json preprocessor_config.json tokenizer.json vocab.json ``` As a small side note, in the example snippet, I believe there's a small typo in the code, and should be: ```python from transformers import OneFormerProcessor config_path = "/local/config/path" OneFormerProcessor.from_pretrained(config_path, ignore_mismatched_sizes=True) ``` <|||||>Hi I have a similar problem , even when cloning the files locally still need to download ade20k_panoptic.json and it will not work without it<|||||>Hi @ammarali32, Ah OK, I understand now. This download is happening because of the [prepare_metadata method](https://github.com/huggingface/transformers/blob/17a55534f5e5df10ac4804d4270bf6b8cc24998d/src/transformers/models/oneformer/image_processing_oneformer.py#L323), which looks to download the file from the hub, and by default points to the `"shi-labs/oneformer_demo"` path. After being downloaded once, it should be possible to work in offline mode as it will be stored in the cache. However, I appreciate this isn't a complete solution. If there's another repo on the hub you wish to download the class info file from, replacing `repo_path` when instantiating the image processor class should be enough. To make the class look to either local files or on the hub, the image processing code would need to be reworked a bit. This is something that should happen in the future, however it's not a piece of work I have capacity to work on at the moment. If anyone from the community would like to take this I'm happy to review any PRs.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
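A minimal sketch of the `repo_path` workaround suggested above, assuming the class-info JSON lives in another Hub repo; the repo id below is a placeholder and the exact keyword arguments may differ between versions:

```python
from transformers import OneFormerImageProcessor

image_processor = OneFormerImageProcessor(
    repo_path="my-username/my-oneformer-metadata",  # hypothetical Hub repo holding the metadata file
    class_info_file="ade20k_panoptic.json",
)
```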
transformers
23,115
closed
Add resources for LayoutLmV2 and reformat documentation resources
# What does this PR do? From #19848 This PR adds resources to the LayoutLMV2 documentation page. Also in the LayoutLMV2 documentation page the documentation resources heading was incoherent with the other doc pages so I removed the heading and put the task specific guides under the corresponding task headings.
05-02-2023 18:49:07
05-02-2023 18:49:07
cc @stevhliu <|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
23,114
closed
Add accelerate support - vision MAE models
# What does this PR do? Adds accelerate support to VideoMAE and ViTMAE following the changes made in the [equivalent ViT PR](https://github.com/huggingface/transformers/pull/20174) Fixes #23086 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
05-02-2023 17:24:28
05-02-2023 17:24:28
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23114). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger I checked with 2 GPUs, I'll run with just one to make sure it still works ๐Ÿ‘
transformers
23,113
closed
docs: ko: fix: update `_toctree.yml`
# What does this PR do? Resolves conflicts raised by a recent `_toctree.yml` change (#23049). * Edited some titles to match the new coherent style. * Moved sections to match the English table of contents. * As both of the removed files were yet to be translated, no work was needed for them.
05-02-2023 14:57:19
05-02-2023 14:57:19
Closing in favor of #23112 <|||||>Checking whether the same conflicts emerge with these changes as my colleague's branch is experiencing.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Closing again in favor of #23112 (checks are passing).<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23113). All of your documentation changes will be reflected on that endpoint.
transformers
23,112
closed
docs: ko: update `_toctree.yml`
# What does this PR do? Part of https://github.com/huggingface/transformers/issues/20179 Initial version is in https://github.com/huggingface/transformers/pull/22581 Updated `_toctree.yml` according to https://github.com/huggingface/transformers/pull/23049 This PR restructures TOC for the documentation. Here's the scope of the restructure: a) TOC is sorted from โ€œbeginnerโ€ topics to more advanced b) Some topics have been renamed c) Task Guides are collapsed by default and now are on on the same level (currently NLP task guides are hidden, and not aligned with other modalities) d) โ€œGeneral usageโ€ has been renamed to โ€œDeveloper Guidesโ€ e) Benchmarks, notebooks, and community resources have been moved under Developer Guides f) โ€œConverting from TensorFlow checkpointsโ€ and "Migrating from previous packages" pages removed To dos: Some topics have been renamed to be concise and more descriptive. Needs to translate to Korean. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Initial) <!-- 1. ์œ„ ์ฒดํฌ๊ฐ€ ๋ชจ๋‘ ์™„๋ฃŒ๋œ ๋’ค์—๋งŒ ๊ฐ€์งœ์—ฐ๊ตฌ์†Œ ํŒ€์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Who can review? (Final) <!-- 2. ๊ฐ€์งœ์—ฐ๊ตฌ์†Œ ํŒ€์›๋“ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ€ ๋๋‚œ ํ›„์—๋งŒ ํ—ˆ๊น…ํŽ˜์ด์Šค ์ง์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> @sgugger, @ArthurZucker, @eunseojo May you please review this PR?
05-02-2023 14:56:44
05-02-2023 14:56:44
`ko/notebook.mdx`๊ฐ€ https://github.com/huggingface/transformers/pull/22670 ์—์„œ ์ถ”๊ฐ€๋๊ธฐ ๋•Œ๋ฌธ์— ์˜ค๋ฅ˜๊ฐ€ ๋‚ฉ๋‹ˆ๋‹ค. ์‚ญ์ œํ•ด์ฃผ์‹œ๋ฉด ์•ˆ ๋‚ ๊ฑฐ์˜ˆ์š”<|||||>> `ko/notebook.mdx`๊ฐ€ #22670 ์—์„œ ์ถ”๊ฐ€๋๊ธฐ ๋•Œ๋ฌธ์— ์˜ค๋ฅ˜๊ฐ€ ๋‚ฉ๋‹ˆ๋‹ค. ์‚ญ์ œํ•ด์ฃผ์‹œ๋ฉด ์•ˆ ๋‚ ๊ฑฐ์˜ˆ์š”. ์ œ ๋‹ซ์€ ๋ธŒ๋žœ์น˜๋ฅผ ๋‹ค์‹œ ์—ด์–ด์„œ ํ…Œ์ŠคํŠธ ํ›„ ๋‹ค์‹œ ๋‚˜์—ฐ๋‹˜๊ป˜ pr ๋ณด๋‚ผ๊ฒŒ์š”. mergeํ•˜์ง€ ์•Š๊ณ ๋„ cherry-pickํ•˜๋Š” ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌ๋ฉด squashํ•  ๋•Œ ๋” ๊ฐ„ํŽธํ•œ๋ฐ, ๊ทธ๊ฑด ๋‚˜์ค‘์— ํ•ด๋ณด์ฃ . `notebook`์ด ๋ฌธ์ œ๊ฐ€ ๋œ๋‹ค๋Š” ๋ถ€๋ถ„๊นŒ์ง€๋Š” ํŒŒ์•…ํ•˜๊ณ  ํ—ค๋งค๊ณ  ์žˆ์—ˆ๋„ค์š”.. ๊ฐ์‚ฌํ•ฉ๋‹ˆ๋‹ค!! ๋ชฉ์ฐจ ์ˆ˜์ •ํ•˜๋Š”๊ฒŒ ์‰ฝ์ง€ ์•Š๋„ค์š” ๐Ÿ˜‚ ๋‚ด์ผ ์ œ ๋ธŒ๋žœ์น˜์— ๋นŒ๋“œ ์˜ค๋ฅ˜ ์—†๋Š”์ง€ ํ™•์ธํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค~<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> @MKhalusova Could you double-check this matches your latest changes? > Thanks. From what I can see it does match the changes, the structure is the same as in my changes. I can't verify the translation of the titles that were renamed, but it seems to be matching too. <|||||>> > @MKhalusova Could you double-check this matches your latest changes? > > Thanks. > > From what I can see it does match the changes, the structure is the same as in my changes. I can't verify the translation of the titles that were renamed, but it seems to be matching too. Sorry that my reply is a little bit late. Not all renamed titles are translated in here. Some titles have been renamed, but some are not (e.g. โ€œGeneral usageโ€ has been renamed to โ€œDeveloper Guidesโ€) Since changing title affects every document, [Pseudo Lab team](https://github.com/Pseudo-Lab) and I are going to translate all the renamed titles as we keep translating docs to Korean!
transformers
23,111
closed
fix resume fsdp
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes [# 23034](https://github.com/huggingface/transformers/issues/23034) When training a model with FSDP, the checkpoint is not saved and loaded correctly. Only rank 0's optimizer state dict is saved. This PR fixes this issue. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @pacman100 <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-02-2023 14:15:16
05-02-2023 14:15:16
_The documentation is not available anymore as the PR was closed or merged._<|||||>Please run `make style` and `make quality` to fix the quality issues<|||||>I have fixed the issues. The optimizer saving had no problems. For using [scatter_full_optim_state_dict](https://pytorch.org/docs/stable/fsdp.html#torch.distributed.fsdp.FullyShardedDataParallel.scatter_full_optim_state_dict), indeed loading on rank 0 is enough, which can save CPU memory usage.<|||||>cc @sgugger for a second look<|||||>thanks for the fix!
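As a rough sketch of the resume path discussed in the review (illustrative, not the exact Trainer code): the full optimizer state dict is loaded on rank 0 only and then scattered to the sharded optimizers on every rank via `scatter_full_optim_state_dict`:

```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


def load_fsdp_optimizer_state(optimizer, model, optim_path):
    # Only rank 0 materialises the full (unsharded) optimizer state in CPU memory.
    full_osd = torch.load(optim_path, map_location="cpu") if dist.get_rank() == 0 else None
    # Scatter shards of the state to every rank's optimizer.
    sharded_osd = FSDP.scatter_full_optim_state_dict(full_osd, model)
    optimizer.load_state_dict(sharded_osd)
```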
transformers
23,110
closed
[ONNX] Sam fix
# What does this PR do? This PR provides a few changes to make the ONNX export work for `SamModel`.
05-02-2023 13:03:18
05-02-2023 13:03:18
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,109
closed
Add head_mask for llama
# What does this PR do? Support inputting a head_mask to LLaMA's forward like other models. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-02-2023 12:53:11
05-02-2023 12:53:11
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23109). All of your documentation changes will be reflected on that endpoint.<|||||>cc @ArthurZucker and @younesbelkada
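For illustration, here is a sketch of what passing a head mask could look like once this change is in. The checkpoint path is a placeholder, and the shape convention below (one row per layer, one column per attention head, with 1 = keep and 0 = mask) is the one used by other models that already accept `head_mask`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "/path/to/llama-checkpoint"  # hypothetical local checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

head_mask = torch.ones(model.config.num_hidden_layers, model.config.num_attention_heads)
head_mask[0, :4] = 0.0  # e.g. silence the first four heads of layer 0

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model(**inputs, head_mask=head_mask)
```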
transformers
23,108
closed
[`Flava`] Fix flava `torch.distributed.nn.functional import all_gather` issue
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/23047 Flava had some code that was copy-pasted from the original repository: https://github.com/facebookresearch/multimodal/blob/c6f6e44ec6e0addfdf01695db860a6febeb2d88b/torchmultimodal/utils/distributed.py#L12 From my understanding, it seems that there are two versions of `all_gather`: - `torch.distributed.nn.functional.all_gather`, which backpropagates the gradients to all workers - `torch.distributed.all_gather`, which will backpropagate the gradients to the current worker only cc @sgugger
05-02-2023 12:42:08
05-02-2023 12:42:08
_The documentation is not available anymore as the PR was closed or merged._
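As an illustration of the difference described above (a sketch that assumes an already-initialised process group; it is not the Flava code itself): only the `torch.distributed.nn.functional` variant keeps the autograd graph across workers, while the plain `torch.distributed.all_gather` returns detached copies, which is why gather-based contrastive losses usually re-insert the local tensor:

```python
import torch
import torch.distributed as dist
from torch.distributed.nn.functional import all_gather as all_gather_with_grad


def gather_keep_grads(embeddings: torch.Tensor) -> torch.Tensor:
    # Gradients flow back to the embeddings on every worker.
    return torch.cat(all_gather_with_grad(embeddings), dim=0)


def gather_local_grads_only(embeddings: torch.Tensor) -> torch.Tensor:
    # Remote shards are plain copies; only the re-inserted local shard stays differentiable.
    gathered = [torch.zeros_like(embeddings) for _ in range(dist.get_world_size())]
    dist.all_gather(gathered, embeddings)
    gathered[dist.get_rank()] = embeddings
    return torch.cat(gathered, dim=0)
```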
transformers
23,107
closed
[docs] Text to speech task guide
This PR adds a multimodal task guide on fine-tuning SpeechT5 for text-to-speech. It's based on a wonderfully [detailed notebook](https://colab.research.google.com/drive/1i7I5pzBcU3WDFarDnzweIj4-sVVoIUFJ#scrollTo=uELTb9CcOaCp) by @hollance.
05-02-2023 12:37:43
05-02-2023 12:37:43
_The documentation is not available anymore as the PR was closed or merged._<|||||>PR with images: https://huggingface.co/datasets/huggingface/documentation-images/discussions/86
transformers
23,106
closed
๐ŸŒ [i18n-KO] Translated `asr.mdx` to Korean
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Translated the `asr.mdx` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) @sgugger, @ArthurZucker, @eunseojo May you please review this PR? <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-02-2023 12:00:03
05-02-2023 12:00:03
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hope you have a great week! Could you please review this PR? @sgugger, @ArthurZucker, @eunseojo
transformers
23,105
closed
Enable using a custom tracer in FX `symbolic_trace`
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR enables to specify the tracer to use when using `symbolic_trace` and Torch FX. For instance, this can be useful when the user wants a different tracing granularity to not enter some specific modules (e.g. see https://github.com/huggingface/optimum-habana/pull/223). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
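To illustrate the intended usage, here is a minimal sketch of tracing with a coarser granularity by treating attention modules as leaves. The `tracer_cls` keyword is my assumption for the argument this PR introduces; the exact name may differ, and the class/checkpoint names below are only placeholders.

```python
from transformers import AutoModelForSequenceClassification
from transformers.utils.fx import HFTracer, symbolic_trace

class ShallowTracer(HFTracer):
    # Coarser tracing granularity: do not trace inside attention modules.
    def is_leaf_module(self, module, module_qualified_name):
        if module.__class__.__name__.endswith("Attention"):
            return True
        return super().is_leaf_module(module, module_qualified_name)

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# `tracer_cls` is an assumed keyword name for the new argument added by this PR.
traced = symbolic_trace(
    model,
    input_names=["input_ids", "attention_mask"],
    tracer_cls=ShallowTracer,
)
print(traced.graph)
```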
05-02-2023 10:28:59
05-02-2023 10:28:59
_The documentation is not available anymore as the PR was closed or merged._<|||||>Pinging @sgugger for final approval.
transformers
23,104
closed
Add focalnet backbone
# What does this PR do? Adds `FocalNetBackbone` class to be used by X-Decoder and possibly other frameworks as FocalNet was published fairly recently. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X ] Did you write any new necessary tests?
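For context, a minimal sketch of how the new backbone class would typically be consumed. The checkpoint id and stage names are assumptions on my part and may differ from what ends up on the Hub.

```python
import torch
from transformers import FocalNetBackbone

# Assumed checkpoint id; any FocalNet checkpoint with backbone weights should work.
backbone = FocalNetBackbone.from_pretrained(
    "microsoft/focalnet-tiny",
    out_features=["stage1", "stage2", "stage3", "stage4"],  # assumed stage naming
)

pixel_values = torch.rand(1, 3, 224, 224)  # dummy batch of one RGB image
with torch.no_grad():
    outputs = backbone(pixel_values)

# One feature map per requested stage, ready to feed into a detection/segmentation head.
for feature_map in outputs.feature_maps:
    print(feature_map.shape)
```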
05-02-2023 10:00:13
05-02-2023 10:00:13
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,103
closed
Sentences tokenized by LLaMA's tokenizer have `bos` tokens but do not have `eos` tokens.
### System Info transformers version: main Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.10 Python version: 3.8.16 Huggingface_hub version: 0.11.1 PyTorch version (GPU?): 1.12.1 (True) Tensorflow version (GPU?): not installed (NA) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed Using GPU in script?: Yes Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @sgug ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I find that the batches tokenized by LLaMA's tokenizer have `bos` tokens but do not have `eos` tokens, which causes my finetuned LLaMA to not stop properly during inference. Is this a bug, or are there reasons for this practice? https://github.com/huggingface/transformers/blob/b8648290d2d97e7c7dbccd2d4a6a4f44e70d3b63/src/transformers/models/llama/tokenization_llama.py#L72 ### Expected behavior An explanation for my confusion.
05-02-2023 08:56:42
05-02-2023 08:56:42
Same issue<|||||>Hey! Sorry for the late reply, and thanks for opening an issue 🤗 This is expected, because the official repository's default behaviour is the same. That is because during inference you don't need the `eos` token to be added. You should initialise the tokenizer with the `add_eos_token` argument set to `True`. Tell me if this does not address your issue <|||||>Oh, I got it. Thanks for your reply! @ArthurZucker
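For readers hitting the same problem, a minimal sketch of what "initialise the tokenizer with the argument set to `True`" looks like in practice; the checkpoint path is a placeholder for your own converted weights.

```python
from transformers import LlamaTokenizer

# "path/to/llama" is a placeholder for your converted LLaMA checkpoint.
tokenizer = LlamaTokenizer.from_pretrained("path/to/llama", add_eos_token=True)

ids = tokenizer("Hello world").input_ids
# With add_eos_token=True the encoded ids end with tokenizer.eos_token_id,
# which is what you want when building fine-tuning examples that should
# teach the model to stop.
print(ids[-1] == tokenizer.eos_token_id)
```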
transformers
23,102
closed
Strictly Generate JSON
### Feature request It would be nice if we could force a generative LM to only produce JSON with a specific schema. I was thinking the end user should be able to do something as simple as `model.generate_json(input_ids, tokenizer, schema=MyDataclass)`. More specifically, `MyDataclass` should be any dataclass made up of the following types: int, float, bool, str, list, option, enum (treated as strings in the JSON), or another dataclass following these rules. I've already gone ahead and done a proof of concept showing this is possible [here](https://github.com/Ryul0rd/llm-json-output). I haven't put it in a PR because the code is a mess and it has very poor performance (10x or more normal inference time) but it otherwise works. The performance aspect in particular is an issue because I did some profiling and there doesn't seem to be anything I'm doing that's obviously inefficient so this might just be a limitation of Python and using a language like C++ or Rust might be necessary. I'm not sure how the maintainers feel about adding either of these languages to the library. Another feature my implementation doesn't currently support is adding additional constraints beyond types. eg. min or max length on strings or arrays, min or max values on ints and floats. The max length on strings and arrays is particularly important because without that you can't guarantee the full JSON string will fit in your output token budget. ### Motivation People are starting to build LLMs into larger apps by creating plugins/chains/agents etc. In many of these cases, we want the LM to produce some structured output of some kind that we can then parse. JSON is a common choice here but simply prompting a model and hoping for the best doesn't always work exactly the way you want it to. The black box nature of ANNs means failures are hard to predict and debug. Forcing the model to output valid JSON with a certain schema would improve the ability of developers to reason about the space of the model output. This also has the rather nice property that you can get reasonable output from models that aren't instruct/assistant finetuned. Check out this example from GPT2. The first 3 lines are the prompt and the fourth is generated: ``` Plain Text: Max is a 37 year old guy. His hobbies include gaming and martial arts. JSON: {"name":"Max","age":37,"is_male":true,"email_address":null} ``` Without forcing it to adhere to the JSON as output, GPT2 produces the following: ``` Plain Text: Max is a 37 year old guy. His hobbies include gaming and martial arts. JSON: Max is a 37 year old guy. His hobbies include gaming and martial arts. JSON: Max is a 37 year old guy. His hobbies include gaming and martial arts. JSON: Max is a 37 year old guy. His hobbies include gaming and martial arts. JSON: Max is a 37 ``` ### Your contribution I'd be open to writing the final code and making a PR once I get people's thoughts/advice depending on what people think of the performance/language issue. I've been learning Rust recently but am not an expert.
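To make the mechanism concrete, here is a toy sketch of the underlying idea: masking out every token that is not on an allow-list with a custom `LogitsProcessor`. This is only the crudest character-level version of the constraint, not the schema-aware state machine described above, and all names in it are mine.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LogitsProcessor, LogitsProcessorList

class WhitelistLogitsProcessor(LogitsProcessor):
    """Toy constraint: only ever allow a fixed set of token ids."""
    def __init__(self, allowed_token_ids):
        self.allowed = torch.tensor(sorted(allowed_token_ids), dtype=torch.long)

    def __call__(self, input_ids, scores):
        mask = torch.full_like(scores, float("-inf"))
        mask[:, self.allowed] = 0.0
        return scores + mask

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Tokens whose decoded text only uses characters that can appear in very simple JSON.
json_chars = set('{}[]":,.+-0123456789 eEtrufalsn')
allowed = [i for i in range(len(tokenizer)) if set(tokenizer.decode([i])) <= json_chars]

prompt = 'Plain Text: Max is a 37 year old guy.\nJSON: '
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=30,
    logits_processor=LogitsProcessorList([WhitelistLogitsProcessor(allowed)]),
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:]))
```

A real implementation would track the decoding state (which key is being filled, whether a string is open, and so on) and recompute the allowed set at every step, which is exactly where the performance cost discussed above comes from.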
05-02-2023 05:55:28
05-02-2023 05:55:28
Hi, This seems very similar to this repo: https://github.com/1rgs/jsonformer. It's a wrapper around HF Transformer models (specifically, `xxxForCausalLM` models) to only fill in the values, not the keys of JSON schemas.<|||||>Thanks for pointing out that repo to me as I was unaware. It does claim to do exactly what I want but does have some issues currently, including both a failure to actually generate correct JSON reliably (The example I used above made it throw an error) and performance issues similar to my own. I took a look at how they were approaching the problem and I don't think they can fix their performance issues without a complete rewrite either. I think I'll take a stab at the Rust rewrite and see how it goes. Would having Rust in transformers be an issue? If so I can just make my own library but I do think it would be better if the feature had the visibility boost of actually being in transformers.<|||||>Since the fast core of the tokenizers library is also implemented in Rust it shouldn't be an issue to have your implementation in Rust as well. Btw: Did you take a look at [Kor](https://github.com/eyurtsev/)? It tries a similar thing within langchain...<|||||>I did look at Kor. My issue with it is that it's just using the "prompt and hope for the best" approach rather than actually providing any sort of guarantees about the output like jsonformer and my approach are doing.<|||||>Yes it's more like a prompt template engine. One cool feature is that they support pydantic models.<|||||>@Ryul0rd author of https://github.com/1rgs/jsonformer here, ended up fixing a few bugs and perf issues over the last day. Can you try once again? If it doesn't work can you send me a repro case? Thanks! Example notebook here: https://colab.research.google.com/github/1rgs/jsonformer/blob/main/Jsonformer_example.ipynb<|||||>There are now several libraries that support various methods of getting structured output from LMs. In addition to jsonformer, there's also [guidance](https://github.com/microsoft/guidance) and [LMQL](https://lmql.ai/). I'd say all 3 are still in a pretty rough/early stage at the moment but they've all got slightly different ideas about how to do things so. It still might be worth adding something like this to transformers at some point but it might be worth holding off a bit until we see which ideas work out best.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,101
closed
Update perf_train_gpu_one.mdx
# What does this PR do? Minor changes - Corrected a word's spelling. Changed markdown syntax for heading and for URL to be seen as a link. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger, @stevhliu and @MKhalusova <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-02-2023 05:50:13
05-02-2023 05:50:13
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23101). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,100
closed
gen_kwargs in Seq2SeqTrainer
https://github.com/huggingface/transformers/blob/b8648290d2d97e7c7dbccd2d4a6a4f44e70d3b63/src/transformers/trainer_seq2seq.py#L257 Hi, I'm trying to use the Seq2SeqTrainer with generation in the evaluation_loop and it looks like the generation kwargs aren't being properly passed to the prediction step. I'm having to manually set `self._gen_kwargs`, as this is not initialised anywhere else in the evaluation_loop. It is initialised in the `evaluate` call, but that call uses the `Trainer` evaluate implementation, which lacks generation. Am I missing something?
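For reference, a minimal, self-contained sketch of the path that does populate `_gen_kwargs`: `predict_with_generate=True` plus generation kwargs passed to `evaluate()`. The model and dataset here are tiny stand-ins only.

```python
from datasets import Dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer, DataCollatorForSeq2Seq,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def encode(batch):
    features = tokenizer(batch["src"], truncation=True)
    features["labels"] = tokenizer(text_target=batch["tgt"], truncation=True)["input_ids"]
    return features

eval_ds = Dataset.from_dict(
    {"src": ["translate English to German: Hello"], "tgt": ["Hallo"]}
).map(encode, batched=True, remove_columns=["src", "tgt"])

args = Seq2SeqTrainingArguments(output_dir="tmp_out", predict_with_generate=True)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)

# These kwargs are stored as trainer._gen_kwargs before the evaluation loop runs,
# so prediction_step picks them up when calling model.generate().
metrics = trainer.evaluate(max_length=32, num_beams=2)
print(metrics)
```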
05-02-2023 04:57:54
05-02-2023 04:57:54
cc @gante <|||||>My bad, I realised that the `preprocess_logits_for_metrics` function I found [here](https://discuss.huggingface.co/t/cuda-out-of-memory-when-using-trainer-with-compute-metrics/2941/13) was truncating the generation output.
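For anyone else landing here, the memory-saving hook from that forum thread looks roughly like the sketch below. With `predict_with_generate=True` the tensors handed to this hook are already generated token ids rather than logits, so the `argmax` over the last dimension collapses them, which matches the truncation described above.

```python
# Memory-saving hook that is fine for plain logits, but NOT when the trainer
# returns generated token ids (predict_with_generate=True): the argmax over the
# last dimension then collapses the generated sequence.
def preprocess_logits_for_metrics(logits, labels):
    if isinstance(logits, tuple):
        # Some models return extra tensors (e.g. past_key_values) next to the logits.
        logits = logits[0]
    return logits.argmax(dim=-1)
```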
transformers
23,099
closed
learning rate resets on resumption from checkpoint
### System Info - `transformers` version: 4.26.1 - Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.12 - Huggingface_hub version: 0.13.0 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @sgugger @stas00 @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``Steps to reproduce the behaviour : 1. Training Script -- ``` if __name__ == "__main__": model_path = '/checkpoint-38000' model = AutoModelForCausalLM.from_pretrained(model_path) tokenizer = AutoTokenizer.from_pretrained(model_path) tokenizer.pad_token_id = tokenizer.eos_token_id train_path = '/train/*' train_data = glob(train_path) val_path = 'val/*' val_data = glob(val_path) dataset = load_dataset("json", data_files = {"train": train_data, "validation" : val_data}) dataset = dataset.map(transform, batched=True, remove_columns = ["id" ,"tokens"]) train_dataset = dataset["train"] val_dataset = dataset["validation"] parser = HfArgumentParser(TrainingArguments) parser.add_argument("--model_name_or_dir") training_args, args = parser.parse_args_into_dataclasses() transformers.logging.set_verbosity_debug() trainer = Trainer( model, training_args, train_dataset=train_dataset, eval_dataset=val_dataset, tokenizer=tokenizer, data_collator=DataCollatorForTokenClassification(tokenizer, padding='longest'), compute_metrics=None, callbacks = [TensorBoardCallback()] ) if trainer.is_world_process_zero(): print(dataset) trainer.pop_callback(MLflowCallback) if training_args.do_train: if trainer.is_world_process_zero(): print("Training...") start = time.time() trainer.train(model_path=model_path) mlflow.log_metric( "time/epoch", (time.time() - start) / 60 / training_args.num_train_epochs ) ``` 2. Params -- ```export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5 accelerate launch train_cerebras_checkpoint.py \ --output_dir output_dir \ --num_train_epochs 30 \ --do_train --per_device_train_batch_size 10 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap "GPT2Block" \ --logging_steps 1 \ --save_strategy "steps" \ --save_steps 2000 \ --fp16 \ --gradient_checkpointing true ``` 3. 
trainer_state.json (only last few epochs shown)-- ``` { "epoch": 13.47, "learning_rate": 5.270405303307255e-06, "loss": 2.4699, "step": 37990 }, { "epoch": 13.47, "learning_rate": 5.2691867124856815e-06, "loss": 2.4614, "step": 37991 }, { "epoch": 13.47, "learning_rate": 5.267968121664108e-06, "loss": 2.3527, "step": 37992 }, { "epoch": 13.47, "learning_rate": 5.266749530842534e-06, "loss": 2.42, "step": 37993 }, { "epoch": 13.47, "learning_rate": 5.2655309400209605e-06, "loss": 2.6322, "step": 37994 }, { "epoch": 13.47, "learning_rate": 5.264312349199386e-06, "loss": 2.566, "step": 37995 }, { "epoch": 13.47, "learning_rate": 5.263093758377812e-06, "loss": 2.5026, "step": 37996 }, { "epoch": 13.47, "learning_rate": 5.261875167556239e-06, "loss": 2.6096, "step": 37997 }, { "epoch": 13.47, "learning_rate": 5.260656576734664e-06, "loss": 2.7513, "step": 37998 }, { "epoch": 13.47, "learning_rate": 5.2594379859130905e-06, "loss": 2.5066, "step": 37999 }, { "epoch": 13.48, "learning_rate": 5.258219395091516e-06, "loss": 2.7268, "step": 38000 } ``` 4. Learning rate after resumption -- ``` 'loss': 2.4654, 'learning_rate': 2.754905437352246e-05, 'epoch': 13.48} {'loss': 2.8525, 'learning_rate': 2.754905437352246e-05, 'epoch': 13.48} {'loss': 2.9781, 'learning_rate': 2.7548463356973997e-05, 'epoch': 13.48} {'loss': 2.8067, 'learning_rate': 2.7547872340425534e-05, 'epoch': 13.48} ``` ### Expected behavior 1. Learning rate should resume from the last stored LR. 2. The loss seems higher post resumption as compared to before (Maybe due to LR?) 3. I also notice that the time per iteration has almost double (19s to 38s), even though I am using the same number of GPUs
05-02-2023 03:55:28
05-02-2023 03:55:28
When doing `model_path = '=/checkpoint-38000'` you are not resuming training from the checkpoint, you are starting a new fresh training with the model of this checkpoint. To resume training from a checkpoint, you need to use the [`resume_from_checkpoint` argument](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Trainer.train.resume_from_checkpoint) of `Trainer.train`.<|||||>@sgugger not sure what I am missing : Code : ``` model_path = 'output_dir/checkpoint-38000' model = AutoModelForCausalLM.from_pretrained(model_path) tokenizer = AutoTokenizer.from_pretrained(model_path) tokenizer.pad_token_id = tokenizer.eos_token_id train_path = '/train/*' train_data = glob(train_path) val_path = '/val/*' val_data = glob(val_path) dataset = load_dataset("json", data_files = {"train": train_data, "validation" : val_data}) dataset = dataset.map(transform, batched=True, remove_columns = ["id" ,"tokens"]) train_dataset = dataset["train"] val_dataset = dataset["validation"] print('Training data length', len(train_dataset)) print('Validation data length', len(val_dataset)) parser = HfArgumentParser(TrainingArguments) parser.add_argument("--model_name_or_dir") training_args, args = parser.parse_args_into_dataclasses() transformers.logging.set_verbosity_debug() trainer = Trainer( args=training_args, model=model, tokenizer = tokenizer, train_dataset=train_dataset, eval_dataset=val_dataset, data_collator=DataCollatorForTokenClassification(tokenizer, padding='longest'), compute_metrics=None, callbacks = [TensorBoardCallback()] ) if trainer.is_world_process_zero(): print(dataset) trainer.pop_callback(MLflowCallback) if training_args.do_train: if trainer.is_world_process_zero(): print("Training...") start = time.time() trainer.train(resume_from_checkpoint=True) mlflow.log_metric( "time/epoch", (time.time() - start) / 60 / training_args.num_train_epochs ) ``` Script : ` accelerate launch train_cerebras_checkpoint.py \ --resume_from_checkpoint True \ --output_dir /output_dir \ --num_train_epochs 30 \ --do_train --per_device_train_batch_size 10 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap "GPT2Block" \ --logging_steps 1 \ --save_strategy "steps" \ --save_steps 2000 \ --fp16 \ --gradient_checkpointing true ` In the logs, I see the following -- `Continuing training from checkpoint, will skip to saved global_step` However the learning rate still resets. I expect it to be around `e-06` but it is at `e-05` I can confirm that `output_dir` contains `checkpoint-38000`<|||||>I also debugged and noticed that the execution goes through here - https://github.com/huggingface/transformers/blob/v4.26.1/src/transformers/trainer.py#L2333 Furthermore, I checked the last_lr from my optimizer and it seems to be as I see in the training_state.json : ``` path = '/output_dir/checkpoint-38000/scheduler.pt' x = torch.load(path) x['_last_lr'] [5.258219395091516e-06, 5.258219395091516e-06] ``` Not able to understand why therefore the LR starts from `e-05` when I resume from checkpoint. @sgugger<|||||>@agneet42 Is there a typo here: ``` model_path = 'output_dir/checkpoint-38000' ``` Should be below or are you running under root path? ``` model_path = '/output_dir/checkpoint-38000' ```<|||||>@sgugger Hi, I'm using `Trainer.train(last_checkpoint_path)` but still got lr reset. Here is some info might help. 
- checkpoint folder files ```text - config.json - generation_config.json - optimizer.pt - pytorch_model.bin - rng_state.pth - scaler.pt - trainer_state.json - training_args.bin ``` - Code: Load from checkpoint ```python args.from_checkpoint = "./checkpoint-30000" # Define the training arguments training_args = TrainingArguments( output_dir=args.save_dir, overwrite_output_dir=True, num_train_epochs=2, per_device_train_batch_size=12, per_device_eval_batch_size=12, gradient_accumulation_steps=3, evaluation_strategy='steps', eval_steps=40000, save_steps=10000, logging_steps=100, learning_rate=5e-5, warmup_steps=1000, fp16=True, logging_dir='./logs' ) # Create a trainer instance trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, data_collator=lambda data: {'input_ids': torch.stack([f[0] for f in data]), 'attention_mask': torch.stack([f[1] for f in data]), 'labels': torch.stack([f[0] for f in data])}) # Fine-tune the model trainer.train(args.from_checkpoint) ``` Log: Previous training logs ```text {'loss': 0.1625, 'learning_rate': 3.67605680426363e-06, 'epoch': 0.93} {'loss': 0.1627, 'learning_rate': 3.3450296269323714e-06, 'epoch': 0.94} {'loss': 0.1618, 'learning_rate': 3.0140024496011123e-06, 'epoch': 0.94} {'loss': 0.1617, 'learning_rate': 2.6829752722698537e-06, 'epoch': 0.95} {'loss': 0.1623, 'learning_rate': 2.3519480949385943e-06, 'epoch': 0.95} {'loss': 0.1608, 'learning_rate': 2.0209209176073357e-06, 'epoch': 0.96} ``` Log: After load from checkpoint logs ```text {'loss': 0.1715, 'learning_rate': 2.631964570647042e-05, 'epoch': 0.96} {'loss': 0.1757, 'learning_rate': 2.6238236347650525e-05, 'epoch': 0.97} {'loss': 0.1758, 'learning_rate': 2.6156826988830634e-05, 'epoch': 0.97} {'loss': 0.1766, 'learning_rate': 2.6075417630010747e-05, 'epoch': 0.97} {'loss': 0.1754, 'learning_rate': 2.5994008271190857e-05, 'epoch': 0.98} {'loss': 0.1789, 'learning_rate': 2.5912598912370966e-05, 'epoch': 0.98} {'loss': 0.1797, 'learning_rate': 2.583118955355108e-05, 'epoch': 0.98} {'loss': 0.18, 'learning_rate': 2.5750594288319384e-05, 'epoch': 0.99} ``` <|||||>@wmhcqw I still don't have any code I can reproduce on this issue. To be able to reproduce the code needs to include the dataset/model creation as whole as the creation of the checkpoint from which you are then resuming training. Resuming training is tested in our CI and there is no issue of learning rate resetting there, so the examples of this situation we have on our side work. To debug what is particular to the bug you are encountering, I need to be able to reproduce it.<|||||>@sgugger I think I've found the problem. Here's the sample code to reproduce this issue. 
File: train.py ```python import os import argparse from tqdm import tqdm import torch import torch.nn as nn from torch.utils.data import DataLoader, Dataset from transformers import GPTNeoConfig, GPTNeoForCausalLM, GPT2Tokenizer, TrainingArguments, Trainer import random from random import randint class DummyDataset(Dataset): def __init__(self, words, tokenizer): self.tokenizer = tokenizer self.data = { "input_ids": [], "attention_mask": [] } for word in tqdm(words): res = tokenizer(word) self.data["input_ids"].append(torch.LongTensor(res["input_ids"])) self.data["attention_mask"].append(torch.LongTensor(res["attention_mask"])) def __len__(self): return len(self.data['input_ids']) def __getitem__(self, idx): input_ids = self.data['input_ids'][idx] attention_mask = self.data['attention_mask'][idx] return input_ids, attention_mask if __name__ == "__main__": tokenizer = GPT2Tokenizer.from_pretrained("gpt2") # print(tokenizer("Hello World")) names=["We","I","They","He","She","Jack","Jim","Rose","You"] verbs=["was", "is", "are", "were"] nouns=["playing a game", "watching television", "talking", "dancing", "speaking", "playing basketball", "eating dinner"] random.seed(42) train_sens = [] for i in range(1000): train_sens.append(names[randint(0,len(names)-1)]+" "+verbs[randint(0,len(verbs)-1)]+" "+nouns[randint(0,len(nouns)-1)]) eval_sens = [] for i in range(100): eval_sens.append(names[randint(0,len(names)-1)]+" "+verbs[randint(0,len(verbs)-1)]+" "+nouns[randint(0,len(nouns)-1)]) train_dataset = DummyDataset(train_sens, tokenizer) eval_dataset = DummyDataset(eval_sens, tokenizer) config = GPTNeoConfig( vocab_size=len(tokenizer.get_vocab()), n_positions=1024, n_ctx=2048, n_embd=768, n_layer=1, n_head=1, intermediate_size=3072 ) model = GPTNeoForCausalLM(config).cuda() training_args = TrainingArguments( output_dir="./dummpy_model", overwrite_output_dir=True, num_train_epochs=1, # 2 per_device_train_batch_size=1, per_device_eval_batch_size=1, gradient_accumulation_steps=1, evaluation_strategy='steps', eval_steps=1000, save_steps=100, logging_steps=10, learning_rate=5e-5, warmup_steps=10, fp16=True, logging_dir='./logs' ) trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, data_collator=lambda data: {'input_ids': torch.stack([f[0] for f in data]), 'attention_mask': torch.stack([f[1] for f in data]), 'labels': torch.stack([f[0] for f in data])}) trainer.train() # trainer.train("./dummpy_model/checkpoint-1000") ``` Steps to reproduce: 1. run `python train.py`, and you will get a checkpoint folder named 'checkpoint-1000' 2. change the `train.py` code - change the training_args num_train_epcohs from 1->2 (**This is the reason, the total steps changed.**) - comment trainer.train() - uncomment trainer.train("./dummpy_model/checkpoint-1000") 3. run `python train.py` again, training resume from step 1000 but with reset lr. Logs: Step1. `python train.py` ```text {'loss': 1.6059, 'learning_rate': 2.7878787878787883e-05, 'epoch': 0.45} {'loss': 1.6617, 'learning_rate': 2.7373737373737374e-05, 'epoch': 0.46} {'loss': 1.4485, 'learning_rate': 2.686868686868687e-05, 'epoch': 0.47} {'loss': 1.5028, 'learning_rate': 2.636363636363636e-05, 'epoch': 0.48} {'loss': 1.5889, 'learning_rate': 2.585858585858586e-05, 'epoch': 0.49} {'loss': 1.3763, 'learning_rate': 2.5353535353535356e-05, 'epoch': 0.5} {'loss': 1.5049, 'learning_rate': 2.4848484848484847e-05, 'epoch': 0.51} ... 
{'loss': 1.2957, 'learning_rate': 6.060606060606061e-07, 'epoch': 0.99} {'loss': 1.406, 'learning_rate': 1.0101010101010101e-07, 'epoch': 1.0} {'eval_loss': 1.3103933334350586, 'eval_runtime': 5.7138, 'eval_samples_per_second': 17.502, 'eval_steps_per_second': 17.502, 'epoch': 1.0} {'train_runtime': 497.58, 'train_samples_per_second': 2.01, 'train_steps_per_second': 2.01, 'train_loss': 1.8285735349655152, 'epoch': 1.0} ``` Step3. `python train.py` ```text {'loss': 1.5102, 'learning_rate': 2.492462311557789e-05, 'epoch': 1.01} {'loss': 1.3294, 'learning_rate': 2.4673366834170854e-05, 'epoch': 1.02} {'loss': 1.511, 'learning_rate': 2.442211055276382e-05, 'epoch': 1.03} {'loss': 1.4702, 'learning_rate': 2.4170854271356786e-05, 'epoch': 1.04} {'loss': 1.5096, 'learning_rate': 2.391959798994975e-05, 'epoch': 1.05} {'loss': 1.4866, 'learning_rate': 2.3668341708542715e-05, 'epoch': 1.06} {'loss': 1.4644, 'learning_rate': 2.3417085427135678e-05, 'epoch': 1.07} {'loss': 1.2187, 'learning_rate': 2.3165829145728644e-05, 'epoch': 1.08} {'loss': 1.5627, 'learning_rate': 2.291457286432161e-05, 'epoch': 1.09} {'loss': 1.5059, 'learning_rate': 2.2663316582914573e-05, 'epoch': 1.1} ``` As you can see, the learning rate is reset to where epoch 0.5, so I think the learning rate is not saved but the steps, and learning rate is calculated according to the total steps. This is not a bug. I thought the learning rate is saved but I was wrong. Changing training arguments before loading from checkpoint is not an expected behaviour, I think. <|||||>> 2. change the train.py code change the training_args num_train_epcohs from 1->2 (This is the reason, the total steps changed.) You cannot change a single a training argument when resuming training and expect the training to resume properly. This is the source of the bug, not the `Trainer` itself.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@sgugger I am facing related problem, though not pertaining to learning rate but relates to resuming the training with change in GPUs available. I have written the detailed issue [here (huggingface forums)](https://discuss.huggingface.co/t/skipped-batches-do-not-consider-distributed-training/43832). Would really appreciate if you can help! Thanks and regards!
transformers
23,098
closed
fix: Fix incorrect config loading in AutoTokenizer.from_pretrained
# What does this PR do? The description in argument `subfolder` of `AutoTokenizer.from_pretrained` doesn't work properly. > subfolder (str, optional) โ€” In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for facebook/rag-token-base), specify it here. The code the error occurs ```py from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("facebook/rag-token-base", subfolder="generator_tokenizer") ``` The error message ``` None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used. Traceback (most recent call last): File "/Users/daehee/Workspace/Projects/transformers/main.py", line 4, in <module> tokenizer = AutoTokenizer.from_pretrained("facebook/rag-token-base", subfolder="generator_tokenizer") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/daehee/Workspace/Projects/transformers/src/transformers/models/auto/tokenization_auto.py", line 657, in from_pretrained config = AutoConfig.from_pretrained( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/daehee/Workspace/Projects/transformers/src/transformers/models/auto/configuration_auto.py", line 922, in from_pretrained config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/daehee/Workspace/Projects/transformers/src/transformers/configuration_utils.py", line 574, in get_config_dict config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/daehee/Workspace/Projects/transformers/src/transformers/configuration_utils.py", line 629, in _get_config_dict resolved_config_file = cached_file( ^^^^^^^^^^^^ File "/Users/daehee/Workspace/Projects/transformers/src/transformers/utils/hub.py", line 404, in cached_file raise EnvironmentError(f"Could not locate {full_filename} inside {path_or_repo_id}.") OSError: Could not locate generator_tokenizer/config.json inside facebook/rag-token-base. ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-02-2023 03:39:57
05-02-2023 03:39:57
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23098). All of your documentation changes will be reflected on that endpoint.<|||||>No this fix is incorrect. The problem lies in the checkpoint you are using, which does not specify the tokenizer class in the `tokenizer_config.json` present in the subfolder. If this was done properly, this path would not be executed.<|||||>The current function finds `config.json` in subfolder not root if tokenizer class in `tokenizer_config.json` is not specified. So `AutoTokenizer.from_pretrained("facebook/rag-token-base", subfolder="generator_tokenizer")` fails since [facebook/rag-token-base/generator_tokenizer](https://huggingface.co/facebook/rag-token-base/tree/main/generator_tokenizer) doesn't have `config.json`. This PR makes the function find `config.json` in root not subfolder to run the example code described in the document. Or `config.json` has to be put in [facebook/rag-token-base/generator_tokenizer](https://huggingface.co/facebook/rag-token-base/tree/main/generator_tokenizer)?<|||||>As I said above, the function should not even attempt to find a `config.json`. It only does so because the tokenizer config is not right.<|||||>Then there are several problems in this. 1. Docs of [AutoTokenizer.from_pretrained](https://huggingface.co/docs/transformers/v4.28.1/en/model_doc/auto#transformers.AutoTokenizer.from_pretrained) and [PreTrainedTokenizerBase.from_pretrained](https://huggingface.co/docs/transformers/v4.28.1/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.from_pretrained) should be fixed. `subfolder` in the docs is decsribed with incorrect example [facebook/rag-token-base](https://huggingface.co/facebook/rag-token-base/tree/main/generator_tokenizer). It may cause confusion. 2. Correct error message has to be shown like "tokenizer class is not specified in tokenizer_config.json" instead of finding `config.json` Do I understand your comment properly?<|||||>No, there is one problem and it is that the tokenizer config file in that repo is wrong and should be fixed. There is nothing to change after that.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
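A workaround sketch while the checkpoint's `tokenizer_config.json` is missing its `tokenizer_class` entry: load through the concrete tokenizer class, which skips the `AutoConfig` lookup that fails above. This assumes the generator tokenizer in that repo is BART-compatible, which may not hold for every checkpoint.

```python
from transformers import BartTokenizer

# Assumption: the generator tokenizer of facebook/rag-token-base is BART-based,
# so the concrete class can load it without needing a config.json in the subfolder.
tokenizer = BartTokenizer.from_pretrained(
    "facebook/rag-token-base", subfolder="generator_tokenizer"
)
# (RagTokenizer.from_pretrained("facebook/rag-token-base").generator should give
# an equivalent tokenizer without touching subfolders directly.)
print(tokenizer.tokenize("Hello world"))
```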
transformers
23,097
closed
Sliding window pipeline with average of logits
# What does this PR do? This adds a variation on the existing `ChunkingPipeline` approach for handling strings with more than `model_max_length` tokens. After using the tokenizer to split the text into chunks (identically to how `ChunkingPipeline` does so), `SlidingWindowTokenClassificationPipeline` then averages the values of the logits for each token (across all sliding "windows" that happen to cover that token), and finally feeds those logits into the usual entity-extraction logic. The existing implementation of `TokenClassificationPipeline` instead runs entity extraction on each window separately, then takes the highest-scoring entity in case of any overlap. Implements #14631 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. **[Link](https://github.com/huggingface/transformers/issues/14631) to discussion** - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @Narsil
05-02-2023 01:23:57
05-02-2023 01:23:57
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23097). All of your documentation changes will be reflected on that endpoint.<|||||>Can you provide some kind of benchmarks in terms of benefits? If this is better in some form we can consider adding it to the pipelines. In general we try to avoid adding new pipelines which don't change the I/O. Also you can have your own pipeline code on the Hub directly: https://huggingface.co/docs/transformers/v4.28.1/en/main_classes/pipelines#pipeline-custom-code<|||||>@Narsil, @wigwit and I can work on trying to benchmark this (versus the existing approach) on an NER dataset. Also, do you think this functionality would actually be more at home inside the existing `TokenClassificationPipeline` (as an alternative to the existing extract-entities-then-resolve-overlaps approach)? It could be activated by the user passing `average_logits=True` as one of the `__init__()` parameters. I'm realizing that would probably make more sense than creating another pipeline class and "task" just for the sliding window behavior.<|||||>Thanks for your PR but this looks more like a pipeline that would benefit from living entirely on the Hub using the [custom pipeline](https://huggingface.co/docs/transformers/add_new_pipeline#share-your-pipeline-on-the-hub) API than being added into Transformers.<|||||>Should this be closed?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
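Since the suggestion is to ship this as custom pipeline code on the Hub, here is a rough skeleton of what that entails. The class and task names are mine, and the `postprocess` body is only a placeholder for the real logit-averaging logic from this PR.

```python
from transformers import AutoModelForTokenClassification, Pipeline
from transformers.pipelines import PIPELINE_REGISTRY

class SlidingWindowTokenClassificationPipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        # No extra parameters in this skeleton.
        return {}, {}, {}

    def preprocess(self, text, **kwargs):
        # Overlapping windows over long inputs.
        return self.tokenizer(
            text,
            truncation=True,
            padding=True,
            stride=64,
            return_overflowing_tokens=True,
            return_tensors="pt",
        )

    def _forward(self, model_inputs, **kwargs):
        model_inputs.pop("overflow_to_sample_mapping", None)
        return self.model(**model_inputs)

    def postprocess(self, model_outputs, **kwargs):
        # Placeholder: the real implementation averages per-token logits across windows.
        return model_outputs.logits.mean(dim=0)

PIPELINE_REGISTRY.register_pipeline(
    "sliding-window-token-classification",
    pipeline_class=SlidingWindowTokenClassificationPipeline,
    pt_model=AutoModelForTokenClassification,
)
```

After registering, `pipeline("sliding-window-token-classification", model=...)` instantiates it, and the custom-pipeline docs linked above cover pushing the code to the Hub.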
transformers
23,096
closed
Dramatic Performance Drop of `CLIPVisionModel` Related Model After Upgrading `transformers` From `4.27.4` to `4.28.x`
### System Info @amyeroberts Related versions are ``` transformers==4.27.4 ``` and ``` transformers==4.28.1 ``` ### Who can help? @amyeroberts ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I'm sure that a `CLIPVisionModel` loaded with `from_pretrained`, for example from the `laion` pretrained CLIP ViT, will output a totally different tensor for exactly the same input image across these two versions. Using `transformers==4.28.1` leads to a dramatic performance drop for reasons worth digging into. Extensive tests have been conducted to verify that this issue seems unrelated to the torch version (e.g. `2.0` or `1.13`). You can probably reproduce this by loading from `laion/CLIP-ViT-H-14-laion2B-s32B-b79K`, which is quite popular. ### Expected behavior The two versions output almost the same tensor given the same input image.
05-02-2023 01:02:11
05-02-2023 01:02:11
I found **this issue may impact tons of vision workloads** and I hope it can be resolved as soon as possible.<|||||>Also cc @younesbelkada <|||||>Hi @submartingales, thanks for reporting! So that I can pin down the issue, is the input image the same before being passed to the processor? e.g. ```python import torch from transformers import CLIPProcessor, CLIPModel torch.manual_seed(0) # Dummy image which is always the same for each version image = torch.randint(0, 256, (3, 300, 300)) processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") # Model inputs might change based on if there's a change in processing logic inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True) outputs = model(**inputs) ``` Or is are the `pixel_values` exactly the same? ```python import torch from transformers import CLIPProcessor, CLIPModel torch.manual_seed(0) pixel_values = torch.rand(1, 3, 224, 224) input_ids = torch.Tensor( [[49406, 320, 1125, 539, 320, 2368, 49407], [49406, 320, 1125, 539, 320, 1929, 49407]] ).long() processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") # The model inputs exactly the same for different library versions inputs = {"input_ids": input_ids, "pixel_values": pixel_values} outputs = model(**inputs) ``` With regards to expected behaviour, could you give some more information about what's changed? Specifically what is being measured in terms of performance e.g. is it the clip loss? And how much it has changed? <|||||>@amyeroberts I will make two notebooks to clarify, please wait for several minutes.<|||||>@amyeroberts Actually, we cannot disclose all resources that are required to run the notebooks for reasons you will definitely know once you have read them. But the performance drop (the last cell's output, the higher the better) are consistent on different platforms and the only variable is the version of `transformers` so at least for now we believe the model's behavior change is caused by the package upgrading. The only difference between two notebooks given in the zip file is the `transformers` 's version, and the checkpoint we have loaded are exactly the same. [two-version-notebook.zip](https://github.com/huggingface/transformers/files/11377971/two-version-notebook.zip) <|||||>@amyeroberts In short, every time we try upgrading the `transformers` version for new features, no matter what `torch` version we are using, what platform we are running on, we found our prediction workflow failed. For the specific task solved in the notebooks, another observation I can provide is that once `transformers` has been upgraded to `4.28.1`, the final prediction, say, the output for each input image, when loaded a model with the same weights, the model is possible to generate output with magnitude differences of over a thousand times for each input image and finally result in the performance drop.<|||||>The uploaded two notebooks demonstrate the performance drop considering inference. What we are experiencing at the same time, is that during training using `cos` loss which is related to the task in the notebook, the `transformers==4.27.4` powered model converge easily on about $50k$ images but `transformers==4.28.1` based model won't converge on just $1k$ images. 
The architecture we have chosen is straight forward and if we load `from_pretrained('laion/CLIP-ViT-H-14-laion2B-s32B-b79K')` regardless of the Internet connection restriction on certain platform, the problem still exists.<|||||>@amyeroberts With respect to the output difference, at the $8st$ cell of the two given notebook, we can see that the tensor output for the first sample, is different. The `4.28.1` version gives ``` -0.776564 1.751475 1.938180 0.474142 -0.191921 ... ``` while `4.27.4` gives ``` -2.197644 2.167892 -0.369088 -0.928763 -3.423420 ... ```<|||||>> @amyeroberts In short, every time we try upgrading the `transformers` version for new features, no matter what `torch` version we are using, what platform we are running on, we found our prediction workflow failed. For the specific task solved in the notebooks, another observation I can provide is that once `transformers` has been upgraded to `4.28.1`, the final prediction, say, the output for each input image, when loaded a model with the same weights, the model is possible to generate output with magnitude differences of over a thousand times for each input image and finally result in the performance drop. @amyeroberts The `thousand times` I mean above is related to a similar strategy with another weight checkpoint, which is not presented in [two-version-notebook.zip](https://github.com/huggingface/transformers/files/11377971/two-version-notebook.zip). My coworkers guess that something important related to the overall CLIP workflow has changed between `4.27.4` and `4.28.1`, which has caused some incompatibilities issues.<|||||>@amyeroberts Any progress on this issue? If you can roughly locate the code change related to this issue, I am happy to submit a pull request to fix it.<|||||>Hi @submartingales, thanks for sharing more details and the notebooks. I suspect this is related to a change in the cropping behaviour identified in a similar [issue](https://github.com/huggingface/transformers/issues/22505). The fastest way to regain the old behaviour whilst waiting for the fix to be merged would be implementing an image processor which overrides the cropping behaviour e.g. something like this: ```python from typing import Dict, Optional, Union import numpy as np from transformers import CLIPTokenizer, CLIPImageProcessor, CLIPProcessor from transformers.image_transforms import get_image_size, to_channel_dimension_format from transformers.image_utils import ChannelDimension, get_image_size, infer_channel_dimension_format from transformers.image_processing_utils import get_size_dict class NewCLIPImageProcessor(CLIPImageProcessor): def center_crop( self, image: np.ndarray, size: Dict[str, int], data_format: Optional[Union[str, ChannelDimension]] = None, **kwargs ) -> np.ndarray: size = get_size_dict(size) if "height" not in size or "width" not in size: raise ValueError(f"The `size` parameter must contain the keys (height, width). 
Got {size.keys()}") image = to_channel_dimension_format(image, ChannelDimension.FIRST) if data_format is None: data_format = infer_channel_dimension_format(image) image_height, image_width = get_image_size(image) crop_height, crop_width = size["height"], size["width"] crop_top = int((image_height - crop_height + 1) * 0.5) crop_left = int((image_width - crop_width + 1) * 0.5) image = image[:, crop_top : crop_top + crop_height, crop_left : crop_left + crop_width] image = to_channel_dimension_format(image, data_format) return image image_processor = NewCLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32") tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32") processor = CLIPProcessor(image_processor=image_processor, tokenizer=tokenizer) ``` <|||||>@amyeroberts Thank you for your code to fix this but we are sorry to inform that after we have updated the `processor` by incorporating your code snippet, the problem still exists and the model's output based on `transformers==4.28.1` does not change to what it shall be.<|||||>@amyeroberts These days we have performed further experiments using `transformers==4.29.2` and such output change persists in `transformers==4.29.2` and the output tensor is `allclose`d to what is outputed by `transformers==4.28.1`.<|||||>@submartingales If you do not share a reproducer of the bug, there is really nothing we can do to help.<|||||>@sgugger Now we make public all resources required to reproduce the bug, in two public notebooks with all related checkpoints loaded in public datasets. Any account can now copy & edit the notebook and reproduce the behavior change with "pin to original environment" checked. + The `transformers==4.29.2` version whose output is allclosed to `transformers==4.28.x` is given in https://www.kaggle.com/code/qiexifan/huggingface-transformers-4292-versioning-last-500k + The `transformers==4.27.4` version is given in https://www.kaggle.com/code/qiexifan/huggingface-transformers-4274-versioning-last-500k<|||||>Hi @submartingales, thanks for sharing the repro. I've tracked down the change in the model outputs down to a bug fix in 4.28.x: #22458. In the shared notebooks in the `ImageDataset` class, the images are converted to torch tensors in the `__getitem__` method using `ToTensor()`. `ToTensor()` doesn't just convert the PIL image to a tensor, but also scales the pixel values between 0-1. The image transforms library uses Pillow to resize the images. If the input is an array, then its first converted to a PIL image, and then converted back to an array. To convert an array to a PIL.Image.Image, its pixels must be integer values between [0, 255]. In 4.27.4, if the input had pixel values [0, 1], and we rescale so this conversion happened, the output array wasn't rescaled back down -> the output array had pixel values between [0, 255]. If using `ToTensor` then the image processor should have `do_rescale=False` set to prevent the pixel values being divided by `255` twice. This was likely the cause of the degraded performance (as the images in 4.27.4 had their pixel values multiplied by 255 when resizing, nullifying this double divide. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,095
closed
`torch.compile` is ignored when using DeepSpeed
### System Info Since #22279, `torch.compile` is called at the end of `_wrap_model`. However, [these lines](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#LL1397C1-L1398C34) immediately return the deepspeed engine, so `torch.compile` is never executed, even when asking for it in the training args. I don't think this is intended because DeepSpeed does not automatically run `torch.compile`, but please correct me if I am wrong. @stas00 @sgugger ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Install transformers and deepspeed with torch 2.0 2. Run any HF transformers training script with `--deepspeed ds_config.json --torch_compile` 3. Check the logs: no `torch.compile` logs are to be found. This means training is slower with ZeRO stage 0 than without DeepSpeed (due to the missing speedup from compilation) ### Expected behavior `torch.compile` should still be called somewhere in the trainer when using DeepSpeed
05-01-2023 23:47:57
05-01-2023 23:47:57
I don't think DeepSpeed supports `torch.compile` yet, so this was done intentionally. If the situation has changed, we can of course revisit.<|||||>What Sylvain said, wrt the 2 not working together. But it's not that Deepspeed doesn't support `torch.compile`, it's rather that `torch.compile` is very immature - somewhere between alpha and beta state based on my experiments - many other things break with `torch.compile` besides Deepspeed.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,094
closed
Bump flask from 2.0.3 to 2.3.2 in /examples/research_projects/decision_transformer
Bumps [flask](https://github.com/pallets/flask) from 2.0.3 to 2.3.2. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/pallets/flask/releases">flask's releases</a>.</em></p> <blockquote> <h2>2.3.2</h2> <p>This is a security fix release for the 2.3.x release branch.</p> <ul> <li>Security advisory: <a href="https://github.com/pallets/flask/security/advisories/GHSA-m2qf-hxjv-5gpq">https://github.com/pallets/flask/security/advisories/GHSA-m2qf-hxjv-5gpq</a>, CVE-2023-30861</li> <li>Changes: <a href="https://flask.palletsprojects.com/en/2.3.x/changes/#version-2-3-2">https://flask.palletsprojects.com/en/2.3.x/changes/#version-2-3-2</a></li> <li>Milestone: <a href="https://github.com/pallets/flask/milestone/29?closed=1">https://github.com/pallets/flask/milestone/29?closed=1</a></li> </ul> <h2>2.3.1</h2> <p>This is a fix release for the 2.3.x release branch.</p> <ul> <li>Changes: <a href="https://flask.palletsprojects.com/en/2.3.x/changes/#version-2-3-1">https://flask.palletsprojects.com/en/2.3.x/changes/#version-2-3-1</a></li> <li>Milestone: <a href="https://github.com/pallets/flask/milestone/28?closed=1">https://github.com/pallets/flask/milestone/28?closed=1</a></li> </ul> <h2>2.3.0</h2> <p>This is a feature release, which includes new features, removes previously deprecated code, and adds new deprecations. The 2.3.x branch is now the supported fix branch, the 2.2.x branch will become a tag marking the end of support for that branch. We encourage everyone to upgrade, and to use a tool such as <a href="https://pypi.org/project/pip-tools/">pip-tools</a> to pin all dependencies and control upgrades. Test with warnings treated as errors to be able to adapt to deprecation warnings early.</p> <ul> <li>Changes: <a href="https://flask.palletsprojects.com/en/2.3.x/changes/#version-2-3-0">https://flask.palletsprojects.com/en/2.3.x/changes/#version-2-3-0</a></li> <li>Milestone: <a href="https://github.com/pallets/flask/milestone/24?closed=1">https://github.com/pallets/flask/milestone/24?closed=1</a></li> </ul> <h2>2.2.4</h2> <p>This is a fix release for the 2.2.x release branch.</p> <ul> <li>Changes: <a href="https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-4">https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-4</a></li> <li>Milestone: <a href="https://github.com/pallets/flask/milestone/27?closed=1">https://github.com/pallets/flask/milestone/27?closed=1</a></li> </ul> <h2>2.2.3</h2> <p>This is a fix release for the 2.2.x release branch.</p> <ul> <li>Changes: <a href="https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-3">https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-3</a></li> <li>Milestone: <a href="https://github.com/pallets/flask/milestone/26?closed=1">https://github.com/pallets/flask/milestone/26?closed=1</a></li> </ul> <h2>2.2.2</h2> <p>This is a fix release for the <a href="https://github.com/pallets/flask/releases/tag/2.2.0">2.2.0</a> feature release.</p> <ul> <li>Changes: <a href="https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-2">https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-2</a></li> <li>Milestone: <a href="https://github.com/pallets/flask/milestone/25?closed=1">https://github.com/pallets/flask/milestone/25?closed=1</a></li> </ul> <h2>2.2.1</h2> <p>This is a fix release for the <a href="https://github.com/pallets/flask/releases/tag/2.2.0">2.2.0</a> feature release.</p> <ul> <li>Changes: <a 
href="https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-1">https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-1</a></li> <li>Milestone: <a href="https://github.com/pallets/flask/milestone/23?closed=1">https://github.com/pallets/flask/milestone/23?closed=1</a></li> </ul> <h2>2.2.0</h2> <p>This is a feature release, which includes new features and removes previously deprecated code. The 2.2.x branch is now the supported bug fix branch, the 2.1.x branch will become a tag marking the end of support for that branch. We encourage everyone to upgrade, and to use a tool such as <a href="https://pypi.org/project/pip-tools/">pip-tools</a> to pin all dependencies and control upgrades.</p> <ul> <li>Changes: <a href="https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-0">https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-0</a></li> <li>Milestone: <a href="https://github.com/pallets/flask/milestone/19?closed=1">https://github.com/pallets/flask/milestone/19?closed=1</a></li> </ul> <h2>2.1.3</h2> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/pallets/flask/blob/main/CHANGES.rst">flask's changelog</a>.</em></p> <blockquote> <h2>Version 2.3.2</h2> <p>Released 2023-05-01</p> <ul> <li>Set <code>Vary: Cookie</code> header when the session is accessed, modified, or refreshed.</li> <li>Update Werkzeug requirement to &gt;=2.3.3 to apply recent bug fixes.</li> </ul> <h2>Version 2.3.1</h2> <p>Released 2023-04-25</p> <ul> <li>Restore deprecated <code>from flask import Markup</code>. :issue:<code>5084</code></li> </ul> <h2>Version 2.3.0</h2> <p>Released 2023-04-25</p> <ul> <li> <p>Drop support for Python 3.7. :pr:<code>5072</code></p> </li> <li> <p>Update minimum requirements to the latest versions: Werkzeug&gt;=2.3.0, Jinja2&gt;3.1.2, itsdangerous&gt;=2.1.2, click&gt;=8.1.3.</p> </li> <li> <p>Remove previously deprecated code. :pr:<code>4995</code></p> <ul> <li>The <code>push</code> and <code>pop</code> methods of the deprecated <code>_app_ctx_stack</code> and <code>_request_ctx_stack</code> objects are removed. <code>top</code> still exists to give extensions more time to update, but it will be removed.</li> <li>The <code>FLASK_ENV</code> environment variable, <code>ENV</code> config key, and <code>app.env</code> property are removed.</li> <li>The <code>session_cookie_name</code>, <code>send_file_max_age_default</code>, <code>use_x_sendfile</code>, <code>propagate_exceptions</code>, and <code>templates_auto_reload</code> properties on <code>app</code> are removed.</li> <li>The <code>JSON_AS_ASCII</code>, <code>JSON_SORT_KEYS</code>, <code>JSONIFY_MIMETYPE</code>, and <code>JSONIFY_PRETTYPRINT_REGULAR</code> config keys are removed.</li> <li>The <code>app.before_first_request</code> and <code>bp.before_app_first_request</code> decorators are removed.</li> <li><code>json_encoder</code> and <code>json_decoder</code> attributes on app and blueprint, and the corresponding <code>json.JSONEncoder</code> and <code>JSONDecoder</code> classes, are removed.</li> <li>The <code>json.htmlsafe_dumps</code> and <code>htmlsafe_dump</code> functions are removed.</li> <li>Calling setup methods on blueprints after registration is an error instead of a warning. :pr:<code>4997</code></li> </ul> </li> <li> <p>Importing <code>escape</code> and <code>Markup</code> from <code>flask</code> is deprecated. Import them directly from <code>markupsafe</code> instead. 
:pr:<code>4996</code></p> </li> <li> <p>The <code>app.got_first_request</code> property is deprecated. :pr:<code>4997</code></p> </li> <li> <p>The <code>locked_cached_property</code> decorator is deprecated. Use a lock inside the decorated function if locking is needed. :issue:<code>4993</code></p> </li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/pallets/flask/commit/f3b8f570545200c87465d18386f3fc9f2258307a"><code>f3b8f57</code></a> release version 2.3.2</li> <li><a href="https://github.com/pallets/flask/commit/c990bba94ab9bc81adf2d33e83c9a9628a2098f2"><code>c990bba</code></a> update min test env</li> <li><a href="https://github.com/pallets/flask/commit/adedb2a64ea7703369bc89021710b439ee79f8dc"><code>adedb2a</code></a> Merge pull request <a href="https://redirect.github.com/pallets/flask/issues/5101">#5101</a> from pallets/update-werkzeug</li> <li><a href="https://github.com/pallets/flask/commit/e1aedecdc689cc9a79131851dbdabf6c3bc49c9e"><code>e1aedec</code></a> update werkzeug</li> <li><a href="https://github.com/pallets/flask/commit/37badc3ce8b0665e3454547839196a676729309f"><code>37badc3</code></a> update changelog</li> <li><a href="https://github.com/pallets/flask/commit/70f906c51ce49c485f1d355703e9cc3386b1cc2b"><code>70f906c</code></a> Merge pull request from GHSA-m2qf-hxjv-5gpq</li> <li><a href="https://github.com/pallets/flask/commit/8705dd39c4fa563ea0fe0bf84c85da8fcc98b88d"><code>8705dd3</code></a> set <code>Vary: Cookie</code> header consistently for session</li> <li><a href="https://github.com/pallets/flask/commit/9532cba45d2339e90ebf04f178b1e4f2064e7328"><code>9532cba</code></a> fix mypy finding</li> <li><a href="https://github.com/pallets/flask/commit/0bc7356ce1ae11e633426902aba76d525f4523da"><code>0bc7356</code></a> start version 2.3.2</li> <li><a href="https://github.com/pallets/flask/commit/f07fb2b607c1eaa724ca9bfe43e2dc20d97d34de"><code>f07fb2b</code></a> Merge pull request <a href="https://redirect.github.com/pallets/flask/issues/5086">#5086</a> from pallets/release-2.3.1</li> <li>Additional commits viewable in <a href="https://github.com/pallets/flask/compare/2.0.3...2.3.2">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=flask&package-manager=pip&previous-version=2.0.3&new-version=2.3.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
05-01-2023 23:30:21
05-01-2023 23:30:21
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,093
closed
Merge type hints from microsoft/python-type-stubs
# What does this PR do? Merge type definitions from https://github.com/microsoft/python-type-stubs/tree/main/transformers-stubs so it can be removed from Pylance. This is also work towards #16059 I cross-checked the types with what I got at runtime. I also ran `pyright --pythonversion=3.7` on both files to sanity check I'm not writing anything that will obviously break at runtime under Python 3.7 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? (yes but make commands are not working on my machine) - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. (no) - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). (I guess? type hints are docs, docstrings and external docs should stay the same from my PR) - [ ] Did you write any new necessary tests? (no, if you test using mypy/pyright this should already be picked up. Unit tests should naturally break if using syntax or imports incompatible with 3.7) ## Who can review? ๐Ÿคท The list below doesn't mention typing / type hints I guess @Rocketknight1 who opened #16059 <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-01-2023 22:23:13
05-01-2023 22:23:13
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23093). All of your documentation changes will be reflected on that endpoint.<|||||>> the tokenizers imported for Fnet, T5 or Pegasus are wrong I copied exactly what I obtained from runtime inspection (see image below). `FNetTokenizer`, `PegasusTokenizer`, `T5Tokenizer` and `RagTokenizer` have no common base class with `PreTrainedTokenizer`. Unless there's a `Protocol` I could use (or add) instead, you're fine with hiding that these are real potential results, or we simplify the `Union` by throwing in a `type` or even an `Any` (with a comment about the incomplete type), the `Union` as written is the most accurate annotation I can offer. ![image](https://user-images.githubusercontent.com/1350584/235555329-70e467e6-2071-4bf3-8452-1c43dc13536f.png) > not interested in creating dependencies on the auto module over all those new modules Updated to not have runtime dependencies<|||||>Python is not a statically typed language and your runtime inspection will be different from another user's runtime inspection depending on the packages installed. Again, this is way more headache than we want to deal with for the benefit of adding type hints, so we won't merge any type hints in the auto module.<|||||>Since it's not possible to get accurate and useful inline generic type hints without changing the base class to an alias due to Python 3.8 support: to be reconsidered once Python 3.8 support is dropped. I'll backport this to https://github.com/microsoft/python-type-stubs so they're at least accurate; those stubs may or may not be migrated to typeshed at some point.
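For readers following the discussion, the `Protocol` alternative mentioned above could look roughly like the sketch below. This is a hypothetical illustration only — the method subset listed here is an assumption chosen for the example, not the stub proposed in this PR.

```python
# Hypothetical illustration: a structural type for "whatever AutoTokenizer.from_pretrained
# may return", instead of enumerating every concrete tokenizer class in a Union.
# The method subset listed here is an assumption for the example, not the PR's actual stub.
from typing import Protocol


class TokenizerLike(Protocol):
    def __call__(self, text, **kwargs): ...

    def decode(self, token_ids, **kwargs) -> str: ...


def load_tokenizer(name: str) -> TokenizerLike:
    # A protocol-typed return would let callers type-check without naming every concrete class.
    from transformers import AutoTokenizer

    return AutoTokenizer.from_pretrained(name)
```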
transformers
23,092
closed
Simplifying Output from Text Classification Pipelines
### Feature request > This feature request references the content discussed in a [HF thread](https://discuss.huggingface.co/t/i-have-trained-my-classifier-now-how-do-i-do-predictions/3625). I was just wondering if there is a particular reason why the output of the `pipe` shown above is a double list? For instance, the output of the following: ```python from transformers import TextClassificationPipeline model = ... tokenizer = ... pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True) # outputs a list of dicts like [[{'label': 'NEGATIVE', 'score': 0.0001223755971295759}, {'label': 'POSITIVE', 'score': 0.9998776316642761}]] res = pipe("I love this movie!") ``` is a list that has _two_ square brackets (i.e., `[[ ... ]]`). This means the indexing process to grab say the negative score requires: ```python neg = res[0][0]["score"] ``` This could be enhanced by simply returning a single dictionary object: ```python res = {"label":["NEGATIVE", "POSITIVE"], "score":[0.0001, 0.9998]} ``` ### Motivation This idea came from reading a [HF discussion thread](https://discuss.huggingface.co/t/i-have-trained-my-classifier-now-how-do-i-do-predictions/3625). It was two years ago, so I did not want to reopen the conversation there. Also, I think this is a feature addition, but feel free to correct me if I am wrong. ### Your contribution I do not currently have plans to submit a PR, but if there is interest from the HF team, then I will take a harder look and comment here if I can make the change and submit a PR. My initial guess is that this is not something bothers many users.
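For anyone who wants the flatter shape proposed above without waiting on a (breaking) pipeline change, a tiny post-processing wrapper is enough. The sketch below uses an example sentiment checkpoint and the same `return_all_scores=True` flag as the snippet above.

```python
# Sketch: flatten the pipeline's [[{"label", "score"}, ...]] output for a single text
# into the proposed {"label": [...], "score": [...]} shape.
from transformers import pipeline

pipe = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # example checkpoint
    return_all_scores=True,
)


def flat_scores(text: str) -> dict:
    rows = pipe(text)[0]  # first (and only) input -> list of {"label", "score"} dicts
    return {
        "label": [row["label"] for row in rows],
        "score": [row["score"] for row in rows],
    }


res = flat_scores("I love this movie!")
neg = res["score"][res["label"].index("NEGATIVE")]  # no double indexing needed
```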
05-01-2023 20:29:37
05-01-2023 20:29:37
This is also something we cannot change without breaking the code of many many users :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,091
closed
DETR: changing position_embedding and key_value_position_embedding args
# What does this PR do? This PR refers to #19833, and it just updates some variable/docstring names. Quoting the issue, the paper mentions that the `position_embeddings` argument of the cross-attention layer corresponds to the input embeddings called `object queries`, and that `key_value_position_embeddings` is referred to as `spatial_position_embeddings`. This PR is limited to the DETR model. ### Notes This is my first contribution, so I'm happy to adjust anything in this PR. I ran all tests and style checks, and they all passed except for one: `make fixup`. I got the following output: ![image](https://user-images.githubusercontent.com/70359945/235523582-4a0dded1-abb7-4ada-b673-f59b9542dc4a.png) Reading the output, I assume it is about other files using classes from modeling_detr. I'll wait for updates, and also for review feedback on doc updates or further guidance. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/huggingface/transformers/issues/19833 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @NielsRogge @amyeroberts
05-01-2023 20:18:03
05-01-2023 20:18:03
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23091). All of your documentation changes will be reflected on that endpoint.<|||||>Hi, I dont know if this issue is still up. I believe you need to change the names of the files mentioned in the fixup too. Since in the paper of [Conditional DETR](https://arxiv.org/pdf/2108.06152.pdf), they also use the same nomenclature (sometimes `object queries` are also called `content queries` though) . For example in [modeling_conditional_detr.py](https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/conditional_detr/modeling_conditional_detr.py#LL556C9-L556C28) the names of the forward function are still `position_embeddings`, so you would need to change that to `object queries` for consistency. Same applies to the other file mentioned in the fixup too. I am also new to fixing PRs in this repo, so I would leave this decision to the reviewers, but I believe it makes sense if you would like to apply the changes @Lorenzobattistela . If not, maybe another issue could be created for that. <|||||>@A-guridi Hey, I understand these names could be changed to keep consistency, and I am up to do this. But I don't know if this is the right to do since the issue is specific about DETR. But I'll try what you said, let's wait up the reviewers <|||||>@NielsRogge Could you review and confirm if it aligns with your suggestion in #19833? <|||||>> Thanks for working on this, left some comments. > > Specifically, DETR's decoder uses 2 types of position embeddings: > > * the ones that are added to the inputs i.e. hidden states of each cross-attention layer (the object_queries) > * the ones that are added to the keys and values of each cross-attention layer (the spatial_position_embeddings) working on it<|||||>git history got messed up, will open a new PR just with the correct changes<|||||>Reopened PR #24652
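To make the review comment above concrete, the two embedding types enter a decoder cross-attention layer roughly as in the sketch below. This is an illustration of the renamed arguments only, not the actual code in modeling_detr.py; projections and shapes are simplified.

```python
# Illustrative pseudo-layer: argument names follow the renaming discussed above,
# while attention internals are simplified relative to DETR's real implementation.
import torch
from torch import nn


class CrossAttentionSketch(nn.Module):
    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def forward(self, hidden_states, encoder_hidden_states, object_queries, spatial_position_embeddings):
        # object_queries are added to the decoder hidden states (the queries)...
        queries = hidden_states + object_queries
        # ...while spatial_position_embeddings are added to the encoder outputs used as keys.
        keys = encoder_hidden_states + spatial_position_embeddings
        values = encoder_hidden_states  # values carry no position information
        attn_output, _ = self.attn(queries, keys, values)
        return attn_output
```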
transformers
23,090
closed
ConvNextV2 weight not initialized
### System Info kaggle NoteBook: - `transformers` version: 4.27.4 - Platform: Linux-5.15.90+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 1.13.0+cpu (False) - Tensorflow version (GPU?): 2.11.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.6.4 (cpu) - Jax version: 0.3.25 - JaxLib version: 0.3.25 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger @alaradirik @amyeroberts ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoImageProcessor, ConvNextV2ForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] image_processor = AutoImageProcessor.from_pretrained("facebook/convnextv2-atto-1k-224") model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-atto-1k-224") inputs = image_processor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]) ``` output: ``` Downloading builder script: 100% 2.56k/2.56k [00:00<00:00, 120kB/s] Downloading and preparing dataset cats_image/image to /root/.cache/huggingface/datasets/huggingface___cats_image/image/1.9.0/68fbc793fb10cd165e490867f5d61fa366086ea40c73e549a020103dcb4f597e... Downloading data files: 100% 1/1 [00:00<00:00, 2.30it/s] Downloading data: 100% 173k/173k [00:00<00:00, 637kB/s] Extracting data files: 100% 1/1 [00:00<00:00, 59.25it/s] Dataset cats_image downloaded and prepared to /root/.cache/huggingface/datasets/huggingface___cats_image/image/1.9.0/68fbc793fb10cd165e490867f5d61fa366086ea40c73e549a020103dcb4f597e. Subsequent calls will reuse this data. 
100% 1/1 [00:00<00:00, 56.19it/s] Downloading (โ€ฆ)rocessor_config.json: 100% 352/352 [00:00<00:00, 12.5kB/s] Downloading (โ€ฆ)lve/main/config.json: 100% 69.7k/69.7k [00:00<00:00, 2.71MB/s] Downloading pytorch_model.bin: 100% 14.9M/14.9M [00:00<00:00, 70.7MB/s] Some weights of the model checkpoint at facebook/convnextv2-atto-1k-224 were not used when initializing ConvNextV2ForImageClassification: ['convnextv2.encoder.stages.2.layers.3.grn.weight', 'convnextv2.encoder.stages.1.layers.0.grn.weight', 'convnextv2.encoder.stages.2.layers.4.grn.weight', 'convnextv2.encoder.stages.0.layers.1.grn.bias', 'convnextv2.encoder.stages.2.layers.5.grn.bias', 'convnextv2.encoder.stages.0.layers.1.grn.weight', 'convnextv2.encoder.stages.3.layers.1.grn.weight', 'convnextv2.encoder.stages.1.layers.1.grn.weight', 'convnextv2.encoder.stages.0.layers.0.grn.weight', 'convnextv2.encoder.stages.2.layers.0.grn.bias', 'convnextv2.encoder.stages.2.layers.2.grn.bias', 'convnextv2.encoder.stages.1.layers.0.grn.bias', 'convnextv2.encoder.stages.3.layers.0.grn.weight', 'convnextv2.encoder.stages.3.layers.1.grn.bias', 'convnextv2.encoder.stages.1.layers.1.grn.bias', 'convnextv2.encoder.stages.2.layers.0.grn.weight', 'convnextv2.encoder.stages.2.layers.1.grn.weight', 'convnextv2.encoder.stages.2.layers.4.grn.bias', 'convnextv2.encoder.stages.2.layers.1.grn.bias', 'convnextv2.encoder.stages.2.layers.3.grn.bias', 'convnextv2.encoder.stages.2.layers.5.grn.weight', 'convnextv2.encoder.stages.2.layers.2.grn.weight', 'convnextv2.encoder.stages.0.layers.0.grn.bias', 'convnextv2.encoder.stages.3.layers.0.grn.bias'] - This IS expected if you are initializing ConvNextV2ForImageClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing ConvNextV2ForImageClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). 
Some weights of ConvNextV2ForImageClassification were not initialized from the model checkpoint at facebook/convnextv2-atto-1k-224 and are newly initialized: ['convnextv2.encoder.stages.2.layers.5.grn.beta', 'convnextv2.encoder.stages.1.layers.0.grn.beta', 'convnextv2.encoder.stages.1.layers.1.grn.gamma', 'convnextv2.encoder.stages.0.layers.0.grn.beta', 'convnextv2.encoder.stages.2.layers.0.grn.gamma', 'convnextv2.encoder.stages.0.layers.1.grn.gamma', 'convnextv2.encoder.stages.3.layers.1.grn.beta', 'convnextv2.encoder.stages.2.layers.4.grn.gamma', 'convnextv2.encoder.stages.1.layers.1.grn.beta', 'convnextv2.encoder.stages.3.layers.0.grn.beta', 'convnextv2.encoder.stages.2.layers.0.grn.beta', 'convnextv2.encoder.stages.3.layers.1.grn.gamma', 'convnextv2.encoder.stages.2.layers.5.grn.gamma', 'convnextv2.encoder.stages.2.layers.3.grn.gamma', 'convnextv2.encoder.stages.2.layers.2.grn.beta', 'convnextv2.encoder.stages.2.layers.4.grn.beta', 'convnextv2.encoder.stages.2.layers.1.grn.gamma', 'convnextv2.encoder.stages.0.layers.1.grn.beta', 'convnextv2.encoder.stages.2.layers.2.grn.gamma', 'convnextv2.encoder.stages.3.layers.0.grn.gamma', 'convnextv2.encoder.stages.2.layers.1.grn.beta', 'convnextv2.encoder.stages.0.layers.0.grn.gamma', 'convnextv2.encoder.stages.2.layers.3.grn.beta', 'convnextv2.encoder.stages.1.layers.0.grn.gamma'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. racket, racquet ``` ### Expected behavior No warning should be there about initialization of weights
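The warnings above boil down to a parameter-name mismatch: the checkpoint stores the GRN parameters as `grn.weight`/`grn.bias`, while the modeling code expects `grn.gamma`/`grn.beta`. Until a fix lands, one hedged workaround is to remap the checkpoint keys yourself; the sketch below assumes the tensors differ only by name.

```python
# Workaround sketch (assumes the mismatch is purely a renaming of grn.weight/bias -> grn.gamma/beta).
import torch
from huggingface_hub import hf_hub_download
from transformers import ConvNextV2ForImageClassification

model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-atto-1k-224")

# Download the raw checkpoint weights and rename the GRN keys to what the model expects.
weights_path = hf_hub_download("facebook/convnextv2-atto-1k-224", "pytorch_model.bin")
state_dict = torch.load(weights_path, map_location="cpu")
remapped = {
    k.replace("grn.weight", "grn.gamma").replace("grn.bias", "grn.beta"): v
    for k, v in state_dict.items()
}
missing, unexpected = model.load_state_dict(remapped, strict=False)
print(missing, unexpected)  # ideally both empty once the GRN parameters line up
```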
05-01-2023 19:07:42
05-01-2023 19:07:42
Hi @IMvision12, thanks for opening the issue! This doesn't affect the output logits significantly, but we pinpointed the issue and will fix it shortly.
transformers
23,089
closed
[WIP] Add GC ViT model
# What does this PR do? Adds to the Transformers library the GC ViT model. _still a work in progress, everything but the docs_ I did not find any PR related to this model architecture and i am really surprised, so instead of adding a new _Issue_, i will add here the information related to the model. ### Open source status - [x] The model implementation is available - [x] The model weights are available ### Provide useful links for the implementation - Model paper [here](https://arxiv.org/pdf/2206.09959.pdf) - Official Implementation [here](https://github.com/NVlabs/GCVit/) - Timm Implementation with pretrained Weights _(small detail, weights are under a non-commercial share-alike license)_ [here](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/gcvit.py) It is my first PR, so things are going to be slow, I'll let you know if I have any questions (I expect Github to notify me when someone responds). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Fixes # (issue) --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-01-2023 17:58:10
05-01-2023 17:58:10
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23089). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,088
closed
Generate: work around PT `multinomial` sampling 0 probability tokens
# What does this PR do? Fixes #22979 As raised in [this `transformers` issue](https://github.com/huggingface/transformers/issues/22979) and [this `pytorch` issue](https://github.com/pytorch/pytorch/issues/48841), `multinomial` can erroneously pick `0` probability tokens. According to the reports and my own observations, the error is much more likely on CPU. There is a high chance that a token with `-inf` logits is selected: in this [simple example with `top_k=40`](https://github.com/huggingface/transformers/issues/22979#issuecomment-1529770291), it happens 0.158% of the time on CPU -- or a ~50% chance for a sequence with 500 newly generated tokens to have at least one token that shouldn't be there. This PR adds a quick-and-dirty workaround, while the PT team works on the issue: at each sample step, pick 5 candidates, and keep the first valid one. Assuming independence, the probability of having one or more forbidden tokens in the example above drops to ~5e-10 %. Runtime overhead: considering `distilgpt2`, a small model where operations outside the model have some weight, it got 2% slower on GPU (RTX3090) and 1% slower on CPU (Ryzen 9 5950X). On larger models, the slowdown becomes negligible.
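For reviewers picturing the approach, the workaround described above amounts to something like the standalone sketch below — not the PR's actual diff — which draws several candidates per step and keeps the first one whose probability is non-zero.

```python
# Standalone sketch of the idea (not the actual change in the generation code):
# draw a few candidates per step and keep the first one that is not a 0-probability token.
import torch


def safe_sample(probs: torch.Tensor, num_candidates: int = 5) -> torch.Tensor:
    # probs: (batch, vocab) with exact zeros for masked-out tokens
    candidates = torch.multinomial(probs, num_samples=num_candidates, replacement=True)
    valid = torch.gather(probs, -1, candidates) > 0   # flag erroneous 0-probability picks
    # index of the first valid candidate per row (falls back to index 0 if none is flagged valid)
    first_valid = valid.int().argmax(dim=-1)
    return candidates[torch.arange(candidates.shape[0]), first_valid]
```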
05-01-2023 17:58:01
05-01-2023 17:58:01
_The documentation is not available anymore as the PR was closed or merged._<|||||>As discussed internally, this is a regression on the PyTorch side for 2.0, so this should be fixed by PyTorch and not by us adding some overload to `generate`.<|||||>(closing because of the comment above)
transformers
23,087
open
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 128]] is at version 3; expected version 2 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
### System Info - `transformers` version: 4.25.1 - Platform: Linux-5.13.0-27-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.15 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.10.1+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I want to train a embedding-based retrieval qa system by minimizing the contrastive loss of correct (q,a) pairs against in-batch negatives. I also want it to be run on multiple gpus. But I run into the problem of backward propagation in position embedding layer of BERT (which I infer from the error log) when runing in distributed manner. I don't know where is broken (trainer? BertModel? pytorch?) btw, the code works in single gpu setting Command that I ran: ```bash torchrun --nproc_per_node 2 retrieval_qa.py \ --model_name_or_path bert-base-uncased \ --output_dir debug \ --max_steps 10000 \ --remove_unused_columns False \ --learning_rate 5e-5 \ --logging_steps 10 \ --save_steps 500 \ --warmup_ratio 0.0 \ --per_device_train_batch_size 16 \ --normalize True ``` Error details: ```bash ***** Running training ***** Num examples = 20360 Num Epochs = 16 Instantaneous batch size per device = 16 Total train batch size (w. parallel, distributed & accumulation) = 32 Gradient Accumulation steps = 1 Total optimization steps = 10000 Number of trainable parameters = 109482240 0%| | 0/10000 [00:00<?, ?it/s][W python_anomaly_mode.cpp:104] Warning: Error detected in EmbeddingBackward0. 
Traceback of forward call that caused the error: File "retrieval_qa.py", line 213, in <module> main() File "retrieval_qa.py", line 209, in main trainer.train() File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 1531, in train ignore_keys_for_eval=ignore_keys_for_eval, File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 1775, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 2523, in training_step loss = self.compute_loss(model, inputs) File "retrieval_qa.py", line 142, in compute_loss token_type_ids=inputs[k]['token_type_ids'], File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 886, in forward output = self.module(*inputs[0], **kwargs[0]) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "retrieval_qa.py", line 103, in forward model_output = self.model(**kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 1019, in forward past_key_values_length=past_key_values_length, File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 236, in forward position_embeddings = self.position_embeddings(position_ids) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 160, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/functional.py", line 2044, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) (function _print_stack) [W python_anomaly_mode.cpp:104] Warning: Error detected in EmbeddingBackward0. 
Traceback of forward call that caused the error: File "retrieval_qa.py", line 213, in <module> main() File "retrieval_qa.py", line 209, in main trainer.train() File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 1531, in train ignore_keys_for_eval=ignore_keys_for_eval, File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 1775, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 2523, in training_step loss = self.compute_loss(model, inputs) File "retrieval_qa.py", line 142, in compute_loss token_type_ids=inputs[k]['token_type_ids'], File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 886, in forward output = self.module(*inputs[0], **kwargs[0]) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "retrieval_qa.py", line 103, in forward model_output = self.model(**kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 1019, in forward past_key_values_length=past_key_values_length, File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 236, in forward position_embeddings = self.position_embeddings(position_ids) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 160, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/functional.py", line 2044, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) (function _print_stack) Traceback (most recent call last): File "retrieval_qa.py", line 213, in <module> main() File "retrieval_qa.py", line 209, in main trainer.train() File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 1531, in train ignore_keys_for_eval=ignore_keys_for_eval, File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 1775, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 2541, in training_step Traceback (most recent call last): File "retrieval_qa.py", line 213, in <module> loss.backward() File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/_tensor.py", line 307, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File 
"/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/autograd/__init__.py", line 156, in backward allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 128]] is at version 3; expected version 2 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck! ``` Source code of `retrieval_qa.py` ```Python import logging import os import sys from typing import Dict, List, Tuple, Optional, Any, Union import torch from torch import nn from torch.nn import functional as F from transformers import AutoConfig, AutoModel, AutoTokenizer from transformers import ( HfArgumentParser, set_seed, ) import os from dataclasses import dataclass, field from typing import Optional, List from transformers import TrainingArguments from transformers import DataCollatorWithPadding from transformers.trainer import Trainer import logging logger = logging.getLogger(__name__) # Name of the files used for checkpointing TRAINING_ARGS_NAME = "training_args.bin" TRAINER_STATE_NAME = "trainer_state.json" OPTIMIZER_NAME = "optimizer.pt" SCHEDULER_NAME = "scheduler.pt" SCALER_NAME = "scaler.pt" @dataclass class ModelArguments: model_name_or_path: str = field( metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"} ) config_name: Optional[str] = field( default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"} ) tokenizer_name: Optional[str] = field( default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"} ) cache_dir: Optional[str] = field( default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"} ) normalize: bool = field(default=False) pooling: str = field(default='mean') @dataclass class QPCollator(DataCollatorWithPadding): """ Wrapper that does conversion from List[Tuple[encode_qry, encode_psg]] to List[qry], List[psg] and pass batch separately to the actual collator. Abstract out data detail for the model. 
""" max_q_len: int = 32 max_p_len: int = 128 def __call__(self, features): keys = list(features[0].keys()) collated_batch = {} for key in keys: if not isinstance(features[0][key], str): continue text = [f[key] for f in features] # print(text) text_batch = self.tokenizer( text, padding='max_length', truncation=True, max_length=self.max_p_len, return_tensors="pt", ) collated_batch[key] = text_batch return collated_batch class AutoModelForSentenceEmbedding(nn.Module): def __init__( self, model_name_or_path, tokenizer=None, pooling='cls', normalize=True, ): super(AutoModelForSentenceEmbedding, self).__init__() self.model = AutoModel.from_pretrained(model_name_or_path) self.tokenizer = tokenizer if tokenizer else AutoTokenizer.from_pretrained(model_name_or_path) self.pooling = pooling self.normalize = normalize def forward(self, **kwargs): model_output = self.model(**kwargs) embeddings = self.mean_pooling(model_output, kwargs['attention_mask']) if self.normalize: embeddings = F.normalize(embeddings, p=2, dim=1) return embeddings def mean_pooling(self, model_output, attention_mask): token_embeddings = model_output[0] # First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) def save_pretrained(self, output_path): self.model.save_pretrained(output_path) class EmbeddingTrainer(Trainer): def _save(self, output_dir: Optional[str] = None, state_dict=None): # If we are executing this function, we are the process zero, so we don't check for that. output_dir = output_dir if output_dir is not None else self.args.output_dir os.makedirs(output_dir, exist_ok=True) logger.info(f"Saving model checkpoint to {output_dir}") self.model.save_pretrained(output_dir) if self.tokenizer is not None: self.tokenizer.save_pretrained(output_dir) # Good practice: save your training arguments together with the trained model torch.save(self.args, os.path.join(output_dir, TRAINING_ARGS_NAME)) def compute_loss(self, model, inputs, return_outputs=False): all_embeddings = {} for k in ['question', 'answer']: all_embeddings[k] = model( input_ids=inputs[k]['input_ids'], attention_mask=inputs[k]['attention_mask'], token_type_ids=inputs[k]['token_type_ids'], ) embeddings_query = all_embeddings['question'] embeddings_pos = all_embeddings['answer'] scores = embeddings_query @ embeddings_pos.T labels = torch.arange(0, embeddings_query.shape[0], dtype=torch.long, device=embeddings_query.device) self.cross_entropy = torch.nn.CrossEntropyLoss(reduction='mean') loss = self.cross_entropy(scores, labels) return loss def main(): parser = HfArgumentParser((ModelArguments, TrainingArguments)) if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): model_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1])) else: model_args, training_args = parser.parse_args_into_dataclasses() model_args: ModelArguments training_args: TrainingArguments if ( os.path.exists(training_args.output_dir) and os.listdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir ): raise ValueError( f"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome." 
) # Setup logging logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN, ) set_seed(training_args.seed) tokenizer = AutoTokenizer.from_pretrained( model_args.model_name_or_path, cache_dir=model_args.cache_dir ) model = AutoModelForSentenceEmbedding( model_args.model_name_or_path, pooling=model_args.pooling, normalize=model_args.normalize, ) from datasets import load_dataset wq = load_dataset('wiki_qa', split='train') train_dataset = wq.remove_columns('label') data_collator = QPCollator(tokenizer=tokenizer) torch.autograd.set_detect_anomaly(True) trainer = EmbeddingTrainer( model=model, args=training_args, train_dataset=train_dataset, data_collator=data_collator, tokenizer=tokenizer, ) trainer.train() if __name__ == "__main__": main() ``` ### Expected behavior Currently there is no problem on single gpu. I want this code to run normally on multi-gpus. But it seems somewhere is broken... It's hard to find where the problem is cause I'm not super familar with how pytorch/trainer/bertmodel works in distributed manner... Could you help me? Thanks!
05-01-2023 16:05:19
05-01-2023 16:05:19
Hey! Given how big the reproduction script is, I'm gonna say this is probably related to the way you are wrapping the use of transformers models, and would recommend you to ask on the [forum](https://discuss.huggingface.co/) to see if anyone in the community can help you with this! I won't have time to dive into this, maybe @younesbelkada <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @jordane95 @ArthurZucker Sadly I won't have time to dig into that :/ @jordane95 do you still face the issue on the main branch of transformers?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> Hi @jordane95 @ArthurZucker Sadly I won't have time to dig into that :/ @jordane95 do you still face the issue on the main branch of transformers? Yeah, this seems to be a problem involving the siamese architecture? Although I can avoid this error by moving the loss computation from the `compute_loss` function of the trainer class into the `forward` function of the model class, I'm still curious why this error occurs.
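For reference, the refactor mentioned in the last comment — doing both encoder passes and the contrastive loss inside the model's `forward`, so the DDP-wrapped module is called exactly once per step — could look roughly like the sketch below, adapted from the script above (not a verified drop-in fix).

```python
# Sketch: encode question and answer in a single forward call so DistributedDataParallel
# sees exactly one forward/backward pair per training step.
import torch
from torch import nn
import torch.nn.functional as F
from transformers import AutoModel


class BiEncoderWithLoss(nn.Module):
    def __init__(self, model_name_or_path: str):
        super().__init__()
        self.model = AutoModel.from_pretrained(model_name_or_path)
        self.cross_entropy = nn.CrossEntropyLoss(reduction="mean")

    def encode(self, batch):
        out = self.model(**batch)
        mask = batch["attention_mask"].unsqueeze(-1).float()
        emb = (out[0] * mask).sum(1) / mask.sum(1).clamp(min=1e-9)  # mean pooling
        return F.normalize(emb, p=2, dim=1)

    def forward(self, question, answer):
        q, a = self.encode(question), self.encode(answer)
        scores = q @ a.T
        labels = torch.arange(q.shape[0], dtype=torch.long, device=q.device)
        return self.cross_entropy(scores, labels)
```

With this shape, `Trainer.compute_loss` would only forward the two tokenized batches and return the scalar loss.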
transformers
23,086
open
VideoMAEForVideoClassification does not support `device_map='auto'` yet.
### Feature request Support for `device_map = 'auto'` so that the VideoMAE models can be run with Int8 mixed precision. For reproducibility, here is what I get when I run the command in a collab notebook (w/ GPU) with accelerate and bitsandbytes installed: ``` from transformers import AutoModelForVideoClassification model_name = 'MCG-NJU/videomae-base-finetuned-ssv2 #Example checkpoint model = AutoModelForVideoClassification.from_pretrained(model_name,load_in_8bit=True,device_map='auto') ``` Which gives the following error message: ``` Overriding torch_dtype=None with `torch_dtype=torch.float16` due to requirements of `bitsandbytes` to enable model loading in mixed int8. Either pass torch_dtype=torch.float16 or don't pass this argument at all to remove this warning. โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ Traceback (most recent call last) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ in <cell line: 4>:4 โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py:471 in โ”‚ โ”‚ from_pretrained โ”‚ โ”‚ โ”‚ โ”‚ 468 โ”‚ โ”‚ โ”‚ ) โ”‚ โ”‚ 469 โ”‚ โ”‚ elif type(config) in cls._model_mapping.keys(): โ”‚ โ”‚ 470 โ”‚ โ”‚ โ”‚ model_class = _get_model_class(config, cls._model_mapping) โ”‚ โ”‚ โฑ 471 โ”‚ โ”‚ โ”‚ return model_class.from_pretrained( โ”‚ โ”‚ 472 โ”‚ โ”‚ โ”‚ โ”‚ pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, โ”‚ โ”‚ 473 โ”‚ โ”‚ โ”‚ ) โ”‚ โ”‚ 474 โ”‚ โ”‚ raise ValueError( โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py:2703 in from_pretrained โ”‚ โ”‚ โ”‚ โ”‚ 2700 โ”‚ โ”‚ โ”‚ ) โ”‚ โ”‚ 2701 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ 2702 โ”‚ โ”‚ โ”‚ if model._no_split_modules is None: โ”‚ โ”‚ โฑ 2703 โ”‚ โ”‚ โ”‚ โ”‚ raise ValueError(f"{model.__class__.__name__} does not support `device_m โ”‚ โ”‚ 2704 โ”‚ โ”‚ โ”‚ no_split_modules = model._no_split_modules โ”‚ โ”‚ 2705 โ”‚ โ”‚ โ”‚ if device_map not in ["auto", "balanced", "balanced_low_0", "sequential"]: โ”‚ โ”‚ 2706 โ”‚ โ”‚ โ”‚ โ”‚ raise ValueError( โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ ValueError: VideoMAEForVideoClassification does not support `device_map='auto'` yet. ``` ### Motivation I saw a similar issue #22018 which got resolved really quickly. Hoping that this won't be a lot of work to incorperate into the VideoMAE models :slightly_smiling_face: ### Your contribution Would prefer if someone more familiar with the repo did this instead (it doesn't appear to be much work if the update is like #22207 but I didn't understand what the change did and don't currently have time to study the codebase)
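Judging from the traceback above, `from_pretrained` bails out because the model class has no `_no_split_modules`. Until support is added, a possible stop-gap is to set the attribute yourself before loading — a sketch in which the listed block name (`VideoMAELayer`) is my assumption about which module should not be split across devices.

```python
# Hedged workaround sketch until _no_split_modules is added upstream.
# The block name listed below is an assumption, not confirmed by the library.
from transformers import VideoMAEForVideoClassification

VideoMAEForVideoClassification._no_split_modules = ["VideoMAELayer"]

model = VideoMAEForVideoClassification.from_pretrained(
    "MCG-NJU/videomae-base-finetuned-ssv2",  # example checkpoint
    load_in_8bit=True,
    device_map="auto",
)
```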
05-01-2023 14:44:43
05-01-2023 14:44:43
cc @alaradirik and @amyeroberts <|||||>Hi, thanks for the commitment. I tested with this change, but there is still a bug that I find very confusing: ```python RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemmStridedBatched( handle, opa, opb, m, n, k, &alpha, a, lda, stridea, b, ldb, strideb, &beta, c, ldc, stridec, num_batches)` ``` My code runs on one GPU, but I hit this error when I run it on two GPUs with this change. I already checked the compatibility of CUDA, PyTorch, etc., and also ran small training tests on another small dataset with two GPUs using plain PyTorch code; they all worked. I even set batch_size=1, and the error is still there... If you have any idea about this error, I would really appreciate it.
transformers
23,085
closed
Deprecate xpu_backend for ddp_backend
# What does this PR do? This PR deprecates the `xpu_backend` training argument in favor of a new `ddp_backend` argument that can be passed to the `AcceleratorState` directly when desired/appropriate. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
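For users of the deprecated flag, migration should amount to a rename on the `TrainingArguments` side. A sketch of the intended usage (the backend values shown are illustrative):

```python
# Before (deprecated): TrainingArguments(..., xpu_backend="ccl")
# After this PR (sketch of the intended usage):
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    ddp_backend="ccl",  # e.g. "nccl", "gloo", "mpi" or "ccl"; forwarded to the distributed init
)
```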
05-01-2023 13:16:54
05-01-2023 13:16:54
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,084
closed
A potential bug here found in `BeamSearchScorer.process`
### System Info System doesn't matter. ### Who can help? @gante ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am new to transformers and am reading the source code of the beam search in `src/transformers/generation/beam_search.py`. I have a question about the code here: https://github.com/huggingface/transformers/blob/main/src/transformers/generation/beam_search.py#L290 I noticed that in PR #21993, the variable `cur_len` is increased by one before checking if the beam hypothesis is done, but **this variable is increased in a loop**. In other words, this variable would be increased for each sample in the batch. I wonder if there is any particular reason for that. ### Expected behavior ```python def process( self, input_ids: torch.LongTensor, next_scores: torch.FloatTensor, next_tokens: torch.LongTensor, next_indices: torch.LongTensor, pad_token_id: Optional[int] = None, eos_token_id: Optional[Union[int, List[int]]] = None, beam_indices: Optional[torch.LongTensor] = None, ) -> Tuple[torch.Tensor]: cur_len = input_ids.shape[-1] + 1 # the +1 should be added here, instead of inside the loop. # some code here. for batch_idx, beam_hyp in enumerate(self._beam_hyps): # some code here. self._done[batch_idx] = self._done[batch_idx] or beam_hyp.is_done( next_scores[batch_idx].max().item(), cur_len ) # some code here. ```
05-01-2023 10:53:01
05-01-2023 10:53:01
Hey @ZachVec -- I believe you are correct, the implementation is incorrect for batch size > 1. I'll open a PR to fix it :)<|||||>Should be fixed now ๐Ÿค—
transformers
23,083
closed
Fix string syntax error in logger warning message (additional comma)
# What does this PR do?

<!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.

Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.

Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. -->

<!-- Remove if not applicable -->

This warning message was introduced in https://github.com/huggingface/transformers/pull/21707, but an extra comma exists in the message string:

https://github.com/huggingface/transformers/blob/7f4f8b97d03a16f89737ebda4386411f47f4f104/src/transformers/models/blip_2/modeling_blip_2.py#L1607-L1613

This can produce the following logging error, because the second part of the string is parsed as an additional positional argument:

```
File "/newdata/xinwen/miniconda3/lib/python3.10/site-packages/transformers/models/blip_2/modeling_blip_2.py", line 1626, in _preprocess_accelerate
    logger.warning(
Message: 'The `language_model` is not in the `hf_device_map` dictionary and you are running your script in a multi-GPU environment. this may lead to unexpected behavior when using `accelerate`. Please pass a `device_map` that contains `language_model` to remove this warning. Please refer to https://github.com/huggingface/blog/blob/main/accelerate-large-models.md for'
Arguments: (' more details on creating a `device_map` for large models.',)
```

This PR fixes the issue by simply removing the extra comma.

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://huggingface.co/google/flan-ul2/discussions/6#643a02e5623c970188059c17
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed.

cc @sgugger

<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people.

Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier

Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger

Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam

Documentation: @sgugger, @stevhliu and @MKhalusova

HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)

Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
05-01-2023 10:03:32
05-01-2023 10:03:32
_The documentation is not available anymore as the PR was closed or merged._
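The pitfall fixed in the PR above also reproduces outside transformers; a minimal sketch using the standard `logging` module (the message strings here are illustrative, not the BLIP-2 text):

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(__name__)

# Adjacent string literals are concatenated into a single message: the intended form.
logger.warning(
    "Please refer to the docs for"
    " more details."
)

# A trailing comma turns the second literal into a %-style formatting argument.
# Because the message contains no placeholder, formatting fails and logging reports
# a "Message: ... / Arguments: (...)" error instead of the full warning text.
logger.warning(
    "Please refer to the docs for",
    " more details.",
)
```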
transformers
23,082
closed
Add support for beam search's num_return_sequences flag in flax
Fixes part of https://github.com/huggingface/transformers/issues/22696
05-01-2023 08:57:04
05-01-2023 08:57:04
CC @gianlucadetommaso and @gante <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@mayankagarwals to make our CI green, you will likely need to rebase with `main`, then run `make fixup`, then commit. Also, tag the PR as ready when you're ready for a final check from a core maintainer :)<|||||>Hey @gante,

Yes! Have done those. Wanted to get your views on it before cleaning up the PR. The CI is green now. I couldn't find any specific test for this so didn't add one, but the following script serves as a decent functional test:

```
from transformers import FlaxGPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = FlaxGPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)

input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='jax')

beam_output = model.generate(
    input_ids,
    max_length=50,
    num_beams=5,
    no_repeat_ngram_size=2,
    num_return_sequences=2,
    early_stopping=True
)

print("All generated hypotheses:\n")
for sequence in beam_output.sequences.tolist():
    print(tokenizer.decode(sequence, skip_special_tokens=True))
    print("-------")
```

Output before the change:

```
All generated hypotheses:

I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again. I'm not sure if I'll ever be able to walk with him again. I'm not sure if I'll
-------
```

Output after the change:

```
All generated hypotheses:

I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again. I'm not sure if I'll ever be able to walk with him again. I'm not sure if I'll
-------
I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again. I'm not sure if I'll ever be able to walk with him again. I'm not sure if
-------
```
<|||||>> But augment it to return N beams and verify we get the sequences/scores for these beams

Sure, have added the test. Thanks for the reference, made it a 5-minute job :p

@sanchit-gandhi @gante <|||||>ready for a final check, tagging a core maintainer
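The script above could be turned into an assertive check along the lines of the sketch below; the helper name and the expected output shape are illustrative assumptions, not the test that was merged:

```python
from transformers import FlaxGPT2LMHeadModel, GPT2Tokenizer


def check_num_return_sequences(num_beams=5, num_return_sequences=2):
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = FlaxGPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)
    input_ids = tokenizer.encode("I enjoy walking with my cute dog", return_tensors="jax")

    beam_output = model.generate(
        input_ids,
        max_length=20,
        num_beams=num_beams,
        num_return_sequences=num_return_sequences,
    )
    # With the flag supported, each input row should yield `num_return_sequences` candidates.
    assert beam_output.sequences.shape[0] == input_ids.shape[0] * num_return_sequences


check_num_return_sequences()
```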
transformers
23,081
closed
GPTNeoXAttention does not deal with odd numbers of attention heads
### System Info

transformers 4.29, HEAD
Linux (not relevant, reproducible on e.g. Mac OS)
python 3.10.11 (not relevant, reproducible on e.g. python 3.9.13)

### Who can help?

@ArthurZucker @younesbelkada

### Information

- [X] The official example scripts
- [ ] My own modified scripts

### Tasks

- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

When the hidden size is not evenly divisible by the number of attention heads, GPTNeoXAttention throws an exception while trying to reshape its state. Here is a minimal example to reproduce:

```
from transformers import pipeline
p = pipeline(model="Isotonic/gpt_neox_225M", task="text-generation")
p("I like to eat ")
```

The exception is the following:

```
/home/jps/anaconda3/envs/transformers/lib/python3.10/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py:133 in forward
RuntimeError: shape '[1, 5, 12, 255]' is invalid for input of size 15360
```

### Expected behavior

What happens here is that the model has 12 attention heads and a hidden size of 1024. Thus, head_size is calculated as 1024 // 12 == 85, so 3 * head_size is 255 rather than 256, which produces the shape mismatch above.

I am not quite sure how this is supposed to work. In https://github.com/EleutherAI/gpt-neox, the code checks that the hidden size is evenly divisible by the number of heads. That would not enable the use of this model, but it would give a better error message. Is there any chance to run such a model?
05-01-2023 08:24:14
05-01-2023 08:24:14
Hey! Thanks for reporting! Feel free to open a PR that raises an error when the hidden size is not divisible by the number of heads. This indeed should not happen. (It can probably be checked in the config.)
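A minimal sketch of the kind of guard being suggested; the class here is a simplified stand-in for `GPTNeoXConfig`, and both the placement and the error message are illustrative assumptions, not the fix that was eventually merged:

```python
# Simplified stand-in for GPTNeoXConfig with the suggested divisibility check.
class GPTNeoXConfig:
    def __init__(self, hidden_size=1024, num_attention_heads=12, **kwargs):
        if hidden_size % num_attention_heads != 0:
            raise ValueError(
                f"`hidden_size` ({hidden_size}) must be divisible by "
                f"`num_attention_heads` ({num_attention_heads})."
            )
        self.hidden_size = hidden_size
        self.num_attention_heads = num_attention_heads


# With the values from the report this raises immediately (1024 % 12 == 4),
# instead of failing later in the attention reshape.
try:
    GPTNeoXConfig(hidden_size=1024, num_attention_heads=12)
except ValueError as err:
    print(err)
```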
transformers
23,080
closed
Fix grammar error in summarization pipeline
# What does this PR do?

Fixes a minor grammar error I noticed while using the summarization pipeline.

## Before submitting

- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed.

- pipelines: @Narsil
05-01-2023 00:37:47
05-01-2023 00:37:47
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you very much!