| column | dtype | values |
|---|---|---|
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
24,081
closed
Error when fine-tuning RWKV using HuggingFace Trainer: 2 positional arguments but 3 were given
### System Info I am using Kaggle with 2 T4 GPUs. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce: 1. run my code 2. it just happens ### Expected behavior My code is here: https://www.kaggle.com/code/lostgoldplayer/fine-tuning. Using the Hugging Face Trainer, I am trying to fine-tune "RWKV/rwkv-4-430m-pile" on my dataset "breadlicker45/musenet-encoders-12k".
06-07-2023 15:22:54
06-07-2023 15:22:54
cc @younesbelkada @ArthurZucker <|||||>I think this has been fixed in https://github.com/huggingface/transformers/pull/23774 Can you try to install transformers with: ```bash pip install git+https://github.com/huggingface/transformers.git ```<|||||>> I think this has been fixed in #23774 Can you try to install transformers with: > > ```shell > pip install git+https://github.com/huggingface/transformers.git > ``` thanks, it works now
transformers
24,080
closed
Byebye pytorch 1.9
# What does this PR do? PyTorch 1.9 was released on 2021/06/15. It's sad, but 2 years is long enough :-)
06-07-2023 15:21:24
06-07-2023 15:21:24
_The documentation is not available anymore as the PR was closed or merged._
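For context, a minimal sketch of what dropping a minimum PyTorch version typically amounts to; this is illustrative only and not the PR's actual diff, and `1.10` is only the assumed new floor once 1.9 support is removed:

```python
# Illustrative only (not this PR's diff): the kind of minimum-version guard that a
# deprecation like this bumps. "1.10" is an assumed new floor, not taken from the PR.
from packaging import version

import torch

MIN_TORCH = "1.10"
if version.parse(torch.__version__) < version.parse(MIN_TORCH):
    raise ImportError(f"this build requires torch >= {MIN_TORCH}, found {torch.__version__}")
```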
transformers
24,079
closed
[doc build] Use secrets
Companion pr to https://github.com/huggingface/doc-builder/pull/379
06-07-2023 15:21:10
06-07-2023 15:21:10
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24079). All of your documentation changes will be reflected on that endpoint.
transformers
24,078
closed
Pop
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
06-07-2023 14:28:03
06-07-2023 14:28:03
transformers
24,077
closed
Fix expected value in tests of the test fetcher
# What does this PR do? #24051 broke a test in the test suite of the test fetcher. The PR did not run the CI because the modification was detected as a docstring-only change. This is due to a stray `"""` in the middle of the file, which this PR also fixes.
06-07-2023 13:49:37
06-07-2023 13:49:37
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24077). All of your documentation changes will be reflected on that endpoint.
transformers
24,076
closed
Be nice to TF
# What does this PR do? ... and to @Rocketknight1 To be serious: this avoids the OOM issue introduced in #23234. Note that `torch_job` uses `pytest_num_workers=3`. See [this comment](https://github.com/huggingface/transformers/pull/24071#issuecomment-1580778679).
06-07-2023 13:29:40
06-07-2023 13:29:40
_The documentation is not available anymore as the PR was closed or merged._<|||||>LGTM! If the torch value is `3` we could probably reduce the TF value even lower, but let's try this first.<|||||>`7` avoids the issue, but still 96~98% memory. Change it to `6` and will merge.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24076). All of your documentation changes will be reflected on that endpoint.
transformers
24,075
closed
[Not to merge before 2023/06/28] Time to say goodbye to py37 😭
# What does this PR do? Bye-bye! The EOL of Python 3.7 is `2023/06/27`: https://endoflife.date/python
06-07-2023 13:18:32
06-07-2023 13:18:32
_The documentation is not available anymore as the PR was closed or merged._<|||||>test failures will be addressed soon by related authors. Not related to this PR.
transformers
24,074
closed
[`Hub`] Add `safe_serialization` in push_to_hub
# What does this PR do? This PR makes it possible to push safetensors weights directly to the Hub: previously, `push_to_hub` called `save_pretrained` with default arguments, so there was no way to pass `safe_serialization=True`. cc @sgugger @Narsil Related: https://github.com/huggingface/peft/pull/553 also cc @pacman100
06-07-2023 12:46:55
06-07-2023 12:46:55
_The documentation is not available anymore as the PR was closed or merged._
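A hedged usage sketch of the option this PR enables; the repository id is a placeholder:

```python
# "my-username/my-model" is a placeholder repo id; pushing also requires Hub authentication.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
# With this PR, the serialization format can be chosen at push time:
model.push_to_hub("my-username/my-model", safe_serialization=True)  # uploads model.safetensors
# The equivalent local save already supported the flag:
model.save_pretrained("./my-model", safe_serialization=True)
```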
transformers
24,073
closed
Support PEFT models when saving the model using trainer
# What does this PR do? Currently, if one calls `save_model` on the Trainer or `push_to_hub` with a PEFT model, it saves/pushes the base model instead of the adapters. To reproduce (after `pip install trl peft transformers`): ```python from datasets import load_dataset from trl import SFTTrainer from peft import LoraConfig dataset = load_dataset("imdb", split="train") peft_config = LoraConfig( r=16, lora_alpha=32, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", ) trainer = SFTTrainer( "EleutherAI/gpt-neo-125m", train_dataset=dataset, dataset_text_field="text", peft_config=peft_config ) trainer.save_model("test-sft") ``` cc @pacman100 @sgugger @amyeroberts
06-07-2023 11:51:39
06-07-2023 11:51:39
_The documentation is not available anymore as the PR was closed or merged._
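As a point of comparison, a hedged illustration of the expected behaviour using the generic PEFT API rather than the PR's own code; it continues the reproduction snippet above, and the output directory name is arbitrary:

```python
# Continuing the snippet above: calling save_pretrained on the PEFT-wrapped model itself
# writes only the adapter weights and config (adapter_model.bin / adapter_config.json),
# which is what the Trainer should persist for PEFT models.
trainer.model.save_pretrained("test-sft-adapters")  # hypothetical output directory
```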
transformers
24,072
open
Assistant Model With Falcon Fails
### System Info transformers version: 4.30.0.dev0 (and also 4.29.2) python version: 3.9 platform: sagemaker notebook on aws running on g5.12xlarge ### Who can help? @ArthurZucker @younesbelkada @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Code: ``` import transformers import torch bnb_config = transformers.BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) model = "tiiuae/falcon-40b-instruct" tokenizer = transformers.AutoTokenizer.from_pretrained(model) model = transformers.AutoModelForCausalLM.from_pretrained(model, quantization_config=bnb_config, device_map="auto", trust_remote_code=True) assistant = "tiiuae/falcon-7b-instruct" assistant = transformers.AutoModelForCausalLM.from_pretrained(assistant, quantization_config=bnb_config, device_map="auto", trust_remote_code=True) text = "Girafatron is" inputs = tokenizer(text, return_tensors="pt", return_token_type_ids=False) inputs = {k: v.cuda() for k, v in inputs.items()} model.eval() with torch.no_grad(): outputs = model.generate(**inputs, max_new_tokens=16, assistant_model=assistant, do_sample=False) ``` Exception: ``` IndexError Traceback (most recent call last) /tmp/ipykernel_38536/3021020828.py in <cell line: 7>() 6 model.eval() 7 with torch.no_grad(): ----> 8 outputs = model.generate(**inputs, 9 max_new_tokens=16, 10 assistant_model=assistant, ~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/utils/_contextlib.py in decorate_context(*args, **kwargs) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --> 115 return func(*args, **kwargs) 116 117 return decorate_context ~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/generation/utils.py in generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs) 1490 1491 # 12. run assisted generate -> 1492 return self.assisted_decoding( 1493 input_ids, 1494 assistant_model=assistant_model, ~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/generation/utils.py in assisted_decoding(self, input_ids, assistant_model, do_sample, logits_processor, logits_warper, stopping_criteria, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs) 4380 # 5.3. Discard past key values relative to unused assistant tokens 4381 new_cache_size = new_cur_len - 1 -> 4382 outputs.past_key_values = _crop_past_key_values(self, outputs.past_key_values, new_cache_size) 4383 model_kwargs["assistant_past_key_values"] = _crop_past_key_values( 4384 assistant_model, model_kwargs["assistant_past_key_values"], new_cache_size - 1 ~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/generation/utils.py in _crop_past_key_values(model, past_key_values, maximum_length) 4522 new_past.append( 4523 ( -> 4524 past_key_values[idx][0][:, :, :maximum_length, :], 4525 past_key_values[idx][1][:, :, :maximum_length, :], 4526 ) IndexError: too many indices for tensor of dimension 3``` ### Expected behavior expected to run without errors
06-07-2023 10:44:04
06-07-2023 10:44:04
Hey thanks for reporting! Will have a look asap! Though the model is on the hub we should try to make this run smoothly! <|||||>Hey @yrapop01 👋 I've had a look at assisted generation, and here's what I found: 1. The immediate error you see can be fixed -- Falcon has a custom cache structure, so it needs custom code for cache slicing. Easy peasy. 2. We then hit a more complex wall -- the modeling code does not handle the case where there is a cache and the input ids' length is larger than 1 (a bit of a special case that is needed for the assistant [here](https://github.com/huggingface/transformers/blob/0675600a60b260d6bdb9c8ad91d932d690672bf0/src/transformers/generation/utils.py#L4268)). This means it requires modelling changes, but the model is not yet in `transformers`. I'm going to discuss internally, and let you know of our next steps :)<|||||>@yrapop01 we are adding Falcon to `transformers` (as opposed to hub-loaded model code), I'll make sure assisted generation works in the transformers version!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Do you have ETA for adding Falcon to transformers?<|||||>@yrapop01 You can follow the PR here: #24523 <|||||>Thank you!
transformers
24,071
closed
Make the TF dummies even smaller
cc @ydshieh - this will probably break some things, but if I can make it work it should reduce the memory usage during building a lot
06-07-2023 10:23:41
06-07-2023 10:23:41
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24071). All of your documentation changes will be reflected on that endpoint.<|||||>Adding @frostming's fix for Keras 2.13 to this PR as well<|||||>Tests are passing now, pinging @ydshieh and @amyeroberts for quick review! Unfortunately it's quite hard for me to test if this PR will fix the entire memory issue in the overnight CI, but we'll try this fix and if the memory issues remain then I'll try some other things too.<|||||>I can trigger a run.<|||||>It's more like when several processes are run in parallel: on circleci, 8 pytest processes.<|||||>I also have the same question on why the memory usage increases that much. Previously, we don't really use batch size 1 in dummy if I remember correctly.<|||||>A run is triggered [here](https://app.circleci.com/pipelines/github/huggingface/transformers/65919/workflows/b0c80ba6-3278-47a0-87d9-228c402b35e9/jobs/819654). If you need more changes and more runs to check, you can update the branch https://github.com/huggingface/transformers/tree/run_even_lower_tf_dummy_memory on top of this PR branch.<|||||>@amyeroberts I don't have a very good intuition for this, actually. I think it's some combination of: - The test runners were already at 90%+ memory usage before all of these PRs and tests are run in parallel as @ydshieh said, which means small perturbations could push them over the limit. - The update changed the shapes of dummies a bit - they should be smaller on average, especially after this PR, but maybe they ended up a little larger for some high-memory models and that caused the issues. It's also possible that the update sped up building by removing unnecessary build ops left over from TF 1 and not unneccessarily passing dummies when models were already built. Speeding up builds might cause tests to be in the actual model calls more of the time, and if peak model usage occurs during the actual model calls and we have lots of tests running in parallel then more tests being in the calls simultaneously might result in higher peak memory usage for the test server. This is all purely speculative on my part, though - I can't reproduce the problem locally and the nature of the parallel tests makes it hard to point to a single culprit for an OOM error!<|||||>@ydshieh the new run seems to be passing - there's an unrelated issue with one of the vit-mae tests that I can't reproduce locally and that doesn't seem related, but I think this PR resolves most of the problems!<|||||>@Rocketknight1 Unfortunately, it doesn't pass. We have to go to the `Resource` tab, and see the memory usage. <img width="1017" alt="Screenshot 2023-06-07 145417" src="https://github.com/huggingface/transformers/assets/2521628/59ad6a00-f07e-44e8-8fce-dd4164b7e3f6"> And if you click [Download the full output as a file](https://circleci.com/api/v1.1/project/github/huggingface/transformers/819654/output/111/0?file=true&allocation-id=64806dd5d4d5c9764f27e205-0-build%2F60721B2E), you will see `worker gw7 crashed and worker restarting disabled`. 😢 😭 <|||||>Well, to be more sure, I can revert the PR #23234 on **another branch**, so would be `main` without that PR, and run the test. The goal is to make sure no other PRs contribute to the OOM issue. Do you want me to do this?<|||||>No, I'm pretty confident that the change to the dummies is the cause!<|||||>@ydshieh Can we reduce the number of parallel workers by 1 for these tests? 
I think the speed boost from these PRs (plus some other ones I have planned) should compensate for any slowdown we experience, and it would be good to be able to make small changes without breaking fragile parallel tests like these<|||||>Let me open a PR for that :-)<|||||>(rebasing onto @ydshieh's PR to test everything in combination)
transformers
24,070
closed
no default_to_square and max_size passed [self.resize(image=image, size=self.size, resample=self.resample) for image in images]
### System Info ubantu 20.04 cuda 11.6 cudnn8.8 transformer 4.24.0 ### Who can help? @amyeroberts @sgugger @vanpelt ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction if self.do_resize and self.size is not None: images = [self.resize(image=image, size=self.size, resample=self.resample) for image in images] the parameters default_to_square and max_size of self.resize( ) function in feature_extraction_vit can't be passed by preprocessor_config.json file,i don't want to use the default resize ways, how to modify the config file or code. the content of preprocessor_config is as follows: { "do_normalize": true, "do_resize": true, "feature_extractor_type": "ViTFeatureExtractor", "image_mean": [ 0.5, 0.5, 0.5 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": 2, "size": 384, "default_to_square": false, "max_size": 384 } ### Expected behavior if self.do_resize and self.size is not None: images = [self.resize(image=image, size=self.size, resample=self.resample) for image in images] the parameters default_to_square and max_size of self.resize( ) function in feature_extraction_vit can't be passed by preprocessor_config.json file,i don't want to use the default resize ways, how to modify the config file or code. the content of preprocessor_config is as follows: { "do_normalize": true, "do_resize": true, "feature_extractor_type": "ViTFeatureExtractor", "image_mean": [ 0.5, 0.5, 0.5 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": 2, "size": 384, "default_to_square": false, "max_size": 384 }
06-07-2023 10:15:22
06-07-2023 10:15:22
Hi @cqray1990, thanks for raising this issue. So that we can be help, could you explain a bit more about the expected behaviour and what you're trying to do with the image processor? Do you have a checkpoint on the hub you could share? To change the resizing behaviour of the image processor, you can either modify the `size` parameter in the config file e.g.: ```json { "do_normalize": true, "do_resize": true, "image_processor_type": "ViTImageProcessor", "image_mean": [ 0.5, 0.5, 0.5 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": 2, "size": {"height": 384, "width": 384}, } ``` note: the feature extractors for vision models have been deprecated in place of image processors. Pass it into the image processor when instantiating: ``` # Override size settings from a pretrained checkpoint image_processor = ViTImageProcessor.from_pretrained(checkpoint, size={"height": 384, "width": 384}) # Create a new image processor, override the default size parameter image_processor = ViTImageProcessor(size={"height": 384, "width": 384}) ``` Or keep the default behaviour and modify just when processing ``` image_processor = ViTImageProcessor() # default behaviour - images are resized to 224x224 pixel_values = image_processor(image).pixel_values # Override default - images resized to 384x384 pixel_values = image_processor(image, size={"height": 384, "width": 384}).pixel_values ``` <|||||>> Hi @cqray1990, thanks for raising this issue. > > So that we can be help, could you explain a bit more about the expected behaviour and what you're trying to do with the image processor? Do you have a checkpoint on the hub you could share? > > To change the resizing behaviour of the image processor, you can either modify the `size` parameter in the config file e.g.: > > ```json > { > "do_normalize": true, > "do_resize": true, > "image_processor_type": "ViTImageProcessor", > "image_mean": [ > 0.5, > 0.5, > 0.5 > ], > "image_std": [ > 0.5, > 0.5, > 0.5 > ], > "resample": 2, > "size": {"height": 384, "width": 384}, > } > ``` > > note: the feature extractors for vision models have been deprecated in place of image processors. > > Pass it into the image processor when instantiating: > > ``` > # Override size settings from a pretrained checkpoint > image_processor = ViTImageProcessor.from_pretrained(checkpoint, size={"height": 384, "width": 384}) > > # Create a new image processor, override the default size parameter > image_processor = ViTImageProcessor(size={"height": 384, "width": 384}) > ``` > > Or keep the default behaviour and modify just when processing > > ``` > image_processor = ViTImageProcessor() > > # default behaviour - images are resized to 224x224 > pixel_values = image_processor(image).pixel_values > > # Override default - images resized to 384x384 > pixel_values = image_processor(image, size={"height": 384, "width": 384}).pixel_values > ``` @amyeroberts the ViTImageProcessor ways to preprocess the image is only rdirectly resize to size={"height": 384, "width": 384},don't inlucde padding, old version 2.24.- have this operation,but the parameters default_to_square and max_size of self.resize( ) function i can't be passes ,cause i need padding<|||||>@cqray1990 The image processors are written to be aligned with the model's preprocessing from its paper, so they won't all perform the same operations. Could you share the checkpoint being used? ViT's feature extractor / image processor has never padded the images. 
[This is the class in v4.24](https://github.com/huggingface/transformers/blob/94b3f544a1f5e04b78d87a2ae32a7ac252e22e31/src/transformers/models/vit/feature_extraction_vit.py). I don't believe the values of `default_to_square` have ever been used by these classes if in the config. `default_to_square` controls the behaviour of how the output image size is calculated and is model specific. `max_size` is a deprecated argument and also hasn't ever been used by the vit image processor. If there's a specific set of transformations you wish to perform with the input images, I suggest [looking through the different model image processors](https://github.com/search?q=repo%3Ahuggingface%2Ftransformers+path%3Asrc%2Ftransformers%2Fmodels%2F**%2Fimage_processing_*.py+ImageProcessor%28BaseImageProcessor%29&type=code), and finding one which suits your needs, or writing your own custom one. If padding is needed, you can search for [image processors that use the `do_pad` flag](https://github.com/search?q=repo%3Ahuggingface%2Ftransformers+path%3Asrc%2Ftransformers%2Fmodels%2F**%2Fimage_processing_*.py+do_pad&type=code). <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,069
closed
[LEDModel, Longformer] Make_fx compatibility
### System Info - transformers version: 4.29.2 - Platform: Linux - Python version: 3.8.16 - PyTorch version: 2.0.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Related to #23907 To reproduce: ```python from torch.fx.experimental.proxy_tensor import make_fx from transformers import LEDModel model = LEDModel.from_pretrained('allenai/led-base-16384', torchscript=True) inp = model.dummy_inputs['input_ids'] model.eval() fx_g = make_fx(model)(inp) ``` The presence of `.item()` and `torch.div` cause make_fx to fail for LEDModel and Longformer with the following error: ``` RuntimeError: It appears that you're trying to get value out of a tracing tensor with aten._local_scalar_dense.default - erroring out! It's likely that this is caused by data-dependent control flow or similar. It may be possible to trace this with dynamic shapes; try setting tracing_mode='symbolic' in your make_fx call. ``` The calls to `item()` could be removed without side effects, and I believe the same is true for replacing `torch.div` with regular python divisions. Even with these adjustments, make_fx seems to fail in both models because of `is_global_attn`: https://github.com/huggingface/transformers/blob/f1660d7e23d4432513fe060bde4f9b7b29f05204/src/transformers/models/led/modeling_led.py#L233 https://github.com/huggingface/transformers/blob/f1660d7e23d4432513fe060bde4f9b7b29f05204/src/transformers/models/longformer/modeling_longformer.py#L601 I am not sure if it would be possible to have a workaround for that. One thing to note is that the presence of `.item()` and `torch.div` also causes graph breaks when using torch dynamo to get the FX representations of these models. It seems that `is_global_attn` is not an issue in that case. ### Expected behavior The full FX representation of LEDModel/Longformer using make_fx
06-07-2023 10:13:40
06-07-2023 10:13:40
@Giuseppe5 As with #23907, happy to look at a PR with a fix! cc @ArthurZucker @younesbelkada <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
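An illustrative sketch of the kind of rewrite the issue suggests; it is not the actual modeling code, only a standalone demonstration that the Python-int form gives the same result:

```python
# Avoid .item() and torch.div when both operands are plain Python integers: make_fx
# cannot extract concrete values out of traced tensors, but it never sees plain ints.
import torch

seq_len, window = 4096, 512
# trace-unfriendly: materializes a tensor and reads a concrete value back out of it
chunks_tensor = torch.div(torch.tensor(seq_len), window, rounding_mode="trunc")
chunks_bad = int(chunks_tensor.item())
# trace-friendly equivalent when the operands are Python ints
chunks_good = seq_len // window
assert chunks_bad == chunks_good
```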
transformers
24,068
open
Feature request: support more than device keyword arg when calling .to() on BatchEncoding
### Feature request I see that the `.to()` [method](https://github.com/huggingface/transformers/blob/f1660d7e23d4432513fe060bde4f9b7b29f05204/src/transformers/tokenization_utils_base.py#L751) of `BatchEncoding` returned by tokenizers only supports the `device` keyword argument. However, the `BatchFeature` returned by image processors/audio feature extractors supports [more keyword arguments](https://github.com/huggingface/transformers/blob/f1660d7e23d4432513fe060bde4f9b7b29f05204/src/transformers/feature_extraction_utils.py#L187), most importantly `dtype`. ### Motivation This is handy as it allows to do: ``` from transformers import ViTImageProcessor import torch from PIL import Image import requests processor = ViTImageProcessor() url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) encoding = processor(image, return_tensors="pt").to(device="cuda", dtype=torch.float16) for k,v in encoding.items(): print(k,v.dtype) ``` which returns ``` pixel_values torch.float16 ``` ### Your contribution I could submit a PR but if anyone has the bandwidth for this, would be great to add it :D
06-07-2023 09:49:41
06-07-2023 09:49:41
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@NielsRogge @amyeroberts Hey I am interested on working on this issue. It is my first one so I'll need some guidance. I'll try my best and ping you if i need help!<|||||>@amannagarkar Great! :D Feel free to ask questions on the PR once it's opened! <|||||>I can help with this too @amyeroberts Would that be okay? @amannagarkar <|||||>@Rishab26 I am already working on the issue! I will let you know in a couple of days? I am testing the code for errors and will open a pr shortly! <|||||>@amannagarkar Sure, no worries. Happy to collaborate together too. I've taken a shot at it 👍
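A minimal sketch of the requested behaviour, mirroring `BatchFeature.to()`; the class name is hypothetical and this is not the library's implementation:

```python
# Hypothetical dict-of-tensors container with a BatchFeature-style .to(); assumes the
# same convention as BatchFeature: dtype casts apply only to floating-point tensors.
import torch


class BatchEncodingSketch(dict):
    def to(self, *args, **kwargs):
        device = kwargs.get("device")
        new_data = {}
        for key, value in self.items():
            if not isinstance(value, torch.Tensor):
                new_data[key] = value
            elif torch.is_floating_point(value):
                # only floating tensors are cast, so integer ids (input_ids,
                # attention_mask) are never silently converted
                new_data[key] = value.to(*args, **kwargs)
            elif device is not None:
                new_data[key] = value.to(device=device)
            else:
                new_data[key] = value
        self.update(new_data)
        return self


enc = BatchEncodingSketch(input_ids=torch.tensor([[1, 2]]), scores=torch.rand(1, 2))
enc.to(dtype=torch.float16)
print(enc["input_ids"].dtype, enc["scores"].dtype)  # torch.int64 torch.float16
```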
transformers
24,067
closed
fix executable batch size issue
# What does this PR do? 1. Fixes #24050 2. Context: we weren't properly handling the `auto_find_batch_size=True` case. Here, we need to free all of the stored model references in the Accelerator on each retry, as shown in https://github.com/huggingface/accelerate/blob/main/examples/by_feature/automatic_gradient_accumulation.py 3. This PR does that.
06-07-2023 08:49:39
06-07-2023 08:49:39
_The documentation is not available anymore as the PR was closed or merged._
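A hedged sketch of the Accelerate pattern the description refers to, adapted from the Accelerate examples rather than the Trainer's actual code:

```python
from accelerate import Accelerator
from accelerate.utils import find_executable_batch_size

accelerator = Accelerator()


@find_executable_batch_size(starting_batch_size=128)
def train(batch_size):
    # Drop model/optimizer references kept by the Accelerator from a previous (OOM)
    # attempt, so each retry with a smaller batch size starts from a clean state.
    accelerator.free_memory()
    print(f"attempting training with batch_size={batch_size}")
    # build dataloaders and prepare the model/optimizer with `batch_size` here


train()
```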
transformers
24,066
closed
A minor change to fix a bug when using torch.compile()
When using model = torch.compile(model), the class of new model is changed to OptimizedModule. The function, inspect.signature(self.model.forward), will return ["args", "kargs"], withou "input_ids", "labels", and so on. This will result in the dataset remove all columns and the data sample is a empty dict, and incurs bug when forward propagation. By using "self.model._orig_mod.forward", above problem can be fixed. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> With model = torch.compile(model) operation, the class of the model will change from previous model to "torch._dynamo.eval_frame.OptimizedModule", and thus inspect.signature(self.model.forward) will return ["args", "kargs"] instead of expectable variable names of the model. This results in that the data columns are removed and the incurs bug during training. This pull request can fix this bug. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? This is a minor change, and anyone can review it. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
06-07-2023 07:37:22
06-07-2023 07:37:22
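A small sketch of the signature mismatch this PR describes, written per the report; `Net` is a made-up stand-in model, and exact behaviour may differ across torch versions:

```python
import inspect

import torch


class Net(torch.nn.Module):
    def forward(self, input_ids, labels=None):
        return input_ids


compiled = torch.compile(Net())
print(type(compiled).__name__)  # OptimizedModule
# Per the report, the compiled wrapper hides the real argument names from inspection...
print(list(inspect.signature(compiled.forward).parameters))
# ...while the wrapped original keeps them, hence the proposed self.model._orig_mod.forward:
print(list(inspect.signature(compiled._orig_mod.forward).parameters))  # ['input_ids', 'labels']
```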
transformers
24,065
closed
Add CPMBee model
# What does this PR do? Adds the [CPM-Bee](https://github.com/OpenBMB/CPM-Bee/tree/main) pytorch model. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
06-07-2023 07:31:11
06-07-2023 07:31:11
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24065). All of your documentation changes will be reflected on that endpoint.<|||||>@ArthurZucker @younesbelkada Please kindly have a review: )<|||||>Hey @gongbaitao ! Thanks a lot for opening a PR and contributing to the HF ecosystem! 🤗 We have recently been trying to push for `model on the hub` and have as much support as we can there. It will also be easier to integrate it! Here is a [tutorial](https://huggingface.co/docs/transformers/custom_models) if that sound good to you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,064
closed
[WIP] Add VGCN-BERT model
# What does this PR do? Adds the VGCN-BERT model from [VGCN-BERT: Augmenting BERT with Graph Embedding for Text Classification](https://arxiv.org/abs/2004.05707) paper. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #24038 (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. [24038](https://github.com/huggingface/transformers/issues/24038) - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
06-07-2023 04:58:23
06-07-2023 04:58:23
cc @ArthurZucker @younesbelkada <|||||>Hey! Thanks a lot for opening this PR 🔥 We have been pushing a lot for models to be on the hub, as it is a lot easier to implement ! What do you think about trying [this tutorial out](https://huggingface.co/docs/transformers/custom_models)! <|||||>> Hey! Thanks a lot for opening this PR 🔥 We have been pushing a lot for models to be on the hub, as it is a lot easier to implement ! What do you think about trying [this tutorial out](https://huggingface.co/docs/transformers/custom_models)! Thanks for your reply @ArthurZucker . I am trying with the new way and put model with code to [here in hub](https://huggingface.co/zhibinlu/vgcn-bert-distilbert-base-uncased) but I found I can not put my `modeling_graph.py` in my upload script with this function `model.push_to_hub`. Also, people need import it using `from transformers.models.vgcn_bert.modeling_graph import WordGraph` in this PR, but do you have a suggestion when I put it in hub? And, how to put model.safetensors instead of model.bin; how to put other files like `README.md` (I create manually in the hub UI), `tokenizer.json` etc. my upload script ``` from vgcn_bert.configuration_vgcn_bert import VGCNBertConfig from vgcn_bert.modeling_vgcn_bert import VGCNBertModel, VGCNBertForMaskedLM, VGCNBertForMultipleChoice, VGCNBertForQuestionAnswering, VGCNBertForSequenceClassification, VGCNBertForTokenClassification import transformers as tfr from vgcn_bert.modeling_graph import WordGraph VGCNBertConfig.register_for_auto_class() VGCNBertModel.register_for_auto_class("AutoModel") VGCNBertForMaskedLM.register_for_auto_class("AutoModelForMaskedLM") VGCNBertForMultipleChoice.register_for_auto_class("AutoModelForMultipleChoice") VGCNBertForQuestionAnswering.register_for_auto_class("AutoModelForQuestionAnswering") VGCNBertForSequenceClassification.register_for_auto_class("AutoModelForSequenceClassification") VGCNBertForTokenClassification.register_for_auto_class("AutoModelForTokenClassification") tokenizer = tfr.AutoTokenizer.from_pretrained( "zhibinlu/vgcn-distilbert-base-uncased" ) model = VGCNBertModel.from_pretrained( "zhibinlu/vgcn-distilbert-base-uncased", ) from huggingface_hub import notebook_login notebook_login() model.push_to_hub("vgcn-bert-distilbert-base-uncased") ```<|||||>Oups your question slipped through the cracks, let me answers to the best of my knowledge<|||||>> 1. I found I can not put my modeling_graph.py in my upload script with this function model.push_to_hub. > 2. Also, people need import it using from transformers.models.vgcn_bert.modeling_graph import WordGraph in this PR, but do you have a suggestion when I put it in hub? > 3. And, how to put model.safetensors instead of model.bin; > 4. how to put other files like README.md (I create manually in the hub UI), tokenizer.json etc. 1. That is expected, if you have a look at this [doc page](https://huggingface.co/docs/hub/models-uploading#using-the-huggingfacehub-client-library), it will help you upload the actual code. `push_to_hub` is not made for this! 2. When you put the code on the hub (using `upload` or equivalent), then you simply need to create a `config.json`. If you want an example, here is [one](https://huggingface.co/tiiuae/falcon-7b/blob/main/config.json). Falcon is hosted on the hub. 3. You should be able to save the safetensors weights using `use_safetensors=True` option when pushing to the hub/saving the model. 4. The readme can also be uploaded on the hub like any other files. 
You can push the tokenizer using `tokenizer.push_to_hub("path")` hope this helps you <|||||>> > 1. I found I can not put my modeling_graph.py in my upload script with this function model.push_to_hub. > > 2. Also, people need import it using from transformers.models.vgcn_bert.modeling_graph import WordGraph in this PR, but do you have a suggestion when I put it in hub? > > 3. And, how to put model.safetensors instead of model.bin; > > 4. how to put other files like README.md (I create manually in the hub UI), tokenizer.json etc. > > 1. That is expected, if you have a look at this [doc page](https://huggingface.co/docs/hub/models-uploading#using-the-huggingfacehub-client-library), it will help you upload the actual code. `push_to_hub` is not made for this! > 2. When you put the code on the hub (using `upload` or equivalent), then you simply need to create a `config.json`. If you want an example, here is [one](https://huggingface.co/tiiuae/falcon-7b/blob/main/config.json). Falcon is hosted on the hub. > 3. You should be able to save the safetensors weights using `use_safetensors=True` option when pushing to the hub/saving the model. > 4. The readme can also be uploaded on the hub like any other files. You can push the tokenizer using `tokenizer.push_to_hub("path")` > > hope this helps you @ArthurZucker Ok, these answers will help me, after getting rid of all the problems, I will cancel this PR.<|||||>The new implement is here: https://huggingface.co/zhibinlu/vgcn-bert-distilbert-base-uncased<|||||>Thanks a lot for sharing this and adding this model! 🔥
transformers
24,063
closed
Add option for `trust_remote_code=True` on transformers-cli download
### Feature request Currently it is very convenient to download models using `transformers-cli download`; however, some models need the extra argument `trust_remote_code=True`, for example `transformers-cli download "tiiuae/falcon-40b"`. ### Motivation Would it make sense to add `transformers-cli download "tiiuae/falcon-40b" --trust_remote_code`? ### Your contribution PR
06-06-2023 22:48:52
06-06-2023 22:48:52
Sounds like a good addition to me! cc @sgugger who's been doing a lot of the work enabling remote code integration. <|||||>Yes, happy to review a PR!<|||||>Thanks @amyeroberts and @sgugger, is there any other argument worth adding when loading a model?
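For reference, the Python-side equivalent of what the requested CLI flag would enable (the `--trust_remote_code` flag itself did not exist at the time of this issue):

```python
# Loading a model whose code lives on the Hub requires explicitly opting in to remote code.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-40b", trust_remote_code=True)
```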
transformers
24,062
closed
[Wav2Vec2] Fix torch script
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes Wav2Vec2 torch script slow tests ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
06-06-2023 22:22:20
06-06-2023 22:22:20
It seems like `torch.trace(...)` doesn't like property function as it'll always call them. Since we've added the property as a private property in https://github.com/huggingface/transformers/pull/23813, let's just go the simplest way and change it to a function. This PR should fix: ``` tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelTest::test_torchscript_output_attentions (line 1180) ValueError: <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC'> has no adapter layers. Make sure to define config.adapter_attn_dim. tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelTest::test_torchscript_output_hidden_state (line 1180) ValueError: <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC'> has no adapter layers. Make sure to define config.adapter_attn_dim. tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelTest::test_torchscript_simple (line 1180) ValueError: <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC'> has no adapter layers. Make sure to define config.adapter_attn_dim. tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2RobustModelTest::test_torchscript_output_attentions (line 1180) ValueError: <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC'> has no adapter layers. Make sure to define config.adapter_attn_dim. tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2RobustModelTest::test_torchscript_output_hidden_state (line 1180) ValueError: <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC'> has no adapter layers. Make sure to define config.adapter_attn_dim. tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2RobustModelTest::test_torchscript_simple (line 1180) ValueError: <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC'> has no adapter layers. Make sure to define config.adapter_attn_dim. ``` of the slow tests, such as ``` RUN_SLOW=1 pytest tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelTest::test_torchscript_simple ``` @sgugger @ydshieh <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>One possible different way is to raise `AttributeError` instead of `ValueError` if we want to keep the propery.
transformers
24,061
closed
Convert PyTorch checkpoint for more recent models
### Feature request Is it possible to add more PyTorch checkpoint conversion examples, e.g. convert_bart_original_pytorch_checkpoint_to_pytorch.py, for more recent and popular architectures (e.g. mt5)? ### Motivation It would really accelerate setting up new feature/model development using the pretrained models that are already available. ### Your contribution If anyone can give feedback, I am happy to share the resulting conversion script.
06-06-2023 21:28:35
06-06-2023 21:28:35
For each of the models added to the transformers repo, conversion scripts are created to port the weights from the original weights to the library format. You can find these under [models' respective folders](https://github.com/huggingface/transformers/tree/main/src/transformers/models) e.g. for [longformer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/longformer/convert_longformer_original_pytorch_lightning_to_pytorch.py), [longt5](https://github.com/huggingface/transformers/blob/main/src/transformers/models/longt5/convert_longt5x_checkpoint_to_flax.py), or the [t5 script used for mt5](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5_original_tf_checkpoint_to_pytorch.py).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,060
closed
Empty prediction masks after switching from transformers 4.26.1 to transformers 4.29.0
### System Info Hey guys, I have been working on the 4.26.1 version perfectly fine, but I wanted to switch to 4.29.0 to make use of the latest models (such as SAM). The issue I encounter is that when running my prediction code for models such as MaskFormer and Mask2Former, my outputs between versions 4.26.1 and 4.29.0 do not match at all (4.26.1 works fine for all models, while 4.29.0 gives me empty or wrong predictions). Anything I am missing here? - `transformers` version: 4.29.0 - Platform: Linux-4.15.0-194-generic-x86_64-with-glibc2.27 - Python version: 3.10.9 - Huggingface_hub version: 0.12.1 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am running a very simple prediction pipeline that looks like this: model instance: ```python mask2former = Mask2Former(<args_of_my_pytorch_lightning_model>) checkpoint = torch.load(<path_to_weights>) mask2former.load_state_dict(checkpoint['state_dict']) mask2former.eval() ``` data module instance: ```python dm.setup(stage="test") dl = dm.test_dataloader() ``` simple prediction loop ```python for sample in dl: pred_mask2former = mask2former.forward(sample["pixel_values"]) ``` ### Expected behavior The outputs produced by the model do not match between versions `4.26.1` and `4.29.0` of the `transformers` package. I get the expected behavior with `4.26.1`, but empty or (very) wrong predictions with 4.29.0` on the exact same data (using the same code/models/...).
06-06-2023 19:46:36
06-06-2023 19:46:36
Hi @alzaia, thanks for reporting this issue. To help us dig into the problem, we need to be able to reproduce the issue. Could you share a minimal snippet that we could use to compare the versions? Specifically, a model checkpoint, any processing logic to produce `sample` and any additional information which might affect the model e.g. `<args_of_my_pytorch_lightning_model>`. <|||||>Hello @amyeroberts , thanks for the reply. Unfortunately I cannot provide data or checkpoints since this is not a public project, but I can give you more information on the processing logic. On the model side, I have a pretty standard pytorch lightning module that looks like this (example with MaskFormer): ```python # Load pretrained model weights (I am using the `facebook/maskformer-swin-tiny-coco` weights here) self.model = MaskFormerForInstanceSegmentation.from_pretrained(args) # Load image processor self.img_processor = MaskFormerImageProcessor.from_pretrained(<name_of_pretrained_model>) # In the forward method outputs = self.model(pixel_values=pixel_values) # Then calling the right post-processing method from self.img_processor to reformat the output for my needs ``` On the data side, my `sample` is a standard pre-processed object with keys `['image', 'mask', 'pixel_values', 'pixel_mask', 'mask_labels', 'class_labels']`. I feed the `pixel_values` tensor in my forward, which is a tensor of shape `torch.Size([1, 3, 256, 256])`. Do you guys feel like this is enough information to try to reproduce it? It may be that some default arguments of the `img_processor` or the `from_pretrained` method of the model changed in the newest version? I will try to look more into that. <|||||>@alzaia OK, I understand if you are unable to share e.g. weights. However, to be able to diagnose the model behaviour, it is necessary to know any changes to the default model architecture / behaviour. * Is `facebook/maskformer-swin-tiny-coco` the checkpoint being used for both the image processor and the model? * For `self.model = MaskFormerForInstanceSegmentation.from_pretrained(args)` - could you share `args`? Specifically, are there any settings which might affect the model's weights e.g. `load_in_8bit`, `device_map`? Are there any kwargs overidding the model config defaults e.g. `mask_feature_size`, `backbone_config` etc? * What format are the images being passed to the image processor e.g. PIL images? * Are any settings of the image processor being overriden in the processing call e.g. `do_resize=False`? * When you say `"calling the right post-processing method"` - which one is being used? Are the empty predictions being observed in the raw model outputs or after post processing? <|||||>Thanks for the fast reply @amyeroberts, I can give more specific details on that. - Right, I am using the same checkpoint (`facebook/maskformer-swin-tiny-coco`) for both the model and the image processor here. 
- For the loading of the pretrained model, I am using the following: ```python # Doing multiclass (8 classes) with my own label to id mapping dictionary (all args not specified here are using the default values of course): self.model = MaskFormerForInstanceSegmentation.from_pretrained( `facebook/maskformer-swin-tiny-coco`, num_labels=8, id2label=self.id2label, label2id=self.label2id, ignore_mismatched_sizes=True, ) ``` - For the image processor, I am using, for example: ```python self.img_processor = MaskFormerImageProcessor.from_pretrained(`facebook/maskformer-swin-tiny-coco`) self.img_processor.do_resize = True self.img_processor.size = 256 ``` - For the post-processing method, it looks like this: ```python post_processed_output = self.img_processor.post_process_semantic_segmentation( outputs, target_sizes=[ [pixel_values.shape[2], pixel_values.shape[3]] for _ in range(pixel_values.shape[0]) ], ) ``` After running a prediction on the exact same sample to compare the `output` object returned by the model, I realize that it does not return the same logits. For instance, with `v4.26.1` I get an `outputs["masks_queries_logits"]` that looks like this: ``` tensor([[[[-1.0812e+01, -1.1791e+01, -1.1831e+01, ..., -1.1767e+01, -1.2532e+01, -1.0802e+01], [-1.1643e+01, -1.1272e+01, -1.1423e+01, ..., -1.1962e+01, -1.2646e+01, -1.1662e+01], [-1.1235e+01, -1.0905e+01, -1.0670e+01, ..., -1.1819e+01, -1.2373e+01, -1.1361e+01], ..., ``` While with `v4.29` I get the following `outputs["masks_queries_logits"]`: ``` tensor([[[[ 5.7369, 6.7416, 6.6841, ..., 4.9752, 5.1076, 4.8850], [ 7.2394, 8.4208, 8.4004, ..., 6.0630, 6.2035, 5.7476], [ 7.2572, 8.4786, 8.4989, ..., 5.8866, 6.0229, 5.6737], ..., ``` Which seems to indicate that the post-processing is fine, the problem arises during the prediction using the model. In terms of data, I do not do anything fancy, I do start with PIL images, but convert them to numpy arrays, and then process them with the right model processor: ```python self.img_processor: processed_inputs = self.img_processor( images=image, segmentation_maps=mask, return_tensors="pt" ) ``` Thanks for any insights to what may be causing the issue. <|||||>@alzaia Thanks for the additional info, it really helps :) There's a few things to note from the examples: **1. Model instantiation** In the snippet: ```python model = MaskFormerForInstanceSegmentation.from_pretrained( `facebook/maskformer-swin-tiny-coco`, num_labels=8, id2label=id2label, label2id=label2id, ignore_mismatched_sizes=True, ) ``` when you change the number of prediction classes, the pretrained weights for the classification head will be thrown away, and new randomly initialized weights with the correct dimensions created. Creating the model you should see: ``` Some weights of MaskFormerForInstanceSegmentation were not initialized from the model checkpoint at facebook/maskformer-swin-tiny-coco and are newly initialized because the shapes did not match: - class_predictor.weight: found shape torch.Size([134, 256]) in the checkpoint and torch.Size([9, 256]) in the model instantiated - class_predictor.bias: found shape torch.Size([134]) in the checkpoint and torch.Size([9]) in the model instantiated - criterion.empty_weight: found shape torch.Size([134]) in the checkpoint and torch.Size([9]) in the model instantiated You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
``` This means that the output `output.class_queries_logits` will essentially be nonsense until the model has been finetuned on the downstream task. This is the case for both transformers 4.26.1 and the most recent version. This then has an effect on the segmentation masks post processing, [specifically here](https://github.com/huggingface/transformers/blob/deff5979fee1f989d26e4946c92a5c35ce695af8/src/transformers/models/maskformer/image_processing_maskformer.py#LL988C9-L988C9). When I ran the following script multiple times with transformers==4.26.1, I saw many different predicted (sometimes empty) masks: ```python import requests import matplotlib.pyplot as plt import numpy as np import torch from PIL import Image import transformers from transformers import MaskFormerForInstanceSegmentation, MaskFormerImageProcessor CHECKPOINT = "facebook/maskformer-swin-tiny-coco" id2label = {i: str(i) for i in range(8)} label2id = {str(i): i for i in range(8)} model = MaskFormerForInstanceSegmentation.from_pretrained( CHECKPOINT, num_labels=8, id2label=id2label, label2id=label2id, ignore_mismatched_sizes=True, ) image_processor = MaskFormerImageProcessor.from_pretrained( CHECKPOINT, size={"shortest_edge": 256, "longest_edge": 1333} ) url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = image_processor(images=image, return_tensors="pt") target_sizes = [[pv.shape[-2], pv.shape[-1]] for pv in inputs["pixel_values"]] with torch.no_grad(): outputs = model(**inputs) segmentation_masks = image_processor.post_process_semantic_segmentation( outputs, target_sizes ) plt.imshow(segmentation_masks) plt.show() ``` Some examples from v.4.26.1: ![maskformer_4_26_1__1](https://github.com/huggingface/transformers/assets/22614925/205d9a3e-7bc8-4df8-af53-54010b3f75ab) ![maskformer_4_26_1__2](https://github.com/huggingface/transformers/assets/22614925/af572f31-f8bb-4125-860b-9171180a5e35) ![maskformer_4_26_1__3](https://github.com/huggingface/transformers/assets/22614925/522abca2-767b-4a78-8930-6e0b749f2db3) I observed the same for the most recent release, 4.30.1. and 4.29.2. Could you try running this in your environment to see if you observe the same behaviour? This way we can try and pin down the differences between our respective environments. **2. Image Processor** The `size` parameter for the image processors is now a dictionary. Although it should still work because of efforts to create backwards compatibility, the equivalent dictionary should be used to change the behaviour. Note: this can also be set in the `from_pretrained` call: ```python image_processor = MaskFormerImageProcessor.from_pretrained( CHECKPOINT, size={"shortest_edge": 256, "longest_edge": 1333} ) ``` **3. Mask Queries Logits** This is interesting - if I save out `outputs.mask_queries_logits` from a run in v4.30.1/v4.29.2 and v.4.26.1 there's 0 difference. However, if I find the largest absolute difference for `outputs_class_queries_logits` between the two versions, it's typically ~3. This will be due to the randomly initialized head. **4. Image processing** You don't need to convert to numpy images before passing to the image processor, you can pass in PIL images directly :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,059
closed
Error with pip install in Colab Notebook
### System Info Google Colab ### Who can help? @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1) Copy the colab notebook from the page https://huggingface.co/docs/transformers/perf_infer_gpu_one linked at https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4?usp=sharing. 2) Run the first cell to pip install packages. I get the following error: ERROR: Could not find a version that satisfies the requirement bitsandbyte (from versions: none) ERROR: No matching distribution found for bitsandbyte ### Expected behavior I expect the cell to run without error. When I replace ``` !pip install --quiet bitsandbyte ``` with ``` !pip install --quiet bitsandbytes ``` I get the desired behavior.
06-06-2023 19:31:42
06-06-2023 19:31:42
Thank you so much for flagging, just fixed the notebook. Closing the issue, feel free to re-open it if you see more issues!
transformers
24,058
closed
Add AzureOpenAiAgent
# What does this PR do? This PR adds an AzureOpenAiAgent, superseding #23355 since the contributor there does not seem to want to finish the PR. Fixes #23324
06-06-2023 18:01:48
06-06-2023 18:01:48
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24058). All of your documentation changes will be reflected on that endpoint.
transformers
24,057
closed
CUDA OOM error when loading sharded checkpoint
### System Info * `transformers` version: 4.27.1 * Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35 * Python version: 3.9.12 * Huggingface_hub version: 0.13.2 * PyTorch version (GPU?): 2.0.0+cu117 (True) * Tensorflow version (GPU?): not installed (NA) * Flax version (CPU?/GPU?/TPU?): not installed (NA) * Jax version: not installed * JaxLib version: not installed * Using GPU in script?: Yes * Using distributed or parallel set-up in script?: Yes, parallel (accelerate auto-mapping) ### Who can help? @sgugger @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction This is a port-over from an issue I wrote on the PyTorch forums [here](https://discuss.pytorch.org/t/cuda-oom-error-when-loading-sharded-checkpoint/180710). I received some help from the folks on the PyTorch side, but unfortunately, they seem to be suggesting that there may be an error in the way `Trainer` saves FSDP models. I will rehash the issue here with the additional context: > We fine-tuned Stability’s StableLM-7b using Huggingface’s Trainer API (with FSDP) and then saved the resulting checkpoints in the sharded format that is typical for large language models. Quite surprisingly, however, attempting to load the model for inference leads to a strange error when loading one of the checkpoints (`Unable to load weights from pytorch checkpoint file`) > > We took some further investigative steps by making a simple `torch.load` call on the problem shard, and got a CUDA OOM error. The exceedingly strange thing about this OOM error is that we are working with a node with 8xA100s (80GB), and the given state dict is only 171kB (comprising only 7 layers of the model). So, you can imagine seeing the following error was quite a shock: > > ``` > torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 29.31 GiB (GPU 0; 79.19 GiB total capacity; 55.76 GiB already allocated; 22.48 GiB free; 55.76 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF > ``` > > After looking into this further, I discovered a few threads discussing this issue, like [this one 3](https://discuss.pytorch.org/t/cuda-error-out-of-memory-when-load-models/38011), and attempted some of the fixes, namely loading the state dict on CPU first. After doing so, I received the following error: > `RuntimeError: Trying to resize storage that is not resizable` > > So it seems that approach is out of the question. As I previously said, the strange thing here is that the first two shards load without issue, while the third and fourth cannot be loaded. Additionally, nothing seems particularly out of place in the shard-layer mapping JSON. I am stumped here. The folks at PyTorch let us know that with FSDP models should _not_ be saved using `torch.save` and provided an example script of how they should be saved [here](https://github.com/pytorch/pytorch/blob/e71ab214226af1f9dbded944e939c6447e0e8f09/torch/distributed/checkpoint/examples/fsdp_checkpoint_example.py#L59). Does `Trainer` properly handle these larger models, or is there an extra step we should be taking here? 
### Expected behavior Typically, I would expect `save_model` to process the model shards in a way that allows them to be reloaded without issue using `from_pretrained` along with `accelerate`'s auto device mapping.
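For concreteness, a minimal sketch of the reload I would expect to work (the checkpoint path is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM

# Directory containing the sharded checkpoint written by Trainer.save_model (path is illustrative).
model = AutoModelForCausalLM.from_pretrained(
    "path/to/stablelm-7b-finetuned",
    device_map="auto",
    torch_dtype=torch.float16,
)
```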
06-06-2023 18:01:37
06-06-2023 18:01:37
cc @pacman100 <|||||>Hello, looking into this. In the meantime, could you try the main branch of transformers and accelerate and let us know if that works as expected? <|||||>Hi @pacman100, thanks for taking a look! When you say `main` branch, do you mean bumping the versions of `transformers` and `accelerate`?<|||||>Hello, can you do `pip install git+https://github.com/huggingface/transformers` and `pip install git+https://github.com/huggingface/accelerate`? The above PR adds functionality for `SHARDED_STATE_DICT`. Use the Accelerate launcher with Trainer. More info here: [Using Accelerate Launcher with Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#using-accelerate-launcher-with-trainer). Choose state_dict_type as `SHARDED_STATE_DICT` when answering the questionnaire after running the command `accelerate config`. Please let us know if this solves the issue.<|||||>Thank you for this update! We utilized the new `SHARDED_STATE_DICT` functionality, but it looks like there may have been a small typo in the `trainer.py` code where the `full_osd` variable isn't saved in fsdp mode. I proposed a fix on the PR below, which allowed me to successfully save a model locally in fsdp mode: https://github.com/huggingface/transformers/pull/24328
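For reference, a minimal sketch of the sharded-save pattern used in the PyTorch example linked in the issue (assuming torch >= 2.0, an initialized process group, and a `model` already wrapped in FSDP; the checkpoint directory name is illustrative):

```python
import torch.distributed.checkpoint as dist_cp
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import StateDictType

# Each rank keeps only its own shards and writes them with the distributed
# checkpoint API instead of calling torch.save on a full state dict.
with FSDP.state_dict_type(model, StateDictType.SHARDED_STATE_DICT):
    state_dict = {"model": model.state_dict()}
    dist_cp.save_state_dict(
        state_dict=state_dict,
        storage_writer=dist_cp.FileSystemWriter("sharded_ckpt"),
    )
```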
transformers
24,056
closed
Multi GPU inference on RTX 4090 fails with RuntimeError: CUDA error: device-side assert triggered (Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.)
### System Info `transformers version`: 4.30.0.dev0 `Platform:` Linux 6.3.5-zen2-1-zen-x86_64-with-glibc2.37.3 on Arch `Python version`: 3.10.9 `PyTorch version (GPU)`: 2.0.1+cu118 (True) `peft version`: 0.4.0.dev0 `accelerate version`: 0.20.0.dev0 `bitsandbytes version`: 0.39.0 `nvidia driver version`: nvidia-dkms-530.41.03-1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the following code Code: ```python from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig, pipeline import torch import os # os.environ["CUDA_VISIBLE_DEVICES"] = "0,1" model_name = "/models/wizard-vicuna-13B-HF" tokenizer = LlamaTokenizer.from_pretrained(model_name) model = LlamaForCausalLM.from_pretrained(model_name, device_map='auto', torch_dtype=torch.float16, ) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_length=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) import os os.environ["CUDA_LAUNCH_BLOCKING"] = "1" prompt = 'What are the difference between Llamas, Alpacas and Vicunas?' raw_output = pipe(get_prompt(prompt)) parse_text(raw_output) ``` While this code works fine on a single 4090 GPU. Loading any model for inference with 2 or 3 RTX 4090 is resulting in the following error: ``` /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [64,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [65,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [66,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [67,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [68,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [69,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [70,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [71,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [72,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. 
/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [73,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [74,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [75,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. --------many such lines---------- File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 163 output = old_forward(*args, **kwargs) 164 else: --> 165 output = old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:227, in LlamaAttention.forward(self, hidden_states, attention_mask, position_ids, past_key_value, output_attentions, use_cache) 222 raise ValueError( 223 f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}" 224 ) 225 attn_weights = attn_weights + attention_mask 226 attn_weights = torch.max( --> 227 attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min, device=attn_weights.device) 228 ) 230 # upcast attention to fp32 231 attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype) RuntimeError: CUDA error: device-side assert triggered Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ``` ### Expected behavior Code does inference successfully.
06-06-2023 17:47:07
06-06-2023 17:47:07
cc @Narsil @ArthurZucker @younesbelkada <|||||>This torch error usually comes from bad vocabulary ids being sent to the model. What's odd is that is seems triggered by `device_map="auto"` (it's the only modification with the working code, right ?) Which shouldn't torch this in any way. I doubt the actual cause is the referered line by the stacktrace, even despite CUDA_LAUNCH_BLOCKING, but I could be wrong (and that would be super puzzling indeed). Note: Afaik, if you launch on multiple GPUs in this way, you would be in PP (PIpeline Parallelism) and not in TP (TensorParallelism), TP being much better at getting better latencies (PP will get you larger batch sizes). https://github.com/huggingface/text-generation-inference might be better to reduce latencies by using multiple GPUs (NVLink presence will definitely be a factor)<|||||>Yes device_map="auto" is the only modification. The other thing is that RTX 4090 doesnt support NVLink.<|||||>Could be linked to `accelerate` here. At least I don't have good ideas to what might be happening here.<|||||>Re `accelerate` - @pacman100 would you have any idea what might be causing this issue? <|||||>Hello, as this is related to the big model inference/device_map, @sgugger might have a better idea wrt this issue.<|||||>Please post a full reproducer. I don't have access to" - your local folder `/models/wizard-vicuna-13B-HF` - the `get_prompt` function - the `parse_text` function Using `"huggyllama/llama-13b"` on my side and removing `get_prompt` and `parse_text` works fine on two GPUs.<|||||>I have removed all the other code and it is just now the following. I am still getting the same error. ```python from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig, pipeline import torch import os # os.environ["CUDA_VISIBLE_DEVICES"] = "0,1" model_name = "huggyllama/llama-13b" tokenizer = LlamaTokenizer.from_pretrained(model_name) model = LlamaForCausalLM.from_pretrained(model_name, device_map='auto', torch_dtype=torch.float16, ) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_length=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) import os os.environ["CUDA_LAUNCH_BLOCKING"] = "1" raw_output = pipe("Hi how are you") ``` Error: ``` [0,0,0], thread: [40,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [41,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [42,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [43,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [44,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [45,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. 
/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [46,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [47,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [48,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [49,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [50,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [51,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [52,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [53,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [54,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [55,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [56,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [57,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [58,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [59,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [60,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [61,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. 
/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [62,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [63,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[5], line 1 ----> 1 raw_output = pipe("Hi how are you") File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/pipelines/text_generation.py:201, in TextGenerationPipeline.__call__(self, text_inputs, **kwargs) 160 def __call__(self, text_inputs, **kwargs): 161 """ 162 Complete the prompt(s) given as inputs. 163 (...) 199 ids of the generated text. 200 """ --> 201 return super().__call__(text_inputs, **kwargs) File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/pipelines/base.py:1118, in Pipeline.__call__(self, inputs, num_workers, batch_size, *args, **kwargs) 1110 return next( 1111 iter( 1112 self.get_iterator( (...) 1115 ) 1116 ) 1117 else: -> 1118 return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/pipelines/base.py:1125, in Pipeline.run_single(self, inputs, preprocess_params, forward_params, postprocess_params) 1123 def run_single(self, inputs, preprocess_params, forward_params, postprocess_params): 1124 model_inputs = self.preprocess(inputs, **preprocess_params) -> 1125 model_outputs = self.forward(model_inputs, **forward_params) 1126 outputs = self.postprocess(model_outputs, **postprocess_params) 1127 return outputs File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/pipelines/base.py:1024, in Pipeline.forward(self, model_inputs, **forward_params) 1022 with inference_context(): 1023 model_inputs = self._ensure_tensor_on_device(model_inputs, device=self.device) -> 1024 model_outputs = self._forward(model_inputs, **forward_params) 1025 model_outputs = self._ensure_tensor_on_device(model_outputs, device=torch.device("cpu")) 1026 else: File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/pipelines/text_generation.py:263, in TextGenerationPipeline._forward(self, model_inputs, **generate_kwargs) 260 generate_kwargs["min_length"] += prefix_length 262 # BS x SL --> 263 generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs) 264 out_b = generated_sequence.shape[0] 265 if self.framework == "pt": File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs) 112 @functools.wraps(func) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --> 115 return func(*args, **kwargs) File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/generation/utils.py:1518, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs) 1512 raise ValueError( 1513 "num_return_sequences has to be 1 when doing greedy search, " 1514 f"but is {generation_config.num_return_sequences}." 1515 ) 1517 # 11. 
run greedy search -> 1518 return self.greedy_search( 1519 input_ids, 1520 logits_processor=logits_processor, 1521 stopping_criteria=stopping_criteria, 1522 pad_token_id=generation_config.pad_token_id, 1523 eos_token_id=generation_config.eos_token_id, 1524 output_scores=generation_config.output_scores, 1525 return_dict_in_generate=generation_config.return_dict_in_generate, 1526 synced_gpus=synced_gpus, 1527 streamer=streamer, 1528 **model_kwargs, 1529 ) 1531 elif is_contrastive_search_gen_mode: 1532 if generation_config.num_return_sequences > 1: File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/generation/utils.py:2335, in GenerationMixin.greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs) 2332 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) 2334 # forward pass to get next token -> 2335 outputs = self( 2336 **model_inputs, 2337 return_dict=True, 2338 output_attentions=output_attentions, 2339 output_hidden_states=output_hidden_states, 2340 ) 2342 if synced_gpus and this_peer_finished: 2343 continue # don't waste resources running the code we don't need File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 163 output = old_forward(*args, **kwargs) 164 else: --> 165 output = old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:688, in LlamaForCausalLM.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict) 685 return_dict = return_dict if return_dict is not None else self.config.use_return_dict 687 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn) --> 688 outputs = self.model( 689 input_ids=input_ids, 690 attention_mask=attention_mask, 691 position_ids=position_ids, 692 past_key_values=past_key_values, 693 inputs_embeds=inputs_embeds, 694 use_cache=use_cache, 695 output_attentions=output_attentions, 696 output_hidden_states=output_hidden_states, 697 return_dict=return_dict, 698 ) 700 hidden_states = outputs[0] 701 logits = self.lm_head(hidden_states) File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:578, in LlamaModel.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict) 570 layer_outputs = torch.utils.checkpoint.checkpoint( 571 create_custom_forward(decoder_layer), 572 hidden_states, (...) 575 None, 576 ) 577 else: --> 578 layer_outputs = decoder_layer( 579 hidden_states, 580 attention_mask=attention_mask, 581 position_ids=position_ids, 582 past_key_value=past_key_value, 583 output_attentions=output_attentions, 584 use_cache=use_cache, 585 ) 587 hidden_states = layer_outputs[0] 589 if use_cache: File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 163 output = old_forward(*args, **kwargs) 164 else: --> 165 output = old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:292, in LlamaDecoderLayer.forward(self, hidden_states, attention_mask, position_ids, past_key_value, output_attentions, use_cache) 289 hidden_states = self.input_layernorm(hidden_states) 291 # Self Attention --> 292 hidden_states, self_attn_weights, present_key_value = self.self_attn( 293 hidden_states=hidden_states, 294 attention_mask=attention_mask, 295 position_ids=position_ids, 296 past_key_value=past_key_value, 297 output_attentions=output_attentions, 298 use_cache=use_cache, 299 ) 300 hidden_states = residual + hidden_states 302 # Fully Connected File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 163 output = old_forward(*args, **kwargs) 164 else: --> 165 output = old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:227, in LlamaAttention.forward(self, hidden_states, attention_mask, position_ids, past_key_value, output_attentions, use_cache) 222 raise ValueError( 223 f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}" 224 ) 225 attn_weights = attn_weights + attention_mask 226 attn_weights = torch.max( --> 227 attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min, device=attn_weights.device) 228 ) 230 # upcast attention to fp32 231 attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype) RuntimeError: CUDA error: device-side assert triggered Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ``` Additional info ``` model.hf_device_map ``` {'model.embed_tokens': 0, 'model.layers.0': 0, 'model.layers.1': 0, 'model.layers.2': 0, 'model.layers.3': 0, 'model.layers.4': 0, 'model.layers.5': 0, 'model.layers.6': 0, 'model.layers.7': 0, 'model.layers.8': 0, 'model.layers.9': 0, 'model.layers.10': 0, 'model.layers.11': 0, 'model.layers.12': 0, 'model.layers.13': 1, 'model.layers.14': 1, 'model.layers.15': 1, 'model.layers.16': 1, 'model.layers.17': 1, 'model.layers.18': 1, 'model.layers.19': 1, 'model.layers.20': 1, 'model.layers.21': 1, 'model.layers.22': 1, 'model.layers.23': 1, 'model.layers.24': 1, 'model.layers.25': 1, 'model.layers.26': 1, 'model.layers.27': 2, 'model.layers.28': 2, 'model.layers.29': 2, 'model.layers.30': 2, 'model.layers.31': 2, 'model.layers.32': 2, 'model.layers.33': 2, 'model.layers.34': 2, 'model.layers.35': 2, 'model.layers.36': 2, 'model.layers.37': 2, 'model.layers.38': 2, 'model.layers.39': 2, 'model.norm': 2, 'lm_head': 2}<|||||>This issue is not happening after transformers update `4.30.2`.<|||||>> This issue is not happening after transformers update `4.30.2`. Hi @kunaldeo But when I run your code above, it still reports the same error, and I checked that the transformer version is 4.30.2, maybe multi-GPU error, when I use a single GPU, it is normal
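For anyone hitting the same assert: a small diagnostic sketch (not from the thread; it assumes the `tokenizer` and `model` from the snippet above) that checks whether the token ids are valid rows of the embedding matrix, since this class of device-side assert usually means an index fed to an embedding or gather is out of range:

```python
# Verify every token id is a valid row of the input embedding matrix.
inputs = tokenizer("Hi how are you", return_tensors="pt")
embedding_rows = model.get_input_embeddings().weight.shape[0]
print("max token id:", inputs.input_ids.max().item(), "embedding rows:", embedding_rows)
assert inputs.input_ids.max().item() < embedding_rows, "token id out of range for the embedding"
```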
transformers
24,055
closed
bring back `filtered_test_list_cross_tests.txt`
# What does this PR do? As discussed in [this comment](https://github.com/huggingface/transformers/pull/23737#discussion_r1220002876)
06-06-2023 17:20:25
06-06-2023 17:20:25
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,054
closed
Oops, missed one
# What does this PR do? Tried to be careful, missed one 🙃 Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
06-06-2023 17:13:18
06-06-2023 17:13:18
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,053
closed
Act on deprecations in Accelerate no_trainer examples
# What does this PR do? As title states, acts on deprecation that will be going through in this PR https://github.com/huggingface/accelerate/pull/1537 to avoid nightly failures Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
06-06-2023 16:25:13
06-06-2023 16:25:13
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,052
closed
Tiny fix for `check_self_hosted_runner.py`
# What does this PR do? See comment in the change.
06-06-2023 16:08:10
06-06-2023 16:08:10
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24052). All of your documentation changes will be reflected on that endpoint.
transformers
24,051
closed
Modification of one text example file should trigger said test
# What does this PR do? I realized while reviewing PRs like #23912 that a modification of a given text example file won't trigger the run of said test. This PR fixes that bug in the test fetcher and adds some tests.
06-06-2023 15:43:26
06-06-2023 15:43:26
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,050
closed
RuntimeError: unscale_() has already been called on this optimizer since the last update().
This happens with the following fine-tuning notebook: https://colab.research.google.com/#fileId=https%3A//huggingface.co/dfurman/falcon-7b-chat-oasst1/blob/main/finetune_falcon7b_oasst1_with_bnb_peft.ipynb Full stack trace: ``` Traceback (most recent call last): File "/home/llama/train_infer/finetune_falcon7b_oasst1_with_bnb_peft.py", line 204, in <module> trainer.train() File "/home/.conda/envs/3.9/lib/python3.9/site-packages/transformers/trainer.py", line 1638, in train return inner_training_loop( File "/home/.conda/envs/3.9/lib/python3.9/site-packages/accelerate/utils/memory.py", line 132, in decorator return function(batch_size, *args, **kwargs) File "/home/.conda/envs/3.9/lib/python3.9/site-packages/transformers/trainer.py", line 1972, in _inner_training_loop self.accelerator.clip_grad_norm_( File "/home/.conda/envs/3.9/lib/python3.9/site-packages/accelerate/accelerator.py", line 1892, in clip_grad_norm_ self.unscale_gradients() File "/home/.conda/envs/3.9/lib/python3.9/site-packages/accelerate/accelerator.py", line 1855, in unscale_gradients self.scaler.unscale_(opt) File "/home/.conda/envs/3.9/lib/python3.9/site-packages/torch/cuda/amp/grad_scaler.py", line 275, in unscale_ raise RuntimeError("unscale_() has already been called on this optimizer since the last update().") RuntimeError: unscale_() has already been called on this optimizer since the last update(). ``` As referenced in https://github.com/huggingface/transformers/pull/23914, I upgraded transformers to the latest commit. - `transformers` version: 4.30.0.dev0 - `Platform`: Linux-5.15.0-73-generic-x86_64-with-glibc2.31 - `Python version`: 3.9.16 - `Safetensors` version: 0.3.1 - `PyTorch` version (GPU): 2.0.1+cu117 (True) - `peft` version: 0.4.0.dev0 - `accelerate` version: 0.20.0.dev0 - `bitsandbytes` version: 0.39.0 How can I solve this?
06-06-2023 15:37:28
06-06-2023 15:37:28
Can you try restarting your runtime after installing the new version to see if that fixes it? CC @pacman100 <|||||>I'm following up this notebook: https://huggingface.co/dfurman/falcon-7b-chat-oasst1/blob/main/finetune_falcon7b_oasst1_with_bnb_peft.ipynb and getting this dump when training: `File [/workspace/generative_models/.venv/lib/python3.10/site-packages/accelerate/accelerator.py:1873](https://vscode-remote+ssh-002dremote.vscode-resource.vscode-cdn.net/workspace/generative_models/.venv/lib/python3.10/site-packages/accelerate/accelerator.py:1873), in Accelerator.clip_grad_norm_(self, parameters, max_norm, norm_type) 1869 elif self.distributed_type == DistributedType.DEEPSPEED: 1870 # `accelerator.backward(loss)` is doing that automatically. Therefore, its implementation is not needed 1871 # We cannot return the gradient norm because DeepSpeed does it. 1872 return None -> 1873 self.unscale_gradients() 1874 return torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=norm_type) File [/workspace/generative_models/.venv/lib/python3.10/site-packages/accelerate/accelerator.py:1836](https://vscode-remote+ssh-002dremote.vscode-resource.vscode-cdn.net/workspace/generative_models/.venv/lib/python3.10/site-packages/accelerate/accelerator.py:1836), in Accelerator.unscale_gradients(self, optimizer) 1834 while isinstance(opt, AcceleratedOptimizer): 1835 opt = opt.optimizer -> 1836 self.scaler.unscale_(opt) File [/workspace/generative_models/.venv/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py:275](https://vscode-remote+ssh-002dremote-.vscode-resource.vscode-cdn.net/workspace/generative_models/.venv/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py:275), in GradScaler.unscale_(self, optimizer) 272 optimizer_state = self._per_optimizer_states[id(optimizer)] 274 if optimizer_state["stage"] is OptState.UNSCALED: --> 275 raise RuntimeError("unscale_() has already been called on this optimizer since the last update().") 276 elif optimizer_state["stage"] is OptState.STEPPED: 277 raise RuntimeError("unscale_() is being called after step().") RuntimeError: unscale_() has already been called on this optimizer since the last update().` These are the libraries versions I have: transformers @ git+https://github.com/huggingface/transformers.git@f1660d7e23d4432513fe060bde4f9b7b29f05204 peft @ git+https://github.com/huggingface/peft.git@7fb5f90a38cb39a31396de7e638ead9ecea692af accelerate @ git+https://github.com/huggingface/accelerate.git@62357f218f72cce88b8e086cc372b15c119b590b I have restarted and followed (to the best of my knowledge) the guidance to correct this. @pacman100 Thank you! <|||||>I am getting this as well. Tried restarting the notebook but that doesn't fix it This was working previously. Today ran a fresh install using `!pip install -q git+https://github.com/huggingface/peft.git git+https://github.com/huggingface/transformers.git`<|||||>> Can you try restarting your runtime after installing the new version to see if that fixes it? CC @pacman100 @muellerzr thanks a lot. I have restarted the kernel and tried repeatedly according to the operation, but the problem still exists.<|||||>I am facing the same issue. Tried doing a fresh install still the issue persists.<|||||>Hi all, I was able to rerun my workflow via: 1. Deleting the current runtime 2. Starting a new runtime 3. Running using `pip install transformers`<|||||>> 3\. 
pip install transformers Hi, @lfunderburk can you share the version of each library?thanks a lot.<|||||>`transformers==4.29.2` and `tokenizers==0.13.3` on Python 3.10.11 Below is the rest of the dependencies ``` absl-py==1.4.0 accelerate==0.20.0.dev0 aiohttp==3.8.4 aiosignal==1.3.1 alabaster==0.7.13 albumentations==1.2.1 altair==4.2.2 anyio==3.6.2 appdirs==1.4.4 argon2-cffi==21.3.0 argon2-cffi-bindings==21.2.0 array-record==0.2.0 arviz==0.15.1 astropy==5.2.2 astunparse==1.6.3 async-timeout==4.0.2 attrs==23.1.0 audioread==3.0.0 autograd==1.5 Babel==2.12.1 backcall==0.2.0 beautifulsoup4==4.11.2 bitsandbytes==0.39.0 bleach==6.0.0 blis==0.7.9 blosc2==2.0.0 bokeh==2.4.3 branca==0.6.0 build==0.10.0 CacheControl==0.12.11 cached-property==1.5.2 cachetools==5.3.0 catalogue==2.0.8 certifi==2022.12.7 cffi==1.15.1 chardet==4.0.0 charset-normalizer==2.0.12 chex==0.1.7 click==8.1.3 cloudpickle==2.2.1 cmake==3.25.2 cmdstanpy==1.1.0 colorcet==3.0.1 colorlover==0.3.0 community==1.0.0b1 confection==0.0.4 cons==0.4.5 contextlib2==0.6.0.post1 contourpy==1.0.7 convertdate==2.4.0 cryptography==40.0.2 cufflinks==0.17.3 cupy-cuda11x==11.0.0 cvxopt==1.3.0 cvxpy==1.3.1 cycler==0.11.0 cymem==2.0.7 Cython==0.29.34 dask==2022.12.1 datascience==0.17.6 datasets==2.12.0 db-dtypes==1.1.1 dbus-python==1.2.16 debugpy==1.6.6 decorator==4.4.2 defusedxml==0.7.1 dill==0.3.6 distributed==2022.12.1 dlib==19.24.1 dm-tree==0.1.8 docutils==0.16 dopamine-rl==4.0.6 duckdb==0.7.1 earthengine-api==0.1.350 easydict==1.10 ecos==2.0.12 editdistance==0.6.2 en-core-web-sm==3.5.0 entrypoints==0.4 ephem==4.1.4 et-xmlfile==1.1.0 etils==1.2.0 etuples==0.3.8 exceptiongroup==1.1.1 fastai==2.7.12 fastcore==1.5.29 fastdownload==0.0.7 fastjsonschema==2.16.3 fastprogress==1.0.3 fastrlock==0.8.1 filelock==3.12.0 firebase-admin==5.3.0 Flask==2.2.4 flatbuffers==23.3.3 flax==0.6.9 folium==0.14.0 fonttools==4.39.3 frozendict==2.3.7 frozenlist==1.3.3 fsspec==2023.4.0 future==0.18.3 gast==0.4.0 GDAL==3.3.2 gdown==4.6.6 gensim==4.3.1 geographiclib==2.0 geopy==2.3.0 gin-config==0.5.0 glob2==0.7 google==2.0.3 google-api-core==2.11.0 google-api-python-client==2.84.0 google-auth==2.17.3 google-auth-httplib2==0.1.0 google-auth-oauthlib==1.0.0 google-cloud-bigquery==3.9.0 google-cloud-bigquery-storage==2.19.1 google-cloud-core==2.3.2 google-cloud-datastore==2.15.1 google-cloud-firestore==2.11.0 google-cloud-language==2.9.1 google-cloud-storage==2.8.0 google-cloud-translate==3.11.1 google-colab==1.0.0 google-crc32c==1.5.0 google-pasta==0.2.0 google-resumable-media==2.5.0 googleapis-common-protos==1.59.0 googledrivedownloader==0.4 graphviz==0.20.1 greenlet==2.0.2 grpcio==1.54.0 grpcio-status==1.48.2 gspread==3.4.2 gspread-dataframe==3.0.8 gym==0.25.2 gym-notices==0.0.8 h5netcdf==1.1.0 h5py==3.8.0 holidays==0.25 holoviews==1.15.4 html5lib==1.1 httpimport==1.3.0 httplib2==0.21.0 huggingface-hub==0.15.1 humanize==4.6.0 hyperopt==0.2.7 idna==3.4 imageio==2.25.1 imageio-ffmpeg==0.4.8 imagesize==1.4.1 imbalanced-learn==0.10.1 imgaug==0.4.0 importlib-resources==5.12.0 imutils==0.5.4 inflect==6.0.4 iniconfig==2.0.0 intel-openmp==2023.1.0 ipykernel==5.5.6 ipython==7.34.0 ipython-genutils==0.2.0 ipython-sql==0.4.1 ipywidgets==7.7.1 itsdangerous==2.1.2 jax==0.4.10 jaxlib==0.4.10+cuda11.cudnn86 jieba==0.42.1 Jinja2==3.1.2 joblib==1.2.0 jsonpickle==3.0.1 jsonschema==4.3.3 jupyter-client==6.1.12 jupyter-console==6.1.0 jupyter_core==5.3.0 jupyter-server==1.24.0 jupyterlab-pygments==0.2.2 jupyterlab-widgets==3.0.7 kaggle==1.5.13 keras==2.12.0 kiwisolver==1.4.4 
korean-lunar-calendar==0.3.1 langcodes==3.3.0 lazy_loader==0.2 libclang==16.0.0 librosa==0.10.0.post2 lightgbm==3.3.5 lit==16.0.5 llvmlite==0.39.1 locket==1.0.0 logical-unification==0.4.5 loralib==0.1.1 LunarCalendar==0.0.9 lxml==4.9.2 Markdown==3.4.3 markdown-it-py==2.2.0 MarkupSafe==2.1.2 matplotlib==3.7.1 matplotlib-inline==0.1.6 matplotlib-venn==0.11.9 mdurl==0.1.2 miniKanren==1.0.3 missingno==0.5.2 mistune==0.8.4 mizani==0.8.1 mkl==2019.0 ml-dtypes==0.1.0 mlxtend==0.14.0 more-itertools==9.1.0 moviepy==1.0.3 mpmath==1.3.0 msgpack==1.0.5 multidict==6.0.4 multipledispatch==0.6.0 multiprocess==0.70.14 multitasking==0.0.11 murmurhash==1.0.9 music21==8.1.0 natsort==8.3.1 nbclient==0.7.4 nbconvert==6.5.4 nbformat==5.8.0 nest-asyncio==1.5.6 networkx==3.1 nibabel==3.0.2 nltk==3.8.1 notebook==6.4.8 numba==0.56.4 numexpr==2.8.4 numpy==1.22.4 oauth2client==4.1.3 oauthlib==3.2.2 opencv-contrib-python==4.7.0.72 opencv-python==4.7.0.72 opencv-python-headless==4.7.0.72 openpyxl==3.0.10 opt-einsum==3.3.0 optax==0.1.5 orbax-checkpoint==0.2.1 osqp==0.6.2.post8 packaging==23.1 palettable==3.3.3 pandas==1.5.3 pandas-datareader==0.10.0 pandas-gbq==0.17.9 pandocfilters==1.5.0 panel==0.14.4 param==1.13.0 parso==0.8.3 partd==1.4.0 pathlib==1.0.1 pathy==0.10.1 patsy==0.5.3 peft==0.4.0.dev0 pexpect==4.8.0 pickleshare==0.7.5 Pillow==8.4.0 pip==23.1.2 pip-tools==6.13.0 platformdirs==3.3.0 plotly==5.13.1 plotnine==0.10.1 pluggy==1.0.0 polars==0.17.3 pooch==1.6.0 portpicker==1.3.9 prefetch-generator==1.0.3 preshed==3.0.8 prettytable==0.7.2 proglog==0.1.10 progressbar2==4.2.0 prometheus-client==0.16.0 promise==2.3 prompt-toolkit==3.0.38 prophet==1.1.3 proto-plus==1.22.2 protobuf==3.20.3 psutil==5.9.5 psycopg2==2.9.6 ptyprocess==0.7.0 py-cpuinfo==9.0.0 py4j==0.10.9.7 pyarrow==9.0.0 pyasn1==0.5.0 pyasn1-modules==0.3.0 pycocotools==2.0.6 pycparser==2.21 pyct==0.5.0 pydantic==1.10.7 pydata-google-auth==1.7.0 pydot==1.4.2 pydot-ng==2.0.0 pydotplus==2.0.2 PyDrive==1.3.1 pyerfa==2.0.0.3 pygame==2.3.0 Pygments==2.14.0 PyGObject==3.36.0 pymc==5.1.2 PyMeeus==0.5.12 pymystem3==0.2.0 PyOpenGL==3.1.6 pyparsing==3.0.9 pyproject_hooks==1.0.0 pyrsistent==0.19.3 PySocks==1.7.1 pytensor==2.10.1 pytest==7.2.2 python-apt==0.0.0 python-dateutil==2.8.2 python-louvain==0.16 python-slugify==8.0.1 python-utils==3.5.2 pytz==2022.7.1 pytz-deprecation-shim==0.1.0.post0 pyviz-comms==2.2.1 PyWavelets==1.4.1 PyYAML==6.0 pyzmq==23.2.1 qdldl==0.1.7 qudida==0.0.4 regex==2022.10.31 requests==2.27.1 requests-oauthlib==1.3.1 requests-unixsocket==0.2.0 requirements-parser==0.5.0 responses==0.18.0 rich==13.3.4 rpy2==3.5.5 rsa==4.9 scikit-image==0.19.3 scikit-learn==1.2.2 scipy==1.10.1 scs==3.2.3 seaborn==0.12.2 Send2Trash==1.8.0 setuptools==67.7.2 shapely==2.0.1 six==1.16.0 sklearn-pandas==2.2.0 smart-open==6.3.0 sniffio==1.3.0 snowballstemmer==2.2.0 sortedcontainers==2.4.0 soundfile==0.12.1 soupsieve==2.4.1 soxr==0.3.5 spacy==3.5.2 spacy-legacy==3.0.12 spacy-loggers==1.0.4 Sphinx==3.5.4 sphinxcontrib-applehelp==1.0.4 sphinxcontrib-devhelp==1.0.2 sphinxcontrib-htmlhelp==2.0.1 sphinxcontrib-jsmath==1.0.1 sphinxcontrib-qthelp==1.0.3 sphinxcontrib-serializinghtml==1.1.5 SQLAlchemy==2.0.10 sqlparse==0.4.4 srsly==2.4.6 statsmodels==0.13.5 sympy==1.11.1 tables==3.8.0 tabulate==0.8.10 tblib==1.7.0 tenacity==8.2.2 tensorboard==2.12.2 tensorboard-data-server==0.7.0 tensorboard-plugin-wit==1.8.1 tensorflow==2.12.0 tensorflow-datasets==4.9.2 tensorflow-estimator==2.12.0 tensorflow-gcs-config==2.12.0 tensorflow-hub==0.13.0 tensorflow-io-gcs-filesystem==0.32.0 
tensorflow-metadata==1.13.1 tensorflow-probability==0.20.1 tensorstore==0.1.36 termcolor==2.3.0 terminado==0.17.1 text-unidecode==1.3 textblob==0.17.1 tf-slim==1.1.0 thinc==8.1.9 threadpoolctl==3.1.0 tifffile==2023.4.12 tinycss2==1.2.1 tokenizers==0.13.3 toml==0.10.2 tomli==2.0.1 toolz==0.12.0 torch==2.0.1+cu118 torchaudio==2.0.2+cu118 torchdata==0.6.1 torchsummary==1.5.1 torchtext==0.15.2 torchvision==0.15.2+cu118 tornado==6.3.1 tqdm==4.65.0 traitlets==5.7.1 transformers==4.29.2 triton==2.0.0 tweepy==4.13.0 typer==0.7.0 types-setuptools==67.8.0.0 typing_extensions==4.5.0 tzdata==2023.3 tzlocal==4.3 uritemplate==4.1.1 urllib3==1.26.15 vega-datasets==0.9.0 wasabi==1.1.1 wcwidth==0.2.6 webcolors==1.13 webencodings==0.5.1 websocket-client==1.5.1 Werkzeug==2.3.0 wheel==0.40.0 widgetsnbextension==3.6.4 wordcloud==1.8.2.2 wrapt==1.14.1 xarray==2022.12.0 xarray-einstats==0.5.1 xgboost==1.7.5 xlrd==2.0.1 xxhash==3.2.0 yarl==1.9.2 yellowbrick==1.5 yfinance==0.2.18 zict==3.0.0 zipp==3.15.0 ```<|||||>Hello everyone, I found the cause to be `auto_find_batch_size=True`. In the meantime, please confirm disabling it and passing small `per_device_train_batch_size =4` works (I can confirm). I'm working on a PR to resolve this. ![Screenshot 2023-06-07 at 12 37 13 PM](https://github.com/huggingface/transformers/assets/13534540/d0765b4c-77c8-4b38-bfb3-80fdfb09a9a1) <|||||>> Hello everyone, I found the cause to be `auto_find_batch_size=True`. In the meantime, please confirm disabling it and passing small `per_device_train_batch_size =4` works (I can confirm). I'm working on a PR to resolve this. > > ![Screenshot 2023-06-07 at 12 37 13 PM](https://user-images.githubusercontent.com/13534540/243952303-d0765b4c-77c8-4b38-bfb3-80fdfb09a9a1.png) Seems it is working with those changes in parameters (20 min training so far... previously it cancelled at about 4). THANKS!<|||||>@FedericoMontana this has also been fixed with the latest Accelerate release I believe, worst case you can use `pip install git+https://github.com/huggingface/accelerate` until we release the patch, and you can use `auto_find_batch_size=True`<|||||>I am still facing this issue. ``` File "/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/transformers/trainer.py", line 1843, in _inner_training_loop self.accelerator.clip_grad_norm_( File "/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/accelerate/accelerator.py", line 1913, in clip_grad_norm_ self.unscale_gradients() File "/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/accelerate/accelerator.py", line 1876, in unscale_gradients self.scaler.unscale_(opt) File "/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py", line 275, in unscale_ raise RuntimeError("unscale_() has already been called on this optimizer since the last update().") RuntimeError: unscale_() has already been called on this optimizer since the last update(). ``` This issue doesnt exist in `transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9`, but I cannot use that build due to `UnboundLocalError: local variable 'load_result' referenced before assignment` error. 
Environment: ``` - `transformers` version: 4.31.0.dev0 - Platform: Linux-6.3.9-zen1-1-zen-x86_64-with-glibc2.37 - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ```<|||||>@kunaldeo, provide a minimal reproducible example post installing Transformers and Accelerate from Main branch. That would be helpful for us to deep dive.<|||||>This is working now.
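A minimal sketch of the interim workaround discussed above (the `output_dir` value is a placeholder, not something from the original reports):

```python
from transformers import TrainingArguments

# Sketch only: disable the automatic batch-size search and pick a small fixed
# per-device batch size instead, as suggested in the thread above.
args = TrainingArguments(
    output_dir="out",                 # placeholder path
    auto_find_batch_size=False,       # turn off the search that triggered the unscale_() error
    per_device_train_batch_size=4,    # small fixed batch size
)
```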
transformers
24,049
closed
Prevent ZeroDivisionError on `trainer.evaluate` if model and dataset are tiny
Closes #24048 Hello! ## Pull Request overview * Prevent ZeroDivisionError on `trainer.evaluate` if model and dataset are tiny ## Details Please see #24048 for details. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? Tests would be quite flaky for this. ## Who can review? @sgugger, @younesbelkada - Tom Aarsen
06-06-2023 15:07:38
06-06-2023 15:07:38
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,048
closed
ZeroDivisionError on `trainer.evaluate` if model and dataset are tiny
### System Info - `transformers` version: 4.29.2 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.9.16 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @sgugger cc: @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Consider the following snippet: ```python from torch import nn from transformers import Trainer from datasets import Dataset model = nn.Identity() eval_dataset = Dataset.from_dict({"tokens": [1]}) trainer = Trainer( model, eval_dataset=eval_dataset, ) metrics = trainer.evaluate() print(metrics) ``` (Sometimes) results in ``` Traceback (most recent call last): File "[sic]\demo.py", line 13, in <module> metrics = trainer.evaluate() File "[sic]\transformers\trainer.py", line 3043, in evaluate speed_metrics( File "[sic]\transformers\trainer_utils.py", line 354, in speed_metrics samples_per_second = num_samples / runtime ZeroDivisionError: float division by zero ``` This is rarely an issue - only when models and datasets are tiny. The reason I am invested in resolving this is testing purposes. See for example this [Action](https://github.com/lvwerra/trl/actions/runs/5179991753/jobs/9351434458) on TRL. To keep the testing efficient, the TRL maintainers chose a small model and dataset - which sometimes caused this flaky test. ### Expected behavior I would expect any of these: ``` 1. {'eval_runtime': 0.0, 'eval_samples_per_second': 0.0, 'eval_steps_per_second': 0.0} 2. {'eval_runtime': 0.0, 'eval_samples_per_second': None, 'eval_steps_per_second': None} 3. {'eval_runtime': 0.0, 'eval_samples_per_second': torch.inf, 'eval_steps_per_second': torch.inf} 4. {'eval_runtime': 0.0} ``` Note that these cases would essentially never occur other than during tests. With other words, I think all are fine as long as there's no exception. However, I prefer option 4 personally, but I am open to suggestions. For simplicity, I'll push a simple PR to implement 4. - Tom Aarsen
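A minimal sketch of what option 4 could look like (this mirrors, but is not, the actual `speed_metrics` helper in `trainer_utils.py`):

```python
import time

def speed_metrics_sketch(split, start_time, num_samples=None, num_steps=None):
    """Division-safe variant: only report throughput when runtime is non-zero."""
    runtime = time.time() - start_time
    result = {f"{split}_runtime": round(runtime, 4)}
    if runtime == 0:
        return result  # option 4: drop the *_per_second metrics instead of dividing by zero
    if num_samples is not None:
        result[f"{split}_samples_per_second"] = round(num_samples / runtime, 3)
    if num_steps is not None:
        result[f"{split}_steps_per_second"] = round(num_steps / runtime, 3)
    return result

print(speed_metrics_sketch("eval", time.time(), num_samples=1, num_steps=1))
```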
06-06-2023 15:05:21
06-06-2023 15:05:21
transformers
24,047
closed
AttributeError: 'NoneType' object has no attribute 'flush'
### System Info **System info** - `transformers` version: 4.29.2 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.11.3 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: <fill in> **Issue** **After creating virtual environment and installing requirements.txt, carried out following steps to convert **`.py`** file into **`.exe`** ** using pyinstaller library **step 1 : `pip install pyinstaller`** **step 2 : `pyinstaller --name GrammarCorrector --onefile --windowed new_gram1_Tkinter.py --hidden-import cymem.cymem`** **Then i got this AttributeError:** Traceback (most recent call last): File "new_gram1_Tkinter.py", line 271, in <module> File "new_gram1_Tkinter.py", line 142, in __init__ File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "transformers\__init__.py", line 26, in <module> File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "transformers\dependency_versions_check.py", line 17, in <module> File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "transformers\utils\__init__.py", line 30, in <module> File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "transformers\utils\generic.py", line 29, in <module> File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "transformers\utils\import_utils.py", line 36, in <module> File "transformers\utils\logging.py", line 124, in get_logger File "transformers\utils\logging.py", line 88, in _configure_library_root_logger **AttributeError: 'NoneType' object has no attribute 'flush'** I raised issue in `pyinstaller `repository, and i got answer as followed below from @bwoodsend who is a maintainer **You should be able to get the same error without `PyInstaller `if you run your source code using `pythonw `instead of just `python`.** **Raise a bug to** **`transformers`** if they have their own **windowed-mode-naive logger**. https://github.com/orgs/pyinstaller/discussions/7689#discussion-5270292 ### Who can help? 
@sgugger @ArthurZucker @LysandreJik

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

I want to convert my **`.py`** file into an **`.exe`** file. When I do that using `pyinstaller`, it raises the AttributeError above. When I asked the `pyinstaller` developers on their repository, they suggested I raise a bug report on `transformers`, **in case it has its own windowed-mode-naive logger.**

### Expected behavior

A working **`.exe`** file built from my **`.py`** file
06-06-2023 14:05:43
06-06-2023 14:05:43
cc @LysandreJik maybe<|||||>Hello, I have encountered the same problem as you, did you solve it?<|||||>Hi! I also encountered this error. I'm building a package with `pyinstaller` which works on MacOS with M2 amd64. Running inside of a Windows VM running Windows 11, this fails with the same error.
```
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "PyInstaller\loader\pyimod02_importers.py", line 385, in exec_module
  File "transformers\utils\import_utils.py", line 37, in <module>
    logger = logging.get_logger(__name__)  # pylint: disable=invalid-name
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "transformers\utils\logging.py", line 124, in get_logger
    _configure_library_root_logger()
  File "transformers\utils\logging.py", line 88, in _configure_library_root_logger
    _default_handler.flush = sys.stderr.flush
                             ^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'flush'
```<|||||>> Hi! I also encountered this error. I'm building a package with `pyinstaller` which works on MacOS with M2 amd64. Running inside of a Windows VM running Windows 11, this fails with the same error.
>
> ```
> File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
> File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
> File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
> File "PyInstaller\loader\pyimod02_importers.py", line 385, in exec_module
> File "transformers\utils\import_utils.py", line 37, in <module>
>     logger = logging.get_logger(__name__)  # pylint: disable=invalid-name
>     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "transformers\utils\logging.py", line 124, in get_logger
>     _configure_library_root_logger()
> File "transformers\utils\logging.py", line 88, in _configure_library_root_logger
>     _default_handler.flush = sys.stderr.flush
>     ^^^^^^^^^^^^^^^^
> AttributeError: 'NoneType' object has no attribute 'flush'
> ```

You can add this code before your transformers import:

if sys.stdout is None:
    sys.stdout = open(os.devnull, "w")
if sys.stderr is None:
    sys.stderr = open(os.devnull, "w")
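To make the workaround above self-contained, here is a hedged sketch with the imports it needs; the key point is that the streams are patched before `transformers` is imported:

```python
import os
import sys

# In a windowed (no-console) build, sys.stdout / sys.stderr can be None, which breaks
# loggers that grab sys.stderr.flush. Redirect them to os.devnull before the import.
if sys.stdout is None:
    sys.stdout = open(os.devnull, "w")
if sys.stderr is None:
    sys.stderr = open(os.devnull, "w")

import transformers  # noqa: E402  (imported only after the streams are patched)
```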
transformers
24,046
closed
Reduce memory usage in TF building
This PR reduces the default shape of dummy inputs from (3, 3) to (2, 2). This slightly reduces the memory usage when building TF models, which should hopefully fix some of our pipeline tests. We could replace the dummy inputs with symbolic tensors, which would mean we could build TF models with 0 memory usage, but this would make TF model building slower (~4X) because it would implicitly compile the model when building, which is probably not an acceptable tradeoff. cc @ydshieh and @amyeroberts as core maintainer
06-06-2023 13:59:20
06-06-2023 13:59:20
_The documentation is not available anymore as the PR was closed or merged._<|||||>Let me run it on CI and see.<|||||>Sorry for the delay - there's an issue with Funnel that wasn't reproducing on my machine. I eventually figured out that the problem is the classic TF one: indices for `tf.gather` are not validated on GPU but are validated on CPU, and so the bug only becomes apparent on CPU. Will fix in just a sec!<|||||>I also tried to run the change in this PR, and got ``` FAILED tests/pipelines/test_pipelines_common.py::PipelineUtilsTest::test_load_default_pipelines_tf - tensorflow.python.framework.errors_impl.ResourceExhaustedError: {{function_node __wrapped__Transpose_device_/job:localhost/replica:0/task:0/device:GPU:0}} OOM when allocating tensor with shape[768,768] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:Transpose] FAILED tests/pipelines/test_pipelines_common.py::PipelineUtilsTest::test_load_default_pipelines_tf_table_qa - tensorflow.python.framework.errors_impl.ResourceExhaustedError: Exception encountered when calling layer 'tapas' (type TFTapasMainLayer). {{function_node __wrapped__StatelessTruncatedNormalV2_device_/job:localhost/replica:0/task:0/device:GPU:0}} OOM when allocating tensor with shape[30522,768] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:StatelessTruncatedNormalV2] Call arguments received by layer 'tapas' (type TFTapasMainLayer): • input_ids=tf.Tensor(shape=(2, 2), dtype=int32) • attention_mask=tf.Tensor(shape=(2, 2), dtype=float32) • token_type_ids=tf.Tensor(shape=(2, 2, 7), dtype=int32) • position_ids=None • head_mask=None • inputs_embeds=None • output_attentions=False • output_hidden_states=False • return_dict=True • training=False ``` and 5 other ones (probably due to the above one). @Rocketknight1 I think we will have to reiterate (change->run->change->run) a bit more before we merge.<|||||>Yep, working on it now!<|||||>The `tests/pipelines/test_pipelines_common.py::PipelineUtilsTest::test_load_default_pipelines_tf` run against a list of models, so it's kind normal it fails with other models even some fixes are done previously. I am OK to trigger the run (a subset) whenever you feel it's time. Otherwise I can show you a modified workflow file for you to trigger manually.<|||||>@ydshieh the issues with Funnel have been resolved, so this should be ready for a CI run now!<|||||>You can watch it live [here](https://github.com/huggingface/transformers/actions/runs/5191137996/jobs/9358557442). It will take 20-30 min to finish.<|||||>Looks like they're still failing even with very small dummies. I'll investigate those models and try to figure out why - the new dummies should be smaller than the old ones! <|||||>Maybe this is a sign that we should transition the dummies to symbolic tensors for those models, even if it's probably too slow for our tests to do it across the whole codebase.
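As background, a rough sketch of why the dummy shape matters at all (an illustration of lazy weight building in Keras, not the library's internal build code):

```python
import tensorflow as tf
from transformers import DistilBertConfig, TFDistilBertModel

# Keras layers create their weights lazily, so one tiny forward pass is what
# "builds" a TF model; the smaller the dummy batch, the lower the peak memory.
config = DistilBertConfig()
model = TFDistilBertModel(config)

dummy_ids = tf.ones((2, 2), dtype=tf.int32)  # this PR shrinks the default dummies from (3, 3) to (2, 2)
outputs = model(dummy_ids)
print(outputs.last_hidden_state.shape)  # (2, 2, hidden_size)
```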
transformers
24,045
closed
Fix a tiny typo in `WhisperForConditionalGeneration::generate` docstring
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). @sanchit-gandhi
06-06-2023 13:57:10
06-06-2023 13:57:10
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sadra-barikbin Thanks for fixing this! It seems there is an issue with your CircleCI permissions, the tests won't run. Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>CircleCI has banned my account. Feel free to make another PR.<|||||>@sadra-barikbin OK - as the changes are small and don't affect code logic, the ci checks aren't critical, so I'm going to merge.
transformers
24,044
open
Add keypoint-detection task
### Feature request Add support for keypoint detection. This includes a task, pipeline, dataset label and training pipeline. The task is to take an image and predict the x and y locations of a set of keypoints. Which keypoints are predicted should depend on the model trained for this task. The training pipeline for keypoint detection should allow to swap components. For example, one should be able to choose the backbone to be any suitable vision transformer model that is available on the huggingface hub. ### Motivation Keypoint detection is a use case that is prevalent in computer vision. The computer vision subset of the huggingface ecosystem would benefit from adding the popular keypoint detection task to the existing set of tasks. At the time of writing, existing repositories for keypoint detection often focus on a single particular model, e.g.: - yolov7: https://github.com/RizwanMunawar/yolov7-pose-estimation - yolov8: https://docs.ultralytics.com/tasks/pose/ - vitpose: https://github.com/ViTAE-Transformer/ViTPose The computer vision community could benefit greatly from a high quality community oriented open source hub for keypoint detection. ### Your contribution I am happy to be part of the discussion, but probably can do little in terms of PR's at this point in time.
06-06-2023 12:07:37
06-06-2023 12:07:37
transformers
24,043
closed
[`bnb`] Fix bnb skip modules
# What does this PR do?

Fixes https://github.com/huggingface/transformers/issues/24037

https://github.com/huggingface/transformers/pull/23479 removed by mistake the logic introduced in https://github.com/huggingface/transformers/pull/21579 to deal with modules that should not be converted. The PR also adds a nice test to make sure this will never happen again.
06-06-2023 10:21:43
06-06-2023 10:21:43
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,042
closed
[Lllama] Update tokenization code to ensure parsing of the special tokens [core]
# What does this PR do?

Addresses the issues with the fast tokenizer of Llama. Namely:
- a nit regarding it returning token type ids.
- the added tokens are not correctly encoded.

There seems to be an issue with the conversion: before the python layer, just loading the tokenizer_config.json file with the rust backend still produced: `tokenizer.encode("this is not<s>").tokens`, ` ['<s>', '▁This', '▁is', '▁not', '</', 's', '>']`
06-06-2023 09:08:40
06-06-2023 09:08:40
Ok, narrowed it down to this line:
```python
# Check all our special tokens are registered as "no split" token (we don't cut them) and are in the vocab
added_tokens = tokenizer.sanitize_special_tokens()
```
When converting the model from a slow one, the tokenizer correctly processes the inputs up until this point. Meaning that before, the special tokens were already registered as special tokens, but adding them once more most probably breaks the internal regex. Still checking, but it should be this. <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>After debugging with @Narsil it seems that the special tokens must not be normalized, otherwise the normalizer prepends a space when adding them, which is why the token is not recognized. I suspect that there is another bug, as I tried with special tokens set to `normalized = True` (when calling `from_slow=True` + commenting out `self._sanitize_special_tokens`), but the current change should fix the conversion. A big discrepancy is that the default `AddedToken` imported from `tokenizers` will set `normalized` to `!special`, so if you add tokens as special tokens, `normalized` will be False. But in `transformers` this is not the case, which explains why the call to sanitize is a source of problems.<|||||>We have to update the online models to change the `tokenizer.json` (people might be confused because the `normalized` param is also in the slow files but always ignored)
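To illustrate the `normalized` flag discussed above, a small self-contained sketch using the `tokenizers` `AddedToken` directly (not the exact conversion code in this PR):

```python
from tokenizers import AddedToken

# With normalized=False the added token is matched verbatim, so the normalizer
# cannot prepend a "▁"/space in front of "<s>" and break the match.
bos = AddedToken("<s>", lstrip=False, rstrip=False, normalized=False)
print(bos.content, bos.normalized)  # "<s>" False
```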
transformers
24,041
closed
Fix bug in using TPU
### System Info

transformers==4.28

### Who can help?

@sgugger and @sanchit-gandhi

### Information

- [ ] The official example scripts
- [x] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)

### Reproduction

```
if (args.fp16 or args.bf16) and self.sharded_ddp is not None:
    if args.half_precision_backend == "auto":
        if args.device == torch.device("cpu"):
            if args.fp16:
                raise ValueError("Tried to use `fp16` but it is not supported on cpu")
            elif _is_native_cpu_amp_available:
                args.half_precision_backend = "cpu_amp"
            else:
                raise ValueError("Tried to use cpu amp but native cpu amp is not available")
        else:
            args.half_precision_backend = "cuda_amp"
    logger.info(f"Using {args.half_precision_backend} half precision backend")
```

In trainer.py, lines 615 to 627, I see that with this logic, when using a TPU, `half_precision_backend` is automatically set to "cuda_amp", which does not work on TPU since functions like `torch.cuda` cannot be called there (I am training Whisper on TPU).

So my suggestion is to add a condition: if `is_torch_tpu_available()`, then set `args.half_precision_backend = "cpu_amp"`.

The new code:
```
if (args.fp16 or args.bf16) and self.sharded_ddp is not None:
    if args.half_precision_backend == "auto":
        if args.device == torch.device("cpu"):
            if args.fp16:
                raise ValueError("Tried to use `fp16` but it is not supported on cpu")
            elif _is_native_cpu_amp_available:
                args.half_precision_backend = "cpu_amp"
            else:
                raise ValueError("Tried to use cpu amp but native cpu amp is not available")
        elif is_torch_tpu_available():
            args.half_precision_backend = "cpu_amp"
        else:
            args.half_precision_backend = "cuda_amp"
    logger.info(f"Using {args.half_precision_backend} half precision backend")
```

However, I am not sure my change is the right solution, so I am posting this to find a better one. Could you review whether this is a good approach, so that I can open a pull request to contribute? I would like to ask @sgugger and @sanchit-gandhi to review it.

### Expected behavior

There's no bug when training on TPU
06-06-2023 08:46:39
06-06-2023 08:46:39
Could you please give us a reproducer? Also cc @muellerzr and @pacman100 <|||||>As an idea workflow, this representation may be limited due to the inability to showcase the complete setting. Consequently, there could potentially be bugs in the way it is set up. The following points outline the reproducers: 1. Download file [run_speech_recognition_seq2seq_streaming.py](https://github.com/huggingface/community-events/blob/main/whisper-fine-tuning-event/run_speech_recognition_seq2seq_streaming.py) and [xla_spawn.py](https://github.com/huggingface/transformers/blob/main/examples/legacy/seq2seq/xla_spawn.py) 2. Adding script ``` echo 'python xla_spawn.py \ --num_cores 8 \ run_speech_recognition_seq2seq_streaming.py \ --tpu_num_cores 8 \ --model_name_or_path="openai/whisper-tiny" \ --dataset_name="mozilla-foundation/common_voice_11_0" \ --dataset_config_name="vi" \ --language="vi" \ --train_split_name="train+validation" \ --eval_split_name="test" \ --model_index_name="Whisper Small Spanish" \ --max_steps="5000" \ --output_dir="./" \ --per_device_train_batch_size="64" \ --per_device_eval_batch_size="32" \ --logging_steps="25" \ --learning_rate="1e-5" \ --warmup_steps="500" \ --evaluation_strategy="steps" \ --eval_steps="1000" \ --save_strategy="steps" \ --save_steps="1000" \ --generation_max_length="225" \ --length_column_name="input_length" \ --max_duration_in_seconds="30" \ --text_column_name="sentence" \ --freeze_feature_encoder="False" \ --report_to="tensorboard" \ --metric_for_best_model="loss" \ --greater_is_better="False" \ --load_best_model_at_end False \ --gradient_checkpointing \ --bf16 \ --overwrite_output_dir \ --do_train \ --do_eval \ --predict_with_generate False \ --do_normalize_eval \ --streaming \ --use_auth_token \ --push_to_hub \ --bf16_full_eval True' >> run.sh ``` 3. run command line `bash run.sh` --- After the execution reaches a certain point, a bug occurs when calling `torch.cuda` on the TPU. To prevent this issue, one possible solution is to include the argument `--half_precision_backend "cpu_amp"` in the script. To implement this fix, I suggest modifying the trainer.py file by adding a condition that checks if the code is running on a TPU (as I mentioned before). If it is, the half_precision_backend should be set to "cpu_amp". I would like to cc @muellerzr and @pacman100 <|||||>The flags `--bf16` and `--bf16_full_eval` are not supported on TPU. I'm not sure using the CPU autocast is a good idea since it will trigger copy of the data to the CPU.<|||||>During my experiment, I observed that the removal of bf16 led to satisfactory results. However, the training speed was slower compared to using bf16 in transformers==4.27.0. Could you please suggest some methods to enhance the training speed using TPU?<|||||>There is some information in the accelerate docs: https://huggingface.co/docs/accelerate/concept_guides/training_tpu Probably using JAX would be faster here on TPU. The script at #21764 works for non-streaming mode fine-tuning (the PR just needs tests + docs before merge), so you can use this already if you want<|||||>thank you so much for your information <3
transformers
24,040
closed
There is bug in trainer of Llama:indices should be either on cpu or on the same device as the indexed tensor (cpu)
### System Info Setting ds_accelerator to cuda (auto detect) Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.4.0-132-generic-x86_64-with-glibc2.27 - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 1.13.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @sgugger @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from dataclasses import dataclass, field from typing import Optional, Tuple import transformers from transformers import Trainer from transformers.models.llama import LlamaForCausalLM # noqa from transformers import ( AutoConfig, AutoModelForCausalLM, AutoTokenizer, AutoModel ) from nl2sql_dataset import Nl2SqlJsonlDataset import env import time @dataclass class ModelArguments: pretrain_path: str = field( default=f"{env.MODEL_ROOT.joinpath('llama-7b-hf')}" # llama-13b-hf ) @dataclass class DataArguments: train_file: str = field( default=f"{env.INPUT_ROOT.joinpath('trainset/config.json')}", metadata={"help": "A josnl file containing the training corpus"}, ) validation_file: str = field( default=f"{env.INPUT_ROOT.joinpath('devset/config.json')}", metadata={"help": "A jsonl file containing the validation corpus"}, ) max_seq_length: int = field( default=512, metadata={"help": "Max sequence length for training"} ) pad_to_max_length: bool = field(default=False) @dataclass class TrainingArguments(transformers.TrainingArguments): cache_dir: Optional[str] = field(default=None) optim: str = field(default="adamw_torch") model_max_length: int = field( default=512, metadata={ "help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)." 
}, ) num_train_epochs: int = 5 evaluation_strategy: str = field(default="epoch") save_strategy: str = "epoch" fp16: bool = True save_total_limit: int = 5 load_best_model_at_end: bool = False warmup_steps: int = 0 logging_steps: int = 1 gradient_checkpointing: bool = True ddp_timeout: int = 3600 output_dir: str = field( default=f"{env.OUTPUT_ROOT.joinpath(time.strftime('%Y年%m月%d日%H时%M分%S秒'))}", ) def safe_save_model_for_hf_trainer(trainer: transformers.Trainer, output_dir: str): """Collects the state dict and dump to disk.""" state_dict = trainer.model.state_dict() if trainer.args.should_save: cpu_state_dict = {key: value.cpu() for key, value in state_dict.items()} del state_dict trainer._save(output_dir, state_dict=cpu_state_dict) # noqa def parse_args() -> Tuple[ModelArguments, DataArguments, TrainingArguments]: parser = transformers.HfArgumentParser( (ModelArguments, DataArguments, TrainingArguments) ) return parser.parse_args_into_dataclasses() def train(): model_args, data_args, training_args = parse_args() if "chatglm" in model_args.pretrain_path: print(model_args.pretrain_path) model = AutoModel.from_pretrained(model_args.pretrain_path, trust_remote_code=True, empty_init=False) else: model = AutoModelForCausalLM.from_pretrained(model_args.pretrain_path) print(model_args, data_args, training_args) dataset = Nl2SqlJsonlDataset( pretrain_path=model_args.pretrain_path, train_file_path=data_args.train_file, validation_file_path=data_args.validation_file, max_seg_length=data_args.max_seq_length, pad_to_max_length=data_args.pad_to_max_length, ) dataset.setup() # Tell Trainer not to attempt DataParallel model.is_parallelizable = True model.model_parallel = True trainer = Trainer( model=model, args=training_args, train_dataset=dataset.train_dataset, eval_dataset=dataset.val_dataset, data_collator=dataset.collate_fn, ) model.config.use_cache = False trainer.train() trainer.save_state() safe_save_model_for_hf_trainer(trainer=trainer, output_dir=training_args.output_dir) if __name__ == "__main__": train() ``` **error:** To simplify the output information, I ran only on one card Setting ds_accelerator to cuda (auto detect) [2023-06-06 12:16:17,633] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented [2023-06-06 12:16:17,633] [INFO] [comm.py:594:init_distributed] cdb=None [2023-06-06 12:16:17,633] [INFO] [comm.py:625:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl [2023-06-06 12:17:21,017] [INFO] [partition_parameters.py:454:__exit__] finished initializing model with 6.74B parameters Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████| 3/3 [00:41<00:00, 13.86s/it]ModelArguments(pretrain_path='/mnt/data/wxc/workspace/pretrained_models/2/llama-7b') DataArguments(train_file='/mnt/data/wxc/workspace/release/data/trainset/config.json', validation_file='/mnt/data/wxc/workspace/release/data/devset/config.json', max_seq_length=512, pad_to_max_length=False) TrainingArguments( Parameter Offload: Total persistent parameters: 266240 in 65 params 0%| | 0/87850 [00:00<?, ?it/s]Traceback (most recent call last): File "/mnt/data/wxc/workspace/release/start_ds_finetune.py", line 118, in train() File "/mnt/data/wxc/workspace/release/start_ds_finetune.py", line 112, in train trainer.train() File "/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/trainer.py", line 1661, in train return inner_training_loop( File 
"/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/trainer.py", line 1946, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/trainer.py", line 2756, in training_step loss = self.compute_loss(model, inputs) File "/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/trainer.py", line 2781, in compute_loss outputs = model(**inputs) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn ret_val = func(*args, **kwargs) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1733, in forward loss = self.module(*inputs, **kwargs) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1212, in _call_impl result = forward_call(*input, **kwargs) File "/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/models/llama/modeling_llama.py", line 688, in forward outputs = self.model( File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1212, in _call_impl result = forward_call(*input, **kwargs) File "/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/models/llama/modeling_llama.py", line 570, in forward layer_outputs = torch.utils.checkpoint.checkpoint( File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 249, in checkpoint return CheckpointFunction.apply(function, preserve, *args) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 107, in forward outputs = run_function(*args) File "/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/models/llama/modeling_llama.py", line 566, in custom_forward return module(*inputs, output_attentions, None) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1212, in _call_impl result = forward_call(*input, **kwargs) File "/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/models/llama/modeling_llama.py", line 292, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1212, in _call_impl result = forward_call(*input, **kwargs) File "/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/models/llama/modeling_llama.py", line 202, in forward query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids) File "/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/models/llama/modeling_llama.py", line 134, in apply_rotary_pos_emb cos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim] RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu) ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 12104) of binary: /home/wxc/miniconda3/envs/llamax/bin/python3.1 Traceback (most recent call last): File "/home/wxc/miniconda3/envs/llamax/bin/torchrun", line 8, in <module> sys.exit(main()) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper 
return f(*args, **kwargs) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/distributed/run.py", line 762, in main run(args) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/distributed/run.py", line 753, in run elastic_launch( File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ============================================================ start_ds_finetune.py FAILED ------------------------------------------------------------ Failures: <NO_OTHER_FAILURES> ------------------------------------------------------------ Root Cause (first observed failure): [0]: time : 2023-06-06_12:20:18 host : omnisky rank : 0 (local_rank: 0) exitcode : 1 (pid: 12104) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html ============================================================ By addition,my deepspeed is the latest version,installed by git and compile. ### Expected behavior This is a finetune with nl2sql dataset. And my data format is {"source": "dusql_中国历史名城", "table": [{"table_name": "城市", "header": ["词条id", "名称", "所属省份", "常住人口", "城区面积", "建城年数"], "rows": []}, {"table_name": "都城", "header": ["朝代", "古称", "城市id", "建都起始时间", "建都结束时间", "建都年数"], "rows": []}], "sql": "select 名称 , 所属省份 from 城市 where 词条id not in ( select 城市id from 都城 )", "question": "哪些城市没有做过都城,给出这些城市名和其省份。"}
06-06-2023 07:19:12
06-06-2023 07:19:12
Hello, what is the launch command? What version of accelerate is being used? Also see this https://github.com/microsoft/DeepSpeed/issues/3678<|||||>hi~ thanks for your help!

CUDA_VISIBLE_DEVICES="1,2,4,5" torchrun --nnodes=1 --nproc_per_node=4 start_ds_finetune.py --deepspeed deepspeed_config.json --learning_rate=2e-5 --per_device_train_batch_size=4 --gradient_accumulation_steps=1

accelerate == 0.19.0

> Hello, what is the launch command? What version of accelerate is being used? Also see this [microsoft/DeepSpeed#3678](https://github.com/microsoft/DeepSpeed/issues/3678)
<|||||>Can you update accelerate from the main branch and use the DeepSpeed PR linked in the above DeepSpeed issue?<|||||>Sorry, I'm a beginner, do you mean this?

pip install --upgrade accelerate
git clone -b olruwase/ds_3678 https://github.com/microsoft/DeepSpeed.git
cd DeepSpeed
DS_BUILD_CPU_ADAM=1 DS_BUILD_AIO=1 DS_BUILD_UTILS=1 pip install -e .<|||||>My problem is solved!!! Thanks!!! How excellent you are!!

> Can you update accelerate from the main branch and use the DeepSpeed PR linked in the above DeepSpeed issue?
transformers
24,039
closed
Add support for non-rust implemented tokenization for `__getitem__` method.
# Overview

This PR adds support for the usage scenario of "getting a slice from the batch-tokenized sequences". Without this PR, it raises a KeyError with the following message:

KeyError: 'Indexing with integers (to access backend Encoding for a given batch index) is not available when using Python based tokenizers'

P.S. The above scenario can be reproduced with some newly uploaded models that do not support Rust-implemented tokenization, such as fnlp/moss-moon-003-sft. We can also run the following example script to reproduce this issue:

```python
# test script `/home/workspace/test.py` for this PR.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("fnlp/moss-moon-003-sft", trust_remote_code=True)
tok.add_special_tokens({"pad_token": "[PAD]"})

texts = ["Today is a good day!", "It's a good idea!", "How's going?"]
batch_tok = tok(texts, padding=True)
print(batch_tok[0:3])  # raises `KeyError` here
```

# Error Message

```txt
Traceback (most recent call last):
  File "/home/workspace/test.py", line 8, in <module>
    print(batch_tok[0:3])  # raises `KeyError` here
  File "/home/app/anaconda3/envs/test/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 242, in __getitem__
    raise KeyError(
KeyError: 'Indexing with integers (to access backend Encoding for a given batch index) is not available when using Python based tokenizers'
```

All in all, I think it is useful to implement the `__getitem__` method on the Python side as well :)

Note that this PR is associated with a previous, now closed one: https://github.com/huggingface/transformers/pull/23645
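For context, a rough sketch of the idea behind the change (a simplified stand-in, not the exact `BatchEncoding.__getitem__` implementation in this PR):

```python
# Simplified stand-in for slice support on the Python side: slice every field
# of the dict-of-lists the same way instead of delegating to a Rust Encoding.
def getitem_sketch(data: dict, item):
    if isinstance(item, str):
        return data[item]
    if isinstance(item, (int, slice)):
        return {key: value[item] for key, value in data.items()}
    raise KeyError("Invalid key. Only strings, integers and slices are supported.")

data = {"input_ids": [[1, 2], [3, 4], [5, 6]], "attention_mask": [[1, 1], [1, 1], [1, 1]]}
print(getitem_sketch(data, slice(0, 2)))
```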
06-06-2023 04:43:29
06-06-2023 04:43:29
It seems the workflow failed due to a `Read Timeout` in the T5-related tests. How can I re-run it?
![image](https://github.com/huggingface/transformers/assets/54089835/f9922d94-440f-42d6-92b7-69f38e604d81)
![image](https://github.com/huggingface/transformers/assets/54089835/4a1eec18-f2cf-4509-93aa-1c254286fe2e)
<|||||>We can re-run that for you 😉 <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Request for review :)<|||||>@jacklanda Could you update the error message as requested by @ArthurZucker? <|||||>> @jacklanda Could you update the error message as requested by @ArthurZucker?

@amyeroberts I have updated the error messages mentioned by @ArthurZucker. Thanks.<|||||>Ask for final review :)
transformers
24,038
closed
Add VGCN-BERT model
### Model description

Hi, I am the author of the [VGCN-BERT paper](https://arxiv.org/abs/2004.05707). The original implementation is in my GitHub [vgcn-bert repo](https://github.com/Louis-udm/VGCN-BERT), but recently I updated the algorithm and implemented a new version for integration into Transformers.

> Much progress has been made recently on text classification with methods based on neural networks. In particular, models using attention mechanism such as BERT have shown to have the capability of capturing the contextual information within a sentence or document. However, their ability of capturing the global information about the vocabulary of a language is more limited. This latter is the strength of Graph Convolutional Networks (GCN). In this paper, we propose VGCN-BERT model which combines the capability of BERT with a Vocabulary Graph Convolutional Network (VGCN). Local information and global information interact through different layers of BERT, allowing them to influence mutually and to build together a final representation for classification. In our experiments on several text classification datasets, our approach outperforms BERT and GCN alone, and achieve higher effectiveness than that reported in previous studies.

I have actually finished the integration of my new version and opened the PR. This new VGCN-BERT algorithm has the following improvements:
- Greatly speeds up the computation of the embedded vocabulary graph convolutional network (or word graph embedding). Taking CoLA as an example, the new model only increases the training time by 11% compared with the base model.
- Updated subgraph selection algorithm.
- Currently uses DistilBERT as the base model, but it is easy to migrate to other models.
- Provides two graph construction methods in vgcn_bert/modeling_graph.py (the same NPMI statistical method as the paper, and a predefined entity-relationship mapping method).

I hope that after integrating it into transformers, someone can discover more practical use cases. I am ashamed to say that I have not discovered many real use cases myself, mainly because the word-grounded graph obtained through statistical methods brings limited improvement to the LLM. I think its potential application is when there are specific/customized graphs that need to be integrated into an LLM.

### Open source status

- [X] The model implementation is available
- [X] The model weights are available

### Provide useful links for the implementation

https://arxiv.org/abs/2004.05707
https://github.com/Louis-udm/VGCN-BERT
06-06-2023 04:42:42
06-06-2023 04:42:42
The new implementation is here: https://huggingface.co/zhibinlu/vgcn-bert-distilbert-base-uncased
transformers
24,037
closed
BitsAndBytesConfig llm_int8_skip_modules does not work in the new version
### System Info

- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

### Who can help?

@ArthurZucker @younesbelkada

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

```
from transformers import RobertaForSequenceClassification, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_skip_modules=['classifier'])
model = RobertaForSequenceClassification.from_pretrained('roberta-large-mnli', quantization_config=quantization_config)
```

### Expected behavior

The 'classifier' layer should be in Float16 but is actually loaded in 8-bit. This is problematic because it drastically lowers the performance of the model. It also makes it impossible to train (using the peft library, for example).
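A quick way to check the expected behavior could look like the sketch below (assumes a CUDA GPU with `bitsandbytes` and `accelerate` installed; the exact dtype kept for skipped modules may vary):

```python
import torch
from transformers import BitsAndBytesConfig, RobertaForSequenceClassification

quantization_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_skip_modules=["classifier"])
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-large-mnli", quantization_config=quantization_config, device_map="auto"
)
# The skipped module should keep a floating-point dtype (e.g. torch.float16),
# while the other linear layers are replaced by 8-bit ones.
print(model.classifier.dense.weight.dtype)
```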
06-06-2023 00:52:02
06-06-2023 00:52:02
Hi @AntoineBlanot Thanks for the issue and flagging! https://github.com/huggingface/transformers/pull/24043 should fix the issue!
transformers
24,036
closed
Use TruncatedNormal from Keras initializers
# What does this PR do? This PR updates the types of `get_initializer` to use `TruncatedNormal` from `tf.keras.initializers`. Before this change the type was set to `tf.initializers.TruncatedNormal`, while `tf.keras.initializers.TruncatedNormal` is what was returned from the function. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
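For reference, a hedged sketch of the helper this annotation describes (mirroring, not copying, the `get_initializer` utility):

```python
import tensorflow as tf

def get_initializer(initializer_range: float = 0.02) -> tf.keras.initializers.TruncatedNormal:
    # The object actually returned is the Keras TruncatedNormal, hence the
    # tf.keras.initializers annotation this PR switches to.
    return tf.keras.initializers.TruncatedNormal(stddev=initializer_range)

init = get_initializer(0.02)
print(isinstance(init, tf.keras.initializers.TruncatedNormal))  # True
```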
06-06-2023 00:31:35
06-06-2023 00:31:35
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,035
closed
Add overloads for PretrainedModel.from_pretrained
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #23980 This PR fixes the type hints that users see when calling `PretrainedModel.from_pretrained` before: ```python import transformers bert_model = transformers.BertForSequenceClassification.from_pretrained("...") reveal_type(bert_model) # Type of "bert_model" is "tuple[Unknown | BertForSequenceClassification, dict[str, Unbound | Unknown] | dict[str, Unknown | list[Unknown]] | Unknown] | Unknown | BertForSequenceClassification" bert_model_and_loading_info = transformers.BertForSequenceClassification.from_pretrained("...", output_loading_info=True) reveal_type(bert_model_and_loading_info) # Type of "bert_model_and_loading_info" is "tuple[Unknown | BertForSequenceClassification, dict[str, Unbound | Unknown] | dict[str, Unknown | list[Unknown]] | Unknown] | Unknown | BertForSequenceClassification" ``` after: ```python import transformers bert_model = transformers.BertForSequenceClassification.from_pretrained("...") reveal_type(bert_model) # Type of "bert_model" is "BertForSequenceClassification" bert_model_and_loading_info = transformers.BertForSequenceClassification.from_pretrained("...", output_loading_info=True) reveal_type(bert_model_and_loading_info) # Type of "bert_model_and_loading_info" is "Tuple[BertForSequenceClassification, LoadingInfo]" ``` 1. move `output_loading_info` from variadic kwargs to be an explicit kwarg 2. create overloaded signature for `from_pretrained` based on its value 3. add `LoadingInfo` `TypedDict` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Documentation: @sgugger, @stevhliu and @MKhalusova
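For readers unfamiliar with the pattern, a small self-contained sketch of how `@overload` plus `Literal` gives the two return types (names here are illustrative, not the real transformers signatures):

```python
from typing import List, Literal, Tuple, TypedDict, Union, overload

class LoadingInfo(TypedDict):
    missing_keys: List[str]
    unexpected_keys: List[str]

@overload
def load_model(name: str, output_loading_info: Literal[True]) -> Tuple[str, LoadingInfo]: ...
@overload
def load_model(name: str, output_loading_info: Literal[False] = ...) -> str: ...
def load_model(name: str, output_loading_info: bool = False) -> Union[str, Tuple[str, LoadingInfo]]:
    model = f"model({name})"  # placeholder for the real loading logic
    if output_loading_info:
        return model, LoadingInfo(missing_keys=[], unexpected_keys=[])
    return model

print(load_model("some-checkpoint"))
print(load_model("some-checkpoint", output_loading_info=True))
```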
06-05-2023 23:37:51
06-05-2023 23:37:51
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24035). All of your documentation changes will be reflected on that endpoint.<|||||>> you can search the source code, it's not present anywhere Searching in the source code, you can see it here: https://github.com/search?q=repo%3Ahuggingface%2Ftransformers+%40overload&type=code Is there a specific part of this `overload` that makes it difficult to merge in? This overload makes it obvious to users that you only get the `LoadingInfo` when you pass in `output_loading_info=True` and that otherwise they will only get the model they expected.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,034
closed
AttributeError: ‘EvalPrediction’ object has no attribute ‘prediction’
Im trying to train Minilm using Hugging Face using the following Codes: ![image](https://github.com/huggingface/transformers/assets/92796786/945b681b-3eea-4b49-b710-8387f68227b1) Error Message: ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ in <cell line: 1>:1 │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1664 in train │ │ │ │ 1661 │ │ inner_training_loop = find_executable_batch_size( │ │ 1662 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │ │ 1663 │ │ ) │ │ ❱ 1664 │ │ return inner_training_loop( │ │ 1665 │ │ │ args=args, │ │ 1666 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │ │ 1667 │ │ │ trial=trial, │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:2034 in _inner_training_loop │ │ │ │ 2031 │ │ │ │ self.control.should_training_stop = True │ │ 2032 │ │ │ │ │ 2033 │ │ │ self.control = self.callback_handler.on_epoch_end(args, self.state, self.con │ │ ❱ 2034 │ │ │ self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_ │ │ 2035 │ │ │ │ │ 2036 │ │ │ if DebugOption.TPU_METRICS_DEBUG in self.args.debug: │ │ 2037 │ │ │ │ if is_torch_tpu_available(): │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:2300 in _maybe_log_save_evaluate │ │ │ │ 2297 │ │ │ │ │ ) │ │ 2298 │ │ │ │ │ metrics.update(dataset_metrics) │ │ 2299 │ │ │ else: │ │ ❱ 2300 │ │ │ │ metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) │ │ 2301 │ │ │ self._report_to_hp_search(trial, self.state.global_step, metrics) │ │ 2302 │ │ │ │ │ 2303 │ │ │ # Run delayed LR scheduler now that metrics are populated │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:3029 in evaluate │ │ │ │ 3026 │ │ start_time = time.time() │ │ 3027 │ │ │ │ 3028 │ │ eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else se │ │ ❱ 3029 │ │ output = eval_loop( │ │ 3030 │ │ │ eval_dataloader, │ │ 3031 │ │ │ description="Evaluation", │ │ 3032 │ │ │ # No point gathering the predictions if there are no metrics, otherwise we d │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:3318 in evaluation_loop │ │ │ │ 3315 │ │ │ │ │ EvalPrediction(predictions=all_preds, label_ids=all_labels, inputs=a │ │ 3316 │ │ │ │ ) │ │ 3317 │ │ │ else: │ │ ❱ 3318 │ │ │ │ metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, lab │ │ 3319 │ │ else: │ │ 3320 │ │ │ metrics = {} │ │ 3321 │ │ in compute_metrics:5 │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ AttributeError: 'EvalPrediction' object has no attribute 'prediction' Any Assistance is welcomed. ### Who can help? Please Help :( @sgugger @ArthurZucker @gan _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Replicated steps using the following Link: https://www.youtube.com/watch?v=u--UVvH-LIQ Data Set Used: https://huggingface.co/datasets/ltkw98/Dataset/blob/main/dataset2_lionel.csv Custom Dataset was used and Mapping was done appropriately. ### Expected behavior A fine tuned model
06-05-2023 22:59:03
06-05-2023 22:59:03
Hey! Looking at your code, your `compute_metric` function is calling `pred.prediction` while it should be `pred.predictions`. <|||||>Much appreciated tyvm for your help.
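For anyone hitting the same error, a hedged sketch of a `compute_metrics` that uses the correct attribute names (the accuracy metric is just an example):

```python
import numpy as np

def compute_metrics(eval_pred):
    logits = eval_pred.predictions   # note the plural: `predictions`, not `prediction`
    labels = eval_pred.label_ids
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}
```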
transformers
24,033
closed
fix type annotation for debug arg
# What does this PR do? Fix type annotation for `debug` argument in `training_args.py` Fixes https://github.com/huggingface/transformers/issues/23958 ## Who can review?
06-05-2023 21:48:53
06-05-2023 21:48:53
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @amyeroberts, I have made a fix to the code. I'm not certain if there is a more concise way to address this, but I added an additional check for `self.debug` being `None`. The reason for this is that when using `Union[str, List[DebugOption]]`, even if `default=""` is specified, it is still evaluated as `None`.<|||||>@Bearnardd Thanks for updating. Could you share a snippet to reproduce showing the evaluation of `debug` as `None` if left as default? If I create the training arguments directly, when working from `main`, debug defaults to `[]` with the type changes e.g.
```python
In [1]: from transformers import TrainingArguments

In [2]: args = TrainingArguments("dummy_dir")

In [3]: args.debug
Out[3]: []
```
So it might be an environment or how it's being used in a script thing? <|||||>Hi @amyeroberts! It was one of the failing test cases. I will be back home tomorrow, so I will check that to confirm :)<|||||>Hi @amyeroberts! I have done some quick debugging. I was able to obtain the same results as you while running your snippet. However, the problem appears when you try to run things from the CLI. One of the test cases that were failing is `test_run_seq2seq_no_dist` from `transformers/tests/extended/test_trainer_ext.py`, which uses command line arguments. As a result of running this test case there is a chain of internal function calls as follows:
```
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
```
`parse_args_into_dataclasses` underneath calls `self.parse_known_args(args=args)`, which is a method derived from `ArgumentParser`.
```
namespace, remaining_args = self.parse_known_args(args=args)  # hf_argparser.py:339
```
In `argparse` itself there is a default action `--debug` which is initialized as `None`. And here is the trick: if the `debug` argument is of type str, then argparse is able to internally cast it into an empty string; however, it leaves it as `None` if it is of type `Union[str, List[DebugOption]]`. That's why this test fails if we change the type annotation of the `debug` argument. Is this explanation understandable for you, or do you need some additional context/information :) ?<|||||>@Bearnardd Thanks for such a detailed investigation and write up! In this case, resolving this with the `--debug` flag in `argparse` would be very involved and this `None` check works well :)
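A tiny plain-`argparse` analogue of the behaviour described above (this is not the `HfArgumentParser` code path, just an illustration of why the extra `None` check is needed):

```python
import argparse

# With an explicit str type and default, a missing --debug comes back as "".
parser = argparse.ArgumentParser()
parser.add_argument("--debug", type=str, default="")
print(repr(parser.parse_args([]).debug))   # ''

# Without a usable type/default (as happens for the Union annotation), it stays None.
parser = argparse.ArgumentParser()
parser.add_argument("--debug")
print(repr(parser.parse_args([]).debug))   # None
```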
transformers
24,032
closed
Tool types
WIP PR to have specific types for tool outputs, which should clarify interactions to and from Agents. Left to do: - [x] Remove or complete the video integration - [x] Add support for remote tools - [x] Complete documentation - [x] Test it out with real-world use cases - [x] Add a test to ensure that the inputs are cast correctly (so far only the outputs are tested) - [x] Arrange the dependencies so that they don't make all the tests fail
06-05-2023 21:27:55
06-05-2023 21:27:55
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,031
open
Add scGPT Model
### Model description scGPT is a single-cell foundation model based on the GPT architecture. The model is shown to have captured meaningful biological insights into cells and genes. The authors state the model can be fine-tuned for downstream tasks including cell-type annotation, genetic perturbation, etc. I'd like to add scGPT to HuggingFace Transformers. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation The paper [scGPT: Towards Building a Foundation Model for Single-Cell Multi-omics Using Generative AI](https://www.biorxiv.org/content/10.1101/2023.04.30.538439v1.full.pdf) by [Haotian Cui](https://www.researchgate.net/scientific-contributions/Haotian-Cui-2193100667), [Chloe Wang](https://www.linkedin.com/in/chloe-xueqi-wang-979712158/?originalSubdomain=ca), [Hassaan Maan](https://hsmaan.com/), [Bo Wang](https://bowang87.github.io/) Github link: [scGPT by subercui](https://github.com/bowang-lab/scGPT) Model Checkpoint: [Google Drive](https://drive.google.com/drive/folders/1kkug5C7NjvXIwQGGaGoqXTk_Lb_pDrBU) - From this checkpoint I can generate the model weights
06-05-2023 20:30:41
06-05-2023 20:30:41
Hi @jprivera44, thanks for opening this issue! The fastest and easiest way to add a model to be used in the transformers library is to add the model code and its weights on the Hub. Here's a how-to guide: https://huggingface.co/docs/transformers/custom_models cc @Rocketknight1 <|||||>Hello @amyeroberts, thank you for the comment! After reviewing the content, I plan to stick with the process outlined in this [link](https://huggingface.co/transformers/v4.8.0/add_new_model.html), which goes over how to add a model from scratch. Since the model is SOTA, the involved process will make it easier for our community to leverage the model, and make the overall codebase more interpretable. If you have any questions please let me know! To give you an update, I just ran the code to load the model weights and I'm now focusing on tracing the forward pass.<|||||>Hi @jprivera44, Glad to hear you've already got the weight loading logic working! Anyone is welcome to open a model PR in the library. However, please be aware that it is no longer the preferred method. Any model code merged directly into the repo brings its own maintenance costs, and so the barrier to add is a lot higher. Our experience is that model PRs always take a lot longer than one expects, and are a large amount of work for both parties, particularly if the contributor hasn't added models previously. With regards to your points: * SOTA or not, models are just as easily used if they're implemented on the hub. For example, the recent [falcon model](https://huggingface.co/tiiuae/falcon-40b/blob/main/modelling_RW.py) was first added there. * I'm not sure how you're defining interpretability; however, model code should be equivalently understandable in either place (I'd argue it's easier on the hub without unrelated code & commits - but it's all subjective) * What will be a blocker is adding through a PR here. As mentioned above, it can be a long process and, as other community members haven't requested this model, it won't be a priority for us to review and merge in. <|||||>Hi @amyeroberts, that all makes sense on my end! I'll go ahead and add the scGPT model via the custom models link you mentioned, as the initial version of this code.
transformers
24,030
closed
Use new parametrization based weight norm if available
# What does this PR do? In https://github.com/pytorch/pytorch/pull/103001 I introduce a new parametrization based version of `weight_norm`. One big benefit of the new API is that the resulting model is deepcopy'able; today, you can't deepcopy Wav2Vec2 models. Since the new API isn't even in PyTorch main yet, I'd like to feature gate it here, so that it gets used whenever PyTorch is recent enough to support it. It would be a big help for me if you could take this change earlier rather than later; otherwise I will have to patch transformers in our own CI to get our benchmark harness working on Wav2Vec2. Signed-off-by: Edward Z. Yang <[email protected]> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? cc @sanchit-gandhi
06-05-2023 19:43:16
06-05-2023 19:43:16
_The documentation is not available anymore as the PR was closed or merged._<|||||>PyTorch side PR has landed!<|||||>@ezyang With nightly pytorch, we get > AttributeError: encoder.pos_conv_embed.conv.weight_v not found in PyTorch model when trying to load a pytorch model into a TF model. The TF model is looking for `encoder.pos_conv_embed.conv.weight_v` but we now have `encoder.pos_conv_embed.conv.parametrizations.weight.original0`. (This is from our `(TF)Wav2Vec2Model` model class). **Question**: In your PR https://github.com/pytorch/pytorch/pull/103001, is this part `def _weight_norm_compat_hook()` that deals with the backward compatibility? If so, we will copy it :-) <|||||>Yep. The change here is not FC so the ingester needs updating.
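For illustration, a rough sketch of the kind of version gate discussed in this PR (this is an assumption about the shape of the check, not the exact diff):
```python
import torch.nn as nn

# Prefer the new parametrization-based weight_norm when the installed torch provides it;
# fall back to the legacy hook-based implementation otherwise.
parametrizations = getattr(nn.utils, "parametrizations", None)
if parametrizations is not None and hasattr(parametrizations, "weight_norm"):
    weight_norm = parametrizations.weight_norm   # deepcopy-able, parametrization-based
else:
    weight_norm = nn.utils.weight_norm           # legacy version

# Example usage on a conv layer (dim is illustrative):
conv = weight_norm(nn.Conv1d(16, 16, kernel_size=3), name="weight", dim=2)
print(type(conv))
```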
transformers
24,029
closed
Add check for tied parameters
# What does this PR do? This is the transformers-side counterpart of this accelerate [PR](https://github.com/huggingface/accelerate/pull/1529). It will fix the case where a user uses their own device map (in the `from_pretrained` method) but forgets that parameters that are tied together should be on the same device. We return an error showing which parameters should be on the same device.
06-05-2023 19:35:39
06-05-2023 19:35:39
The tests are failing because I used a function that I added recently in `accelerate.utils`. Should we use `main` for the tests @sgugger? <|||||>_The documentation is not available anymore as the PR was closed or merged._
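To illustrate the failure mode this check targets, here is a hypothetical device map that splits tied weights across devices (module names and checkpoint are made up for illustration):
```python
from transformers import AutoModelForCausalLM

# Hand-written device_map that puts tied weights (input embeddings and lm_head) on
# different devices -- the kind of mistake the new check should report explicitly.
device_map = {
    "model.embed_tokens": 0,   # input embeddings on GPU 0
    "model.layers": 0,
    "model.norm": 0,
    "lm_head": "cpu",          # tied output head on CPU -> should raise a clear error
}
model = AutoModelForCausalLM.from_pretrained("some/causal-lm-checkpoint", device_map=device_map)
```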
transformers
24,028
closed
🚨🚨🚨 Replace DataLoader logic for Accelerate in Trainer, remove unneeded tests 🚨🚨🚨
# What does this PR do? This PR: - Guts the internals for the `DataLoader` in all basic distributed fashions (replacing `pl.Loader` for TPU coming in a follow-up PR) to replace it with `accelerator.prepare` - Removes **two** tests that were deemed unnecessary - Test 1 removed: `tests/trainer/test_trainer.py::TrainerIntegrationTest::test_sampler_seed`, deemed to no longer be necessary to reset the seed, as Accelerate's dataloader setup doesn't need any extra help when iterating/loading back in the seed, regardless of the torch version - Test 2 removed: `tests/trainer/test_trainer.py::TrainerIntegrationTest::test_training_finite_iterable_dataset`, as with Accelerate's new sampler for `IterableDataset` we'll actually catch if it's `None` and raise an error, a new test will be made + clear error message on the `Accelerate` side, with a test added to `Trainer` afterwards. - Modifies two tests to use the proper attribute: Accelerator's `DataLoaders` all have `total_batch_size` rather than `batch_size` - `test_train_and_eval_dataloaders` and `test_data_is_not_parallelized_when_model_is_parallel` Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger @pacman100
06-05-2023 18:07:44
06-05-2023 18:07:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>Great PR! Currently this is breaking my custom collate_fn in the dataloader; I'm still trying to understand why. My first assumption is that it might be due to multiprocessing?<|||||>@franz101 please open an issue with a reproducer of what you are trying to do so we can help :)
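The general pattern the Trainer moves to in this PR, sketched with a toy dataset (illustrative, not the exact Trainer code):
```python
import torch
from accelerate import Accelerator
from torch.utils.data import DataLoader, TensorDataset

accelerator = Accelerator()

dataset = TensorDataset(torch.arange(100).float().unsqueeze(1))  # placeholder dataset
dataloader = DataLoader(dataset, batch_size=8, shuffle=True)
# prepare() wraps/shards the loader for the current distributed setup (DDP, multi-GPU, etc.)
dataloader = accelerator.prepare(dataloader)

for (batch,) in dataloader:
    pass  # training step would go here
```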
transformers
24,027
closed
Fixes all hidden states output in FlaxT5
Fixes #23960 @sanchit-gandhi
06-05-2023 17:47:23
06-05-2023 17:47:23
_The documentation is not available anymore as the PR was closed or merged._<|||||>need to fix the tests<|||||>Hmm okay so the PyTorch model is also missing this, so we'd need to update it here. Continuing the discussion in the issue!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Based on the discussion in the issue: https://github.com/huggingface/transformers/issues/23960#issuecomment-1579140627, we should probably not update the Flax and PyTorch T5 models since this would be a surprise breaking change to what is one of the most popular models in the lib. Feel free to make these changes locally @ztjhz if you need all the hidden states! Otherwise we can close this one for now cc @amyeroberts
transformers
24,025
closed
Fix device placement for model-parallelism in generate for encoder/de…
…coders When using model parallelism with encoder/decoder models, there is an issue in `generate`, which uses the encoder in isolation: the encoder won't put its outputs on the same device as the inputs like the whole model does, and we get mismatched device errors. Repro:
```py
import torch

from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M", device_map="auto")
inputs = {
    "input_ids": torch.tensor([[256047, 94124, 248079, 15697, 248203, 2]], device=0),
    "attention_mask": torch.tensor([[1, 1, 1, 1, 1, 1]], device=0),
    "forced_bos_token_id": 256079,
}
model.generate(**inputs, max_length=4000)
```
06-05-2023 16:38:30
06-05-2023 16:38:30
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,024
closed
Pin `deepspeed` to `0.9.2` for now
# What does this PR do? DeepSpeed 0.9.3 has some issues and introduced many more failures. See for example [here](https://github.com/huggingface/transformers/actions/runs/5166856163/jobs/9307371184) and [there](https://github.com/huggingface/transformers/actions/runs/5166856163/jobs/9307371242). Let's pin `deepspeed==0.9.2` for now 🙏
06-05-2023 15:52:54
06-05-2023 15:52:54
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @pacman100
transformers
24,023
closed
Fixing single candidate_label return.
# What does this PR do? Fix #24008 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
06-05-2023 15:30:17
06-05-2023 15:30:17
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,022
closed
Update README.md
# What does this PR do? Remove the mention of `prepare_for_doc_test.py`, as this file is no longer necessary and is removed in #23271 Thanks @NielsRogge for finding this.
06-05-2023 13:58:36
06-05-2023 13:58:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,021
closed
How to use LogitsWarper within .generate()?
### System Info - `transformers` version: 4.16.2 - Platform: Linux-5.4.0-100-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.7.1+cu101 (True) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Using a GPT2 Model, I want to affect the logits, as they are used in the generate function. To do this, I created a LogitsWarper, which is the only member of the LogitsProcessorList: ``` logits_processor_list = LogitsProcessorList([ TemperatureLogitsWarper(10.0) ]) ``` This is then given as an argument to the generate() function: ``` beam_output = model.generate( sample, ... logits_processor_list=logits_processor_list, early_stopping=True, num_return_sequences=k, ... do_sample=False, output_scores=True, return_dict_in_generate=True, length_penalty=0, ) ``` ### Expected behavior When changing the temperature, I expect the output sequence probabilities to be different, but they do not differ between a temperature of 1.0 and 10.0. My understanding is that logits_processor_list will be propagated to the specific search function that will be called (beam search, in this case). Should I do this differently, or is there an easier way to affect the temperature for all the search procedures? I know that .generate has a _temperature_ parameter, but this seems to only be used automatically when do_sample=True (https://github.com/huggingface/transformers/issues/22405) . How can I change the temperature when that is not the case?
06-05-2023 13:17:40
06-05-2023 13:17:40
Hi, see this for a great blog post on that: https://towardsdatascience.com/the-power-of-constrained-language-models-cf63b65a035d. The blog post includes a Colab notebook that showcases creating a custom `LogitsProcessor`<|||||>cc @gante <|||||>@JellePiepenbrock 👋 Temperature only has an effect on sample-based text generation methods. When no sampling is done (`do_sample=False`), the most likely token(s) will be selected at each token selection step, so temperature scaling doesn't change the output at all. Have a look at this blog post: https://huggingface.co/blog/how-to-generate
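For reference, a minimal sketch of a sampling call where temperature does take effect (assuming `model` and `tokenizer` are an already-loaded causal LM and its tokenizer):
```python
# Temperature only matters when sampling is enabled; with do_sample=False the most likely
# token(s) are always picked, so temperature scaling has no effect.
inputs = tokenizer("The meaning of life is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,        # required for temperature to have an effect
    temperature=1.5,
    num_return_sequences=3,
    max_new_tokens=30,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```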
transformers
24,020
closed
Fix typo in Llama docstrings
# What does this PR do? Fix typos in docs. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
06-05-2023 13:13:44
06-05-2023 13:13:44
@amyeroberts yes, I ran it locally, got the error and fixed it, here are the types in my code ``` type(tokenizer) <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'> type(inputs) <class 'torch.Tensor'> ```<|||||>@Kh4L Out of interest - could you share the checkpoint being used? Could you also run this snippet with the checkpoint and share the output? ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(MODEL_CHECKPOINT) prompt = "Hey, are you conscious? Can you talk to me?" inputs = tokenizer(prompt, return_tensors="pt") print(inputs) print(type(inputs)) print(type(tokenizer)) ``` The current changes need to be checked with a standard checkpoint for all the models affected here. For instance, if I run the snippet with the OPT checkpoint in the example `MODEL_CHECKPOINT = "facebook/opt-350m"` I get the following output: ``` {'input_ids': tensor([[ 2, 13368, 6, 32, 47, 13316, 116, 2615, 47, 1067, 7, 162, 116]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])} <class 'transformers.tokenization_utils_base.BatchEncoding'> <class 'transformers.models.gpt2.tokenization_gpt2_fast.GPT2TokenizerFast'> ``` <|||||>@amyeroberts The checkpoint is the 7B Llama converted to HF, and I get the same output! Sorry for the confusion, I was using `LlamaTokenizer` and not `AutoTokenizer` in my code.<|||||>Thanks for the detailed review! I am a bit confused as I can't see the latest commit https://github.com/Kh4L/pytorch-transformers/commit/62ea9f244b70ae190b20c69742027e277a241f2e in this PR, even though I pushed it to my branch https://github.com/Kh4L/pytorch-transformers/tree/fix_conscious_typo 🤔 <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@Kh4L GitHub PRs were down for part of yesterday - I think it was just that. I can see the commit now and all tests are passing :)
transformers
24,019
open
Add none check when instantiating tokenizer from auto
# What does this PR do? Many tokenizers require sentencepiece to be installed. When not installed the AutoTokenizer will fail with an unhelpful error: ``` File "/private/var/tmp/_bazel_anthony/26db13ca47961bc86c979e31d4f830d7/execroot/collimator/bazel-out/darwin_arm64-fastbuild/bin/ml_training/ai_assistant/scripts/prompts/collect_prompts_to_jsonl.runfiles/ml_training_transformers/site-packages/transformers/models/auto/tokenization_auto.py", line 395, in tokenizer_class_from_name return getattr(module, class_name) TypeError: getattr(): attribute name must be string ``` This PR fixes that by checking that the tokenizer name is not None before trying to instantiate it and it give a better error message in case it is None. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. - tokenizers: @ArthurZucker <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
06-05-2023 12:56:58
06-05-2023 12:56:58
Thanks a lot for opening the PR and sorry for the delay 🤗
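For reference, the underlying failure described in this PR can be reproduced minimally (the `None` value stands in for the missing-dependency case):
```python
import importlib

module = importlib.import_module("transformers")
class_name = None  # what the lookup ends up with when e.g. sentencepiece is missing

try:
    getattr(module, class_name)
except TypeError as err:
    print(err)  # "attribute name must be string"
# The PR adds a `class_name is not None` check so a clearer error can be raised instead.
```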
transformers
24,018
closed
Fix `MobileViTV2` checkpoint name
# What does this PR do? For `tests/models/mobilevitv2/test_modeling_mobilevitv2.py::MobileViTV2ModelTest::test_model_from_pretrained`, we get ```bash (line 433) OSError: apple/mobilevitv2-1.0-imagenet1k-256 is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' ``` This PR changes the checkpoint name to avoid this failure. **Question**: Should we upload the model to `apple/mobilevitv2-1.0-imagenet1k-256` and change all checkpoint names in the code to `apple/mobilevitv2-1.0-imagenet1k-256`? cc @shehanmunasinghe
06-05-2023 12:47:04
06-05-2023 12:47:04
_The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh Sorry, my bad, I'd assumed because the checkpoints in the PR were under the org they were already uploaded and didn't verify. > Question: Should we upload the model to apple/mobilevitv2-1.0-imagenet1k-256 and change all checkpoint names in the code to apple/mobilevitv2-1.0-imagenet1k-256? Yep! <|||||>OK, so I should change everything to `apple` in this PR instead. I am not a member of `apple` on `Hub`. Maybe I can ask @hollance to help me on this?<|||||>Also, I'd like to suggest moving the following checkpoints under `apple` org as well. ``` shehan97/mobilevitv2-1.0-voc-deeplabv3 shehan97/mobilevitv2-2.0-imagenet1k-256 shehan97/mobilevitv2-1.5-voc-deeplabv3 ``` <|||||>@amyeroberts checkpoint is uploaded to `apple`. All tests pass now 🙏 <|||||>Thanks. I missed the other 3 ones 😭 <|||||>> Also, I'd like to suggest moving the following checkpoints under `apple` org as well. > > ``` > shehan97/mobilevitv2-1.0-voc-deeplabv3 > > shehan97/mobilevitv2-2.0-imagenet1k-256 > > shehan97/mobilevitv2-1.5-voc-deeplabv3 > ``` Thank you @shehanmunasinghe for the heads up 🤗
transformers
24,017
closed
Skip `test_multi_gpu_data_parallel_forward` for `MobileViTV2ModelTest`
# What does this PR do? Skip `test_multi_gpu_data_parallel_forward` for `MobileViTV2ModelTest`. This passes on CI, but the other 2x tests running after this one are all failing with `CUDA error: misaligned address`. (Similar to #21991)
06-05-2023 12:35:01
06-05-2023 12:35:01
_The documentation is not available anymore as the PR was closed or merged._<|||||>Yes, see #21991. That PR mentioned torch 2.0, but this test are skipped for some other modes even before that PR for other reasons.
transformers
24,016
closed
Add DINOv2
# What does this PR do? This PR adds DINOv2. Fixes #23739 #23773 To do: - [x] transfer checkpoints to the facebook org (when are we going to have a meta org on the hub?)
06-05-2023 12:31:02
06-05-2023 12:31:02
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,015
closed
change to suitable with half precision in tpu
I have made changes to your statement to make it suitable for a pull request on GitHub. Please review the following: Description: I am utilizing a TPU for training and have observed that when half_precision_backend is set to 'auto', it is automatically assigned to 'cuda_amp'. However, this causes a bug since torch.cuda is not available on TPUs. To resolve this issue, I have implemented a conditional check: if the device is XLA (TPU), it switches to 'cpu_amp' instead, which avoids calling torch.cuda when using a TPU. However, it appears that this change has led to a decrease in TPU speed. I would appreciate it if you could review my modification and provide suggestions for improvements.
06-05-2023 10:22:17
06-05-2023 10:22:17
cc @sgugger
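A minimal sketch of the kind of conditional described in this PR (assuming `is_torch_tpu_available` is the check used; this is not the exact diff):
```python
from transformers.utils import is_torch_tpu_available

half_precision_backend = "auto"
if half_precision_backend == "auto":
    # torch.cuda is not available on TPU, so fall back to CPU autocast there
    half_precision_backend = "cpu_amp" if is_torch_tpu_available() else "cuda_amp"
print(half_precision_backend)
```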
transformers
24,014
closed
fix accelerator prepare during eval only mode
# What does this PR do? 1. As mentioned in https://github.com/huggingface/transformers/pull/23957#issuecomment-1573810900, currently the accelerate `prepare` method is only called during the training loop. If the user directly runs `eval`/`predict` without the training loop, the model isn't prepared, leading to incorrect behaviour. This PR is aimed at fixing that. 2. Should be merged after https://github.com/huggingface/accelerate/pull/1540
06-05-2023 10:08:56
06-05-2023 10:08:56
_The documentation is not available anymore as the PR was closed or merged._<|||||>The thing is that applying mixed precision in eval-only mode won't work unless we prepare the model.
transformers
24,013
closed
[doc-build] Use new github workflows
null
06-05-2023 09:55:24
06-05-2023 09:55:24
_The documentation is not available anymore as the PR was closed or merged._<|||||>Supercded by https://github.com/huggingface/transformers/pull/24079
transformers
24,012
closed
[No merge] Just a Test
# What does this PR do? Just a Test
06-05-2023 09:06:58
06-05-2023 09:06:58
transformers
24,011
closed
fix trainer slow tests related to hyperparam search
# What does this PR do? 1. With the Accelerate integration in Trainer, Hyperparam Search tests were failing. This PR fixes it.
06-05-2023 08:24:32
06-05-2023 08:24:32
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,008
closed
Zero-shot image classification with single-label results in 'float is not iterable' error
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu118 (False) - Tensorflow version (GPU?): 2.12.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (cpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @Narsil ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction **Colab notebook**: https://colab.research.google.com/gist/josephrocca/fdf9537e06fd15d2b5a81727fd70d56b/zero-shot-classification-with-single-label.ipynb ```py !pip install transformers !wget https://i.imgur.com/RKsLoNB.png from transformers import pipeline image_classifier = pipeline("zero-shot-image-classification", model="openai/clip-vit-large-patch14-336") text_classifier = pipeline(model="facebook/bart-large-mnli") # multi-label text - works ✅ text_classifier("houston, we have a problem with the thruster", candidate_labels=["astronaut", "forest cabin", "rabbit and lion"]) # single-label text - works ✅ text_classifier("houston, we have a problem with the thruster", candidate_labels=["astronaut"]) # multi-label image - works ✅ image_classifier("RKsLoNB.png", candidate_labels = ["astronaut", "forest cabin", "rabbit and lion"]) # single-label image - doesn't work ❌ image_classifier("RKsLoNB.png", candidate_labels = ["astronaut"]) ``` ### Expected behavior Like in the text case, I'd expect it to not give an error, and importantly, I'd expect it to give a value between 0 and 1, rather than giving a value of exactly 1 (again, like in the text classification case). In other words, if you provide only 1 label, the scores change from being *relative to other label scores* to being a simple **similarity score** between the text and the image - i.e. a binary classification (with the underlying score exposed so you can choose your own threshold).
06-05-2023 07:15:59
06-05-2023 07:15:59
It might make sense to have some sort of explicit option for opting into the "similarity score" mode - that may be less surprising e.g. for people that are using the pipeline for a variable number of labels, and expect the scores to mean the same thing regardless of how many labels are passed in. But if that approach is taken, then it seems like it would require a breaking change, since zero-shot text classification doesn't return a score of 1 if you pass a single label.<|||||>Yes, the odd single-label behavior is legacy; I don't think it's something we want to support in the same way for a new pipeline. That being said, having a parameter to deactivate normalization, and at the very least *not crashing*, is desirable. <|||||>Created a PR to just fix the failure (it will just return 1.0 all the time)
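Until such a parameter exists, one possible workaround is to query the underlying CLIP model directly for the raw image-text similarity (a sketch; thresholding is left to the caller):
```python
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14-336")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14-336")

image = Image.open(requests.get("https://i.imgur.com/RKsLoNB.png", stream=True).raw)
inputs = processor(text=["astronaut"], images=image, return_tensors="pt", padding=True)
# Raw image-text similarity, shape (1, 1); no softmax over labels is applied
similarity = model(**inputs).logits_per_image
print(similarity)
```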
transformers
24,007
closed
[WIP] Hivemind Integration
# What does this PR do? This PR (will) add integration for hivemind (https://github.com/learning-at-home/hivemind), a PyTorch library for decentralized deep learning across the Internet. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier --!> Library: <!-- - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker --!> - trainer: @sgugger <!-- Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
06-05-2023 05:32:40
06-05-2023 05:32:40
The idea is to allow for an "infinite run" with hivemind where users can join using the hivemind DHT. The DHT will have to be initialized before initiating the TrainingArguments so that the user can mess with it more freely; the optimizer kwargs will then just be passed on internally. Most of the changes are related to `max_steps`. Although hivemind should be able to work with it, I think it would be better to not use it, to let it scale freely. (In other words, the user will not be able to predict the amount of peers that will join well enough to scale max_steps accordingly.) Right now I am having some issues with `steps_trained_in_current_epoch`, the tqdm progress bar, and wandb being enabled out of nowhere. However, it seems it can run some steps, it just doesn't report it (wandb or progress bar). Some feedback would be nice.<|||||>> Thanks for your PR! This rewrites a lot of the `Trainer` internals, so maybe it should be better to define a `Trainer` subclass hosted on the `hivemind` side? I was thinking of that too, but then what would happen with the sub trainers? (Seq2Seq for example) I have no problem with that though.<|||||>> Right now I am having some issues with steps_trained_in_current_epoch, the tqdm progress bar, and wandb being enabled out of nowhere. However, it seems it can run some steps, it just doesn't report it (wandb or progress bar) @sgugger Edit: solved by `self.total_batched_samples = self.total_batched_samples + 1`<|||||>Will reopen this once I believe it's good enough and has a clean integration. For now I will just use it internally
transformers
24,006
closed
Unexpect behaviour
I have a base model and a trained LoRA model, and a new model saved with merge_and_unload(). But loading the saved model always results in OOM; is that normal? This shouldn't happen, in my opinion: the saved model is the same size as the 7B. ![image](https://github.com/huggingface/transformers/assets/21303438/a588f3fb-00fc-4d2c-8dd6-8f7d138d42ca) And the script I use to load the model is the same.
06-05-2023 05:20:21
06-05-2023 05:20:21
cc @younesbelkada and @pacman100 But they won't be able to help you without seeing a reproducer of the issue.<|||||>This is very weird. I wish I could post reproducible code here, but that could only be done by open-sourcing all my code, which is limited due to policy. However, I think the merged LoRA model and the base model should be essentially the same, so why would it even get OOM? Is there any possible reason?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,005
closed
Zero-shot classification pipeline does not appear to batch examples
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 ### Who can help? @narsi ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Execute the zero-shot classification pipeline and vary the number of labels: ``` hf_pipeline = pipeline('zero-shot-classification', model="facebook/bart-large-mnli", device=device) hf_pipeline("top 10 destinations to visit this summer", candidate_labels=["travel", "lifestyle", "technology"]) ``` ### Expected behavior If we increase the number of labels, when running inference on a GPU we would expect latency to remain relatively constant as the inputs to the underlying entailment model should be batched together. This is not explicitly documented, but the forum post announcing the zero-shot pipeline does say this: >Assuming you’re using the same model, the pipeline is likely faster because it batches the inputs. If you pass a single sequence with 4 labels, you have an effective batch size of 4, and the pipeline will pass these through the model in a single pass. In practice though we see latency increase more or less linearly with label count (compared against a naive implementation batching up inferences to bart-mnli for the same inputs): ![image](https://github.com/huggingface/transformers/assets/31768/ed1f8ee3-c44c-405e-b247-90704c628459) Here is the colab used to make the graph above: https://colab.research.google.com/drive/19YiQFDcJUm8iz0azYWX35I6-vQ-bTheC?usp=sharing that demonstrates the issue.
06-05-2023 04:31:31
06-05-2023 04:31:31
cc @Narsil <|||||>> Assuming you’re using the same model, the pipeline is likely faster because it batches the inputs. If you pass a single sequence with 4 labels, you have an effective batch size of 4, and the pipeline will pass these through the model in a single pass. This comment could be old, this has been changed 1yr+ ago. Now the batching happens when using `pipeline(..., batch_size=4)` (should you want this batch_size). The batching happens regardless on the number of candidate_labels, and this was changed exactly for this reason. If we batched on candidate_labels automatically, users couldn't control the memory requirements nicely, so it would OOM easily with large number of labels, and couldn't batch more than number of labels if the GPU allowed for it. So now `batch_size=n` will batch `n` samples for all forward passes, in could be `< len(candidate_labels)` or `> len(candidate_labels)`. Meaning you have a much finer control over the batching mecanism. This is documented at the `pipeline` level since this behavior is orthogonal to each specific pipeline's behavior.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,003
closed
No inf checks were recorded for this optimizer
### System Info transformers 4.30.0.dev0 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction `trainer = transformers.Trainer( model = model, train_dataset=data, args =training_args, data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), ) trainer.train() ` ### Expected behavior ``` --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) /tmp/ipykernel_1639/4032920361.py in <cell line: 1>() ----> 1 trainer.train() ~/.local/lib/python3.8/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1659 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size 1660 ) -> 1661 return inner_training_loop( 1662 args=args, 1663 resume_from_checkpoint=resume_from_checkpoint, ~/.local/lib/python3.8/site-packages/transformers/trainer.py in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 2013 optimizer_was_run = scale_before <= scale_after 2014 else: -> 2015 self.optimizer.step() 2016 2017 if optimizer_was_run: ~/.local/lib/python3.8/site-packages/accelerate/optimizer.py in step(self, closure) 132 elif self.scaler is not None: 133 scale_before = self.scaler.get_scale() --> 134 self.scaler.step(self.optimizer, closure) 135 self.scaler.update() 136 scale_after = self.scaler.get_scale() ~/.local/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py in step(self, optimizer, *args, **kwargs) 370 self.unscale_(optimizer) 371 --> 372 assert len(optimizer_state["found_inf_per_device"]) > 0, "No inf checks were recorded for this optimizer." 373 374 retval = self._maybe_opt_step(optimizer, optimizer_state, *args, **kwargs) AssertionError: No inf checks were recorded for this optimizer. ```
06-05-2023 03:07:25
06-05-2023 03:07:25
A misconfigured `LoraConfig` caused the issue
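The exact misconfiguration isn't shown in this thread; for reference, a typical working LoRA setup looks roughly like this (values are illustrative and `base_model` is assumed to be an already-loaded causal LM):
```python
from peft import LoraConfig, TaskType, get_peft_model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,   # must match the model/head being trained
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # sanity-check that some parameters are trainable
```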
transformers
24,002
open
Addition of test code for GPTNeoX Flax support
@sanchit-gandhi I have added test code for the GPTNeoX Flax support in #22950. I implemented it based on the fork at https://github.com/OhadRubin/transformers from the above PR and on the [Flax GPT-Neo](https://github.com/gojiteji/transformers/blob/fix_flax_gpt_neox/tests/models/gpt_neo/test_modeling_flax_gpt_neo.py) test code. When running the tests following [the doc](https://huggingface.co/docs/transformers/add_new_model#2-next-prepare-your-environment), the log displayed the following output:
```
platform linux -- Python 3.9.16, pytest-7.3.1, pluggy-1.0.0
rootdir: /myhomedir/transformers
configfile: setup.cfg
plugins: anyio-3.6.2
collected 43 items

tests/models/gpt_neox/test_modeling_flax_gpt_neox.py sssssssssssssssssssssssssssssssssssssssssss [100%]

=================================================== 43 skipped in 1.97s ===================================================
```
06-05-2023 02:53:15
06-05-2023 02:53:15
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24002). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @gojiteji! Thanks for picking-up the Flax GPT Neo PR! Would you mind rebasing onto main: ``` git fetch upstream git rebase upstream main ``` And then force pushing the changes: ``` git push -f origin fix_flax_gpt_neox ``` This will then isolate the changes from your PR amongst the other ones<|||||>Hey @gojiteji - not sure if you pushed or force pushed? See previous comment: https://github.com/huggingface/transformers/pull/24002#issuecomment-1577141369 Let's see if we can revive the commit history here. In the case that we can't, we probably need to open a new PR for this<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @gojiteji - feel free to open a new PR for this if you still want to continue the integration. Currently not sure which bits are new since the commit history is broken, but am more than happy to help with any questions / queries on a fresh PR!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,000
closed
changed unused args from error to warning
the value error now breaks the code, while it can run perfectly without the unused arguments. This happens for example in https://huggingface.co/OpenAssistant/falcon-40b-sft-mix-1226/discussions/2 # What does this PR do? Fixes issue referenced here: https://huggingface.co/OpenAssistant/falcon-40b-sft-mix-1226/discussions/2 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Library: - generate: @gante
06-04-2023 21:05:47
06-04-2023 21:05:47
This feels like a limitation of being able to discern between a typo (error) and an actual unused argument (warning), but I understand :(
transformers
23,999
closed
TensorBoard callback no longer adds hparams
# What does this PR do? The `TensorBoardCallback.on_train_begin` function calls `add_hparams` with an empty `metric_dict` parameter, meaning that the only information that is logged is from `args.sanitized_dict()`. This is duplicated information from the previous `self.tb_writer.add_text("args", args.to_json_string())`. As a result, the `add_hparams` call is unnecessary and this PR removes it. Adjacently fixes https://github.com/huggingface/transformers/issues/21821. I'm aware of the following TensorBoard related documentation: - https://huggingface.co/docs/hub/tensorboard - https://huggingface.co/docs/transformers/main/en/main_classes/callback#transformers.integrations.TensorBoardCallback None of these docs need to be updated in this PR. A sanity check test: ```python """ Minimal replication of https://github.com/huggingface/transformers/issues/21821 """ from os import listdir from shutil import rmtree from transformers import TrainingArguments from transformers.integrations import TensorBoardCallback output_dir = "output_dir" args = TrainingArguments(output_dir=output_dir, logging_dir=output_dir) def has_extra_file(): return len(listdir(output_dir)) > 1 class DummyControl: should_training_stop = None class DummyState: is_world_process_zero = True is_hyper_param_search = False class NoHParamsTensorBoardCallback(TensorBoardCallback): # This is a copy of `TensorBoardCallback.on_train_begin` unless specified otherwise def on_train_begin(self, args, state, control, **kwargs): if not state.is_world_process_zero: return log_dir = None if state.is_hyper_param_search: trial_name = state.trial_name if trial_name is not None: log_dir = os.path.join(args.logging_dir, trial_name) if self.tb_writer is None: self._init_summary_writer(args, log_dir) if self.tb_writer is not None: self.tb_writer.add_text("args", args.to_json_string()) if "model" in kwargs: model = kwargs["model"] if hasattr(model, "config") and model.config is not None: model_config_json = model.config.to_json_string() self.tb_writer.add_text("model_config", model_config_json) ########################### # START no hparams call ########################### # Original code: # # Version of TensorBoard coming from tensorboardX does not have this method. # if hasattr(self.tb_writer, "add_hparams"): # self.tb_writer.add_hparams(args.to_sanitized_dict(), metric_dict={}) ########################### # END no hparams call ########################### rmtree(output_dir, ignore_errors=True) TensorBoardCallback().on_train_begin(args, DummyState(), DummyControl()) print(f"With the call to `add_hparams`, has extra file is {has_extra_file()}") rmtree(output_dir, ignore_errors=True) NoHParamsTensorBoardCallback().on_train_begin(args, DummyState(), DummyControl()) print(f"WithOUT the call to `add_hparams`, has extra file is {has_extra_file()}") rmtree(output_dir, ignore_errors=True) # Cleanup ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? 
Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed.
06-04-2023 20:33:36
06-04-2023 20:33:36
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23999). All of your documentation changes will be reflected on that endpoint.
transformers
23,991
closed
When using TimeSeriesTransformerForPrediction, able to train model but not predict on test set
Using the code from this article: https://huggingface.co/blog/time-series-transformers, I was able to successfully run the data they utilized. However when attempting with my own data in the same format, I am able to get to training the model successfully but get an error when creating predictions. I have a dataset with three columns, "start" as the index, "item_id", and "target", as matching the code from the article. `test_sample = test_dataset[0] test_sample.keys()` _OUT `dict_keys(['start', 'target', 'item_id', 'feat_static_cat'])`_ Here is the full error message: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Input In [59], in <cell line: 5>() 3 forecasts = [] 5 for batch in test_dataloader: ----> 6 outputs = model.generate( 7 static_categorical_features=batch["static_categorical_features"].to(device) 8 if config.num_static_categorical_features > 0 9 else None, 10 static_real_features=batch["static_real_features"].to(device) 11 if config.num_static_real_features > 0 12 else None, 13 past_time_features=batch["past_time_features"].to(device), 14 past_values=batch["past_values"].to(device), 15 future_time_features=batch["future_time_features"].to(device), 16 past_observed_mask=batch["past_observed_mask"].to(device), 17 ) 18 forecasts.append(outputs.sequences.cpu().numpy()) File ~/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs) 24 @functools.wraps(func) 25 def decorate_context(*args, **kwargs): 26 with self.clone(): ---> 27 return func(*args, **kwargs) File ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:1760, in TimeSeriesTransformerForPrediction.generate(self, past_values, past_time_features, future_time_features, past_observed_mask, static_categorical_features, static_real_features, output_attentions, output_hidden_states) 1661 @torch.no_grad() 1662 def generate( 1663 self, (...) 1671 output_hidden_states: Optional[bool] = None, 1672 ) -> SampleTSPredictionOutput: 1673 r""" 1674 Greedily generate sequences of sample predictions from a model with a probability distribution head. 1675 (...) 1758 multivariate predictions. 1759 """ -> 1760 outputs = self( 1761 static_categorical_features=static_categorical_features, 1762 static_real_features=static_real_features, 1763 past_time_features=past_time_features, 1764 past_values=past_values, 1765 past_observed_mask=past_observed_mask, 1766 future_time_features=future_time_features, 1767 future_values=None, 1768 output_attentions=output_attentions, 1769 output_hidden_states=output_hidden_states, 1770 return_dict=True, 1771 use_cache=True, 1772 ) 1774 decoder = self.model.get_decoder() 1775 enc_last_hidden = outputs.encoder_last_hidden_state File ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:1603, in TimeSeriesTransformerForPrediction.forward(self, past_values, past_time_features, past_observed_mask, static_categorical_features, static_real_features, future_values, future_time_features, future_observed_mask, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, output_hidden_states, output_attentions, use_cache, return_dict) 1600 if future_values is not None: 1601 use_cache = False -> 1603 outputs = self.model( 1604 past_values=past_values, 1605 past_time_features=past_time_features, 1606 past_observed_mask=past_observed_mask, 1607 static_categorical_features=static_categorical_features, 1608 static_real_features=static_real_features, 1609 future_values=future_values, 1610 future_time_features=future_time_features, 1611 decoder_attention_mask=decoder_attention_mask, 1612 head_mask=head_mask, 1613 decoder_head_mask=decoder_head_mask, 1614 cross_attn_head_mask=cross_attn_head_mask, 1615 encoder_outputs=encoder_outputs, 1616 past_key_values=past_key_values, 1617 output_hidden_states=output_hidden_states, 1618 output_attentions=output_attentions, 1619 use_cache=use_cache, 1620 return_dict=return_dict, 1621 ) 1623 prediction_loss = None 1624 params = None File ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:1452, in TimeSeriesTransformerModel.forward(self, past_values, past_time_features, past_observed_mask, static_categorical_features, static_real_features, future_values, future_time_features, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, output_hidden_states, output_attentions, use_cache, return_dict) 1445 encoder_outputs = BaseModelOutput( 1446 last_hidden_state=encoder_outputs[0], 1447 hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None, 1448 attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None, 1449 ) 1451 dec_input = transformer_inputs[:, self.config.context_length :, ...] 
-> 1452 decoder_outputs = self.decoder( 1453 inputs_embeds=dec_input, 1454 attention_mask=decoder_attention_mask, 1455 encoder_hidden_states=encoder_outputs[0], 1456 head_mask=decoder_head_mask, 1457 cross_attn_head_mask=cross_attn_head_mask, 1458 past_key_values=past_key_values, 1459 use_cache=use_cache, 1460 output_attentions=output_attentions, 1461 output_hidden_states=output_hidden_states, 1462 return_dict=return_dict, 1463 ) 1465 if not return_dict: 1466 return decoder_outputs + encoder_outputs + (loc, scale, static_feat) File ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:1178, in TimeSeriesTransformerDecoder.forward(self, attention_mask, encoder_hidden_states, encoder_attention_mask, head_mask, cross_attn_head_mask, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict) 1167 layer_outputs = torch.utils.checkpoint.checkpoint( 1168 create_custom_forward(decoder_layer), 1169 hidden_states, (...) 1175 None, 1176 ) 1177 else: -> 1178 layer_outputs = decoder_layer( 1179 hidden_states, 1180 attention_mask=attention_mask, 1181 encoder_hidden_states=encoder_hidden_states, 1182 encoder_attention_mask=encoder_attention_mask, 1183 layer_head_mask=(head_mask[idx] if head_mask is not None else None), 1184 cross_attn_layer_head_mask=( 1185 cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None 1186 ), 1187 past_key_value=past_key_value, 1188 output_attentions=output_attentions, 1189 use_cache=use_cache, 1190 ) 1191 hidden_states = layer_outputs[0] 1193 if use_cache: File ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:611, in TimeSeriesTransformerDecoderLayer.forward(self, hidden_states, attention_mask, encoder_hidden_states, encoder_attention_mask, layer_head_mask, cross_attn_layer_head_mask, past_key_value, output_attentions, use_cache) 609 # cross_attn cached key/values tuple is at positions 3,4 of present_key_value tuple 610 cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None --> 611 hidden_states, cross_attn_weights, cross_attn_present_key_value = self.encoder_attn( 612 hidden_states=hidden_states, 613 key_value_states=encoder_hidden_states, 614 attention_mask=encoder_attention_mask, 615 layer_head_mask=cross_attn_layer_head_mask, 616 past_key_value=cross_attn_past_key_value, 617 output_attentions=output_attentions, 618 ) 619 hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) 620 hidden_states = residual + hidden_states File ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:371, in TimeSeriesTransformerAttention.forward(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, output_attentions) 368 value_states = past_key_value[1] 369 elif is_cross_attention: 370 # cross_attentions --> 371 key_states = self._shape(self.k_proj(key_value_states), -1, bsz) 372 value_states = self._shape(self.v_proj(key_value_states), -1, bsz) 373 elif past_key_value is not None: 374 # reuse k, v, self_attention File ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/.local/lib/python3.8/site-packages/torch/nn/modules/linear.py:114, in Linear.forward(self, input) 113 def forward(self, input: Tensor) -> Tensor: --> 114 return F.linear(input, self.weight, self.bias) RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasLtMatmul( ltHandle, computeDesc.descriptor(), &alpha_val, mat1_ptr, Adesc.descriptor(), mat2_ptr, Bdesc.descriptor(), &beta_val, result_ptr, Cdesc.descriptor(), result_ptr, Cdesc.descriptor(), &heuristicResult.algo, workspace.data_ptr(), workspaceSize, at::cuda::getCurrentCUDAStream())`
06-04-2023 16:30:55
06-04-2023 16:30:55
cc @kashif <|||||>thanks @brett1099 for the issue... just having a look at the error to see if i can figure out the issue...<|||||>@brett1099 can you kindly confirm that the length of the target time series in the test set are bigger that the corresponding training set target arrays by exactly the `prediction_length`?<|||||>Hi @kashif , yes, I can confirm that is the case. Here are the details below from my code: `train_dataset` Dataset({ features: ['start', 'target', 'item_id', 'feat_static_cat'], num_rows: 366 }) `test_dataset` Dataset({ features: ['start', 'target', 'item_id', 'feat_static_cat'], num_rows: 366 }) (limited from over 2k to 366 rows to match the article in case there was a memory issue) `train_sample = train_dataset[0]` `test_sample = test_dataset[0]` `print(len(test_sample["target"]))` 811 `print(len(train_sample["target"]))` 755 `prediction_length = 56` `len(train_df["target"])` 274546 `len(test_df["target"])` 295042 I checked that (295042 - 274546) / 366 = 56, the prediction length.<|||||>@brett1099 can you kindly for the purpose of debugging train and do inference on the CPU device and see what the error is?<|||||>@kashif , seems I am not able to successfully separate onto only the CPU device. Here is the code below attempting to run, specified device as cpu. `from accelerate import Accelerator from torch.optim import AdamW accelerator = Accelerator() #device = accelerator.device device = "cpu" model.to(device) optimizer = AdamW(model.parameters(), lr=2e-4, betas=(0.9, 0.995), weight_decay=1e-2) model, optimizer, train_dataloader = accelerator.prepare( model, optimizer, train_dataloader, ) model.train() for epoch in range(3): for idx, batch in enumerate(train_dataloader): optimizer.zero_grad() outputs = model( static_categorical_features=batch["static_categorical_features"].to(device) if config.num_static_categorical_features > 0 else None, static_real_features=batch["static_real_features"].to(device) if config.num_static_real_features > 0 else None, past_time_features=batch["past_time_features"].to(device), past_values=batch["past_values"].to(device), future_time_features=batch["future_time_features"].to(device), future_values=batch["future_values"].to(device), past_observed_mask=batch["past_observed_mask"].to(device), future_observed_mask=batch["future_observed_mask"].to(device), ) loss = outputs.loss # Backpropagation accelerator.backward(loss) optimizer.step() if idx % 100 == 0: print(loss.item()) Error message: -------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Input In [64], in <cell line: 18>() 19 for idx, batch in enumerate(train_dataloader): 20 optimizer.zero_grad() ---> 21 outputs = model( 22 static_categorical_features=batch["static_categorical_features"] 23 if config.num_static_categorical_features > 0 24 else None, 25 static_real_features=batch["static_real_features"] 26 if config.num_static_real_features > 0 27 else None, 28 past_time_features=batch["past_time_features"], 29 past_values=batch["past_values"], 30 future_time_features=batch["future_time_features"], 31 future_values=batch["future_values"], 32 past_observed_mask=batch["past_observed_mask"], 33 future_observed_mask=batch["future_observed_mask"], 34 ) 35 loss = outputs.loss 37 # Backpropagation File ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call 
forward. 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:1603, in TimeSeriesTransformerForPrediction.forward(self, past_values, past_time_features, past_observed_mask, static_categorical_features, static_real_features, future_values, future_time_features, future_observed_mask, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, output_hidden_states, output_attentions, use_cache, return_dict) 1600 if future_values is not None: 1601 use_cache = False -> 1603 outputs = self.model( 1604 past_values=past_values, 1605 past_time_features=past_time_features, 1606 past_observed_mask=past_observed_mask, 1607 static_categorical_features=static_categorical_features, 1608 static_real_features=static_real_features, 1609 future_values=future_values, 1610 future_time_features=future_time_features, 1611 decoder_attention_mask=decoder_attention_mask, 1612 head_mask=head_mask, 1613 decoder_head_mask=decoder_head_mask, 1614 cross_attn_head_mask=cross_attn_head_mask, 1615 encoder_outputs=encoder_outputs, 1616 past_key_values=past_key_values, 1617 output_hidden_states=output_hidden_states, 1618 output_attentions=output_attentions, 1619 use_cache=use_cache, 1620 return_dict=return_dict, 1621 ) 1623 prediction_loss = None 1624 params = None File ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:1424, in TimeSeriesTransformerModel.forward(self, past_values, past_time_features, past_observed_mask, static_categorical_features, static_real_features, future_values, future_time_features, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, output_hidden_states, output_attentions, use_cache, return_dict) 1421 use_cache = use_cache if use_cache is not None else self.config.use_cache 1422 return_dict = return_dict if return_dict is not None else self.config.use_return_dict -> 1424 transformer_inputs, loc, scale, static_feat = self.create_network_inputs( 1425 past_values=past_values, 1426 past_time_features=past_time_features, 1427 past_observed_mask=past_observed_mask, 1428 static_categorical_features=static_categorical_features, 1429 static_real_features=static_real_features, 1430 future_values=future_values, 1431 future_time_features=future_time_features, 1432 ) 1434 if encoder_outputs is None: 1435 enc_input = transformer_inputs[:, : self.config.context_length, ...] 
File ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:1331, in TimeSeriesTransformerModel.create_network_inputs(self, past_values, past_time_features, static_categorical_features, static_real_features, past_observed_mask, future_values, future_time_features) 1329 static_feat = torch.cat((static_real_features, static_feat), dim=1) 1330 if static_categorical_features is not None: -> 1331 embedded_cat = self.embedder(static_categorical_features) 1332 static_feat = torch.cat((embedded_cat, static_feat), dim=1) 1333 expanded_static_feat = static_feat.unsqueeze(1).expand(-1, time_feat.shape[1], -1) File ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:76, in TimeSeriesFeatureEmbedder.forward(self, features) 72 else: 73 cat_feature_slices = [features] 75 return torch.cat( ---> 76 [ 77 embed(cat_feature_slice.squeeze(-1)) 78 for embed, cat_feature_slice in zip(self.embedders, cat_feature_slices) 79 ], 80 dim=-1, 81 ) File ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:77, in <listcomp>(.0) 72 else: 73 cat_feature_slices = [features] 75 return torch.cat( 76 [ ---> 77 embed(cat_feature_slice.squeeze(-1)) 78 for embed, cat_feature_slice in zip(self.embedders, cat_feature_slices) 79 ], 80 dim=-1, 81 ) File ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/.local/lib/python3.8/site-packages/torch/nn/modules/sparse.py:160, in Embedding.forward(self, input) 159 def forward(self, input: Tensor) -> Tensor: --> 160 return F.embedding( 161 input, self.weight, self.padding_idx, self.max_norm, 162 self.norm_type, self.scale_grad_by_freq, self.sparse) File ~/.local/lib/python3.8/site-packages/torch/nn/functional.py:2210, in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 2204 # Note [embedding_renorm set_grad_enabled] 2205 # XXX: equivalent to 2206 # with torch.no_grad(): 2207 # torch.embedding_renorm_ 2208 # remove once script supports set_grad_enabled 2209 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 2210 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select) <|||||>hmm.. 
no I guess now things are on different devices... ok perhaps try to train with the number of categoricals set to 0 (i.e. no categorical features...) <|||||>When changing `num_static_categorical_features` from 1 to 0 in the config code, the model was able to train and test successfully. Have not checked the quality of the run as it was a brief train for testing purposes.

```python
config = TimeSeriesTransformerConfig(
    prediction_length=prediction_length,
    # context length:
    context_length=prediction_length * 2,
    # lags coming from helper given the freq:
    lags_sequence=lags_sequence,
    # we'll add 2 time features ("month of year" and "age", see further):
    num_time_features=len(time_features) + 1,
    # we have a single static categorical feature, namely time series ID:
    num_static_categorical_features=0,  # changed from 1
    # it has 366 possible values:
    cardinality=[len(train_dataset)],
    # the model will learn an embedding of size 2 for each of the 366 possible values:
    embedding_dimension=[2],
    # transformer params:
    encoder_layers=4,
    decoder_layers=4,
    d_model=32,
)

model = TimeSeriesTransformerForPrediction(config)
```
<|||||>ok cool then my suspicion was correct... the categorical ids in the train and test set need to fit within the cardinality... and it is this cardinality which is in the config... it seems there is some categorical id which is larger than the cardinality you gave the model when you configured it<|||||>Hmm, I am a bit confused by that conclusion. The train and test sets have the same IDs and match 1-1.

`train_df.item_id.unique()`
array(['UPC1', 'UPC10', 'UPC1060910111', 'UPC1061962711', 'UPC1060180111', 'UPC1090018031', ...

`test_df.item_id.unique()`
array(['UPC1', 'UPC10', 'UPC1060910111', 'UPC1061962711', 'UPC1060180111', 'UPC1090018031', ...

I ensured this as I have the following code as well: `test_df = test_df[test_df.item_id.isin(train_df.item_id.unique())]`

Is it that they are not able to be strings?<|||||>Still after running the following code to reassign the strings to numbers, I still get the original error.
``` train_classes = np.array(train_df["item_id"]) train_classnames, train_indices = np.unique(train_classes, return_inverse=True) test_classes = np.array(test_df["item_id"]) test_classnames, test_indices = np.unique(test_classes, return_inverse=True) train_df["item_id"] = train_indices test_df["item_id"] = test_indices ``` `train_df.item_id.unique()` array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365]) `test_df.item_id.unique()` array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 
309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365])<|||||>ok sorry for the confusion! `item_id` can be a string, however the `static_categorical_features` for each of the time series can potentially contain the corresponding integer id, e.g. for the very first one `static_categorical_features = [0]` etc., and thus the `cardinality = [366]`, and you can specify the embedding vector dimension... can you confirm that the `cardinality` is correct? <|||||>Ahh you solved it! I made a mistake thinking the item_id was utilized as the static_categorical_features. You are absolutely correct and found the issue. Here was the code causing the issue:

```python
class ProcessStartField():
    ts_id = 0

    def __call__(self, data):
        data["start"] = data["start"].to_timestamp()
        data["feat_static_cat"] = [self.ts_id]
        self.ts_id += 1
        return data
```

```python
from gluonts.itertools import Map

process_start = ProcessStartField()
list_ds_train = list(Map(process_start, ds_train))
list_ds_test = list(Map(process_start, ds_test))
```

```python
from datasets import Dataset, Features, Value, Sequence

features = Features(
    {
        "start": Value("timestamp[s]"),
        "target": Sequence(Value("float32")),
        "feat_static_cat": Sequence(Value("uint64")),
        # "feat_static_real": Sequence(Value("float32")),
        # "feat_dynamic_real": Sequence(Sequence(Value("uint64"))),
        # "feat_dynamic_cat": Sequence(Sequence(Value("uint64"))),
        "item_id": Value("string"),
    }
)
```

```python
train_dataset = Dataset.from_list(list_ds_train, features=features)
test_dataset = Dataset.from_list(list_ds_test, features=features)
```

This was causing the train_dataset to get static_categorical_features values from 0-365, and the test_dataset was then getting values from 366-731. I corrected the code to perform the mapping individually for each dataset with the function, so that the ids would not continue from the last number. Thanks so much for all your help!
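For reference, the corrected mapping can be as simple as using one mapper per split (a minimal sketch, reusing the `ProcessStartField` class and the `ds_train`/`ds_test` objects from above; the per-split instances are the only new assumption):

```python
from gluonts.itertools import Map

# one ProcessStartField per split: because `self.ts_id += 1` binds an instance
# attribute, each fresh instance starts counting from 0 again, so both splits
# get feat_static_cat ids 0..365 instead of the test split continuing at 366
list_ds_train = list(Map(ProcessStartField(), ds_train))
list_ds_test = list(Map(ProcessStartField(), ds_test))
```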
transformers
23,990
closed
add gradient checkpointing for the llama's final layernorm module
Without this, when tuning with LoRA + gradient checkpointing, the last transformer layer's LoRA weights won't be updated! For example, if we use this callback to log the weight change of the LoRA weights in each layer, we will see in TensorBoard that there is no weight update for the last layer.

```python
class ParamsTensorBoardCallback(TensorBoardCallback):
    def __init__(self, tb_writer=None, params=None, process_name=lambda x: x):
        super().__init__(tb_writer)
        self.params = params
        self._process_name = process_name

    def on_step_end(self, args, state, control, **kwargs):
        if state.global_step % args.logging_steps == 0:
            dict_ = {}
            model = kwargs["model"]
            for name in self.params:
                param = model.get_parameter(name)
                param = param.flatten()
                name_p = self._process_name(name)
                dict_tmp = {
                    f"{name_p}_mean": param.mean().item(),
                    f"{name_p}_max": param.max().item(),
                    f"{name_p}_q75": param.quantile(0.75).item(),
                    f"{name_p}_q25": param.quantile(0.25).item(),
                    f"{name_p}_min": param.min().item(),
                    f"{name_p}_median": param.median().item(),
                    f"{name_p}_std": param.std().item(),
                }
                dict_.update(dict_tmp)
            self.on_log(args, state, control, logs=dict_, **kwargs)


def get_params_for_logging(model):
    ls_params = []
    for name, param in model.named_parameters():
        if param.requires_grad:
            ls_params.append(name)
    return ls_params


ls_params = get_params_for_logging(model)
tb_cb = ParamsTensorBoardCallback(
    None,
    ls_params,
    process_name=lambda x: x[30:],
)

trainer = Trainer(
    model=model,
    train_dataset=train_data,
    eval_dataset=val_data,
    args=args,
    data_collator=transformers.DataCollatorForSeq2Seq(tokenizer, return_tensors="pt", padding=True) if False else data_collator,
    callbacks=[tb_cb],
)
```

# What does this PR do?

Fixes # (issue)

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
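For context, the code change itself is small, roughly of this shape (a sketch of the idea, not the exact diff):

```python
# inside LlamaModel.forward, after the decoder layer loop
# (self.norm, self.gradient_checkpointing and hidden_states come from that surrounding context)
if self.gradient_checkpointing and self.training:
    # run the final layer norm through checkpointing as well, like the decoder layers
    hidden_states = torch.utils.checkpoint.checkpoint(self.norm, hidden_states)
else:
    hidden_states = self.norm(hidden_states)
```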
06-04-2023 15:19:19
06-04-2023 15:19:19
cc @younesbelkada <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> > Hi @zhaoqf123 Thanks for bringing this up! Sadly I couldn't reproduce the issue, here is the snippet I used: > > ```python > import torch > from transformers import AutoModelForCausalLM > from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training > > model_id = "huggyllama/llama-7b" > > config = LoraConfig( > r=16, > lora_alpha=32, > lora_dropout=0.05, > bias="none", > task_type="CAUSAL_LM" > ) > > model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True) > > # this should activate gradient checkpointing > model = prepare_model_for_int8_training(model) > > optimizer = torch.optim.Adam(model.parameters(), lr=1e-3) > > model = get_peft_model(model, config) > > assert model.training and model.is_gradient_checkpointing > > dummy_input = torch.LongTensor([[0, 1, 0, 1]]).to(0) > logits = model(dummy_input).logits > loss = logits.sum() > loss.backward() > optimizer.step() > > for n, param in model.named_parameters(): > if "lora" in n: > assert param.grad is not None > ``` > > And as you can see the gradients are always non-`None`. Per my understanding as long as the weight have an associated gradient its value will be updated. @younesbelkada Thank you for your reply. I modify your script based on my training setup with V100 GPU as follows, and it can be reproduced. ```python import torch from transformers import AutoModelForCausalLM from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training # 1. load pretrained model # model_id = "huggyllama/llama-7b" model_id = "decapoda-research/llama-7b-hf" cache_dir = "/mnt/workspace/kgg/hf_models" model = AutoModelForCausalLM.from_pretrained(model_id, cache_dir=cache_dir, device_map="auto", load_in_8bit=True) # this should activate gradient checkpointing model = prepare_model_for_int8_training(model) # 2. config peft model config = LoraConfig( r=16, lora_alpha=32, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", # target_modules=["layers.31.self_attn.q_proj"] ) model = get_peft_model(model, config) assert model.training and model.is_gradient_checkpointing # 3. set up optimizer optimizer = torch.optim.Adam(model.parameters(), lr=1e-3) # 4. train with torch.autocast("cuda"): dummy_input = torch.LongTensor([[0, 1, 0, 1]]).to(0) model.train() logits = model(dummy_input).logits loss = logits.sum() loss.backward() optimizer.step() for n, param in model.named_parameters(): if "lora" in n: print(n) assert param.grad is not None ``` You can see that the params of the last-layer (layer31) has None grad. The main differences of the codes from yours is 3 parts: 1. The optimizer setup is after `get_peft_model` 2. `with torch.autocast("cuda")` 3. `model.train()` as in the `trainsformers/trainer.py` script By the way, my torch version is 2.1.0a0+fe05266 <|||||>Indeed I also managed to reproduce, this time with the latest stable version of torch, also note that this bug also occurs with any other model, for instance OPT. ```python import torch from transformers import AutoModelForCausalLM from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training # 1. load pretrained model # model_id = "huggyllama/llama-7b" model_id = "facebook/opt-350m" # model_id = "decapoda-research/llama-7b-hf" model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True) # this should activate gradient checkpointing model = prepare_model_for_int8_training(model) # 2. 
config peft model config = LoraConfig( r=16, lora_alpha=32, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", # target_modules=["layers.31.self_attn.q_proj"] ) model = get_peft_model(model, config) assert model.training and model.is_gradient_checkpointing # 3. set up optimizer optimizer = torch.optim.Adam(model.parameters(), lr=1e-3) # 4. train with torch.autocast("cuda"): dummy_input = torch.LongTensor([[0, 1, 0, 1]]).to(0) model.train() logits = model(dummy_input).logits loss = logits.sum() loss.backward() optimizer.step() for n, param in model.named_parameters(): if "lora" in n: print(n) assert param.grad is not None ``` However, it seems that the bug disappears when the `torch.autocast("cuda")` context manager is removed. It appears the issue can be reproduced even without PEFT: ```python import torch from transformers import AutoModelForCausalLM model_id = "facebook/opt-350m" model = AutoModelForCausalLM.from_pretrained(model_id).to(0) model.gradient_checkpointing_enable() model.train() assert model.training and model.is_gradient_checkpointing # 3. set up optimizer optimizer = torch.optim.Adam(model.parameters(), lr=1e-3) # 4. train with torch.cuda.amp.autocast(True, dtype=torch.float16): dummy_input = torch.LongTensor([[0, 1, 0, 1]]).to(0) model.train() logits = model(dummy_input).logits loss = logits.sum() loss.backward() optimizer.step() for n, param in model.named_parameters(): if param.grad is None: print(n) ``` And this gives: ```bash model.decoder.layers.23.self_attn.k_proj.weight model.decoder.layers.23.self_attn.k_proj.bias model.decoder.layers.23.self_attn.v_proj.weight model.decoder.layers.23.self_attn.v_proj.bias model.decoder.layers.23.self_attn.q_proj.weight model.decoder.layers.23.self_attn.q_proj.bias model.decoder.layers.23.self_attn.out_proj.weight model.decoder.layers.23.self_attn.out_proj.bias model.decoder.layers.23.fc1.weight model.decoder.layers.23.fc1.bias model.decoder.layers.23.fc2.weight model.decoder.layers.23.fc2.bias ``` Meaning the entire last layer doesn't get updated. From what I can see in the trainer, currently we support mixed precision autocast (`torch.xxx.amp`) context managers: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2688-L2705 - and replacing the context manager you have put to `torch.cuda.amp.autocast(True, dtype=torch.float16)` reproduces also the bug. I am not sure if this is a bug on transformers side or torch but I would say OK to merge this and apply this patch to other common architectures (by opening a good first issue maybe?). Wdyt @sgugger @amyeroberts @ArthurZucker <|||||>In line with what @sgugger said, also not sure it even makes a lot of sense to checkpoint something as small as the layer norm grads? Thanks for flagging the issue and proposing a fix! <|||||>> Hi @zhaoqf123 Thanks a lot for helping us finding this important issue. After some digging and internal discussion, we have found a broader fix that includes most models that supports gradient checkpointing: #24247 . To credit you from your help, I have added you as a co-author in that PR and we will close this PR once #24247 will get merged Thanks a lot ! @younesbelkada Thank you for your acknowledgement. Although I have several years of experiences in machine learning (with tf), I just start using `transformers` and `pytorch` for couple of months. It really took me 4 days and nights to figure out where the bug occurs and a workaround solution. Thank you very much for the `transformers` and `peft` project. 
They are really very helpful.<|||||>Closing the PR as https://github.com/huggingface/transformers/pull/24247 being merged Again thanks so much @zhaoqf123 for all your help on this and your great investigation! <|||||>hi @zhaoqf123 Some training setups that were running fine in a single T4 (with 7GB peak memory) now OOM with that PR, I wanted to double check if you observe the same behaviour in your case as well? For reference, check: https://github.com/huggingface/transformers/pull/24420#issuecomment-1602345683<|||||>Hi @zhaoqf123 @pacman100 has found the rootcause of your original issue and we found out that the recent accelerate integration of Trainer silently fixed your bug. I can confirm I don't get any None grad with llama models using Trainer + autocast: https://github.com/huggingface/transformers/pull/24420#issuecomment-1602680953 | I believe 3 weeks ago the Trainer + accelerate integration was not released yet that could explain why you had the bug Can you try out your script after we revert the PR and let us know? Thanks !<|||||>> hi @zhaoqf123 Some training setups that were running fine in a single T4 (with 7GB peak memory) now OOM with that PR, I wanted to double check if you observe the same behaviour in your case as well? > > For reference, check: [#24420 (comment)](https://github.com/huggingface/transformers/pull/24420#issuecomment-1602345683) @younesbelkada Sorry for the late reply. Just got vocation last 3 days. Yes, I also noticed that the memory consumption increased a lot when making the last layer updatable. For llama 7B, when using V100-32GB, the VRAM increases from 34% to 42%, which is not proportional to the increase of updatable params.<|||||>> Hi @zhaoqf123 @pacman100 has found the rootcause of your original issue and we found out that the recent accelerate integration of Trainer silently fixed your bug. I can confirm I don't get any None grad with llama models using Trainer + autocast: [#24420 (comment)](https://github.com/huggingface/transformers/pull/24420#issuecomment-1602680953) | I believe 3 weeks ago the Trainer + accelerate integration was not released yet that could explain why you had the bug Can you try out your script after we revert the PR and let us know? Thanks ! @younesbelkada May I know how should I try out? For example, re-install transformer: `pip install --upgrade git+https://github.com/huggingface/transformers.git`, and then run my script without `with torch.autocast("cuda"):`?<|||||>@zhaoqf123 thanks for the reply! Yes you can try out that way, uninstall your current transformers lib, reinstall it from source and see if the original bug still persists<|||||>> @zhaoqf123 thanks for the reply! Yes you can try out that way, uninstall your current transformers lib, reinstall it from source and see if the original bug still persists @younesbelkada After re-install transformers from the source, in my V100, if I remove `with torch.autocast("cuda")`, I encounter [this issue](https://github.com/tloen/alpaca-lora/issues/203). If I don't remove `with torch.autocast("cuda")`, the last layer still not updatable. In my 3090 GPU, it works by removing `with torch.autocast("cuda")`. It could be due to the implementation of bitsandbytes for GPU computability < 7.5. Because GPU<7.5 does not have int8 core production, so bitsandbytes do int8 mutliplication using fp16. Check also this [issue](https://github.com/TimDettmers/bitsandbytes/issues/240) and this [issue](https://github.com/TimDettmers/bitsandbytes/issues/165#issuecomment-1518711138)
transformers
23,989
closed
load_in_8bit=True returns gibberish when inferencing on multi GPU
### System Info ```Shell - `Accelerate` version: 0.18.0 - Platform: Linux-5.15.0-72-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Numpy version: 1.24.3 - PyTorch version (GPU?): 1.13.1+cu117 (True) - `Accelerate` default config: Not found - using transformers from here, as recommended by openassistant: https://huggingface.co/OpenAssistant/oasst-rlhf-2-llama-30b-7k-steps-xor git clone https://github.com/huggingface/transformers.git cd transformers git checkout d04ec99bec8a0b432fc03ed60cea9a1a20ebaf3c other info: - ubuntu 22.04 - bitsandbytes = 0.38.1 - CUDA 118 detected by bitsandbytes +-----------------------------------------------------------------------------+ | NVIDIA-SMI 525.105.17 Driver Version: 525.105.17 CUDA Version: 12.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla V100S-PCI... On | 00000000:00:06.0 Off | 0 | | N/A 31C P0 41W / 250W | 32198MiB / 32768MiB | 37% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 1 Tesla V100S-PCI... On | 00000000:00:07.0 Off | 0 | | N/A 31C P0 36W / 250W | 31784MiB / 32768MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 2 Tesla V100S-PCI... On | 00000000:00:08.0 Off | 0 | | N/A 33C P0 36W / 250W | 31784MiB / 32768MiB | 23% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 3 Tesla V100S-PCI... On | 00000000:00:09.0 Off | 0 | | N/A 33C P0 36W / 250W | 31784MiB / 32768MiB | 16% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ ``` ### Who can help? Big Model Inference: @sgugger @muellerzr ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below): https://huggingface.co/OpenAssistant/oasst-rlhf-2-llama-30b-7k-steps-xor ### Reproduction create a fresh venv and run this: ``` python3.10 -m venv dev_1 source dev_1/bin/activate pip install --upgrade pip git clone https://github.com/huggingface/transformers.git cd transformers git checkout d04ec99bec8a0b432fc03ed60cea9a1a20ebaf3c pip install . pip install torch==1.13.1 accelerate==0.18.0 sentencepiece==0.1.98 protobuf==3.20.1 pip install scipy pip install bitsandbytes==0.38.1 export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH python -m bitsandbytes ``` Then open python, load weights and infer with `load_in8bit=True`. The `model.generate` arguments differ, due to the inf/nan bug with `CUDA 11.8` and bitsandbytes `0.38.1` see https://github.com/tloen/alpaca-lora/issues/408 **Update: see section expected behaviour where I run the exact same `model.generate` call with the same parameters.** ``` python from transformers import LlamaTokenizer, LlamaForCausalLM, TextStreamer tokenizer = LlamaTokenizer.from_pretrained("/mnt/models/oasst-rlhf-2-llama-30b") model = LlamaForCausalLM.from_pretrained("/mnt/models/oasst-rlhf-2-llama-30b", device_map="auto", load_in_8bit=True) streamer = TextStreamer(tokenizer, skip_prompt=True) message = "<|prompter|>This is a demo of a text streamer. 
What's a cool fact about ducks?<|assistant|>" inputs = tokenizer(message, return_tensors="pt").to(model.device) tokens = model.generate(**inputs, max_new_tokens=25, do_sample=True, num_beams=1, temperature=0.9, streamer=streamer, remove_invalid_values=True) #if I don't use remove invalid, I get the inf/nan bug, see https://github.com/tloen/alpaca-lora/issues/408 ⁇ <|prompter|> This is a demo of a text streamer. What's a cool fact about ducks? <|assistant|> enaracht blood searches anomдів kun Nap wherever learned Laufcalendar ^C #manu ``` Here's what happens when I load the model without the `load_in_8bit=True` flag (good!): ``` python from transformers import LlamaTokenizer, LlamaForCausalLM, TextStreamer tokenizer = LlamaTokenizer.from_pretrained("/mnt/models/oasst-rlhf-2-llama-30b") model = LlamaForCausalLM.from_pretrained("/mnt/models/oasst-rlhf-2-llama-30b", device_map="auto") streamer = TextStreamer(tokenizer, skip_prompt=True) message = "<|prompter|>This is a demo of a text streamer. What's a cool fact about ducks?<|assistant|>" inputs = tokenizer(message, return_tensors="pt").to(model.device) tokens = model.generate(**inputs, max_new_tokens=25, do_sample=True, temperature=0.9, streamer=streamer) Response: The Duck: Small yet Mighty Did you know that, while ducks are relatively ``` I also tried running it without `do_sample=True`: ``` >>> tokens = model.generate(**inputs, max_new_tokens=25, temperature=0.9, streamer=streamer) ⁇ <|prompter|> This is a demo of a text streamer. What's a cool fact about ducks? <|assistant|> ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ``` ### Expected behavior I would expect the following: ``` python from transformers import LlamaTokenizer, LlamaForCausalLM, TextStreamer tokenizer = LlamaTokenizer.from_pretrained("/mnt/models/oasst-rlhf-2-llama-30b") model = LlamaForCausalLM.from_pretrained("/mnt/models/oasst-rlhf-2-llama-30b", device_map="auto", load_in_8bit=True) streamer = TextStreamer(tokenizer, skip_prompt=True) message = "<|prompter|>This is a demo of a text streamer. What's a cool fact about ducks?<|assistant|>" inputs = tokenizer(message, return_tensors="pt").to(model.device) tokens = model.generate(**inputs, max_new_tokens=25, do_sample=True, num_beams=1, temperature=0.9, streamer=streamer, remove_invalid_values=True) Response: The Duck: Small yet Mighty Did you know that, while ducks are relatively ``` edit (additional testing): - I also tried setting `use_cache=False` in `model.generate()`, as hinted in https://huggingface.co/OpenAssistant/falcon-40b-sft-top1-560/discussions/1, but still gibberish output. - I also tried running this with `torch==2.0.1`, but same error behavior. - I tried downgrading from `CUDA 11.8` to `CUDA 11.6` and `bitsandbytes` from `0.38.1` to `0.31.8`, which solves the inf/nan problem (see https://github.com/tloen/alpaca-lora/issues/408). So now I can run the exact same `model.generate()` code with the only difference between the two being `load_in_8bit=True` in the model loading step: ``` python Python 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. 
>>> from transformers import LlamaTokenizer, LlamaForCausalLM, TextStreamer >>> >>> tokenizer = LlamaTokenizer.from_pretrained("/mnt/models/oasst-rlhf-2-llama-30b") >>> model = LlamaForCausalLM.from_pretrained("/mnt/models/oasst-rlhf-2-llama-30b", device_map="auto", load_in_8bit=True) #returns gibberish Overriding torch_dtype=None with `torch_dtype=torch.float16` due to requirements of `bitsandbytes` to enable model loading in mixed int8. Either pass torch_dtype=torch.float16 or don't pass this argument at all to remove this warning. ===================================BUG REPORT=================================== Welcome to bitsandbytes. For bug reports, please use this form: ...... ================================================================================ dev_1/lib/python3.10/site-packages/bitsandbytes/cuda_setup/paths.py:110: UserWarning: /usr/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu: did not contain libcudart.so as expected! Searching further paths... warn( CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64... CUDA SETUP: CUDA path found: /usr/local/cuda/lib64/libcudart.so CUDA SETUP: Highest compute capability among GPUs detected: 7.0 CUDA_SETUP: Detected CUDA version 116 CUDA_SETUP: Loading binary dev_1/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda116_nocublaslt.so... Loading checkpoint shards: 0%| | 0/7 [00:00<?, ?it/s] dev_1/lib/python3.10/site-packages/bitsandbytes/functional.py:227: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() return ct.c_void_p(A.data.storage().data_ptr()) Loading checkpoint shards: 100%|███████████████████████| 7/7 [00:54<00:00, 7.82s/it] >>> streamer = TextStreamer(tokenizer, skip_prompt=True) >>> message = "<|prompter|>This is a demo of a text streamer. What's a cool fact about ducks?<|assistant|>" >>> inputs = tokenizer(message, return_tensors="pt").to(model.device) >>> tokens = model.generate(**inputs, max_new_tokens=25, do_sample=True, temperature=0.9, streamer=streamer) xeabaselogiccmtnzak accomplish MeyifullylandaMP Marshallcitaemann beskre Gil zoomaki Bon companion Vert Mindsetti ``` The issue persists, so it's independent from the inf/nan bug and 100% confirmed caused by a combination of using both `load_in_8bit=True` and multi gpu. This code returns comprehensible language when: - it fits on a single GPU's VRAM and use `load_in_8bit=True`, - or when you load on multi GPU, but without the argument `load_in_8bit=True`.
06-04-2023 14:43:15
06-04-2023 14:43:15
Not OP's issue but for others finding this issue like I did: be aware that if you're using 2x RTX 4090, there's a [driver bug](https://forums.developer.nvidia.com/t/standard-nvidia-cuda-tests-fail-with-dual-rtx-4090-linux-box/233202/51) in Linux causing corrupt results. For me switching from the 530 to the 525 drivers fixed a multi-gpu gibberish issue when using `load_in_4bit` from the latest bitsnbytes.<|||||>cc @younesbelkada <|||||>thanks for the issue, I see you are using a V100, can you try to upgrade `bitsandbytes` to the version `0.39.0`? Also I think that the kernels for 4bit inference are much more robust in my experience (tried 4bit inference in a V100 and it seemed to work fine) - can you try them and let us know how it goes?<|||||>Thanks, I will check it out! If it runs in 4 bit, then the multi-gpu issue is immediately solved, since the model would fit on a single card. Slightly worried about the inference speed from what I read. It did 1 token/s in 16-bit, spread across 4 V100Ss. But that'd be a new issue :) <|||||>**Sort of solved** Thanks @younesbelkada by updating to 4bit requirements as mentioned in https://huggingface.co/blog/4bit-transformers-bitsandbytes ``` accelerate 0.20.0.dev0 bitsandbytes 0.39.0 peft 0.4.0.dev0 transformers 4.30.0.dev0 ``` `load_in_4bit=True` produces comprehensible text on multi-gpu!!! (Even though it now only takes 28GB of VRAM, and thus would also fit on a single V100S GPU.) ``` python from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer tokenizer = AutoTokenizer.from_pretrained("falcon-40b-sft-mix-1226", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("falcon-40b-sft-mix-1226", device_map="auto", offload_folder="offload", trust_remote_code=True, load_in_4bit=True) streamer = TextStreamer(tokenizer, skip_prompt=True) message = "<|prompter|>This is a demo of a text streamer. What's a cool fact about ducks?<|endoftext|><|assistant|>" inputs = tokenizer(message, return_tensors="pt").to(model.device) tokens = model.generate(**inputs, max_new_tokens=25, do_sample=True, temperature=0.9, streamer=streamer) ``` yields a very cool: ``` /generation/utils.py:1140: UserWarning: The following `model_kwargs` are not used by the model: ['token_type_ids'] (note: typos in the generate arguments will also show up in this list) warnings.warn( Setting `pad_token_id` to `eos_token_id`:11 for open-end generation. Ducks have waterproof feathers which are excellent at repelling water.<|endoftext|> ``` remarks/questions: - Yes it no longer blocks me, but the original issue remains for 8 bit. So shall I keep this bug open, or is the solution for everyone to move to `load_in_4bit`, instead of `load_in_8bit`? - 4 bit inference is not noticeably slower (or faster) than 16 bit, great! - using `load_in 4bit` also solves the inf/nan bug that `load_in_8bit` has.<|||||>Awesome! Great also to hear that load_in_4bit is as fast (maybe faster) than 16bit in V100s, this is very interesting! We can keep this issue open for community members to chime in and comment on your observations. I think that the potential 8bit incompatibility issue on V100s might need to be reported on `bitsanbytes` library. Again thanks @Daryl149 !<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
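A follow-up note for anyone hitting this: since the thread above establishes that 8-bit output is only garbled when the checkpoint is sharded across several GPUs, one workaround is to pin the whole model to a single card (a sketch with a placeholder path, assuming the model fits in that card's VRAM):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/mnt/models/your-model"  # placeholder, not a real checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_8bit=True,
    device_map={"": 0},  # map every module to GPU 0 instead of sharding across cards
)
```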
transformers
23,988
closed
Bug Report - Tokenizer Issue with Tensor Device Assignment in transformers/pipelines/text_generation.py
### System Info ``` - `transformers` version: 4.29.0.dev0 - Platform: Linux-5.4.0-110-generic-x86_64-with-glibc2.10 - Python version: 3.8.13 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.0 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): 2.9.3 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @ArthurZucker @Narsil ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction code example of https://huggingface.co/tiiuae/falcon-7b from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model = "tiiuae/falcon-7b" ```python tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, trust_remote_code=True, device=torch.device(0), ) sequences = pipeline( "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:", max_length=200, do_sample=True, top_k=10, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` ### Expected behavior every thing should work well on gpu:0. ### More Detail Dear Transformers GitHub team, I hope this message finds you well. I would like to report a bug that I have identified in the code block provided below: https://github.com/huggingface/transformers/blob/118e9810687dd713b6be07af79e80eeb1d916908/src/transformers/pipelines/text_generation.py#L203-L263 ```python def preprocess(self, prompt_text, prefix="", handle_long_generation=None, **generate_kwargs): inputs = self.tokenizer( prefix + prompt_text, padding=False, add_special_tokens=False, return_tensors=self.framework ) inputs["prompt_text"] = prompt_text # Rest of the code... ``` Problem description: The bug occurs in the `preprocess` method of the code block above. It seems that after tokenizing the `prompt_text`, the resulting tensor is not automatically moved to the same device where the model is located. This behavior causes an error when attempting to use the GPU for computation, specifically resulting in the following error message: ``` RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select) ``` Expected behavior: Ideally, the `preprocess` method should ensure that the tensor generated by the tokenizer is moved to the same device as the model before further processing or generation takes place. This would prevent any device mismatch errors when using the GPU for computations. Possible solution: To resolve this issue, I suggest modifying the `preprocess` method to include a device assignment step for the tokenized tensor. By using the `to` method, the tensor can be explicitly moved to the device where the model is located. 
Here's an example of how this could be implemented: ```python inputs = self.tokenizer( prefix + prompt_text, padding=False, add_special_tokens=False, return_tensors=self.framework ) inputs["input_ids"] = inputs["input_ids"].to(self.model.device) if "attention_mask" in inputs: inputs["attention_mask"] = inputs["attention_mask"].to(self.model.device) ``` By adding these lines of code, the tensor and its attention mask (if applicable) will be correctly assigned to the same device as the model. I hope this information helps in resolving the issue. Please let me know if you need any further clarification or assistance. Thank you for your attention to this matter. Best regards, Wenrui
06-04-2023 13:23:24
06-04-2023 13:23:24
The `pipeline` will handle the device if you pass it to it with the `device` kwarg.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
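As a concrete illustration of the comment above, here is a minimal sketch (the checkpoint and prompt are illustrative, not the Falcon setup from the report): when `device` is passed to `pipeline`, the tokenized inputs are placed on that device for you, so no manual `.to()` calls are needed.

```python
import torch
from transformers import pipeline

# Let the pipeline own the device: it moves both the model and the
# tokenized inputs to cuda:0 when a GPU is available, otherwise stays on CPU.
generator = pipeline(
    "text-generation",
    model="gpt2",  # illustrative checkpoint only
    device=0 if torch.cuda.is_available() else -1,
)

print(generator("Daniel: Hello, Girafatron!", max_new_tokens=20)[0]["generated_text"])
```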
transformers
23,987
closed
How can I use DeepSpeed along with the Lion optimizer?
How can I use DeepSpeed along with the Lion optimizer?
06-04-2023 11:45:18
06-04-2023 11:45:18
Please use the [forums](https://discuss.huggingface.co/) for such questions.<|||||>@sgugger I have posted one: https://discuss.huggingface.co/t/how-to-using-lion-optimizer/42270 I hope you can give some help.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
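The forum thread is the right place for a full answer, but as a rough sketch under stated assumptions (the third-party `lion-pytorch` package, and no claim about DeepSpeed compatibility): with plain `Trainer`, a custom optimizer can be handed in through the `optimizers` argument; when DeepSpeed is enabled, the optimizer is normally declared in the DeepSpeed JSON config instead, and combining it with a hand-built optimizer may not be supported.

```python
from lion_pytorch import Lion  # assumed third-party dependency
from transformers import Trainer, TrainingArguments


def build_trainer(model, train_dataset):
    args = TrainingArguments(output_dir="out", per_device_train_batch_size=8, num_train_epochs=3)
    # Lion generally wants a smaller learning rate and a larger weight decay than AdamW.
    optimizer = Lion(model.parameters(), lr=1e-4, weight_decay=0.01)
    return Trainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        optimizers=(optimizer, None),  # None -> Trainer builds its default LR scheduler
    )
```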
transformers
23,986
closed
learning_rate behavior is not as expected when using transformers.TrainingArguments
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.15.0-72-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction uniform_finetune.py: ``` python trainer = transformers.Trainer( model=model, train_dataset=train_data, eval_dataset=val_data, args=transformers.TrainingArguments( per_device_train_batch_size=args.per_gpu_train_batch_size, gradient_accumulation_steps=args.gradient_accumulation_steps, warmup_steps=warmup_steps, num_train_epochs=args.epochs, learning_rate=args.learning_rate, fp16=True, logging_steps=20, evaluation_strategy="steps", save_strategy="steps", eval_steps=saving_step , save_steps=saving_step, output_dir=output_dir, save_total_limit=11, load_best_model_at_end=True if args.val_set_size > 0 else False, ddp_find_unused_parameters=False if ddp else None, ), data_collator=transformers.DataCollatorForSeq2Seq(tokenizer, return_tensors="pt", padding=True) ) ```
06-04-2023 09:08:52
06-04-2023 09:08:52
Could you explain what the bug is here?<|||||>> Could you explain what the bug is here? I'm sorry I didn't explain clearly, I will attach a simple script. In the previous version of transformer, this code behaves normally, and the learning rate decreases linearly like the green line. ![image](https://github.com/huggingface/transformers/assets/32215330/f73fe2d0-54b9-4c6d-bc2d-6e63ee1c2bac) But in the latest transformers, the learning rate decreases like the red line.<|||||>> > Could you explain what the bug is here? > > I'm sorry I didn't explain clearly, I will attach a simple script. In the previous version of transformer, this code behaves normally, and the learning rate decreases linearly like the green line. ![image](https://user-images.githubusercontent.com/32215330/244130497-f73fe2d0-54b9-4c6d-bc2d-6e63ee1c2bac.png) But in the latest transformers, the learning rate decreases like the red line. use this command: `torchrun --nnodes 1 --nproc_per_node 4 run.py --model_name_or_path bert-base-cased --task_name mrpc --do_train --max_seq_length 128 --per_device_train_batch_size 8 --gradient_accumulation_steps 2 --learning_rate 3e-5 --num_train_epochs 3 --output_dir ./mrpc --overwrite_output_dir --warmup_steps 1 --logging_steps 5` ``` python #!/usr/bin/env python # coding=utf-8 # Copyright 2020 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Finetuning the library models for sequence classification on GLUE.""" # You can also adapt this script on your own text classification task. Pointers for this are left as comments. import logging import os import random import sys from dataclasses import dataclass, field from typing import Optional import datasets import evaluate import numpy as np from datasets import load_dataset import transformers from transformers import ( AutoConfig, AutoModelForSequenceClassification, AutoTokenizer, DataCollatorWithPadding, EvalPrediction, HfArgumentParser, PretrainedConfig, Trainer, TrainingArguments, default_data_collator, set_seed, ) from transformers.trainer_utils import get_last_checkpoint from transformers.utils import check_min_version, send_example_telemetry from transformers.utils.versions import require_version # Will error if the minimal version of Transformers is not installed. Remove at your own risks. require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/text-classification/requirements.txt") task_to_keys = { "cola": ("sentence", None), "mnli": ("premise", "hypothesis"), "mrpc": ("sentence1", "sentence2"), "qnli": ("question", "sentence"), "qqp": ("question1", "question2"), "rte": ("sentence1", "sentence2"), "sst2": ("sentence", None), "stsb": ("sentence1", "sentence2"), "wnli": ("sentence1", "sentence2"), } logger = logging.getLogger(__name__) @dataclass class DataTrainingArguments: """ Arguments pertaining to what data we are going to input our model for training and eval. Using `HfArgumentParser` we can turn this class into argparse arguments to be able to specify them on the command line. 
""" task_name: Optional[str] = field( default=None, metadata={"help": "The name of the task to train on: " + ", ".join(task_to_keys.keys())}, ) dataset_name: Optional[str] = field( default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."} ) dataset_config_name: Optional[str] = field( default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."} ) max_seq_length: int = field( default=128, metadata={ "help": ( "The maximum total input sequence length after tokenization. Sequences longer " "than this will be truncated, sequences shorter will be padded." ) }, ) overwrite_cache: bool = field( default=False, metadata={"help": "Overwrite the cached preprocessed datasets or not."} ) pad_to_max_length: bool = field( default=True, metadata={ "help": ( "Whether to pad all samples to `max_seq_length`. " "If False, will pad the samples dynamically when batching to the maximum length in the batch." ) }, ) max_train_samples: Optional[int] = field( default=None, metadata={ "help": ( "For debugging purposes or quicker training, truncate the number of training examples to this " "value if set." ) }, ) max_eval_samples: Optional[int] = field( default=None, metadata={ "help": ( "For debugging purposes or quicker training, truncate the number of evaluation examples to this " "value if set." ) }, ) max_predict_samples: Optional[int] = field( default=None, metadata={ "help": ( "For debugging purposes or quicker training, truncate the number of prediction examples to this " "value if set." ) }, ) train_file: Optional[str] = field( default=None, metadata={"help": "A csv or a json file containing the training data."} ) validation_file: Optional[str] = field( default=None, metadata={"help": "A csv or a json file containing the validation data."} ) test_file: Optional[str] = field(default=None, metadata={"help": "A csv or a json file containing the test data."}) def __post_init__(self): if self.task_name is not None: self.task_name = self.task_name.lower() if self.task_name not in task_to_keys.keys(): raise ValueError("Unknown task, you should pick one in " + ",".join(task_to_keys.keys())) elif self.dataset_name is not None: pass elif self.train_file is None or self.validation_file is None: raise ValueError("Need either a GLUE task, a training/validation file or a dataset name.") else: train_extension = self.train_file.split(".")[-1] assert train_extension in ["csv", "json"], "`train_file` should be a csv or a json file." validation_extension = self.validation_file.split(".")[-1] assert ( validation_extension == train_extension ), "`validation_file` should have the same extension (csv or json) as `train_file`." @dataclass class ModelArguments: """ Arguments pertaining to which model/config/tokenizer we are going to fine-tune from. 
""" model_name_or_path: str = field( metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"} ) config_name: Optional[str] = field( default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"} ) tokenizer_name: Optional[str] = field( default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"} ) cache_dir: Optional[str] = field( default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"}, ) use_fast_tokenizer: bool = field( default=True, metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."}, ) model_revision: str = field( default="main", metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."}, ) use_auth_token: bool = field( default=False, metadata={ "help": ( "Will use the token generated when running `huggingface-cli login` (necessary to use this script " "with private models)." ) }, ) ignore_mismatched_sizes: bool = field( default=False, metadata={"help": "Will enable to load a pretrained model whose head dimensions are different."}, ) def main(): # See all possible arguments in src/transformers/training_args.py # or by passing the --help flag to this script. # We now keep distinct sets of args, for a cleaner separation of concerns. parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments)) if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): # If we pass only one argument to the script and it's the path to a json file, # let's parse it to get our arguments. model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1])) else: model_args, data_args, training_args = parser.parse_args_into_dataclasses() # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The # information sent is the one passed as arguments along with your Python/PyTorch versions. send_example_telemetry("run_glue", model_args, data_args) # Setup logging logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) if training_args.should_log: # The default of training_args.log_level is passive, so we set log level at info here to have that default. transformers.utils.logging.set_verbosity_info() log_level = training_args.get_process_log_level() logger.setLevel(log_level) datasets.utils.logging.set_verbosity(log_level) transformers.utils.logging.set_verbosity(log_level) transformers.utils.logging.enable_default_handler() transformers.utils.logging.enable_explicit_format() # Log on each process the small summary: logger.warning( f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}" + f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}" ) logger.info(f"Training/evaluation parameters {training_args}") # Detecting last checkpoint. last_checkpoint = None if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir: last_checkpoint = get_last_checkpoint(training_args.output_dir) if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0: raise ValueError( f"Output directory ({training_args.output_dir}) already exists and is not empty. " "Use --overwrite_output_dir to overcome." 
) elif last_checkpoint is not None and training_args.resume_from_checkpoint is None: logger.info( f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change " "the `--output_dir` or add `--overwrite_output_dir` to train from scratch." ) # Set seed before initializing model. set_seed(training_args.seed) # Get the datasets: you can either provide your own CSV/JSON training and evaluation files (see below) # or specify a GLUE benchmark task (the dataset will be downloaded automatically from the datasets Hub). # # For CSV/JSON files, this script will use as labels the column called 'label' and as pair of sentences the # sentences in columns called 'sentence1' and 'sentence2' if such column exists or the first two columns not named # label if at least two columns are provided. # # If the CSVs/JSONs contain only one non-label column, the script does single sentence classification on this # single column. You can easily tweak this behavior (see below) # # In distributed training, the load_dataset function guarantee that only one local process can concurrently # download the dataset. if data_args.task_name is not None: # Downloading and loading a dataset from the hub. raw_datasets = load_dataset( "glue", data_args.task_name, cache_dir=model_args.cache_dir, use_auth_token=True if model_args.use_auth_token else None, ) elif data_args.dataset_name is not None: # Downloading and loading a dataset from the hub. raw_datasets = load_dataset( data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir, use_auth_token=True if model_args.use_auth_token else None, ) else: # Loading a dataset from your local files. # CSV/JSON training and evaluation files are needed. data_files = {"train": data_args.train_file, "validation": data_args.validation_file} # Get the test dataset: you can provide your own CSV/JSON test file (see below) # when you use `do_predict` without specifying a GLUE benchmark task. if training_args.do_predict: if data_args.test_file is not None: train_extension = data_args.train_file.split(".")[-1] test_extension = data_args.test_file.split(".")[-1] assert ( test_extension == train_extension ), "`test_file` should have the same extension (csv or json) as `train_file`." data_files["test"] = data_args.test_file else: raise ValueError("Need either a GLUE task or a test file for `do_predict`.") for key in data_files.keys(): logger.info(f"load a local file for {key}: {data_files[key]}") if data_args.train_file.endswith(".csv"): # Loading a dataset from local csv files raw_datasets = load_dataset( "csv", data_files=data_files, cache_dir=model_args.cache_dir, use_auth_token=True if model_args.use_auth_token else None, ) else: # Loading a dataset from local json files raw_datasets = load_dataset( "json", data_files=data_files, cache_dir=model_args.cache_dir, use_auth_token=True if model_args.use_auth_token else None, ) # See more about loading any type of standard or custom dataset at # https://huggingface.co/docs/datasets/loading_datasets.html. # Labels if data_args.task_name is not None: is_regression = data_args.task_name == "stsb" if not is_regression: label_list = raw_datasets["train"].features["label"].names num_labels = len(label_list) else: num_labels = 1 else: # Trying to have good defaults here, don't hesitate to tweak to your needs. 
is_regression = raw_datasets["train"].features["label"].dtype in ["float32", "float64"] if is_regression: num_labels = 1 else: # A useful fast method: # https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.unique label_list = raw_datasets["train"].unique("label") label_list.sort() # Let's sort it for determinism num_labels = len(label_list) # Load pretrained model and tokenizer # # In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently # download model & vocab. config = AutoConfig.from_pretrained( model_args.config_name if model_args.config_name else model_args.model_name_or_path, num_labels=num_labels, finetuning_task=data_args.task_name, cache_dir=model_args.cache_dir, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) tokenizer = AutoTokenizer.from_pretrained( model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path, cache_dir=model_args.cache_dir, use_fast=model_args.use_fast_tokenizer, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) model = AutoModelForSequenceClassification.from_pretrained( model_args.model_name_or_path, from_tf=bool(".ckpt" in model_args.model_name_or_path), config=config, cache_dir=model_args.cache_dir, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ignore_mismatched_sizes=model_args.ignore_mismatched_sizes, ) # Preprocessing the raw_datasets if data_args.task_name is not None: sentence1_key, sentence2_key = task_to_keys[data_args.task_name] else: # Again, we try to have some nice defaults but don't hesitate to tweak to your use case. non_label_column_names = [name for name in raw_datasets["train"].column_names if name != "label"] if "sentence1" in non_label_column_names and "sentence2" in non_label_column_names: sentence1_key, sentence2_key = "sentence1", "sentence2" else: if len(non_label_column_names) >= 2: sentence1_key, sentence2_key = non_label_column_names[:2] else: sentence1_key, sentence2_key = non_label_column_names[0], None # Padding strategy if data_args.pad_to_max_length: padding = "max_length" else: # We will pad later, dynamically at batch creation, to the max sequence length in each batch padding = False # Some models have set the order of the labels to use, so let's make sure we do use it. label_to_id = None if ( model.config.label2id != PretrainedConfig(num_labels=num_labels).label2id and data_args.task_name is not None and not is_regression ): # Some have all caps in their config, some don't. label_name_to_id = {k.lower(): v for k, v in model.config.label2id.items()} if sorted(label_name_to_id.keys()) == sorted(label_list): label_to_id = {i: int(label_name_to_id[label_list[i]]) for i in range(num_labels)} else: logger.warning( "Your model seems to have been trained with labels, but they don't match the dataset: ", f"model labels: {sorted(label_name_to_id.keys())}, dataset labels: {sorted(label_list)}." 
"\nIgnoring the model labels as a result.", ) elif data_args.task_name is None and not is_regression: label_to_id = {v: i for i, v in enumerate(label_list)} if label_to_id is not None: model.config.label2id = label_to_id model.config.id2label = {id: label for label, id in config.label2id.items()} elif data_args.task_name is not None and not is_regression: model.config.label2id = {l: i for i, l in enumerate(label_list)} model.config.id2label = {id: label for label, id in config.label2id.items()} if data_args.max_seq_length > tokenizer.model_max_length: logger.warning( f"The max_seq_length passed ({data_args.max_seq_length}) is larger than the maximum length for the" f"model ({tokenizer.model_max_length}). Using max_seq_length={tokenizer.model_max_length}." ) max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length) def preprocess_function(examples): # Tokenize the texts args = ( (examples[sentence1_key],) if sentence2_key is None else (examples[sentence1_key], examples[sentence2_key]) ) result = tokenizer(*args, padding=padding, max_length=max_seq_length, truncation=True) # Map labels to IDs (not necessary for GLUE tasks) if label_to_id is not None and "label" in examples: result["label"] = [(label_to_id[l] if l != -1 else -1) for l in examples["label"]] return result with training_args.main_process_first(desc="dataset map pre-processing"): raw_datasets = raw_datasets.map( preprocess_function, batched=True, load_from_cache_file=not data_args.overwrite_cache, desc="Running tokenizer on dataset", ) if training_args.do_train: if "train" not in raw_datasets: raise ValueError("--do_train requires a train dataset") train_dataset = raw_datasets["train"] if data_args.max_train_samples is not None: max_train_samples = min(len(train_dataset), data_args.max_train_samples) train_dataset = train_dataset.select(range(max_train_samples)) if training_args.do_eval: if "validation" not in raw_datasets and "validation_matched" not in raw_datasets: raise ValueError("--do_eval requires a validation dataset") eval_dataset = raw_datasets["validation_matched" if data_args.task_name == "mnli" else "validation"] if data_args.max_eval_samples is not None: max_eval_samples = min(len(eval_dataset), data_args.max_eval_samples) eval_dataset = eval_dataset.select(range(max_eval_samples)) if training_args.do_predict or data_args.task_name is not None or data_args.test_file is not None: if "test" not in raw_datasets and "test_matched" not in raw_datasets: raise ValueError("--do_predict requires a test dataset") predict_dataset = raw_datasets["test_matched" if data_args.task_name == "mnli" else "test"] if data_args.max_predict_samples is not None: max_predict_samples = min(len(predict_dataset), data_args.max_predict_samples) predict_dataset = predict_dataset.select(range(max_predict_samples)) # Log a few random samples from the training set: if training_args.do_train: for index in random.sample(range(len(train_dataset)), 3): logger.info(f"Sample {index} of the training set: {train_dataset[index]}.") # Get the metric function if data_args.task_name is not None: metric = evaluate.load("glue", data_args.task_name) elif is_regression: metric = evaluate.load("mse") else: metric = evaluate.load("accuracy") # You can define your custom compute_metrics function. It takes an `EvalPrediction` object (a namedtuple with a # predictions and label_ids field) and has to return a dictionary string to float. 
def compute_metrics(p: EvalPrediction): preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions preds = np.squeeze(preds) if is_regression else np.argmax(preds, axis=1) result = metric.compute(predictions=preds, references=p.label_ids) if len(result) > 1: result["combined_score"] = np.mean(list(result.values())).item() return result # Data collator will default to DataCollatorWithPadding when the tokenizer is passed to Trainer, so we change it if # we already did the padding. if data_args.pad_to_max_length: data_collator = default_data_collator elif training_args.fp16: data_collator = DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8) else: data_collator = None # Initialize our Trainer # warmup_steps=warmup_steps, # num_train_epochs=args.epochs, # learning_rate=args.learning_rate, # fp16=True, # logging_steps=20, # evaluation_strategy="steps" if args.val_set_size > 0 else "no", # save_strategy="steps", # eval_steps=saving_step print(training_args) trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset if training_args.do_train else None, eval_dataset=eval_dataset if training_args.do_eval else None, compute_metrics=compute_metrics, tokenizer=tokenizer, data_collator=data_collator, ) # Training if training_args.do_train: checkpoint = None if training_args.resume_from_checkpoint is not None: checkpoint = training_args.resume_from_checkpoint elif last_checkpoint is not None: checkpoint = last_checkpoint train_result = trainer.train(resume_from_checkpoint=checkpoint) metrics = train_result.metrics max_train_samples = ( data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset) ) metrics["train_samples"] = min(max_train_samples, len(train_dataset)) trainer.save_model() # Saves the tokenizer too for easy upload trainer.log_metrics("train", metrics) trainer.save_metrics("train", metrics) trainer.save_state() # Evaluation if training_args.do_eval: logger.info("*** Evaluate ***") # Loop to handle MNLI double evaluation (matched, mis-matched) tasks = [data_args.task_name] eval_datasets = [eval_dataset] if data_args.task_name == "mnli": tasks.append("mnli-mm") valid_mm_dataset = raw_datasets["validation_mismatched"] if data_args.max_eval_samples is not None: max_eval_samples = min(len(valid_mm_dataset), data_args.max_eval_samples) valid_mm_dataset = valid_mm_dataset.select(range(max_eval_samples)) eval_datasets.append(valid_mm_dataset) combined = {} for eval_dataset, task in zip(eval_datasets, tasks): metrics = trainer.evaluate(eval_dataset=eval_dataset) max_eval_samples = ( data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset) ) metrics["eval_samples"] = min(max_eval_samples, len(eval_dataset)) if task == "mnli-mm": metrics = {k + "_mm": v for k, v in metrics.items()} if task is not None and "mnli" in task: combined.update(metrics) trainer.log_metrics("eval", metrics) trainer.save_metrics("eval", combined if task is not None and "mnli" in task else metrics) if training_args.do_predict: logger.info("*** Predict ***") # Loop to handle MNLI double evaluation (matched, mis-matched) tasks = [data_args.task_name] predict_datasets = [predict_dataset] if data_args.task_name == "mnli": tasks.append("mnli-mm") predict_datasets.append(raw_datasets["test_mismatched"]) for predict_dataset, task in zip(predict_datasets, tasks): # Removing the `label` columns because it contains -1 and Trainer won't like that. 
predict_dataset = predict_dataset.remove_columns("label") predictions = trainer.predict(predict_dataset, metric_key_prefix="predict").predictions predictions = np.squeeze(predictions) if is_regression else np.argmax(predictions, axis=1) output_predict_file = os.path.join(training_args.output_dir, f"predict_results_{task}.txt") if trainer.is_world_process_zero(): with open(output_predict_file, "w") as writer: logger.info(f"***** Predict results {task} *****") writer.write("index\tprediction\n") for index, item in enumerate(predictions): if is_regression: writer.write(f"{index}\t{item:3.3f}\n") else: item = label_list[item] writer.write(f"{index}\t{item}\n") kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "text-classification"} if data_args.task_name is not None: kwargs["language"] = "en" kwargs["dataset_tags"] = "glue" kwargs["dataset_args"] = data_args.task_name kwargs["dataset"] = f"GLUE {data_args.task_name.upper()}" if training_args.push_to_hub: trainer.push_to_hub(**kwargs) else: trainer.create_model_card(**kwargs) def _mp_fn(index): # For xla_spawn (TPUs) main() if __name__ == "__main__": main() ```<|||||>here are run log previously: ``` log 2 {'loss': 1.0339, 'learning_rate': 2.9294117647058824e-05, 'epoch': 0.09} 3 {'loss': 0.7566, 'learning_rate': 2.8411764705882353e-05, 'epoch': 0.17} 4 {'loss': 0.6624, 'learning_rate': 2.7529411764705883e-05, 'epoch': 0.26} 5 {'loss': 0.6149, 'learning_rate': 2.6647058823529412e-05, 'epoch': 0.35} 6 {'loss': 0.5654, 'learning_rate': 2.576470588235294e-05, 'epoch': 0.43} 7 {'loss': 0.5128, 'learning_rate': 2.488235294117647e-05, 'epoch': 0.52} 8 {'loss': 0.5389, 'learning_rate': 2.4e-05, 'epoch': 0.61} 9 {'loss': 0.5487, 'learning_rate': 2.311764705882353e-05, 'epoch': 0.7} 10 {'loss': 0.5497, 'learning_rate': 2.223529411764706e-05, 'epoch': 0.78} 11 {'loss': 0.5916, 'learning_rate': 2.135294117647059e-05, 'epoch': 0.87} 12 {'loss': 0.5519, 'learning_rate': 2.047058823529412e-05, 'epoch': 0.96} 13 {'loss': 0.508, 'learning_rate': 1.9588235294117648e-05, 'epoch': 1.04} 14 {'loss': 0.4503, 'learning_rate': 1.8705882352941178e-05, 'epoch': 1.13} 15 {'loss': 0.4167, 'learning_rate': 1.7823529411764707e-05, 'epoch': 1.22} 16 {'loss': 0.4888, 'learning_rate': 1.6941176470588237e-05, 'epoch': 1.3} 17 {'loss': 0.4184, 'learning_rate': 1.6058823529411766e-05, 'epoch': 1.39} 18 {'loss': 0.5095, 'learning_rate': 1.5176470588235294e-05, 'epoch': 1.48} 19 {'loss': 0.4301, 'learning_rate': 1.4294117647058823e-05, 'epoch': 1.57} 20 {'loss': 0.4162, 'learning_rate': 1.3411764705882354e-05, 'epoch': 1.65} 21 {'loss': 0.3814, 'learning_rate': 1.2529411764705884e-05, 'epoch': 1.74} 22 {'loss': 0.3884, 'learning_rate': 1.1647058823529412e-05, 'epoch': 1.83} 23 {'loss': 0.3852, 'learning_rate': 1.0764705882352941e-05, 'epoch': 1.91} 24 {'loss': 0.4088, 'learning_rate': 9.88235294117647e-06, 'epoch': 2.0} 25 {'loss': 0.2877, 'learning_rate': 9e-06, 'epoch': 2.09} 26 {'loss': 0.3444, 'learning_rate': 8.11764705882353e-06, 'epoch': 2.17} 27 {'loss': 0.2726, 'learning_rate': 7.235294117647059e-06, 'epoch': 2.26} 28 {'loss': 0.3031, 'learning_rate': 6.352941176470589e-06, 'epoch': 2.35} 29 {'loss': 0.2965, 'learning_rate': 5.470588235294117e-06, 'epoch': 2.43} 30 {'loss': 0.2905, 'learning_rate': 4.588235294117648e-06, 'epoch': 2.52} 31 {'loss': 0.2672, 'learning_rate': 3.7058823529411767e-06, 'epoch': 2.61} 32 {'loss': 0.2822, 'learning_rate': 2.823529411764706e-06, 'epoch': 2.7} 33 {'loss': 0.3066, 'learning_rate': 1.9411764705882357e-06, 'epoch': 
2.78} 34 {'loss': 0.271, 'learning_rate': 1.0588235294117648e-06, 'epoch': 2.87} 35 {'loss': 0.2694, 'learning_rate': 1.764705882352941e-07, 'epoch': 2.96} ``` latest: ``` log 4 {'loss': 0.6673, 'learning_rate': 1.9588235294117648e-05, 'epoch': 0.26} 5 {'loss': 0.6114, 'learning_rate': 1.6058823529411766e-05, 'epoch': 0.35} 6 {'loss': 0.5715, 'learning_rate': 1.2529411764705884e-05, 'epoch': 0.43} 7 {'loss': 0.5307, 'learning_rate': 9e-06, 'epoch': 0.52} 8 {'loss': 0.5452, 'learning_rate': 5.470588235294117e-06, 'epoch': 0.61} 9 {'loss': 0.5622, 'learning_rate': 1.9411764705882357e-06, 'epoch': 0.7} 10 {'loss': 0.5563, 'learning_rate': 0.0, 'epoch': 0.78} 11 {'loss': 0.569, 'learning_rate': 0.0, 'epoch': 0.87} 12 {'loss': 0.5708, 'learning_rate': 0.0, 'epoch': 0.96} 13 {'loss': 0.5439, 'learning_rate': 0.0, 'epoch': 1.04} 14 {'loss': 0.55, 'learning_rate': 0.0, 'epoch': 1.13} 15 {'loss': 0.5254, 'learning_rate': 0.0, 'epoch': 1.22} 16 {'loss': 0.5534, 'learning_rate': 0.0, 'epoch': 1.3} 17 {'loss': 0.5354, 'learning_rate': 0.0, 'epoch': 1.39} 18 {'loss': 0.5657, 'learning_rate': 0.0, 'epoch': 1.48} 19 {'loss': 0.5479, 'learning_rate': 0.0, 'epoch': 1.57} 20 {'loss': 0.5732, 'learning_rate': 0.0, 'epoch': 1.65} 21 {'loss': 0.5326, 'learning_rate': 0.0, 'epoch': 1.74} 22 {'loss': 0.5624, 'learning_rate': 0.0, 'epoch': 1.83} 23 {'loss': 0.5331, 'learning_rate': 0.0, 'epoch': 1.91} 24 {'loss': 0.5686, 'learning_rate': 0.0, 'epoch': 2.0} 25 {'loss': 0.5393, 'learning_rate': 0.0, 'epoch': 2.09} 26 {'loss': 0.5797, 'learning_rate': 0.0, 'epoch': 2.17} 27 {'loss': 0.5271, 'learning_rate': 0.0, 'epoch': 2.26} 28 {'loss': 0.5346, 'learning_rate': 0.0, 'epoch': 2.35} 29 {'loss': 0.575, 'learning_rate': 0.0, 'epoch': 2.43} 30 {'loss': 0.5359, 'learning_rate': 0.0, 'epoch': 2.52} 31 {'loss': 0.5521, 'learning_rate': 0.0, 'epoch': 2.61} 32 {'loss': 0.5406, 'learning_rate': 0.0, 'epoch': 2.7} 33 {'loss': 0.5593, 'learning_rate': 0.0, 'epoch': 2.78} 34 {'loss': 0.5551, 'learning_rate': 0.0, 'epoch': 2.87} 35 {'loss': 0.5481, 'learning_rate': 0.0, 'epoch': 2.96} ```
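For reference, a small self-contained sketch of what the expected (green-line) behaviour corresponds to: the linear schedule is sized by optimizer update steps (i.e. after gradient accumulation), so the learning rate should only reach zero on the last update. The step count below is illustrative, roughly matching the reporter's MRPC run, not re-derived from the logs.

```python
import torch
from torch.optim import AdamW
from transformers import get_linear_schedule_with_warmup

num_update_steps = 172  # roughly: 3668 examples / (4 GPUs * batch 8 * grad_accum 2) * 3 epochs
optimizer = AdamW([torch.nn.Parameter(torch.zeros(1))], lr=3e-5)  # dummy parameter for illustration
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=1, num_training_steps=num_update_steps
)

for step in range(num_update_steps):
    optimizer.step()
    scheduler.step()
    if step % 50 == 0:
        print(step, scheduler.get_last_lr()[0])  # decays linearly, hitting ~0 only at the end
```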
transformers
23,985
closed
Regarding pix2struct's bugs in the attentions of the encoder's and decoder's outputs
This change addresses the following bug: incorrect indices were used for "all_attentions" ([link](https://github.com/huggingface/transformers/blob/539e2281cd97c35ef4122757f26c88f44115fa94/src/transformers/models/pix2struct/modeling_pix2struct.py#L1549)) and "all_cross_attentions" ([link](https://github.com/huggingface/transformers/blob/539e2281cd97c35ef4122757f26c88f44115fa94/src/transformers/models/pix2struct/modeling_pix2struct.py#L1550)), where the indices should have been 3 and 5, respectively, but the original code used 2 and 3. Code snippet: ```python import torch from PIL import Image import requests from transformers import AutoProcessor, Pix2StructForConditionalGeneration processor = AutoProcessor.from_pretrained("google/pix2struct-textcaps-base") model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base") url = "https://www.ilankelman.org/stopsigns/australia.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs, decoder_input_ids=torch.tensor([[0]]), output_attentions=True) print(outputs.cross_attentions[0].shape) ``` The shape of cross_attentions should be (batch_size, num_heads, text_tokens, image_tokens). After the modification, the shape is (1, 12, 1, 2048), which is correct. However, before the modification, the shape was (1, 12, 1, 1), which is clearly incorrect, since it is supposed to represent the cross-attention between text tokens and image tokens. The "attention" in the original code is also incorrect, as it retrieves the position bias instead of the attention. Actually, there is a correct comment regarding this matter: [comment](https://github.com/huggingface/transformers/blob/539e2281cd97c35ef4122757f26c88f44115fa94/src/transformers/models/pix2struct/modeling_pix2struct.py#L1532), but it is unclear why the subsequent code does not use the indices provided in the comment to retrieve the correct values.
06-04-2023 07:39:03
06-04-2023 07:39:03
cc @younesbelkada <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23985). All of your documentation changes will be reflected on that endpoint.<|||||>Now it has passed all tests, but this is just a temporary measure. In most cases, it can return the correct attentions and cross-attentions, but I have not figured out why there are index errors in a few cases, which causes the test to fail. In theory, as long as the output_attentions parameter is set to True, it should get enough parameters in the tuple. This is very strange. I will continue to investigate when I have time.<|||||>Hi @loveisp ! Again thanks for your contribution on this Can you share with us why this PR got closed? The PR should also fix #24717 so it would be great to merge it :D <|||||>> Hi @loveisp ! Again thanks for your contribution on this Can you share with us why this PR got closed? The PR should also fix #24717 so it would be great to merge it :D Sorry I don't know how to merge it. Could you help me do that?<|||||>Hi @younesbelkada @amyeroberts @loveisp, sorry for bothering you. I have a question and would appreciate your insights. I was wondering why tensors are passed in a tuple instead of using dataclasses with None-able fields. It seems like using tuples can be error-prone and make bug detection and fixing more challenging due to the variable size. Additionally, it may require additional comments in the code. I'm considering creating an example PR to explore the possibility of using dataclasses, but before proceeding, I wanted to check if there are specific reasons for using tuples. Your input would be greatly appreciated. Thank you!<|||||>@artyomxyz There's no need to apologise for asking questions. People are busy so they may not always reply, but questions, especially if they're thoughtfully written like yours, are always welcome :) Yes, indexing with tuples is error prone! You'll notice that our models accept `return_dict` as an argument in their forward pass, and that by default they return a dataclass. We can't force passing and returning dataclasses as torchscript has only recently (and I'm not sure if fully) [started supporting this](https://github.com/pytorch/pytorch/issues/72901) and we need to maintain backwards compatibility with older library versions. c.f. * a discussion when I raised something similar: https://github.com/huggingface/transformers/pull/22970#discussion_r1195252303. * [Here](https://github.com/huggingface/transformers/blob/aac4c7996837b77abe91cd1ea734b6ef74117156/src/transformers/configuration_utils.py#L408C14-L408C14) in our config code where we force `return_dict=False` if using torchscript. <|||||>Oh, thank you very much for explanation. It makes much more sense to me now
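Tying the tuple-vs-dataclass discussion above back to the reproduction: with the default `return_dict=True`, the attentions can be read by name from the returned output object, which sidesteps exactly the positional-index mistakes this PR fixes. The sketch below is the PR's own snippet with the missing `import torch` added; the `cross_attentions` field name is the one quoted in the description.

```python
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, Pix2StructForConditionalGeneration

processor = AutoProcessor.from_pretrained("google/pix2struct-textcaps-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

outputs = model(**inputs, decoder_input_ids=torch.tensor([[0]]), output_attentions=True)
# Named access on the output dataclass instead of indexing an anonymous tuple:
print(outputs.cross_attentions[0].shape)  # expected (batch, heads, text tokens, image tokens)
```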
transformers
23,984
closed
Create overload signatures for cached_file
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Related #23980 this is part of a set of changes to improve the type checking in `PreTrainedModel.from_pretrained`. This PR specifically 1. creates `overload` signatures for `cached_file` s.t. when you pass in `_raise_exceptions_for_missing_entries` or `_raise_exceptions_for_connection_errors` either implicitly or explicitly (except via `dict` unpacking), we can get the actual correct return type I asked on the `pyright` discussions to understand pyright's limitations regarding dict unpacking: https://github.com/microsoft/pyright/discussions/5231 ([addressed!](https://github.com/microsoft/pyright/commit/d345e20ebf5fb1d9e322aa3560187571a7f8cf16)) but I think this will only really benefit us if the `dict` isn't just `dict[str, Any]`, which they generally are in the places where we were attempting to perform `dict` unpacking. IMO manually passing it in in the few places where we were doing unpacking isn't really a big deal before: <img width="514" alt="Screenshot 2023-06-03 at 11 09 07 PM" src="https://github.com/huggingface/transformers/assets/27844407/255a4b07-7b15-41e9-97e9-041c5dc8146b"> <img width="567" alt="Screenshot 2023-06-03 at 11 09 45 PM" src="https://github.com/huggingface/transformers/assets/27844407/f3a23532-894d-4b6a-a061-bc320b88e648"> after: <img width="489" alt="Screenshot 2023-06-03 at 11 11 47 PM" src="https://github.com/huggingface/transformers/assets/27844407/d6a1dd36-4d35-4d26-9d93-ab308b55b85b"> <img width="510" alt="Screenshot 2023-06-03 at 11 10 11 PM" src="https://github.com/huggingface/transformers/assets/27844407/e4f2c6e6-f128-47e2-ae94-85e0a5a1ba81"> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
Documentation: @sgugger, @stevhliu and @MKhalusova <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
06-04-2023 06:26:38
06-04-2023 06:26:38
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23984). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
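For readers unfamiliar with the pattern the PR applies, here is a generic, self-contained sketch of `typing.overload` keyed on a flag — the names are illustrative and intentionally not the real `cached_file` signature.

```python
from typing import Literal, Optional, overload


@overload
def fetch_file(name: str, *, raise_if_missing: Literal[True] = ...) -> str: ...
@overload
def fetch_file(name: str, *, raise_if_missing: Literal[False]) -> Optional[str]: ...


def fetch_file(name: str, *, raise_if_missing: bool = True) -> Optional[str]:
    """Pretend lookup: return a cache path, or None when missing and not raising."""
    found = name.endswith(".json")  # stand-in for a real existence check
    if not found:
        if raise_if_missing:
            raise FileNotFoundError(name)
        return None
    return f"/cache/{name}"


path: str = fetch_file("config.json")  # a checker now infers `str`
maybe: Optional[str] = fetch_file("missing.bin", raise_if_missing=False)  # ...and `Optional[str]` here
```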
transformers
23,983
closed
Add overloads for from_pretrained and from_dict on PretrainedConfig
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Related #23980 this is part of a set of changes to improve the type checking in `PreTrainedModel.from_pretrained` 1. Pull `return_unused_kwargs` out of `kwargs` to create `overload`s for `PretrainedConfig.from_pretrained` & `PretrainedConfig.from_dict` for improved return typing 2. add type hints to class variables in `FlaxPreTrainedModel`, `TFPreTrainedModel`, and `PreTrainedModel` so that we can consume the above improvements before: <img width="656" alt="Screenshot 2023-06-03 at 10 48 18 PM" src="https://github.com/huggingface/transformers/assets/27844407/d435f97a-bd13-40e4-90f7-43287927aedb"> after: <img width="660" alt="Screenshot 2023-06-03 at 10 47 31 PM" src="https://github.com/huggingface/transformers/assets/27844407/c0a9e781-6d42-4a44-a3be-ce05c1589b0c"> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Documentation: @sgugger, @stevhliu and @MKhalusova <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
06-04-2023 05:54:50
06-04-2023 05:54:50
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23983). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
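To make the before/after screenshots above concrete without quoting the real (much longer) signatures, a hedged sketch of overloading on `return_unused_kwargs` — simplified, module-level names rather than the actual `PretrainedConfig` classmethod.

```python
from typing import Any, Dict, Literal, Tuple, overload


class MiniConfig:
    def __init__(self, **kwargs: Any) -> None:
        self.hidden_size = kwargs.pop("hidden_size", 768)
        self.unused_kwargs = kwargs  # whatever was not consumed


@overload
def config_from_dict(d: Dict[str, Any], *, return_unused_kwargs: Literal[False] = ...) -> MiniConfig: ...
@overload
def config_from_dict(d: Dict[str, Any], *, return_unused_kwargs: Literal[True]) -> Tuple[MiniConfig, Dict[str, Any]]: ...


def config_from_dict(d: Dict[str, Any], *, return_unused_kwargs: bool = False):
    config = MiniConfig(**d)
    return (config, config.unused_kwargs) if return_unused_kwargs else config


cfg = config_from_dict({"hidden_size": 1024})  # checker infers MiniConfig
cfg2, unused = config_from_dict({"foo": 1}, return_unused_kwargs=True)  # (MiniConfig, dict)
```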
transformers
23,982
open
owl-vit-eval-postprocessor
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) 21206 https://github.com/huggingface/transformers/issues/21206 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger @NielsRogge @alaradirik @amyeroberts Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
06-04-2023 03:27:23
06-04-2023 03:27:23
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23982). All of your documentation changes will be reflected on that endpoint.<|||||>Also maybe @rafaelpadilla could have a look here to help if you're too busy @amyeroberts <|||||>Yes please @rafaelpadilla, if you have bandwidth!<|||||>Having a method to convert the boxes into a format that can be used by evaluators is a great idea. :+1: I also found similar problems with other models that could benefit from this suggestion. However, OWL-ViT already has the function `post_process_object_detection` ([here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/owlvit/image_processing_owlvit.py#L384)), which already outputs boxes in (x1, y1, x2, y2) format. I'm not familiar with OWL-ViT, but as the suggested function makes use of `pred_per_img`, I assume that calling the existing `post_process_object_detection` with `threshold=0.` won't produce the same results as calling `post_process_object_detection_evaluation`. Couldn't the suggested function `post_process_object_detection_evaluation` call `post_process_object_detection` and then use `pred_per_img` to post-process the retrieved boxes?
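For context on the existing path mentioned in the review, a short usage sketch of the current post-processing API; the checkpoint, threshold and text queries are illustrative.

```python
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

image = Image.open("example.jpg")  # any local image
texts = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
outputs = model(**inputs)

target_sizes = torch.tensor([image.size[::-1]])  # PIL gives (width, height); the API expects (height, width)
results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)
print(results[0]["boxes"].shape, results[0]["scores"].shape, results[0]["labels"].shape)
```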
transformers
23,981
closed
Improve typing for bitsandbytes util
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Related #23980 this is part of a set of changes to improve the type checking in `PreTrainedModel.from_pretrained` 1. we add the beginnings of a `TypedDict` as a template for improving the inferred types of the `from_pretrained` `kwargs` 2. add type hints to the `bitsandbytes` util and refactor some signatures and typing based on usage 3. improve type hints of `BitsAndBytesConfig.from_dict` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Documentation: @sgugger, @stevhliu and @MKhalusova <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
06-03-2023 22:05:02
06-03-2023 22:05:02
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23981). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
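As background for the "beginnings of a `TypedDict`" mentioned in the description, a minimal illustration of the pattern; the field names are a guess at the shape of such a template, not the PR's actual definition.

```python
from typing import Optional, TypedDict

import torch


class FromPretrainedKwargs(TypedDict, total=False):
    # Illustrative subset of keyword arguments a `from_pretrained` call might accept.
    torch_dtype: Optional[torch.dtype]
    device_map: Optional[str]
    load_in_8bit: bool
    load_in_4bit: bool


def describe(kwargs: FromPretrainedKwargs) -> str:
    # A type checker now knows the value types, e.g. that `load_in_8bit` is a bool.
    bits = 8 if kwargs.get("load_in_8bit", False) else 4 if kwargs.get("load_in_4bit", False) else 16
    return f"quantization: {bits}-bit, device_map={kwargs.get('device_map')}"


print(describe({"load_in_4bit": True, "device_map": "auto"}))
```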
transformers
23,980
closed
Improve PreTrainedModel.from_pretrained return type
### System Info transformers version: `main` Platform: macOS 13.1 (x86_64) Python version: 3.7.15 PyTorch version (GPU?): 1.13.0 (False) Tensorflow version (GPU?): N/A Using GPU in script?: No Using distributed or parallel set-up in script?: No ### Who can help? Documentation: @sgugger, @stevhliu and @MKhalusova ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python import transformers bert_model = transformers.BertForSequenceClassification.from_pretrained("...") reveal_type(bert_model) # expected: BertForSequenceClassification # actual: tuple[Unknown | BertForSequenceClassification, dict[str, Unbound | Unknown] | dict[str, Unknown | list[Unknown]] | Unknown] | Unknown | BertForSequenceClassification ``` ### Expected behavior The hinted type of the return from `from_pretrained` should be the same type as the class. I know we could just annotate `from_pretrained` to return `typing.Self`... but IMO this points to a lack of typing in a core part of `transformers`, and I think it would make this core code more robust if the static type checkers agreed that that is the correct return type (since there may be type errors lurking).
06-03-2023 21:05:08
06-03-2023 21:05:08
Transformers does not support any type-checker and we do not want to bloat the code to add support for them: we accept PRs adding simple annotations but we will refuse anything more complex than that.<|||||>> Transformers does not support any type-checker and we do not want to bloat the code to add support for them: we accept PRs adding simple annotations but we will refuse anything more complex than that. I can understand your position of not wanting to bloat the code. It definitely seems reasonable if that is an established philosophy for huggingface internal APIs. I think for user-facing APIs, though, it could be beneficial to have more advanced type hints so that the average user's IDE will tell them what type of object they are dealing with when loading their model, rather than seeing a confusing, gigantic type union. I was originally planning to submit all the work I had done to fix the type issues in `from_pretrained`, but I would be happy to leave it at just improving the signature of `PretrainedModel.from_pretrained` if you will be willing to consider it: https://github.com/huggingface/transformers/pull/24035<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
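The `typing.Self` idea raised in the report can be sketched without touching the real class hierarchy; on Python versions before 3.11 the same spelling comes from `typing_extensions` (an assumption about the environment).

```python
from typing_extensions import Self  # `typing.Self` on Python 3.11+


class TinyPretrainedModel:
    def __init__(self, name: str) -> None:
        self.name = name

    @classmethod
    def from_pretrained(cls, name: str) -> Self:
        # Each subclass's `from_pretrained` is now typed as returning that subclass.
        return cls(name)


class TinyBertForSequenceClassification(TinyPretrainedModel):
    pass


model = TinyBertForSequenceClassification.from_pretrained("bert-base-uncased")
# reveal_type(model)  -> pyright/mypy report TinyBertForSequenceClassification, not a union
```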
transformers
23,979
open
Handle `g_state` in RWKV's customized CUDA kernel to overcome sequence length limitation
### Feature request Handling `g_state` in RWKV's customized CUDA kernel enables a backward pass with a chained forward. As such, the maximum `context_length` will no longer limit longer sequences during training, and the behavior of the WKV backward pass becomes consistent with the forward pass. For BF16 kernels, see [here](https://github.com/Blealtan/RWKV-LM-LoRA/tree/dev-infctx/RWKV-v4neo/cuda). Credits to icecuber on the RWKV Discord channel (searching for `chunked GPT mode` in the history will show the original code). ### Motivation The current implementation of RWKV is tied to a `max_seq_length`, propagating the sequence length parameter down to the CUDA kernel. This can be problematic with longer input sequences. By supporting `g_state` in the backward pass, we can fix the maximum sequence length inside the CUDA kernel and instead call it several times until the complete sequence has been processed. Also, given that the forward pass already supports state chaining, the backward pass should support it too. > Some not so related advertising: > In [my recent experiments](https://github.com/Blealtan/RWKV-LM-LoRA/tree/dev-infctx), I'm building upon the state chaining functionality (or chunked GPT mode, per icecuber's wording) to achieve near-constant VRAM training with arbitrary sequence lengths. The basic idea is to run the forward pass of the entire model one piece at a time and checkpoint each piece, so that at the cost of running the forward pass twice we can train on arbitrarily long sequences within fixed VRAM. If `g_state` is supported in `transformers`, it will be easy to port that here. ### Your contribution I can help by submitting the PR, but only later. I'm not claiming the task exclusively, in case anyone has time before me.
06-03-2023 17:34:04
06-03-2023 17:34:04
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
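To make the state-chaining idea above concrete, here is a minimal sketch of chunked processing with gradient checkpointing; `wkv_chunk` is a hypothetical stand-in for a fixed-length WKV kernel that accepts and returns a recurrent state (a real kernel would also need the `g_state` gradient handled in its backward pass):

```python
from typing import Tuple

import torch
from torch.utils.checkpoint import checkpoint


def wkv_chunk(chunk: torch.Tensor, state: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
    # Hypothetical fixed-length WKV call: returns this chunk's outputs and the
    # updated recurrent state. Here it is just a placeholder identity function.
    return chunk, state


def wkv_long(x: torch.Tensor, state: torch.Tensor, chunk_len: int = 1024) -> torch.Tensor:
    # Process an arbitrarily long sequence in fixed-size pieces, chaining the
    # state between pieces; checkpointing keeps VRAM roughly constant at the
    # cost of recomputing each piece's forward pass during backward.
    outputs = []
    for start in range(0, x.shape[1], chunk_len):
        piece = x[:, start : start + chunk_len]
        out, state = checkpoint(wkv_chunk, piece, state, use_reentrant=False)
        outputs.append(out)
    return torch.cat(outputs, dim=1)
```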
transformers
23,978
closed
Fix typo in doc comment of BitsAndBytesConfig
# What does this PR do? Fixes a typo in the doc comments of the `BitsAndBytesConfig` class: it must be `nf4` instead of `fn4`. Here is the code for comparison in the current main branch: https://github.com/huggingface/transformers/blob/539e2281cd97c35ef4122757f26c88f44115fa94/src/transformers/utils/quantization_config.py#L166-L167 The `qlora` implementation also uses `nf4`: https://github.com/artidoro/qlora/blob/bdc655dfa71e5ef1553a078980fb5083c346a4cf/qlora.py#L138-L141 So I think it is better to fix the doc, not the code. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger, @stevhliu @MKhalusova @younesbelkada and @sourabh112 Or @TimDettmers?
06-03-2023 11:25:20
06-03-2023 11:25:20
_The documentation is not available anymore as the PR was closed or merged._
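For reference, a quick usage sketch of the corrected value (assumes a CUDA machine with `bitsandbytes` installed; the surrounding arguments mirror the common QLoRA setup):

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",  # "nf4" (4-bit NormalFloat), not "fn4"
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```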
transformers
23,977
closed
🌐 [i18n-KO] Translated tasks_summary.mdx to Korean
# What does this PR do? Translated task_summary.mdx file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (번역 누락/중복 검사) - [x] Grammar Check (맞춤법 검사) - [x] Review or Add new terms to glossary (용어 확인 및 추가) - [x] Check Inline TOC (e.g. [[lowercased-header]]) - [x] Check live-preview for gotchas (live-preview로 정상작동 확인) ## Who can review? (Initial) Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review?(Final) @sgugger, @ArthurZucker, @eunseojo May you please review this PR? <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
06-03-2023 08:04:41
06-03-2023 08:04:41
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,976
closed
Pix2Struct: fix wrong broadcast axis of attention mask in visual encoder
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #23974 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @ArthurZucker @younesbelkada @amyeroberts
06-03-2023 07:39:03
06-03-2023 07:39:03
_The documentation is not available anymore as the PR was closed or merged._<|||||>Sure. Is there something else I have to do for that?<|||||>Should be all good! just pushed!
transformers
23,974
closed
Pix2Struct: wrong attention mask broadcasting in the visual encoder
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.10 - Python version: 3.8.16 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.6.10 (cpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @younesbelkada @amyeroberts ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The Pix2Struct visual attention layer masks the attention scores (logits) by adding negative infinities derived from the attention mask. https://github.com/huggingface/transformers/blob/539e2281cd97c35ef4122757f26c88f44115fa94/src/transformers/models/pix2struct/modeling_pix2struct.py#L200-L220 The line below broadcasts the boolean attention mask, whose shape is `[batch_size, seq_len]`, to the shape `[batch_size, num_heads, query_len, key_len]`. However, it seems the mask tensor is broadcast on the wrong axes. https://github.com/huggingface/transformers/blob/539e2281cd97c35ef4122757f26c88f44115fa94/src/transformers/models/pix2struct/modeling_pix2struct.py#L213 Here is the common way to extend the mask: https://github.com/huggingface/transformers/blob/539e2281cd97c35ef4122757f26c88f44115fa94/src/transformers/modeling_utils.py#L896 Because of this, the model will attend to masked tokens as well, whereas the original implementation uses `flaxformer` as follows: https://github.com/google/flaxformer/blob/9adaa4467cf17703949b9f537c3566b99de1b416/flaxformer/components/attention/dense_attention.py#L2019 ### Expected behavior Fix the order of the new axes on the attention mask.
06-03-2023 07:37:14
06-03-2023 07:37:14
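A small, self-contained toy example (independent of the actual Pix2Struct code) showing why the broadcast axis matters — extending the mask over the key axis zeroes out attention to padded positions, while extending it over the query axis does not:

```python
import torch

batch, heads, seq_len = 1, 2, 4
mask = torch.tensor([[1, 1, 0, 0]], dtype=torch.float32)  # [batch_size, seq_len]
scores = torch.zeros(batch, heads, seq_len, seq_len)      # dummy attention logits

# Conventional extension: [batch, 1, 1, key_len] — every query ignores padded keys.
key_axis = (1.0 - mask[:, None, None, :]) * torch.finfo(torch.float32).min
print(torch.softmax(scores + key_axis, dim=-1)[0, 0])  # padded keys get ~0 weight

# Broadcasting over the query axis instead: [batch, 1, query_len, 1] — real queries
# still attend to padded keys, which is the kind of behavior this issue describes.
query_axis = (1.0 - mask[:, None, :, None]) * torch.finfo(torch.float32).min
print(torch.softmax(scores + query_axis, dim=-1)[0, 0])  # padded keys still attended
```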
transformers
23,972
closed
🌐 [i18n-KO] Default to English for Untranslated Pages
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/23971 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? [HF] @sgugger @ArthurZucker [PseudoLab] @0525hhgus @gabrielwithappy @sim-so @HanNayeoniee @jungnerd @KIHOON71
06-03-2023 07:12:40
06-03-2023 07:12:40
In the initial implementation, I opted to prefix untranslated page links with `../en/` to direct the user to the English version of the page. However, this approach has a side effect of redirecting users away from the localized version of our docs and into the English one. To address this, I'm currently exploring an alternative solution where we duplicate the entire English docs into the Korean directory. The upside to this approach is that it keeps users within their chosen language directory, even when viewing untranslated pages in English. Additionally, this approach could also provide valuable insights about which untranslated guides are accessed the most. This data could help us prioritize future translation efforts based on user interest and need. I'm actively working on this alternative approach and will update/open this PR once I have made substantial progress.<|||||>While copying the entire English docs into the Korean language directory has some advantages, I've identified a potential side effect. The copied documents are only up-to-date at the time of copying, which means they do not reflect any subsequent updates made to the original English documents. This could lead to outdated information being displayed for untranslated pages. What might be a good solution? One potential solution I came up with: #### Implement a continuous integration (CI) script to automate the copying process: The script would copy the English documents into the untranslated language directories every time a change is made to the English version. This will ensure that the copied documents always reflect the most up-to-date information. The CI script would trigger on every commit to the main branch (or whichever branch holds the latest version of your documentation). It will then perform a copy operation to the untranslated language directories, ensuring they're always updated with the latest English content. While this does increase the computational requirements (as the script would run for each update), it would likely be a worthwhile tradeoff for maintaining accuracy and consistency in the documentation. This process can be optimized to run only when English documentation is updated, reducing the number of unnecessary runs.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23972). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for your PR but this will make the repository heavier for nothing and make the doc building time longer for every PR so we cannot accept it. The redirect should already be implemented on the website, we don't need to add pages.<|||||>Thank you so much for your review, @sgugger . I definitely understand the overhead and will close this PR and issue for now.
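A rough sketch of the copy step proposed above (a hypothetical helper, not an adopted solution — the directory names follow the repository's `docs/source/<lang>` layout):

```python
import shutil
from pathlib import Path

en_dir = Path("docs/source/en")
ko_dir = Path("docs/source/ko")

# Copy every English page that has no Korean counterpart yet.
for src in en_dir.rglob("*.mdx"):
    dst = ko_dir / src.relative_to(en_dir)
    if not dst.exists():  # only fill untranslated gaps; never overwrite translations
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
```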
transformers
23,971
closed
🌐 [i18n-KO] Display English version of untranslated pages as a default option for improved UX
Part of https://github.com/huggingface/transformers/issues/20179 ### Summary We've been collecting feedback from our users, and a common theme we've identified is concerning the user experience when encountering untranslated pages. Users have expressed a preference to see the English version of a page directly if the localized translation is not yet available. This allows them to utilize tools like Google Chrome's translation feature to understand the content. ### Detail #### Current Behavior: Presently, when a user navigates to a page that hasn't been translated into their selected language, they're presented with a placeholder message indicating that the page is unavailable in their language. This leaves them without immediate access to the content they were seeking. #### Desired Behavior: Instead of showing a placeholder message for untranslated pages, users prefer to be automatically shown the English version of the page. This way, they can use automated translation tools if necessary. In addition, users can also choose to read the page in English if they're comfortable doing so. ### Additional Benefits This approach provides several additional benefits: - **Conflict mitigation:** This will likely reduce the number of merge conflicts that can occur due to manual reference edits in multiple language versions. - **Easier Table of Contents (TOC) control:** By defaulting to English versions, we ensure that the TOC structure remains consistent across the site, providing a more predictable and streamlined user experience. ### Next Steps We suggest implementing this feature to enhance the user experience and streamline the site's operations. This would involve modifying the behavior of the site when a page is unavailable in the selected language. We look forward to feedback and any additional perspectives on this proposed enhancement. cc: [HF] @sgugger @ArthurZucker [PseudoLab] @0525hhgus @gabrielwithappy @sim-so @HanNayeoniee @jungnerd @KIHOON71
06-03-2023 07:10:07
06-03-2023 07:10:07
This should already be the case without having to do anything special. cc @mishig25
transformers
23,970
closed
GPU is needed for quantization in M2 MacOS
### System Info M2 macOS machine with a 12-core CPU, a 38-core GPU, 96GB of memory, and 2TB of storage ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am getting the error "No GPU found. A GPU is needed for quantization." for the following code snippet, which I am trying to run on an M2 macOS machine with a 12-core CPU and a 38-core GPU. How will QLoRA/quantization work on M2 macOS systems that use "mps"? import torch from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig model_id = "EleutherAI/gpt-neox-20b" bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) tokenizer = AutoTokenizer.from_pretrained(model_id) # model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config) #, device_map={"":0}) model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device=torch.device('cpu')) ### Expected behavior I expect the code to run on the "mps" device of M2 macOS
06-03-2023 05:18:19
06-03-2023 05:18:19
No, the bitsandbytes library only works on CUDA GPUs.<|||||>To get around this error I set `load_in_8bit=False`: ``` AutoModelForSeq2SeqLM.from_pretrained(model_id, load_in_8bit=False) ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
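Building on the workaround above, a hedged sketch of loading the model on Apple Silicon without bitsandbytes quantization (half precision on the `mps` device instead; the dtype and device handling are illustrative, and a 20B model in fp16 still needs on the order of 40GB of memory):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "EleutherAI/gpt-neox-20b"  # model from the original report

tokenizer = AutoTokenizer.from_pretrained(model_id)
# bitsandbytes 4-/8-bit quantization requires a CUDA GPU, so no BitsAndBytesConfig
# here; load the weights in fp16 and move them to the Apple GPU instead.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = model.to(device)
```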
transformers
23,969
closed
🌐 [i18n-KO] Translated `language-modeling.mdx`
<!-- PR의 제목은 "🌐 [i18n-KO] Translated `language-modeling.mdx` to Korean" 으로 부탁드립니다 --> # What does this PR do? Translated the `language-modeling.mdx` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations (번역 누락/중복 검사) - [x] Grammar Check (맞춤법 검사) - [x] Review or Add new terms to glossary (용어 확인 및 추가) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview로 정상작동 확인) ## Who can review? (Initial) <!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> @sgugger, @ArthurZucker, @eunseojo May you please review this PR?
06-03-2023 05:13:29
06-03-2023 05:13:29
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
transformers
23,968
closed
🌐 [i18n-KO] Translated `bertology.mdx` to Korean
<!-- PR의 제목은 "🌐 [i18n-KO] Translated `bertology.mdx` to Korean" 으로 부탁드립니다 --> # What does this PR do? Translated the `bertology.mdx` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations (번역 누락/중복 검사) - [x] Grammar Check (맞춤법 검사) - [x] Review or Add new terms to glossary (용어 확인 및 추가) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview로 정상작동 확인) ## Who can review? (Initial) <!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> @sgugger, @ArthurZucker, @eunseojo May you please review this PR?
06-03-2023 03:00:49
06-03-2023 03:00:49
_The documentation is not available anymore as the PR was closed or merged._<|||||>* Live preview is not working, although checks have passed. I confirmed there are no gotchas using GitHub Codespace. (We should create a pre-built environment (where all `pip install` commands are completed) for mentees and new-comers, preferably with `preview --[lang]` commands built-in as well) Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd<|||||>@sgugger, @ArthurZucker, @eunseojo May you please review this PR?