repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 24,503 | open | Separate kwargs of tokenizer and feature_extractor in `ClapProcessor` | # What does this PR do?
Currently, `ClapProcessor` shares kwargs between the tokenizer and the feature extractor. This PR introduces separate kwargs for both of them.
This was discussed in [comments](https://github.com/huggingface/transformers/issues/23648#issuecomment-1557532041) of #23648.
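For illustration, here is a minimal sketch of the idea (this is not this PR's actual diff; the kwarg names and checkpoint are assumptions for the example only):
```python
# Hypothetical illustration of separate tokenizer/feature-extractor kwargs.
import numpy as np
from transformers import ClapProcessor

processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")
audio = np.random.rand(48_000).astype(np.float32)  # dummy 1-second clip at 48 kHz

text_kwargs = {"padding": True, "truncation": True}  # options meant only for the tokenizer
audio_kwargs = {"sampling_rate": 48_000}             # options meant only for the feature extractor

text_inputs = processor.tokenizer(["the sound of a dog"], return_tensors="pt", **text_kwargs)
audio_inputs = processor.feature_extractor(audio, return_tensors="pt", **audio_kwargs)
inputs = {**text_inputs, **audio_inputs}
```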
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@sanchit-gandhi | 06-26-2023 20:03:44 | 06-26-2023 20:03:44 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24503). All of your documentation changes will be reflected on that endpoint.<|||||>@sanchit-gandhi can we fix both the points discussed in the issue (#23648) (shared kwargs and unexpected padding behaviour) in the same PR or should we have separate PRs?<|||||>Separate PR would be preferable since you can work on each bit in isolation (making it easier for yourself and the reviewer) and merge this one as soon as you have it ready :) |
transformers | 24,502 | open | Extremely slow model inference for load_in_4bit | ### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.10.179-171.711.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Using `load_in_4bit` makes the model extremely slow (with accelerate 0.21.0.dev0 and bitsandbytes 0.39.1, which should be the latest versions; I installed both from source)
Using the following code
```python
from transformers import LlamaTokenizer, AutoModelForCausalLM, AutoTokenizer
import torch
from time import time
model_name = "tiiuae/falcon-40b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True, device_map="auto", trust_remote_code=True)
tokenizer.padding_side = "left"
# Define PAD Token = EOS Token
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = model.config.eos_token_id
# use different length sentences to test batching
# measure time
start_time = time()
sentences = [
"Hello, my dog is a little",
"Today, I",
"I love",
"Repeat after me: I love you.",
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
"You are a human that uses a lot of expletives. For each one of your responses to the user you use expletives liberally. If you find yourself questioning whether it’s appropriate or not you’re to say fuck that noise and keep using your expletives regardless. Human: Tell me a story. You:"
]
inputs = tokenizer(sentences, return_tensors="pt", padding=True).to(model.device)
inputs.pop("token_type_ids")
output_sequences = model.generate(**inputs, max_new_tokens=400, temperature=0.7)
print(tokenizer.batch_decode(output_sequences, skip_special_tokens=True))
print("Elapsed time: ", time() - start_time)
```
This gives me 3138 seconds on 8xA100 40G GPUs.
### Expected behavior
If I instead use bf16 version, i.e. by using this as model init
```python
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
```
It gives 266 seconds, more than 10x faster. On the other hand, load in 4bit only cut down memory footprint by 4x. I wonder if there are other things I should do to fully exploit the benefits of 4bit. Right now the generation speed is not usable for real time conversation. Thanks. | 06-26-2023 19:53:25 | 06-26-2023 19:53:25 | cc @younesbelkada <|||||>Hey @cnut1648 👋
We also had an internal user reporting the same issue, I'm currently exploring whether it is from the text generation end or from the 4-bit end. Our internal user also reported that unbatched text generation worked fine (in terms of output quality and inference time), so you can try that route until this issue gets sorted :)
cc @younesbelkada
<|||||>Hi @cnut1648
Thanks for bringing this discussion up
Note that this is a more or less known issue; bitsandbytes is working on optimized 4bit inference kernels that should be much faster than the current ones.
On the other hand, I believe that there is a high variance across devices; for example, this user: https://github.com/huggingface/transformers/issues/23989#issuecomment-1577727968 reports the same speed as bf16 using Falcon.
Do you face the same issue if you run your inference on a single A100?<|||||>Hey @gante, @younesbelkada thanks! Excited to see how bnb 4bit inference will accelerate generation. For unbatched inference (bsz=1) w/ multi-GPU, I tried it, but it takes more than 1 hour and only produced 4 out of 6 outputs before I had to cut it to save cost. As for a single A100 in 4-bit, I have
- batched: 3038 seconds, no big improvement
- unbatched: again this goes over 1 hour<|||||>Actually, I had the same confusion: I used the load_in_4bit parameter and got 2-3x slower inference than full precision<|||||>@BaileyWei 2-3x slower is to be expected with `load_in_4bit` (vs 16-bit weights), on any model -- that's the current price of performing dynamic quantization :)<|||||>@cnut1648 @younesbelkada
If we take the code example from @cnut1648 and play around with the following settings
1. `tiiuae/falcon-7b-instruct` vs `huggyllama/llama-7b` (i.e. Falcon vs LLaMA)
2. `load_in_4bit=True` vs `torch_dtype=torch.bfloat16`
3. short prompts vs long prompts (e.g. first two vs last two in the code example)
We quickly conclude that the problem seems to be related to Falcon itself, not the 4-bit part nor `generate`. In a nutshell, on my end, `load_in_4bit=True` added a stable 4-5x slowdown vs `torch_dtype=torch.bfloat16`, but the execution time grew very quickly with the sequence length (i.e. with the prompt size and with `max_new_tokens`) AND batch size. This does not happen with other models, and explains the extremely slow execution times you're seeing -- especially in 4-bit format. I'm not sure if there are additional 4-bit-related issues that further explain what you're seeing, but the behavior I described above is not normal.
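For anyone who wants to reproduce this comparison, here is a minimal timing sketch (an illustration only, not the exact benchmark used above; the model name, prompts and token counts are assumptions, and it requires a GPU plus `bitsandbytes`):
```python
# Rough per-setting throughput check; swap in the model and prompts you care about.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-7b"  # assumed for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompts = ["Today, I", "Girafatron is obsessed with giraffes, the most glorious animal. " * 10]

for kwargs in ({"load_in_4bit": True}, {"torch_dtype": torch.bfloat16}):
    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", **kwargs)
    for prompt in prompts:
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        start = time.time()
        out = model.generate(**inputs, max_new_tokens=100)
        new_tokens = out.shape[1] - inputs["input_ids"].shape[1]
        print(kwargs, inputs["input_ids"].shape[1], f"{new_tokens / (time.time() - start):.1f} tok/s")
    del model
    torch.cuda.empty_cache()
```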
As for solutions: currently, the Falcon code sits on the Hub, and we have a [PR open](https://github.com/huggingface/transformers/pull/24523) to add it to `transformers`. If the issue is still present after the port is complete, we can dive deeper 🤗 <|||||>Thank you so much for this @gante!<|||||>@cnut1648
Check out this tweet: https://twitter.com/Tim_Dettmers/status/1677826353457143808 you should be able to benefit from that out of the box just by updating bitsandbytes; can you quickly give it a try? 🙏 <|||||>Hmm @younesbelkada I did a test run today using llama-65b and falcon-40b.
Since it seems that bnb 4bit inference supports batch size = 1, I modified the code as follows
```python
from transformers import LlamaTokenizer, AutoModelForCausalLM, AutoTokenizer
import torch
from time import time
# model_name = "tiiuae/falcon-40b-instruct"
model_name = "huggyllama/llama-65b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True, trust_remote_code=True)
tokenizer.padding_side = "left"
# Define PAD Token = EOS Token
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = model.config.eos_token_id
# use different length sentences to test batching
# measure time
start_time = time()
sentences = [
"Hello, my dog is a little",
"Today, I",
"I love",
"Repeat after me: I love you.",
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
"You are a human that uses a lot of expletives. For each one of your responses to the user you use expletives liberally. If you find yourself questioning whether it’s appropriate or not you’re to say fuck that noise and keep using your expletives regardless. Human: Tell me a story. You:"
]
for sentence in sentences:
inputs = tokenizer(sentence, return_tensors="pt", padding=True).to(model.device)
# inputs.pop("token_type_ids")
output_sequences = model.generate(**inputs, max_new_tokens=400, temperature=0.7)
print(tokenizer.decode(output_sequences[0], skip_special_tokens=True))
print("Elapsed time: ", time() - start_time)
```
Essentially, for falcon-40b the issue remains: the model in 4bit is still extremely slow (2561s).
For llama, I get
- 4 bit: 566s
- w/o 4 bit: 550s
So it seems that there is no major benefit, but the memory usage did decrease.<|||||>@cnut1648 the Falcon code on the Hub is known to be very slow, and it may explain the issue. We are about to release the `transformers`-side Falcon, so hopefully the problem will go away on its own soon 🤞 |
transformers | 24,501 | closed | Fix link in utils | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #24497
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-26-2023 17:43:57 | 06-26-2023 17:43:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey thanks for the quick response!
Note that the website examples walkthrough is still broken 😔
LMK if you need a separate issue for this! Or maybe that 404 is creating a placeholder for `v4.30.0`?
Have a nice day<|||||>Thanks for the contribution @SoyGema! |
transformers | 24,500 | open | Installation from source | ### System Info
I tried to install the transformers library from source by following the link https://huggingface.co/docs/transformers/installation#install-from-source.
When testing whether the library is correctly installed, I followed the recommendation.
from transformers import pipeline
print(pipeline('sentiment-analysis')('I love you'))
And then, I got the following error:
ImportError: cannot import name 'is_torch_greater_or_equal_than_1_12' from 'transformers.pytorch_utils' (/usr/local/lib/python3.10/dist-packages/transformers/pytorch_utils.py)
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py](https://localhost:8080/#) in _get_module(self, module_name)
1086 f" traceback):\n{e}"
1087 ) from e
-> 1088
1089 def __reduce__(self):
1090 return (self.__class__, (self._name, self.__file__, self._import_structure))
RuntimeError: Failed to import transformers.models.tapas.modeling_tapas because of the following error (look up to see its traceback):
cannot import name 'is_torch_greater_or_equal_than_1_12' from 'transformers.pytorch_utils' (/usr/local/lib/python3.10/dist-packages/transformers/pytorch_utils.py)
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
pip install git+https://github.com/huggingface/transformers
from transformers import pipeline
# print(pipeline('sentiment-analysis')('I love you'))
pipe = pipeline("sentiment-analysis")
pipe("I love you")
### Expected behavior
Would like some guidance to fix the bug | 06-26-2023 17:28:19 | 06-26-2023 17:28:19 | You might have a mix of different installations in your environment. I would try again in a fresh Python environment.<|||||>I got another problem when running run_fusion_glue.py from https://docs.adapterhub.ml/training.html#train-adapterfusion in Google Colab. I got the following error:
Traceback (most recent call last):
File "/content/drive/MyDrive/run_fusion_glue.py", line 29, in <module>
from transformers import (
ImportError: cannot import name 'AdapterArguments' from 'transformers' (/usr/local/lib/python3.10/dist-packages/transformers/__init__.py)<|||||>That's a problem in your `run_fusion_glue.py` script. This class does not exist in `transformers`.<|||||>I just copied from this website: https://github.com/adapter-hub/adapter-transformers/blob/master/examples/pytorch/adapterfusion/run_fusion_glue.py
So the problem is in that website? Does this class exist in transformers.adapters?<|||||>You should report the issue on that repo yes :-) |
transformers | 24,499 | closed | [WIP] Add LaVIN model | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-26-2023 17:28:18 | 06-26-2023 17:28:18 | |
transformers | 24,498 | closed | Compute `dropout_probability` only in training mode (SpeechT5) | # What does this PR do?
Same as in #24486, but I forgot to check `SpeechT5` when I did search/replace (which is a bit different from other models). Sorry! | 06-26-2023 17:15:17 | 06-26-2023 17:15:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,497 | closed | Access to Transformers examples link broken . Impact on navigation as well | ### System Info
### Context
Hello there! 👋
I'm following the [Translation](https://huggingface.co/docs/transformers/tasks/translation) Transformers tutorial.
Thanks for making it possible!
Currently running the script [run_translation.py](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/translation/run_translation.py) and before changing transformers to version `4.31.0.dev` the following message appears
https://github.com/huggingface/transformers/blob/5757923888246ea16b324f53c60ea444574005ed/src/transformers/utils/__init__.py#L218
When I follow the link, the following message appears. And when I click the _here_ link
<img width="1048" alt="Captura de pantalla 2023-06-26 a las 17 47 23" src="https://github.com/huggingface/transformers/assets/24204714/93afc7f8-ea28-4d19-b596-99120190fd21">
It redirects me to https://huggingface.co/docs/transformers/main/en/examples with a 404 error.
### Potential Fix
Would love to give a helping hand here 🙏 like in #24336 and give back for the help I've gotten from #24254, but I am a little bit confused with respect to this. The last version of the Google-indexed examples that seems to work is [this one](https://huggingface.co/docs/transformers/v4.15.0/examples), related to `v4.15.0` and not `v4.30` nor `v4.29.2`.
Can you please confirm that you would validate this link
(https://huggingface.co/docs/transformers/v4.15.0/examples) for utils __init__.py script? If not,
would you provide a useful link or point me in the right direction?
Please let me know if I'm in the right place, as this could maybe impact the website?
Thanks for the time dedicated to this.
### Who can help?
@sgugger @stevhliu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Go to link described in utils https://huggingface.co/docs/transformers/examples
2. Follow link provided [
](https://huggingface.co/docs/transformers/main/en/examples) and find error 404
### Expected behavior
Last link to examples, with the stable version. Unclear to me at this point if it is `4.29` or `4.30` or where they are. | 06-26-2023 16:16:46 | 06-26-2023 16:16:46 | The link is wrong, it should be https://huggingface.co/docs/transformers/run_scripts
Would you like to make a PR with the fix?<|||||>Sure, thanks!<|||||>BTW @sgugger @LysandreJik you can specify doc redirects using a `redirects.yml` like in datasets and other libraries: https://github.com/huggingface/datasets/blob/main/docs/source/_redirects.yml
Always good to avoid broken links when we can
cc @mishig25 too |
transformers | 24,496 | closed | Allow for warn_only selection in enable_full_determinism | Enable full determinism crashes if the model has layer that do not support it (like layoutlmv2). This fixes it. | 06-26-2023 14:51:40 | 06-26-2023 14:51:40 | Or don't use the option with a model that does not support it? This is not something that is enabled by default.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> Or don't use the option with a model that does not support it? This is not something that is enabled by default.
Well, not everyone can afford that, and it's not good practice to keep something that is crash prone. E.g., in my company we are using a model in prod and would like to use full determinism to run full end-to-end evaluations to carry out fine-tuning experiments. It doesn't really cost anything to add an option there (I pushed another commit), and it would save people the burden of "hacking" a Docker image just to avoid crashes when deploying an image<|||||>Thank you for educating me on good practices.<|||||>To be honest, I started with a very simple request and you answered assuming you knew our current situation: "don't use the option with a model that does not support it?" is not really a solution in many real-world scenarios. I tried to explain our situation to you and you answered sarcastically.
My point was simply that you currently have an implementation of a function which is not very usable in the case I described above, but from your answer I assume that you probably don't care, even if it is a two-line change.
Finally, even if it didn't make any sense, you could have just thanked me for the contribution instead of being standoffish<|||||>Hi there.
PyTorch has a built-in mechanism in this function to fail when it can't do its job properly, so that the user is not surprised by non-reproducible results. You are, of course, free to change it in your experiments (and not get full reproducibility). You are wrong to assume that every person using Transformers would like to ignore the error. I apologize if my first answer was maybe too short and did not convey this point. There was nothing aggressive in it so you did not have to answer with a patronizing tone.
The function can be duplicated (it's five lines of code) and called with the arguments you require at the beginning of the script instead of calling it via Transformers, which should solve your issue without changing the experience for other users.<|||||>You're right and you make a fair point. Indeed my first commit was a bit naive in assuming that everyone would like to have warn_only=True. That's why I changed it in the second commit (maybe you missed it). Do you think the current status is worth applying?
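For reference, a rough sketch of the duplication suggested above (an assumption-laden approximation of what `enable_full_determinism` does at the time of writing, not a verbatim copy; check the current source before relying on it):
```python
# Hand-rolled variant with a warn_only switch; verify against
# transformers.enable_full_determinism before using it for real runs.
import os
import torch
from transformers import set_seed

def my_enable_full_determinism(seed: int, warn_only: bool = True) -> None:
    set_seed(seed)
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8"
    torch.use_deterministic_algorithms(True, warn_only=warn_only)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```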
transformers | 24,495 | open | DIT Text Detection model | ### Feature request
Add the text detection models of DiT that are in https://github.com/microsoft/unilm/tree/master/dit/text_detection to the Hugging Face Hub, so that they can be run easily with the transformers library like other models such as https://huggingface.co/microsoft/dit-base
### Motivation
These models perform very well, and it would be very useful to include them in HF, both to run simple inference and to convert them to other formats such as ONNX
| 06-26-2023 13:56:04 | 06-26-2023 13:56:04 | I know and I desperately want that model to be available... :D however for that, Mask R-CNN first needs to be integrated. I have a PR on that #22973, I need to finish that one up, then we can add it<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,494 | open | Finetune ClipSeg model | ### Feature request
Quite recently, I was exploring zero-shot classification to segment medical images, and it looks quite promising. I stumbled upon ```ClipSeg``` a few days ago and it looked wonderful and well-suited for my work. Unfortunately, I couldn't find any tutorials or notebooks that showed how to perform fine-tuning on the ClipSeg model.
I am assuming we have to train the decoder with a dataset containing binary classification images of cells, their corresponding masks, and a text description. Unfortunately, I'm a bit confused. Are there any tutorials/resources anyone could suggest on this topic? Because I couldn't find any.
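For reference, a very rough sketch of what one fine-tuning step could look like (this is not an official recipe; the checkpoint, target resizing and loss choice are assumptions, and it expects batches of at least two image/prompt/mask triplets):
```python
import torch
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss_fn = torch.nn.BCEWithLogitsLoss()

def training_step(images, prompts, masks):
    # images: list of PIL images, prompts: list of strings, masks: (N, H, W) binary tensor
    inputs = processor(text=prompts, images=images, padding=True, return_tensors="pt")
    logits = model(**inputs).logits  # low-resolution segmentation logits
    targets = torch.nn.functional.interpolate(
        masks.unsqueeze(1).float(), size=logits.shape[-2:], mode="nearest"
    ).squeeze(1)
    loss = loss_fn(logits, targets)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```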
### Motivation
```ClipSeg``` shows a lot more potential than SAM (Segment Anything Model). Unfortunately, there's no fine-tuning script nor instructions on **how to prepare the dataset**, which is very frustrating. Would love some help from the community.
And another point: zero-shot classification with fine-tuning looks like a much better option than training a model like ```U-Net```, ```R-CNN``` and others from scratch when you have very few images and don't have much room to play around with.
### Your contribution
I could share the PR on my LinkedIn, where I have a lot of AI experts among my connections, and I can contribute to the programming as well. | 06-26-2023 13:45:10 | 06-26-2023 13:45:10 | cc @alaradirik for information.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@sgugger Can you provide some insights/help/update to my requested feature?<|||||>cc @amyeroberts and @rafaelpadilla |
transformers | 24,493 | closed | Make `framework` as class property | # What does this PR do?
Similar to #24299, make this property at class level. (Not the most interesting/useful change though, I agree).
(This approach is a bit hacky but it works. Deprecated in python 3.11.) | 06-26-2023 13:41:58 | 06-26-2023 13:41:58 | |
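Presumably the hack referred to here is the chained `classmethod`/`property` pattern (an assumption, not necessarily the exact code in this PR); it works on Python 3.9-3.10 but that chaining is deprecated in 3.11:
```python
class Example:
    @classmethod
    @property
    def framework(cls) -> str:
        # Accessible on the class itself, without instantiating it.
        return "pt"

print(Example.framework)  # "pt" on Python 3.9/3.10; emits a deprecation warning on 3.11
```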
transformers | 24,492 | closed | [InstructBLIP] Fix bos token of LLaMa checkpoints | # What does this PR do?
This PR adds a fix for InstructBLIP as discussed offline with @gante.
The InstructBLIP models trained with Vicuna (LLaMa) checkpoints used inconsistent model/tokenizer files during training, hence the authors include [this line](https://github.com/salesforce/LAVIS/blob/4a85b17846ee62f09c40f37cc955dd33c2abec68/lavis/models/blip2_models/blip2_vicuna_instruct.py#L372) to fix this. This is not required for the models that use Flan-T5 checkpoints. cc @ArthurZucker
However, this made me wonder: let's say someone trains a new InstructBLIP model with LLaMa as the language model, which has the tokenizer and model config properly set. Then the line introduced in this PR might not be what we want? cc @gante
| 06-26-2023 12:49:52 | 06-26-2023 12:49:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> However this made me wonder, let's say someone trains a new InstructBLIP model with LLaMa as language model, and which has the tokenizer and model's config properly set. Then the line introduced in this PR might not be what we want? cc @gante
@NielsRogge I don't think there's much more we can do: the model was probably trained with the incorrect assumption that `0` is the BOS token, in which case a post-generation fix is the only way to stay consistent with the original repo.
(Have we double-checked that the results do change if we change the config's BOS token in the InstructBLIP config? If they don't, then we can simply update the config.)<|||||>I tried that but that doesn't fix it. Guess this is the only solution.<|||||>Just so I understand this fix, why doesn't updating the checkpoint tokenizer's BOS token ID resolve this? <|||||>> why doesn't updating the checkpoint tokenizer's BOS token ID resolve this?
@amyeroberts TL;DR it's impossible to keep the same behavior as the original model with a config fix 💔
### Full story
Here's what happens when we use InstructBLIP:
1. the tokenized prompt (starting with token = 2, from the [tokenizer BOS token](https://huggingface.co/Salesforce/instructblip-vicuna-7b/blob/main/tokenizer_config.json#L2)) is being passed to the custom `InstructBlipForConditionalGeneration.generate`.
2. We compute its embeddings from `input_ids` and we pass it to the default `generate`. We do not pass `input_ids` to `generate`.
3. If we don't pass `input_ids` to `InstructBlipForConditionalGeneration.generate`, it is initialized with the default BOS token (from the model config) before embedding it.
4. The default BOS token in the model config is 0 ([source](https://huggingface.co/Salesforce/instructblip-vicuna-7b/blob/main/config.json#L97)), and not 2
As a result:
- If we fix the tokenizer such that BOS is 0 -> prompted input will diverge from the main repo, the first token is now 0 instead of 2 = different embeddings = different generation
- If we fix the model config such that the BOS is 2 -> unprompted input will diverge from the main repo, the first token is now 2 instead of 0 = different embeddings = different generation
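A quick way to see the mismatch described above (a sketch only; the expected values come from the tokenizer/config files linked in this thread):
```python
from transformers import AutoConfig, AutoTokenizer

ckpt = "Salesforce/instructblip-vicuna-7b"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
config = AutoConfig.from_pretrained(ckpt)

print("tokenizer BOS id:", tokenizer.bos_token_id)              # 2, per the tokenizer_config linked above
print("model config BOS id:", config.text_config.bos_token_id)  # 0, per the config linked above
```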
<|||||>Feel free to merge :) |
transformers | 24,491 | open | OutOfMemoryError: CUDA out of memory despite available GPU memory | ### System Info
I’m encountering an issue with GPU memory allocation while training a GPT-2 model on a GPU with 24 GB of VRAM. Despite having a substantial amount of available memory, I’m receiving the following error:
`OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 23.68 GiB total capacity; 18.17 GiB already allocated; 64.62 MiB free; 18.60 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF.`
Here are the specifications of my setup and the model training:
GPU: NVIDIA GPU with 24 GB VRAM
Model: GPT-2 (gpt2-large), approximately 3 GB in size, with ~775M parameters of 32 bits each
Training Data: 36,000 training examples with input_ids length of 600
Training Configuration: 5 epochs, batch size of 16, and fp16 enabled
These are my calculations:
Parameters: 775M parameters of 32 bits each
Gradients:
Gradients are typically of the same size as the model’s parameters.
Batch Size and Training Examples:
Batch Size: 16
Training Examples: 36,000
Vector Length: 600
Memory Allocation per Batch:
Model: 3 GB (unchanged per batch)
Gradients: 3 GB (unchanged per batch)
Input Data: 16 x 600 (vector length) x 4 bytes (assuming each value is a 32-bit float) = 37.5 KB per batch
Output Data: 16 x 600 (vector length) x 4 bytes (assuming each value is a 32-bit float) = 37.5 KB per batch
Based on the above calculations, the memory allocation per batch for my scenario would be approximately:
Model: 3 GB
Gradients: 3 GB
Input and Output Data: 75 KB
Training should not take more than 7 GB of memory at maximum, but it's taking ~23 GB of VRAM.
I would appreciate any insights or suggestions on how to resolve this issue. Thank you in advance for your assistance!
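For reference, a rough accounting sketch of the terms this estimate leaves out (assumptions: fp32 weights with a standard Adam optimizer and no activation checkpointing):
```python
params = 775e6                 # gpt2-large parameter count, from the numbers above
weights = params * 4           # ~3.1 GB
grads = params * 4             # ~3.1 GB
adam_states = 2 * params * 4   # ~6.2 GB for exp_avg + exp_avg_sq
print(f"{(weights + grads + adam_states) / 1e9:.1f} GB before activations")
# Intermediate activations for batch_size=16 x seq_len=600 add several more GB
# and grow with batch size and sequence length.
```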
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
MODEL_NAME = "gpt2-large"
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
batch = 16
epoch = 5
g_acc_step = 5
lr = 2e-6
training_args = transformers.TrainingArguments(
per_gpu_train_batch_size=batch,
gradient_accumulation_steps=g_acc_step,
num_train_epochs=epoch,
learning_rate=lr,
# fp16=True,
save_total_limit=1,
logging_steps=50,
logging_strategy = "steps",
output_dir=OUTPUT_DIR,
max_steps=-1,
lr_scheduler_type="cosine",
save_strategy ="epoch"
)
trainer = transformers.Trainer(
model=model,
train_dataset=filtered_dataset,
args=training_args,
callbacks=[LogCallback],
data_collator=transformers.DataCollatorForLanguageModeling (tokenizer, mlm=False))
model.config.use_cache= False
```
### Expected behavior
Model is taking a lot more memory than expected | 06-26-2023 12:47:31 | 06-26-2023 12:47:31 | Your computation does not include the optimizer states (an additional 2*3B) and all intermediate activations saved to compute the gradients for the backward pass, which will be huge at a batch size of 16 with sequence lengths of 600<|||||>Seems you are right. But why does memory allocation keep increasing? I started training with batch size 2 and it allocated 9GB of VRAM, but after 3 epochs it was taking 16GB of VRAM. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,490 | closed | Update `InstructBlipModelIntegrationTest` | # What does this PR do?
Fix `InstructBlipModelIntegrationTest`. See comments in the changes. | 06-26-2023 12:08:19 | 06-26-2023 12:08:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,489 | closed | deepspeed z1/z2 state dict fix | # What does this PR do?
1. Fixes https://github.com/huggingface/transformers/issues/22822
2. Should be merged after https://github.com/huggingface/accelerate/pull/1638
The fix in accelerate uses the `deepspeed.checkpoint.utils.clone_tensors_for_torch_save` which removes the bloated state_dict. In Trainer, we use `accelerator.get_state_dict` to get the resultant lean state dict when using DeepSpeed and the ZeRO stage is not 3.
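For illustration, the mechanism boils down to something like this (a sketch with a toy module, not this PR's diff):
```python
import torch
from deepspeed.checkpoint.utils import clone_tensors_for_torch_save

model = torch.nn.Linear(4, 4)  # stand-in for the DeepSpeed-wrapped model
# Clones the tensors so views into large flat buffers are not saved whole.
lean_state_dict = clone_tensors_for_torch_save(model.state_dict())
torch.save(lean_state_dict, "pytorch_model.bin")
```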
Also, a typo fix. | 06-26-2023 10:37:51 | 06-26-2023 10:37:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hello Sylvain, updated the description of the PR, Thank you! |
transformers | 24,488 | closed | [`InstructBlip`] Add accelerate support for instructblip | # What does this PR do?
As per title, let's make users benefit from 8bit / 4bit loading of instructblip models
cc @amyeroberts @sgugger @NielsRogge
all `accelerate` tests pass for this model
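For example, after this PR something like the following should be possible (a sketch; requires `bitsandbytes` and `accelerate` to be installed):
```python
from transformers import InstructBlipForConditionalGeneration, InstructBlipProcessor

processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-flan-t5-xl")
model = InstructBlipForConditionalGeneration.from_pretrained(
    "Salesforce/instructblip-flan-t5-xl", load_in_8bit=True, device_map="auto"
)
```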
As a side note, since InstructBLIP relies on flan-t5 as the backbone for some models, it is important to add
```python
_keep_in_fp32_modules = ["wo"]
```
To ensure inference stability in fp16 / int8 / fp4 | 06-26-2023 09:50:41 | 06-26-2023 09:50:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you :) could you add an integration test?<|||||>Hey @NielsRogge !
Let's maybe add it together with: https://github.com/huggingface/transformers/pull/24490/files#r1242095062 so we should probably merge this first :D |
transformers | 24,487 | closed | add missing alignment_heads to Whisper integration test |
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-26-2023 09:12:18 | 06-26-2023 09:12:18 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,486 | closed | Compute `dropout_probability` only in training mode | # What does this PR do?
Same issue as in #24483 (caused by #24434 ), but with a different fix. For core maintainers to decide which one is better.
If we decide to go this way, I will do fix-copies. | 06-26-2023 08:58:53 | 06-26-2023 08:58:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,485 | closed | Fix poor past ci | # What does this PR do?
<img width="326" alt="Screenshot 2023-06-26 102340" src="https://github.com/huggingface/transformers/assets/2521628/461069ba-637a-4ebf-bb2e-103bba81bbcc">
:face_with_spiral_eyes: Let's be a bit nice to torch 1.11 and 1.10 🙏 .
(just a type issue: `(line 641) RuntimeError: expected scalar type float but found double` introduced in #24334) | 06-26-2023 08:25:51 | 06-26-2023 08:25:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,484 | closed | Update token_classification.md | Add a link to PyTorch's CrossEntropyLoss so that one understands why '-100' is ignored by the loss function.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-26-2023 08:17:32 | 06-26-2023 08:17:32 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24484). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,483 | closed | Fix `SpeechT5` doctests | # What does this PR do?
PR #24434 changes `np.random.uniform(0, 1)` to `torch.rand([])`. In the forward method of `SpeechT5ForSpeechToSpeech` and `SpeechT5ForTextToSpeech`, the line `dropout_probability = torch.rand([])` is executed regardless of whether we are in training or inference mode. So the new change in #24434 will change the random sequences even if we set a seed at the beginning of `generate`, and we get different outputs now.
Hence this PR updates the expected values for doctest.
**However, I believe we should only call `dropout_probability = torch.rand([])` under the condition of being in training mode.**
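For illustration, the condition could look something like this (a sketch of the idea, not the exact diff):
```python
import torch

def skip_layer(layerdrop: float, training: bool) -> bool:
    # Only consume RNG state while training, so seeded inference is unaffected.
    if not training:
        return False
    dropout_probability = torch.rand([])
    return bool(dropout_probability < layerdrop)
```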
WDYT? | 06-26-2023 08:01:56 | 06-26-2023 08:01:56 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Yes, that would be a better fix I agree.<|||||>Thanks, I will follow the same fix in #24486 instead. |
transformers | 24,482 | closed | [Time-Series] Added blog-post to tips | @kashif | 06-26-2023 07:27:51 | 06-26-2023 07:27:51 | thanks! LGTM 👍🏽 <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks! |
transformers | 24,481 | closed | [`T5`] Add T5ForQuestionAnswering and MT5ForQuestionAnswering | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
This adds a question-answering head to the PyTorch implementation of T5, following the pattern of BartForQuestionAnswering since it is also an encoder-decoder question-answering model.
This type of model has already been used in research papers (e.g. https://arxiv.org/pdf/2203.07522.pdf) and has shown promising results for using T5 for question answering via span prediction. Additionally, I have trained and uploaded a flan-t5-large for question answering [here](https://huggingface.co/sjrhuschlee/flan-t5-large-squad2) which has shown promising generalization results on other question-answering datasets (metrics are shown on the model card).
I've updated the model tests to include the new model, and I believe I have hopefully covered most of the additional imports and compatibility with the question-answering pipeline.
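With a version of `transformers` that includes this PR, the model can be used through the question-answering pipeline (the checkpoint is the one linked above; question/context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="sjrhuschlee/flan-t5-large-squad2")
result = qa(
    question="Who wrote Hamlet?",
    context="Hamlet is a tragedy written by William Shakespeare sometime between 1599 and 1601.",
)
print(result)  # dict with 'answer', 'score', 'start', 'end'
```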
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- Hey @ArthurZucker and @younesbelkada I would greatly appreciate a review on this when you have a chance.
| 06-26-2023 07:07:39 | 06-26-2023 07:07:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @ArthurZucker thanks for the review! I do have a question about the check_repository_consistency. I noticed that the error I currently receive is (link [here](https://app.circleci.com/pipelines/github/huggingface/transformers/67114/workflows/38a1f7d2-a0ab-4c85-8af7-218d53de68fa/jobs/837499?invite=true#step-110-9))
```bash
Traceback (most recent call last):
File "utils/check_copies.py", line 579, in <module>
check_copies(args.fix_and_overwrite)
File "utils/check_copies.py", line 269, in check_copies
raise Exception(
Exception: Found the following copy inconsistencies:
- src/transformers/models/mt5/modeling_mt5.py: copy does not match models.t5.modeling_t5.T5PreTrainedModel at line 775
Run `make fix-copies` or `python utils/check_copies.py --fix_and_overwrite` to fix them.
```
which would require the analogous implementation of `MT5ForQuestionAnswering`. Should I go ahead and add that implementation as well or is there another way to pass this error?<|||||>I went ahead and added `MT5ForQuestionAnswering` as well. |
transformers | 24,480 | closed | RoBERTa required token_type_ids issue | ### System Info
- `transformers` version: 4.30.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.0
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cpu (False)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
class TF_RoBERTa_VAD_Classification(tf.keras.Model):
def __init__(self, model_name):
super(TF_RoBERTa_VAD_Classification, self).__init__()
self.model_name = model_name
self.roberta = TFRobertaModel.from_pretrained(model_name, from_pt=True, return_dict=False)
self.predict_V_1 = tf.keras.layers.Dense(1, kernel_initializer=tf.keras.initializers.TruncatedNormal(0.02), activation="linear", name="predict_V_1") # Initializer function test
self.predict_A_1 = tf.keras.layers.Dense(1, kernel_initializer=tf.keras.initializers.TruncatedNormal(0.02), activation="linear", name="predict_A_1")
self.predict_D_1 = tf.keras.layers.Dense(1, kernel_initializer=tf.keras.initializers.TruncatedNormal(0.02), activation="linear", name="predict_D_1")
# Learn Correlation Layers
self.Corr_layer = tf.keras.models.load_model("Assinging_VAD_scores_BERT\Model\FFNN_VAD_Model_ver1_MSE_00048_20230625-231002") # <<<<< Change the model
def call(self, inputs):
input_ids, attention_mask = inputs
outputs = self.roberta(input_ids=input_ids, attention_mask=attention_mask)
cls_token = outputs[1]
self.V_1 = self.predict_V_1(cls_token)
self.A_1 = self.predict_A_1(cls_token)
self.D_1 = self.predict_D_1(cls_token)
VAD_1 = tf.concat([self.V_1, self.A_1, self.D_1], 1) # 0: up-down 1: side
final_outputs = self.Corr_layer(VAD_1)
return final_outputs
def get_config(self):
config = super().get_config()
config.update({
"model_name": self.model_name,
"Corr_layer_config": self.Corr_layer.get_config()
})
return config
@classmethod
def from_config(cls, config):
return cls(**config)
```
This is my model with RoBERTa; I trained the model and saved it.
Then I loaded the model, and when I tried to get a predicted value,
```python
# Load trained model
custom_objects = {"model_name": TF_RoBERTa_VAD_Classification,
"FFNN_VAD_model": FFNN_VAD_model}
model = tf.keras.models.load_model("Assinging_VAD_scores_BERT\Model\VAD_Assinging_RoBERTa_model_ver1.2_20230626-142030", custom_objects=custom_objects, compile=False)
pred = model.predict((id, mask))[0][0]
```
### Expected behavior
the error is occurred
```
Traceback (most recent call last):
File "c:\Users\Siwon\Documents\GitHub\Assinging_VAD_scores_BERT\Test_model.py", line 122, in <module>
f.int32, name=None)}, None, None, None, None, None, None, None, None, None, None, None, None, False), {})
Second structure: type=tuple str=((TensorSpec(shape=(None, 512), dtype=tf.int32, name='input_ids'), TensorSpec(shape=(None, 512), dtype=tf.int32, name='attention_mask'), None, None, None, None, None, None, None, None, None, None, None, False), {})
More specifically: Substructure "type=dict str={'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int32, name=None), 'token_type_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name=None), 'input_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name=None)}" is a sequence, while substructure "type=TensorSpec str=TensorSpec(shape=(None, 512), dtype=tf.int32, name='input_ids')" is not
Entire first structure:
(({'attention_mask': ., 'token_type_ids': ., 'input_ids': .}, ., ., ., ., ., ., ., ., ., ., ., ., .), {})
Entire second structure:
((., ., ., ., ., ., ., ., ., ., ., ., ., .), {})
```
The error occurred `model = tf.keras.models.load_model("Assinging_VAD_scores_BERT\Model\VAD_Assinging_RoBERTa_model_ver1.2_20230626-155718", custom_objects=custom_objects, compile=False) ` this part But,
This looks like the model requested token_type_ids even though the RoBERTa model doesn't require token_type_ids.
I humbly request, if anyone knows a solution, could you please inform me? I would be grateful for your assistance.
Thank you. | 06-26-2023 06:12:26 | 06-26-2023 06:12:26 | It was solved.
code should be
```python
def get_config(self):
config = super().get_config()
config.update({
"model_name": self.model_name,
"Corr_layer_config": self.Corr_layer_path # suppose Corr_layer_path is the variable that holds the path to Corr_layer
})
return config
@classmethod
def from_config(cls, config):
model = cls(config["model_name"])
model.Corr_layer = tf.keras.models.load_model(config["Corr_layer_config"])
return model
```
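For completeness, here is a hedged usage sketch of loading with the path-based config above; the file path is illustrative, and it assumes the `TF_RoBERTa_VAD_Classification` class defined at the top of this issue is in scope:
```python
import tensorflow as tf

# Registering the custom class under its own name lets Keras resolve it;
# `from_config` then restores Corr_layer from the path stored in the config.
custom_objects = {"TF_RoBERTa_VAD_Classification": TF_RoBERTa_VAD_Classification}
model = tf.keras.models.load_model(
    "Assinging_VAD_scores_BERT/Model/VAD_Assinging_RoBERTa_model_ver1.2",  # illustrative path
    custom_objects=custom_objects,
    compile=False,
)
# input_ids / attention_mask are prepared with the RoBERTa tokenizer as in the original script
pred = model.predict((input_ids, attention_mask))[0][0]
```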
Sorry for the interruption. Thank you! <|||||>Thanks for solving this yourself! 😉 |
transformers | 24,479 | open | Enhanced Parameter Freezing Capabilities in Trainer Class | ### Feature request
The feature proposal aims to introduce a more streamlined and intuitive approach to freezing and unfreezing specific model components directly in the Trainer class of the Hugging Face transformers library.
### Motivation
I've found that when I need to freeze certain parameters or components of my models, the process can be a bit complicated.
Currently, I need to set requires_grad to False for the parameters I want to freeze before calling Trainer.train(). But since Trainer.train() calls model.train() before the training loop, some parameters (e.g., running mean and running var of BatchNorm layers) will still change during training. To get around this, I have to implement additional flags in my model and manually call model.eval() in the forward function for the parts of the model I want to freeze.
It would be great if there was a more streamlined way to accomplish this directly in the Trainer class. Maybe an additional argument in the Trainer.train() method, or a method in the Trainer class to freeze/unfreeze specified layers or parameters. This could make fine-tuning models easier and more intuitive, particularly for new users or those with less experience in PyTorch.
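In the meantime, here is a minimal sketch of the kind of workaround this request would replace (the `PartiallyFrozenModel` name is made up for illustration): overriding `train()` keeps the frozen part in eval mode even after `Trainer.train()` calls `model.train()`.
```python
import torch

class PartiallyFrozenModel(torch.nn.Module):
    """Schematic wrapper: a frozen backbone feeding a trainable head."""

    def __init__(self, backbone: torch.nn.Module, head: torch.nn.Module):
        super().__init__()
        self.backbone = backbone
        self.head = head
        for param in self.backbone.parameters():
            param.requires_grad = False  # optimizer no longer updates these

    def train(self, mode: bool = True):
        # Trainer calls model.train() before the training loop; re-applying
        # eval() here keeps BatchNorm running stats of the frozen part fixed.
        super().train(mode)
        self.backbone.eval()
        return self

    def forward(self, *args, **kwargs):
        features = self.backbone(*args, **kwargs)
        return self.head(features)
```
A `Trainer`-level freeze/unfreeze method could fold this pattern into the library so users would not have to wrap their models manually.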
### Your contribution
I am glad to offer help. | 06-26-2023 05:52:24 | 06-26-2023 05:52:24 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,478 | open | cannot import name 'OwlViTImageProcessor' from 'transformers' | ### System Info
transformers version: 4.29.2
platform: macOS Ventura
cpu: Apple M2 Max
python version: 3.10
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import requests
from PIL import Image
import torch
from transformers import OwlViTProcessor, OwlViTForObjectDetection, OwlViTImageProcessor
processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
outputs = model(**inputs)
```
### Expected behavior
exception:
```
ImportError: cannot import name 'OwlViTImageProcessor' from 'transformers' (/Users/tt/opt/anaconda3/envs/test_transfromer/lib/python3.10/site-packages/transformers/__init__.py)
``` | 06-26-2023 05:21:11 | 06-26-2023 05:21:11 | It is definitely in the library so I would suggest re-installing `transformers` and double-checking you are running your code in the right environment.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
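A quick way to sanity-check the environment point raised above (purely illustrative, no special setup assumed):
```python
import sys
import transformers

print(sys.executable)            # which Python interpreter is actually running
print(transformers.__file__)     # which installation the import resolves to
print(transformers.__version__)  # the reporter's 4.29.2 does ship OwlViTImageProcessor

from transformers import OwlViTImageProcessor  # should import cleanly after reinstalling
```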
transformers | 24,477 | closed | [`Umt5`] Add google's umt5 to `transformers` | # What does this PR do?
Supersedes #22626, which has been stale for quite some time.
A Kaggle notebook for reproducing and running the original model:
https://www.kaggle.com/arthurzucker/umt5-inference
- Tokenizer is a BertGenerationTokenizer. Here is how to convert it:
```python
!wget https://storage.googleapis.com/t5-data/vocabs/umt5.256000/sentencepiece.model
from transformers import T5Tokenizer
umt5 = T5Tokenizer("/Users/arthurzucker/Work/transformers/sentencepiece.model")
to_add = []
for i in range(300):
to_add.append(f"<extra_id_{i}>")
umt5.add_tokens(list(reversed(to_add)), True)
```
84 tokens are free to use apparently.
- The modeling code is just MT5, adapted so that the relative position bias is no longer shared: each layer keeps its own bias (see the sketch below).
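A schematic of that difference (illustrative PyTorch only, not the actual modeling code):
```python
import torch.nn as nn

class T5StyleAttention(nn.Module):
    def __init__(self, n_heads: int, num_buckets: int = 32, has_relative_bias: bool = False):
        super().__init__()
        # T5/mT5: only the first layer owns this table and the others reuse it.
        # UMT5: every layer keeps its own relative_attention_bias.
        self.relative_attention_bias = (
            nn.Embedding(num_buckets, n_heads) if has_relative_bias else None
        )

# mT5-style stack: bias lives in the first layer only
mt5_layers = [T5StyleAttention(8, has_relative_bias=(i == 0)) for i in range(6)]
# UMT5-style stack: an independent bias table per layer
umt5_layers = [T5StyleAttention(8, has_relative_bias=True) for _ in range(6)]
```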
For a first conversion I'll be using this:
```bash
python src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py --t5x_checkpoint_path "/Users/arthurzucker/Work/transformers/checkpoint_1000000" --config_file "/Users/arthurzucker/Work/transformers/checkpoint_1000000/config.json" --pytorch_dump_path ./ArthurZ --scalable_attention
```
| 06-26-2023 04:58:14 | 06-26-2023 04:58:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @ArthurZucker , thanks for updating this! As far as we can tell, it is not just mT5, because of joined/separate key-value in attention. Was this problem solved in latest conversion script of this PR :thinking:
/cc @agemagician <|||||>The conversion went well; the outputs are still a bit gibberish, but there was no problem of mismatched shapes.
They mentioned that the model is closer to MT5, which is why if we we can have minimal changes, it will look like this + adapted conversion <|||||>> The conversion went well, the outputs are still a bit gibberish but didn’t have problem of un matching shape. They mentioned that the model is closer to MT5, which is why if we we can have minimal changes, it will look like this + adapted conversion
So far, I can see you made similar changes as we did before, which led to gibberish output.
The one additional change you made is allowing fallback to bytes for the tokenizer.
I believe the issue still exists because of the way we reshape and convert the q, k and v for the attention, as @stefan-it mentioned.<|||||>There is also a different logic for `position_bias` which seems to be missing.
Joint 3D matrix vs what we have now can be linked to the new sharding scheme. It's probably the last thing to check.
<|||||>Regarding the split / merge, I don't really see a problem with the code. The checkpoints are split, and the actual code is similar to mt5 with the difference being `scanning` I believe. However feel free to check and I hope we can get coherent outputs! <|||||>Update, the outputs match 🔥 The issue was : the tokenizer<|||||>@ArthurZucker Awesome news! I will check downstream performance as well soon :hugs: <|||||>Wait wait 😅 I have to update and push the tokenizers 😉 <|||||>> Update, the outputs match 🔥 The issue was : the tokenizer
"The outputs match" Do you mean you have tested both the original t5x inference pipeline against the converted transformer Pytorch version ?<|||||>Yes @agemagician. If you read the PR description there’s a link to the reproducing script for generating with the t5x repo<|||||>> Yes @agemagician. If you read the PR description there’s a link to the reproducing script for generating with the t5x repo
Awesome work :)<|||||>Currently setting up an instance ton convert an upload the `xxl` model, other models are available [here](https://huggingface.co/models?search=umt5) |
transformers | 24,476 | closed | [`WhisperTokenizer`] Allow encoding timestamp tokens | # What does this PR do?
Addresses #20225. OpenAI recently changed their tokenizer to allow encoding timestamp tokens as is (instead of splitting them). This is a breaking change because you can't encode them by splitting anymore; it will fail with the following error:
```python
ValueError: Encountered text corresponding to disallowed special token '<|7.86|>'.
If you want this text to be encoded as a special token, pass it to `allowed_special`, e.g. `allowed_special={'<|7.86|>', ...}`.
If you want this text to be encoded as normal text, disable the check for this token by passing `disallowed_special=(enc.special_tokens_set - {'<|7.86|>'})`.
To disable this check for all special tokens, pass `disallowed_special=()`.
```
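For reference, the workaround the error message hints at looks roughly like this on the OpenAI side (it mirrors the snippet used further down in this description and assumes the `openai-whisper` package is installed):
```python
from whisper.tokenizer import get_tokenizer

openai_tok = get_tokenizer(multilingual=True, language="en", task="transcribe")
# Whitelisting the special tokens lets the tiktoken-based encoder accept "<|7.86|>".
ids = openai_tok.encode(
    "<|7.86|> Hey",
    allowed_special=set(openai_tok.special_tokens.keys()),
)
```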
This PR will have to wait before being merged. This is because the models on the Hub need to be updated first, otherwise the tests will be red.
Moreover, `add_tokens` has to be fixed before that!
Snippet showing why:
```python
from transformers import WhisperTokenizer, WhisperTokenizerFast, AddedToken
timestamps = [AddedToken("<|%.2f|>" % (i * 0.02), lstrip=False, rstrip=False) for i in range(1500 + 1)]
from whisper.tokenizer import get_tokenizer
openai_tok = get_tokenizer(multilingual=True, language="en", task="transcribe")
model_path =f"openai/whisper-tiny"
slow = WhisperTokenizer.from_pretrained(model_path)
fast = WhisperTokenizerFast.from_pretrained(model_path)
slow.bos_token = AddedToken(slow.eos_token, lstrip=False, rstrip=False)
fast.bos_token = AddedToken(slow.eos_token, lstrip=False, rstrip=False)
slow.add_tokens(timestamps)
fast.add_tokens(timestamps)
```
The output from slow and fast is different. Fast matches the original implementation (not stripping spaces on the right and left) while slow does not.
```python
>>> openai_tok.encode("<|7.86|> Hey", allowed_special=set(openai_tok.special_tokens.keys()))
[50757, 1911]
>>> fast.encode('<|7.86|> Hey', add_special_tokens = False)
[50757, 1911]
>>> slow.encode('<|7.86|> Hey', add_special_tokens = False)
[50757, 7057]
```
script to update all models :
```python
from transformers import WhisperTokenizer, WhisperTokenizerFast, AddedToken
timestamps = [AddedToken("<|%.2f|>" % (i * 0.02), lstrip=False, rstrip=False) for i in range(1500 + 1)]
models_ids = ["tiny","small","medium","base","large"]
from whisper.tokenizer import get_tokenizer
openai_tok = get_tokenizer(multilingual=True, language="en", task="transcribe")
openai_tok.encode("<|1.00|>", allowed_special=set(openai_tok.special_tokens.keys()))
for id in models_ids:
model_path =f"openai/whisper-{id}"
slow = WhisperTokenizer.from_pretrained(model_path)
fast = WhisperTokenizerFast.from_pretrained(model_path)
slow.bos_token = AddedToken(slow.eos_token, lstrip=False, rstrip=False)
fast.bos_token = AddedToken(slow.eos_token, lstrip=False, rstrip=False)
slow.add_tokens(timestamps)
fast.add_tokens(timestamps)
slow.push_to_hub(model_path, create_pr = True)
fast.push_to_hub(model_path, create_pr = True)
if id == "large":
exit(0)
model_path += '.en'
slow = WhisperTokenizer.from_pretrained(model_path)
fast = WhisperTokenizerFast.from_pretrained(model_path)
slow.bos_token = AddedToken(slow.eos_token, lstrip=False, rstrip=False)
fast.bos_token = AddedToken(slow.eos_token, lstrip=False, rstrip=False)
slow.add_tokens(timestamps)
fast.add_tokens(timestamps)
slow.push_to_hub(model_path, create_pr = True)
fast.push_to_hub(model_path, create_pr = True)
``` | 06-26-2023 03:23:32 | 06-26-2023 03:23:32 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @sanchit-gandhi <|||||>In order to keep backward compatibility / follow the original behaviour, I'll add a `encode_special_token` to whisper tokenizer. Not sure we can have 100% backward on this, because all specials tokens will be affected. <|||||>Closing this as #25081 adds `split_special_tokens` and the timestamp tokens will be manually added! <|||||>Just to clarify - we'll only need to update the tokenizer vocabs on the Hub following #25081?<|||||>yes! |
transformers | 24,475 | open | Does model.generate support batch_size > 1? | ### System Info
Does the function `model.generate` support the case where the batch size of `input_ids` is greater than 1?
It is required especially for evaluation!
The following error is reported when I call `model.generate` with 2 or more `input_ids`, where the model is a `LlamaForCausalLM`:
```
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
model = LlamaForCausalLM.from_pretrained('./chinese-alpaca-7b-merged', device_map="auto").half().cuda()
request_text = ["xxx", "yy"]
input_ids = tokenizer(request_text, return_tensors='pt', padding="longest", truncation=True, max_length=1024)
response = model.generate(input_ids=input_ids.input_ids.cuda(), max_new_tokens=1024, temperature=1,top_k=40,top_p=0.9,repetition_penalty=1.15)
### Expected behavior
```
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
``` | 06-26-2023 02:37:31 | 06-26-2023 02:37:31 | Yes, I'm specifically adding docs for it in #24432 <|||||>FYI this is the script you can use for batched generation:
```
from transformers import LlamaTokenizer, AutoModelForCausalLM
import torch
tokenizer = LlamaTokenizer.from_pretrained("openlm-research/open_llama_3b")
model = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b", torch_dtype=torch.float16, device_map="auto")
tokenizer.padding_side = "left"
# Define PAD Token = EOS Token
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = model.config.eos_token_id
# use different length sentences to test batching
sentences = [
"Hello, my dog is a little",
"Today, I",
]
inputs = tokenizer(sentences, return_tensors="pt", padding=True).to(model.device)
output_sequences = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(output_sequences, skip_special_tokens=True))
```<|||||>> FYI this is the script you can use for batched generation:
>
> ```
> from transformers import LlamaTokenizer, AutoModelForCausalLM
> import torch
>
> tokenizer = LlamaTokenizer.from_pretrained("openlm-research/open_llama_3b")
> model = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b", torch_dtype=torch.float16, device_map="auto")
>
> tokenizer.padding_side = "left"
>
> # Define PAD Token = EOS Token
> tokenizer.pad_token = tokenizer.eos_token
> model.config.pad_token_id = model.config.eos_token_id
>
> # use different length sentences to test batching
> sentences = [
> "Hello, my dog is a little",
> "Today, I",
> ]
>
> inputs = tokenizer(sentences, return_tensors="pt", padding=True).to(model.device)
>
> output_sequences = model.generate(**inputs, max_new_tokens=20)
>
> print(tokenizer.batch_decode(output_sequences, skip_special_tokens=True))
> ```
@NielsRogge hi, just curious about what `Define PAD Token = EOS Token` is for ?<|||||>By default, the padding token is not set in the tokenizer's config: https://huggingface.co/openlm-research/open_llama_3b/blob/main/tokenizer_config.json. So when you would pad you would get the following error:
```
ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.
```<|||||>>
> FYI this is the script you can use for batched generation:
>
> ```
> from transformers import LlamaTokenizer, AutoModelForCausalLM
> import torch
>
> tokenizer = LlamaTokenizer.from_pretrained("openlm-research/open_llama_3b")
> model = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b", torch_dtype=torch.float16, device_map="auto")
>
> tokenizer.padding_side = "left"
>
> # Define PAD Token = EOS Token
> tokenizer.pad_token = tokenizer.eos_token
> model.config.pad_token_id = model.config.eos_token_id
>
> # use different length sentences to test batching
> sentences = [
> "Hello, my dog is a little",
> "Today, I",
> ]
>
> inputs = tokenizer(sentences, return_tensors="pt", padding=True).to(model.device)
>
> output_sequences = model.generate(**inputs, max_new_tokens=20)
>
> print(tokenizer.batch_decode(output_sequences, skip_special_tokens=True))
> ```
This also works fine for other 13b models based on Llama, Thanks a lot.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,474 | open | VideoMAE pretraining error when customizing compute_metrics | ### System Info
- `transformers` version: 4.29.2
- Platform: macOS-13.4.1-arm64-i386-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger @amyeroberts
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to pre-train a VideoMAE model with a custom set of videos, and have found that by default, the evaluation loss is not reported after each epoch.
When I try to run this code
```py
config = VideoMAEConfig.from_pretrained("MCG-NJU/videomae-base")
videomae = VideoMAEForPreTraining(config=config)
# <load datasets>
trainer = Trainer(
model=videomae,
args=training_arguments,
optimizers=(torch.optim.AdamW(videomae.parameters()), None),
data_collator=lambda x: _data_collator(x, videos_only=True, model_config=config),
train_dataset=train_dataset,
eval_dataset=eval_dataset,
compute_metrics=_compute_metrics_videomae,
)
trainer.train()
```
I get this error message
```
Traceback (most recent call last):
File "/Users/jackgindi/Projects/echo-gpt/runtask.py", line 220, in <module>
pretrain_videomae(
File "/Users/jackgindi/Projects/echo-gpt/training.py", line 261, in pretrain_videomae
trainer.train()
File "/Users/jackgindi/miniconda3/envs/echo-gpt/lib/python3.10/site-packages/transformers/trainer.py", line 1664, in train
return inner_training_loop(
File "/Users/jackgindi/miniconda3/envs/echo-gpt/lib/python3.10/site-packages/transformers/trainer.py", line 2034, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/Users/jackgindi/miniconda3/envs/echo-gpt/lib/python3.10/site-packages/transformers/trainer.py", line 2300, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/Users/jackgindi/miniconda3/envs/echo-gpt/lib/python3.10/site-packages/transformers/trainer.py", line 3029, in evaluate
output = eval_loop(
File "/Users/jackgindi/miniconda3/envs/echo-gpt/lib/python3.10/site-packages/transformers/trainer.py", line 3305, in evaluation_loop
all_preds = nested_truncate(all_preds, num_samples)
File "/Users/jackgindi/miniconda3/envs/echo-gpt/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 357, in nested_truncate
return type(tensors)(nested_truncate(t, limit) for t in tensors)
File "/Users/jackgindi/miniconda3/envs/echo-gpt/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 357, in <genexpr>
return type(tensors)(nested_truncate(t, limit) for t in tensors)
File "/Users/jackgindi/miniconda3/envs/echo-gpt/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 361, in nested_truncate
return tensors[:limit]
IndexError: too many indices for array: array is 0-dimensional, but 1 were indexed
```
If I run the same code without the `compute_metrics=...` set in the trainer, I don't encounter an error because `nested_truncate` is not called. The issue does not seem to be with my compute_metrics function, since the error occurs even before entering it. Is this just a current limitation of the VideoMAEForPreTraining model, or is there a way around this?
### Expected behavior
Pass a custom `compute_metrics` function to the `Trainer` to see the evaluation loss of `VideoMAEForPreTraining` after each epoch without error. | 06-26-2023 01:17:56 | 06-26-2023 01:17:56 | Hi @gindij,
For us to help debug the issue, it's necessary to be able to reproduce the error on our side. At the moment this isn't possible without knowing `_data_collator`, `_compute_metrics_videomae` or the dataset. Could you share a minimal reproducer please? <|||||>@amyeroberts thanks for the quick reply!
Here is a minimal reproducer:
```py
from typing import List
import datasets
import torch
from transformers import (
EvalPrediction,
PretrainedConfig,
Trainer,
TrainingArguments,
VideoMAEConfig,
VideoMAEForPreTraining,
)
def compute_image_mask(model_config: PretrainedConfig) -> torch.tensor:
num_patches_per_frame = (model_config.image_size // model_config.patch_size) ** 2
seq_length = (model_config.num_frames // model_config.tubelet_size) * num_patches_per_frame
p = torch.ones((1, seq_length)) * (1 - model_config.masking_ratio)
return torch.bernoulli(p).bool()
def data_collator(
batch: List[dict],
model_config: PretrainedConfig = None,
) -> dict:
padded_videos = [torch.Tensor(item["video"]) for item in batch]
padded_videos = torch.stack(padded_videos)
mask = compute_image_mask(model_config)
mask = mask.repeat((padded_videos.shape[0], 1))
return {
"pixel_values": padded_videos,
"bool_masked_pos": mask,
}
def compute_metrics_videomae(eval_pred: EvalPrediction) -> dict:
pass
if __name__ == "__main__":
config = VideoMAEConfig.from_pretrained("MCG-NJU/videomae-base")
config.num_frames = 32
config.masking_ratio = 0.9
config.tubelet_size = 4
config.patch_size = 32
videomae = VideoMAEForPreTraining(config=config)
train_dataset = {"video": [torch.rand((32, 3, 224, 224)) for _ in range(8)]}
eval_dataset = {"video": [torch.rand((32, 3, 224, 224)) for _ in range(8)]}
train_dataset = datasets.Dataset.from_dict(train_dataset)
eval_dataset = datasets.Dataset.from_dict(eval_dataset)
training_arguments = TrainingArguments(
output_dir="./checkpts",
per_device_eval_batch_size=8,
per_device_train_batch_size=8,
remove_unused_columns=False,
evaluation_strategy="epoch",
num_train_epochs=1,
)
trainer = Trainer(
model=videomae,
args=training_arguments,
optimizers=(torch.optim.AdamW(videomae.parameters()), None),
data_collator=lambda x: data_collator(x, config),
train_dataset=train_dataset,
eval_dataset=eval_dataset,
compute_metrics=compute_metrics_videomae, # <--- comment this line for it to work
)
trainer.train()
```<|||||>@amyeroberts, do you know of any updates on this issue?<|||||>@gindij In the example script, does `compute_metrics_videomae` intentionally return `None`? This is the cause of the error at the moment, as `Trainer` expects `compute_metrics` to be a callable which returns a dictionary.
`eval_loss` isn't returned in the 'normal' loop because training an MAE pretraining model is a special case, as there are no labels passed in and it doesn't have a `return_loss` flag in its forward method. The simplest way to get what you want is to use your own custom training loop e.g. [similar to this one](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-pretraining/run_mim_no_trainer.py). |
transformers | 24,473 | closed | Stuck on tokenization before training when using 3 GPUs, but not when using 2 GPUs | ### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.14.21-150400.24.55-default-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
```
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 530.30.02 Driver Version: 530.30.02 CUDA Version: 12.1 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA A100 80GB PCIe Off| 00000000:52:00.0 Off | 0 |
| N/A 55C P0 80W / 300W| 59735MiB / 81920MiB | 12% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA A100 80GB PCIe Off| 00000000:CE:00.0 Off | 0 |
| N/A 56C P0 87W / 300W| 40933MiB / 81920MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 2 NVIDIA A100 80GB PCIe Off| 00000000:D1:00.0 Off | 0 |
| N/A 34C P0 44W / 300W| 0MiB / 81920MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
```
### Who can help?
@ArthurZucker @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I intend to use [run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py) to train RoBERTa from scratch. For the training, I'm using data I created myself, and I entered the following command:
```
CUDA_VISIBLE_DEVICES=0,1,2 python run_mlm.py \
--model_type roberta \
--config_overrides="num_hidden_layers=6,max_position_embeddings=514" \
--tokenizer_name MyModel \
--train_file ./data/corpus_dedup.txt \
--max_seq_length 512 \
--line_by_line True \
--per_device_train_batch_size 64 \
--do_train \
--overwrite_output_dir True \
--gradient_accumulation_steps 4 \
--num_train_epochs 40 \
--fp16 True \
--output_dir MyModel \
--save_total_limit 1
```
When I try to do the training using a 3-GPU configuration, I get stuck for dozens of hours in the tokenization before the training, with the following message:
`You're using a RobertaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.`
Additionally, when I try to do the training with only 2 GPUs (`CUDA_VISIBLE_DEVICES=0,1`, followed by the same parameters), my training runs normally...
### Expected behavior
Model starts to be trained from scratch on a 3 GPU configuration. | 06-26-2023 01:15:22 | 06-26-2023 01:15:22 | cc @Narsil you know more about me potential problems here (I remember a flag for tokenizer parallelism, might need to be set)<|||||>This is very odd, since `tokenizers` doesn't use the GPU at all.
You could try using `TOKENIZERS_PARALLELISM=0 CUDA_VISIBLE_DEVICE....` to disable the parallelism in `tokenizers` itself.
There are ways to trigger a deadlock when using multithreading/multiprocessing with `tokenizers` from Python, but most of those should be caught.
Note that this will considerably slow down the tokenizer training (it might already be what's occurring) since you're now only using 1 core instead of all the CPU cores.
And most importantly, the GPU settings shouldn't have any impact, so it looks like a bug in `run_mlm.py` parallelization strategy, or something wrong in the hardware.
Is it possible to isolate the `tokenizers` training from the rest of the code to sanity check things and see where the deadlock is coming from ?<|||||>> This is very odd, since `tokenizers` doesn't use the GPU at all.
My bad. That's `nvidia-smi` with the training with the 2-GPU config already running. My intent with this was to show my hardware configuration and CUDA version.
> You could try using `TOKENIZERS_PARALLELISM=0 CUDA_VISIBLE_DEVICE....` to disable the parallelism in tokenizers itself.
Gonna try it right now.
> Is it possible to isolate the `tokenizers` training from the rest of the code to sanity check things and see where the deadlock is coming from ?
I'm using a tokenizer that I trained beforehand (`merges.txt` and `vocab.json` files), so it seems to me that the process is already isolated, isn't it?<|||||>Then it should load instantly and not even retrain a tokenizer, no?
I'm not sure the message you shared is the cause of your issue (the warning is probably there, but it's just a hint that there's a faster way to encode data, not necessarily that this is what is making your process stuck.<|||||>> Gonna try it right now.
Just did the process and came back here after a while: same issue:
```
[INFO|trainer.py:1680] 2023-06-26 13:43:56,492 >> ***** Running training *****
[INFO|trainer.py:1681] 2023-06-26 13:43:56,492 >> Num examples = 2,353,535
[INFO|trainer.py:1682] 2023-06-26 13:43:56,492 >> Num Epochs = 40
[INFO|trainer.py:1683] 2023-06-26 13:43:56,492 >> Instantaneous batch size per device = 192
[INFO|trainer.py:1684] 2023-06-26 13:43:56,492 >> Total train batch size (w. parallel, distributed & accumulation) = 768
[INFO|trainer.py:1685] 2023-06-26 13:43:56,493 >> Gradient Accumulation steps = 4
[INFO|trainer.py:1686] 2023-06-26 13:43:56,493 >> Total optimization steps = 122,560
[INFO|trainer.py:1687] 2023-06-26 13:43:56,493 >> Number of trainable parameters = 82,170,969
[INFO|integrations.py:727] 2023-06-26 13:43:56,493 >> Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
wandb: Currently logged in as: <USER>. Use `wandb login --relogin` to force relogin
wandb: Tracking run with wandb version 0.15.4
wandb: Run data is saved locally in /cfs/home/u021274/higo/wandb/run-20230626_134359-d7jhdqpd
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run fluent-forest-46
wandb: ⭐️ View project at <URL>
wandb: 🚀 View run at <URL>
0%| |
0/122560 [00:00<?, ?it/s][WARNING|logging.py:280] 2023-06-26 13:44:08,940 >> You're using a RobertaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
```
`nvidia-smi` returns the following:
```
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 530.30.02 Driver Version: 530.30.02 CUDA Version: 12.1 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA A100 80GB PCIe Off| 00000000:52:00.0 Off | 0 |
| N/A 37C P0 71W / 300W| 1885MiB / 81920MiB | 100% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA A100 80GB PCIe Off| 00000000:CE:00.0 Off | 0 |
| N/A 39C P0 69W / 300W| 1863MiB / 81920MiB | 100% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 2 NVIDIA A100 80GB PCIe Off| 00000000:D1:00.0 Off | 0 |
| N/A 43C P0 71W / 300W| 1863MiB / 81920MiB | 100% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 62822 C python 1882MiB |
| 1 N/A N/A 62822 C python 1860MiB |
| 2 N/A N/A 62822 C python 1860MiB |
+---------------------------------------------------------------------------------------+
```
Seems that's not the tokenization, because the GPU is (barely) used, but the message at which I'm stuck remains the same.<|||||>I would try putting a debugger in your session, and iterate step by step to figure out where the script hangs.<|||||>```
> /cfs/home/u021274/higo/run_mlm.py(234)main()
-> parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(235)main()
-> if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(240)main()
-> model_args, data_args, training_args = parser.parse_args_into_dataclasses()
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(244)main()
-> send_example_telemetry("run_mlm", model_args, data_args)
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(247)main()
-> logging.basicConfig(
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(248)main()
-> format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(249)main()
-> datefmt="%m/%d/%Y %H:%M:%S",
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(250)main()
-> handlers=[logging.StreamHandler(sys.stdout)],
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(247)main()
-> logging.basicConfig(
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(253)main()
-> if training_args.should_log:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(255)main()
-> transformers.utils.logging.set_verbosity_info()
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(257)main()
-> log_level = training_args.get_process_log_level()
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(258)main()
-> logger.setLevel(log_level)
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(259)main()
-> datasets.utils.logging.set_verbosity(log_level)
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(260)main()
-> transformers.utils.logging.set_verbosity(log_level)
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(261)main()
-> transformers.utils.logging.enable_default_handler()
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(262)main()
-> transformers.utils.logging.enable_explicit_format()
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(265)main()
-> logger.warning(
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(266)main()
-> f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}"
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(267)main()
-> + f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}"
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(266)main()
-> f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}"
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(265)main()
-> logger.warning(
(Pdb) n
06/26/2023 19:45:08 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 3distributed training: True, 16-bits training: True
> /cfs/home/u021274/higo/run_mlm.py(270)main()
-> logger.info(f"Training/evaluation parameters {training_args}")
(Pdb) n
06/26/2023 19:45:09 - INFO - __main__ - Training/evaluation parameters TrainingArguments(
_n_gpu=3,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
bf16=False,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=False,
do_predict=False,
do_train=True,
eval_accumulation_steps=None,
eval_delay=0,
eval_steps=None,
evaluation_strategy=no,
fp16=True,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_config={'fsdp_min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False},
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
gradient_accumulation_steps=4,
gradient_checkpointing=False,
greater_is_better=None,
group_by_length=False,
half_precision_backend=auto,
hub_model_id=None,
hub_private_repo=False,
hub_strategy=every_save,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_inputs_for_metrics=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=5e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=0,
log_level=passive,
log_level_replica=warning,
log_on_each_node=True,
logging_dir=MyModel/runs/Jun26_19-44-10_g07,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=500,
logging_strategy=steps,
lr_scheduler_type=linear,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
no_cuda=False,
num_train_epochs=40.0,
optim=adamw_hf,
optim_args=None,
output_dir=MyModel,
overwrite_output_dir=True,
past_index=-1,
per_device_eval_batch_size=8,
per_device_train_batch_size=64,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=True,
report_to=['wandb'],
resume_from_checkpoint=None,
run_name=MyModel,
save_on_each_node=False,
save_safetensors=False,
save_steps=500,
save_strategy=steps,
save_total_limit=1,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_ipex=False,
use_legacy_prediction_loop=False,
use_mps_device=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
xpu_backend=None,
)
> /cfs/home/u021274/higo/run_mlm.py(273)main()
-> last_checkpoint = None
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(274)main()
-> if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(288)main()
-> set_seed(training_args.seed)
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(299)main()
-> if data_args.dataset_name is not None:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(326)main()
-> data_files = {}
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(327)main()
-> if data_args.train_file is not None:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(328)main()
-> data_files["train"] = data_args.train_file
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(329)main()
-> extension = data_args.train_file.split(".")[-1]
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(330)main()
-> if data_args.validation_file is not None:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(333)main()
-> if extension == "txt":
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(334)main()
-> extension = "text"
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(335)main()
-> raw_datasets = load_dataset(
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(336)main()
-> extension,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(337)main()
-> data_files=data_files,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(338)main()
-> cache_dir=model_args.cache_dir,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(339)main()
-> use_auth_token=True if model_args.use_auth_token else None,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(335)main()
-> raw_datasets = load_dataset(
(Pdb) n
06/26/2023 19:45:33 - INFO - datasets.builder - Using custom data configuration default-2df3a67ae9ac7743
06/26/2023 19:45:33 - INFO - datasets.info - Loading Dataset Infos from /cfs/home/u021274/higo/myenv/lib64/python3.10/site-packages/datasets/packaged_modules/text
06/26/2023 19:45:33 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists.
06/26/2023 19:45:33 - INFO - datasets.info - Loading Dataset info from /cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2
06/26/2023 19:45:34 - WARNING - datasets.builder - Found cached dataset text (/cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2)
06/26/2023 19:45:34 - INFO - datasets.info - Loading Dataset info from /cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 16.00it/s]
> /cfs/home/u021274/higo/run_mlm.py(343)main()
-> if "validation" not in raw_datasets.keys():
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(344)main()
-> raw_datasets["validation"] = load_dataset(
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(345)main()
-> extension,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(346)main()
-> data_files=data_files,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(347)main()
-> split=f"train[:{data_args.validation_split_percentage}%]",
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(348)main()
-> cache_dir=model_args.cache_dir,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(349)main()
-> use_auth_token=True if model_args.use_auth_token else None,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(344)main()
-> raw_datasets["validation"] = load_dataset(
(Pdb) n
06/26/2023 19:45:52 - INFO - datasets.builder - Using custom data configuration default-2df3a67ae9ac7743
06/26/2023 19:45:52 - INFO - datasets.info - Loading Dataset Infos from /cfs/home/u021274/higo/myenv/lib64/python3.10/site-packages/datasets/packaged_modules/text
06/26/2023 19:45:52 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists.
06/26/2023 19:45:52 - INFO - datasets.info - Loading Dataset info from /cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2
06/26/2023 19:45:52 - WARNING - datasets.builder - Found cached dataset text (/cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2)
06/26/2023 19:45:52 - INFO - datasets.info - Loading Dataset info from /cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2
> /cfs/home/u021274/higo/run_mlm.py(351)main()
-> raw_datasets["train"] = load_dataset(
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(352)main()
-> extension,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(353)main()
-> data_files=data_files,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(354)main()
-> split=f"train[{data_args.validation_split_percentage}%:]",
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(355)main()
-> cache_dir=model_args.cache_dir,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(356)main()
-> use_auth_token=True if model_args.use_auth_token else None,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(351)main()
-> raw_datasets["train"] = load_dataset(
(Pdb) n
06/26/2023 19:46:02 - INFO - datasets.builder - Using custom data configuration default-2df3a67ae9ac7743
06/26/2023 19:46:02 - INFO - datasets.info - Loading Dataset Infos from /cfs/home/u021274/higo/myenv/lib64/python3.10/site-packages/datasets/packaged_modules/text
06/26/2023 19:46:02 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists.
06/26/2023 19:46:02 - INFO - datasets.info - Loading Dataset info from /cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2
06/26/2023 19:46:02 - WARNING - datasets.builder - Found cached dataset text (/cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2)
06/26/2023 19:46:02 - INFO - datasets.info - Loading Dataset info from /cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2
> /cfs/home/u021274/higo/run_mlm.py(368)main()
-> "cache_dir": model_args.cache_dir,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(369)main()
-> "revision": model_args.model_revision,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(370)main()
-> "use_auth_token": True if model_args.use_auth_token else None,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(367)main()
-> config_kwargs = {
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(372)main()
-> if model_args.config_name:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(374)main()
-> elif model_args.model_name_or_path:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(377)main()
-> config = CONFIG_MAPPING[model_args.model_type]()
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(378)main()
-> logger.warning("You are instantiating a new config instance from scratch.")
(Pdb) n
06/26/2023 19:46:14 - WARNING - __main__ - You are instantiating a new config instance from scratch.
> /cfs/home/u021274/higo/run_mlm.py(379)main()
-> if model_args.config_overrides is not None:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(380)main()
-> logger.info(f"Overriding config: {model_args.config_overrides}")
(Pdb) n
06/26/2023 19:46:17 - INFO - __main__ - Overriding config: num_hidden_layers=6,max_position_embeddings=514
> /cfs/home/u021274/higo/run_mlm.py(381)main()
-> config.update_from_string(model_args.config_overrides)
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(382)main()
-> logger.info(f"New config: {config}")
(Pdb) n
06/26/2023 19:46:19 - INFO - __main__ - New config: RobertaConfig {
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"classifier_dropout": null,
"eos_token_id": 2,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 6,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.31.0.dev0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 50265
}
> /cfs/home/u021274/higo/run_mlm.py(385)main()
-> "cache_dir": model_args.cache_dir,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(386)main()
-> "use_fast": model_args.use_fast_tokenizer,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(387)main()
-> "revision": model_args.model_revision,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(388)main()
-> "use_auth_token": True if model_args.use_auth_token else None,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(384)main()
-> tokenizer_kwargs = {
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(390)main()
-> if model_args.tokenizer_name:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(391)main()
-> tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, **tokenizer_kwargs)
(Pdb) n
[INFO|tokenization_auto.py:503] 2023-06-26 19:47:10,919 >> Could not locate the tokenizer configuration file, will try to use the model config instead.
[INFO|configuration_utils.py:710] 2023-06-26 19:47:10,922 >> loading configuration file MyModel/config.json
[INFO|configuration_utils.py:768] 2023-06-26 19:47:10,932 >> Model config RobertaConfig {
"_name_or_path": "MyModel",
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"classifier_dropout": null,
"eos_token_id": 2,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 6,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.31.0.dev0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
[INFO|tokenization_utils_base.py:1842] 2023-06-26 19:47:10,946 >> loading file vocab.json
[INFO|tokenization_utils_base.py:1842] 2023-06-26 19:47:10,946 >> loading file merges.txt
[INFO|tokenization_utils_base.py:1842] 2023-06-26 19:47:10,946 >> loading file tokenizer.json
[INFO|tokenization_utils_base.py:1842] 2023-06-26 19:47:10,946 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:1842] 2023-06-26 19:47:10,946 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:1842] 2023-06-26 19:47:10,946 >> loading file tokenizer_config.json
[INFO|configuration_utils.py:710] 2023-06-26 19:47:10,947 >> loading configuration file MyModel/config.json
[INFO|configuration_utils.py:768] 2023-06-26 19:47:10,950 >> Model config RobertaConfig {
"_name_or_path": "MyModel",
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"classifier_dropout": null,
"eos_token_id": 2,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 6,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.31.0.dev0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
[INFO|configuration_utils.py:710] 2023-06-26 19:47:11,024 >> loading configuration file MyModel/config.json
[INFO|configuration_utils.py:768] 2023-06-26 19:47:11,027 >> Model config RobertaConfig {
"_name_or_path": "MyModel",
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"classifier_dropout": null,
"eos_token_id": 2,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 6,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.31.0.dev0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
> /cfs/home/u021274/higo/run_mlm.py(400)main()
-> if model_args.model_name_or_path:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(411)main()
-> logger.info("Training new model from scratch")
(Pdb) n
06/26/2023 19:47:14 - INFO - __main__ - Training new model from scratch
> /cfs/home/u021274/higo/run_mlm.py(412)main()
-> model = AutoModelForMaskedLM.from_config(config)
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(416)main()
-> embedding_size = model.get_input_embeddings().weight.shape[0]
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(417)main()
-> if len(tokenizer) > embedding_size:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(422)main()
-> if training_args.do_train:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(423)main()
-> column_names = list(raw_datasets["train"].features)
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(426)main()
-> text_column_name = "text" if "text" in column_names else column_names[0]
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(428)main()
-> if data_args.max_seq_length is None:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(438)main()
-> if data_args.max_seq_length > tokenizer.model_max_length:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(443)main()
-> max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length)
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(445)main()
-> if data_args.line_by_line:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(447)main()
-> padding = "max_length" if data_args.pad_to_max_length else False
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(449)main()
-> def tokenize_function(examples):
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(464)main()
-> with training_args.main_process_first(desc="dataset map tokenization"):
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(465)main()
-> if not data_args.streaming:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(466)main()
-> tokenized_datasets = raw_datasets.map(
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(467)main()
-> tokenize_function,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(468)main()
-> batched=True,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(469)main()
-> num_proc=data_args.preprocessing_num_workers,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(470)main()
-> remove_columns=[text_column_name],
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(471)main()
-> load_from_cache_file=not data_args.overwrite_cache,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(472)main()
-> desc="Running tokenizer on dataset line_by_line",
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(466)main()
-> tokenized_datasets = raw_datasets.map(
(Pdb) n
06/26/2023 19:47:51 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2/cache-c8ae7ecb92d28874.arrow
06/26/2023 19:47:51 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2/cache-20fc928d1e2a7f3b.arrow
> /cfs/home/u021274/higo/run_mlm.py(464)main()
-> with training_args.main_process_first(desc="dataset map tokenization"):
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(542)main()
-> if training_args.do_train:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(543)main()
-> if "train" not in tokenized_datasets:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(545)main()
-> train_dataset = tokenized_datasets["train"]
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(546)main()
-> if data_args.max_train_samples is not None:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(550)main()
-> if training_args.do_eval:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(580)main()
-> pad_to_multiple_of_8 = data_args.line_by_line and training_args.fp16 and not data_args.pad_to_max_length
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(581)main()
-> data_collator = DataCollatorForLanguageModeling(
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(582)main()
-> tokenizer=tokenizer,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(583)main()
-> mlm_probability=data_args.mlm_probability,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(584)main()
-> pad_to_multiple_of=8 if pad_to_multiple_of_8 else None,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(581)main()
-> data_collator = DataCollatorForLanguageModeling(
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(588)main()
-> trainer = Trainer(
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(589)main()
-> model=model,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(590)main()
-> args=training_args,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(591)main()
-> train_dataset=train_dataset if training_args.do_train else None,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(592)main()
-> eval_dataset=eval_dataset if training_args.do_eval else None,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(593)main()
-> tokenizer=tokenizer,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(594)main()
-> data_collator=data_collator,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(595)main()
-> compute_metrics=compute_metrics if training_args.do_eval and not is_torch_tpu_available() else None,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(597)main()
-> if training_args.do_eval and not is_torch_tpu_available()
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(596)main()
-> preprocess_logits_for_metrics=preprocess_logits_for_metrics
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(598)main()
-> else None,
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(588)main()
-> trainer = Trainer(
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(602)main()
-> if training_args.do_train:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(603)main()
-> checkpoint = None
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(604)main()
-> if training_args.resume_from_checkpoint is not None:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(606)main()
-> elif last_checkpoint is not None:
(Pdb) n
> /cfs/home/u021274/higo/run_mlm.py(608)main()
-> train_result = trainer.train(resume_from_checkpoint=checkpoint)
(Pdb) n
[INFO|trainer.py:769] 2023-06-26 19:48:46,054 >> The following columns in the training set don't have a corresponding argument in `RobertaForMaskedLM.forward` and have been ignored: special_tokens_mask. If special_tokens_mask are not expected by `RobertaForMaskedLM.forward`, you can safely ignore this message.
/cfs/home/u021274/higo/myenv/lib64/python3.10/site-packages/transformers/optimization.py:411: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
[INFO|trainer.py:1680] 2023-06-26 19:48:46,071 >> ***** Running training *****
[INFO|trainer.py:1681] 2023-06-26 19:48:46,071 >> Num examples = 2,353,535
[INFO|trainer.py:1682] 2023-06-26 19:48:46,071 >> Num Epochs = 40
[INFO|trainer.py:1683] 2023-06-26 19:48:46,071 >> Instantaneous batch size per device = 192
[INFO|trainer.py:1684] 2023-06-26 19:48:46,071 >> Total train batch size (w. parallel, distributed & accumulation) = 768
[INFO|trainer.py:1685] 2023-06-26 19:48:46,071 >> Gradient Accumulation steps = 4
[INFO|trainer.py:1686] 2023-06-26 19:48:46,071 >> Total optimization steps = 122,560
[INFO|trainer.py:1687] 2023-06-26 19:48:46,074 >> Number of trainable parameters = 82,170,969
[INFO|integrations.py:727] 2023-06-26 19:48:46,077 >> Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
wandb: Currently logged in as: <USER>. Use `wandb login --relogin` to force relogin
wandb: Tracking run with wandb version 0.15.4
wandb: Run data is saved locally in /cfs/home/u021274/higo/wandb/run-20230626_194847-vr14588a
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run fragrant-universe-48
wandb: ⭐️ View project at <URL>
wandb: 🚀 View run at <URL>
0%| | 0/122560 [00:00<?, ?it/s][WARNING|logging.py:280] 2023-06-26 19:49:01,837 >> You're using a RobertaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
```<|||||>Then it doesn't seem linked in any way to the tokenization; you would need to step *into* the train function to know more.
<|||||>I see. How can I do this? Any suggestions? I'm kind of new to this, and I don't know how to start searching for the real problem inside the `train` function.<|||||>Ask around in discord https://discuss.huggingface.co/t/join-the-hugging-face-discord/11263 or the forum https://discuss.huggingface.co/
You might be able to find better help for such things.
I'm closing this issue, feel free to reopen one, when you have narrowed down what's going on. |
transformers | 24,472 | closed | Adding support for scaling rotary position embeddings | ### Feature request
Hello,
I would like if possible for Rotary Position Embedding scaling factors to be usable in the library. Currently this can only be done by monkey-patching the library.
Namely, it requires modifying the:
- `max_position_embeddings`: This can already be done via the model's config class or `config.json`
- `position_scale`: This variable doesn't exist currently, and there is no way to incorporate this effect at the moment without monkey-patching the existing `LlamaRotaryEmbeddings` class. (I'd also like to not step on the toes of a possible future XPos implementation, which also uses its own scale for different purposes.) A minimal sketch of the kind of scaling involved is shown below.
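For illustration, here is a minimal sketch of what such a scaled rotary embedding could look like; the class and argument names are assumptions for this example, not the API that was eventually added to `transformers`:

```python
import torch


class ScaledRotaryEmbedding(torch.nn.Module):
    """Sketch of a rotary embedding with a position_scale knob.

    A scale below 1.0 compresses (interpolates) positions so that a longer
    context maps back into the position range seen during pre-training,
    instead of extrapolating to unseen positions.
    """

    def __init__(self, dim, base=10000, position_scale=0.25):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        self.register_buffer("inv_freq", inv_freq)
        self.position_scale = position_scale

    def forward(self, x, seq_len):
        t = torch.arange(seq_len, device=x.device, dtype=self.inv_freq.dtype)
        t = t * self.position_scale  # the scaling step this issue asks to expose
        freqs = torch.einsum("i,j->ij", t, self.inv_freq)
        emb = torch.cat((freqs, freqs), dim=-1)
        return emb.cos().to(x.dtype), emb.sin().to(x.dtype)
```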
### Motivation
Recently I demonstrated it is possible to drastically reduce training compute when fine-tuning pre-trained RoPE models with an adjusted scaling factor for the purpose of extending the context length of the model. This has the effect of interpolating the position embeddings making it easier to fine-tune the model using in-distribution positions as opposed to out-of-distribution positions typically used via pure extrapolation. There is an extended write-up with motivations here https://kaiokendev.github.io/context as well as the code I used (for the 8K example) can be found here https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test/blob/main/llama_rope_scaled_monkey_patch.py
Some existing discussions and benchmarks can be found here: https://github.com/ggerganov/llama.cpp/discussions/1965
Several models currently use this scaling feature, but they will not produce coherent output unless the scale is applied correctly during inference (scale is a hyperparameter):
- https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test
- https://huggingface.co/Peeepy/Airoboros-13b-SuperHOT-8k
- https://huggingface.co/emozilla/open_llama_7b-scaled
EDIT: Meta has recently written a paper about it: https://arxiv.org/abs/2306.15595
### Your contribution
I would love to help in any way possible. While the basic implementation would be easy, I'm not sure what the best way could be for adding this modification (such as whether users want to use a fixed scale versus having it dynamically applied based on the input sequence length) | 06-26-2023 00:11:29 | 06-26-2023 00:11:29 | any updates on this?<|||||>@lucasjinreal Yes, it is now an official part of the library
https://github.com/huggingface/transformers/pull/24653#issuecomment-1635324005
So I will close this issue<|||||>Here are the docs btw
https://huggingface.co/docs/transformers/main/en/model_doc/llama#transformers.LlamaConfig.rope_scaling<|||||>@kaiokendev So does it actually support the same thing as LongChat?
BTW, how do I adapt it to the Baichuan model properly?<|||||>Yes, for LongChat specifically, you would use the "linear" method with a factor of 8.
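For readers landing here, a hedged sketch of what that "linear, factor 8" setting looks like in code (assuming a transformers version where `rope_scaling` is part of `LlamaConfig`; the checkpoint path below is a placeholder, not a specific recommendation):

```python
from transformers import AutoModelForCausalLM

# Placeholder checkpoint name; any LLaMA-family model fine-tuned with 8x
# interpolated positions would be configured the same way.
model = AutoModelForCausalLM.from_pretrained(
    "path/to/longchat-style-checkpoint",
    rope_scaling={"type": "linear", "factor": 8.0},
)
```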
For Baichuan model you would not use this, as Baichuan uses ALiBi, not RoPE<|||||>@kaiokendev I was not aware of this issue, my bad 🙈
Suggestion: tag someone when opening an issue; sometimes things fly under our radar |
transformers | 24,471 | closed | "MPTForCausalLM not supported" error when using pipeline, but not when using from_pretrained | ### System Info
Python 3.8.10 (default, Nov 14 2022, 12:59:47)
transformers.__version__ is '4.30.2'
lambda labs 1xA100
invoking `generator = transformers.pipeline(task="text-generation", model="mosaicml/mpt-7b", trust_remote_code=True)` ends with this exception:
```
You are using config.init_device='cpu', but you can also use config.init_device="meta" with Composer + FSDP for fast initialization.
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:05<00:00, 2.73s/it]
Xformers is not installed correctly. If you want to use memory_efficient_attention to accelerate training use the following command to install Xformers
pip install xformers.
The model 'MPTForCausalLM' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'CodeGenForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'LlamaForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MvpForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM'].
```
However, loading using:
```
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b',
trust_remote_code=True
)
```
works fine.
How can I load this model in a `pipeline`?
### Who can help?
@Narsil @ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. `generator = transformers.pipeline(task="text-generation", model="mosaicml/mpt-7b", trust_remote_code=True)`
### Expected behavior
The pipeline would load OK, just as .from_pretrained works | 06-25-2023 22:59:13 | 06-25-2023 22:59:13 | Hey, I suggest you open this issue on the repo; this is because the `auto_map` attribute in the `config.json` file is not properly set. We are probably going to add this model to transformers soon too!
transformers | 24,470 | closed | Error when try to load pretrained model | ### System Info
I've been trying to load a pretrained model:
When I tried to execute this :
from transformers import T5ForConditionalGeneration,T5Tokenizer
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
models = T5ForConditionalGeneration.from_pretrained("Michau/t5-base-en-generate-headline")
tokenizer = T5Tokenizer.from_pretrained("Michau/t5-base-en-generate-headline")
The result: is :
` Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import T5ForConditionalGeneration,T5Tokenizer
>>> import torch
>>>
>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
>>> models = T5ForConditionalGeneration.from_pretrained("Michau/t5-base-en-generate-headline")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 2259, in from_pretrained
config, model_kwargs = cls.config_class.from_pretrained(
File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\configuration_utils.py", line 547, in from_pretrained
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\configuration_utils.py", line 574, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\configuration_utils.py", line 629, in _get_config_dict
resolved_config_file = cached_file(
File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\utils\hub.py", line 417, in cached_file
resolved_file = hf_hub_download(
File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\huggingface_hub\utils\_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\huggingface_hub\file_download.py", line 1252, in hf_hub_download
with FileLock(lock_path):
File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\filelock\_api.py", line 255, in __enter__
self.acquire()
File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\filelock\_api.py", line 213, in acquire
self._acquire()
File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\filelock\_windows.py", line 27, in _acquire
fd = os.open(self.lock_file, flags, self._context.mode)
OSError: [Errno 22] Invalid argument: 'C:\\Users\\BrahianVT/.cache\\huggingface\\hub\\models--Michau--t5-base-en-generate-headline\\blobs\\W/"957fcaeed54459456a54d98d552a7773e717333f.lock'`

I'm like a beginner in this topic anyone could help?
Regards
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import T5ForConditionalGeneration,T5Tokenizer
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
models = T5ForConditionalGeneration.from_pretrained("Michau/t5-base-en-generate-headline")
tokenizer = T5Tokenizer.from_pretrained("Michau/t5-base-en-generate-headline")
### Expected behavior
I expect the model object | 06-25-2023 21:40:31 | 06-25-2023 21:40:31 | Please follow the issue template and give us the result of `transformers-cli env`.<|||||>thanks installing transformers fixed the error, I'm like a beginner in this topic , sorry
Regards |
transformers | 24,469 | closed | ValueError raised during accelerate decoding | ### System Info
transformers-4.28.1
python-3.11.3
### Who can help?
@gante
Example code found in this blogpost raises errors: [https://huggingface.co/blog/assisted-generation](https://huggingface.co/blog/assisted-generation).
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
prompt = "Alice and Bob"
checkpoint = "EleutherAI/pythia-1.4b-deduped"
assistant_checkpoint = "EleutherAI/pythia-160m-deduped"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
inputs = tokenizer(prompt, return_tensors="pt").to(device)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint).to(device)
outputs = model.generate(**inputs, assistant_model=assistant_model)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['Alice and Bob are sitting in a bar. Alice is drinking a beer and Bob is drinking a']
ValueError Traceback (most recent call last)
Cell In[9], line 14
1 # from transformers import AutoModelForCausalLM, AutoTokenizer
2 # import torch
3
(...)
12 # model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
13 # assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint).to(device)
---> 14 outputs = model.generate(**inputs, assistant_model=assistant_model)
File ~/anaconda3/envs/fcm-ape/lib/python3.11/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File ~/anaconda3/envs/fcm-ape/lib/python3.11/site-packages/transformers/generation/utils.py:1231, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, streamer, **kwargs)
1229 model_kwargs = generation_config.update(**kwargs) # All unused kwargs must be model kwargs
1230 generation_config.validate()
-> 1231 self._validate_model_kwargs(model_kwargs.copy())
1233 # 2. Set generation parameters if not already defined
1234 logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList()
File ~/anaconda3/envs/fcm-ape/lib/python3.11/site-packages/transformers/generation/utils.py:1109, in GenerationMixin._validate_model_kwargs(self, model_kwargs)
...
1110 f"The following `model_kwargs` are not used by the model: {unused_model_args} (note: typos in the"
1111 " generate arguments will also show up in this list)"
1112 )
ValueError: The following `model_kwargs` are not used by the model: ['assistant_model'] (note: typos in the generate arguments will also show up in this list)
### Expected behavior
['Alice and Bob are sitting in a bar. Alice is drinking a beer and Bob is drinking a'] | 06-25-2023 21:29:46 | 06-25-2023 21:29:46 | |
transformers | 24,468 | open | DeepSpeed ZeRO stage3+huggyllama/llama: RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu) | ### System Info
- deepspeed: 0.9.5+1491e14e
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.10.133+-x86_64-with-glibc2.17
- Python version: 3.8.15
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.6.2 (gpu)
- Jax version: 0.3.22
- JaxLib version: 0.3.22
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Is this issue highly related to this [PR](https://github.com/huggingface/transformers/pull/22234 )?
Used the native deepspeed running example (no accelerate).
```
model_engine, optimizer, _, scheduler = deepspeed.initialize(config=args.deepspeed_config, model=model, model_parameters=model_parameters)
```
Tried stage-3 with 7B/13B/30B ckpt and they all errored out, but they worked well with stage2.
Error message:
```
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/export/share/ruimeng/env/anaconda/envs/codegen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1538, in _call_impl
outputs = run_function(*args)
File "/export/share/ruimeng/env/anaconda/envs/codegen/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 566, in custom_forward
result = forward_call(*args, **kwargs)
File "/export/share/ruimeng/env/anaconda/envs/codegen/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 205, in forward
return module(*inputs, output_attentions, None)
File "/export/share/ruimeng/env/anaconda/envs/codegen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1538, in _call_impl
query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
File "/export/share/ruimeng/env/anaconda/envs/codegen/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 137, in apply_rotary_pos_emb
cos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
result = forward_call(*args, **kwargs)
File "/export/share/ruimeng/env/anaconda/envs/codegen/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 293, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/export/share/ruimeng/env/anaconda/envs/codegen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1538, in _call_impl
result = forward_call(*args, **kwargs)
File "/export/share/ruimeng/env/anaconda/envs/codegen/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 205, in forward
query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
File "/export/share/ruimeng/env/anaconda/envs/codegen/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 137, in apply_rotary_pos_emb
cos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
```
### Expected behavior
normal training | 06-25-2023 19:38:46 | 06-25-2023 19:38:46 | Hello @memray, this issue seems unrelated to DeepSpeed integration. Could you provide a minimal reproducible example?<|||||>No, I also tested `mosaicml/mpt-7b` and I found it worked. I believe it's more related to the implementation of LLaMA because `openlm-research/open_llama_7b` also fails. Can you let me know who can help with this issue? <|||||>@ArthurZucker and @younesbelkada can you check if this is related to the model implementation (say this [PR](https://github.com/huggingface/transformers/pull/22234))?<|||||>Hey! thanks for reporting, yes will check this! Seems like this might be it. Also if so fix should be straight forward<|||||>I am also facing the same issue with [chaoyi-wu/PMC_LLAMA_7B](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B)<|||||>Okay, I forgot to ask, could you provide a full reproducing script? The ROPE was recently updated too, not sure if this was adressed yet.<|||||>Same problem here.<|||||>We cannot help you if we don't have a minimal reproducing script, just commenting same problem will not help 😅 |
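As a purely illustrative note (an assumption about the traceback above, not the fix that eventually landed in `transformers`): the error says the indexed cos/sin caches and `position_ids` ended up on different devices, and a monkey patch along these lines shows the kind of alignment involved:

```python
import transformers.models.llama.modeling_llama as llama_modeling

_orig_apply_rotary_pos_emb = llama_modeling.apply_rotary_pos_emb


def _device_aligned_apply_rotary_pos_emb(q, k, cos, sin, position_ids):
    # Move the cached cos/sin tables next to the index tensor before the
    # advanced indexing reported at modeling_llama.py line 137.
    cos = cos.to(position_ids.device)
    sin = sin.to(position_ids.device)
    return _orig_apply_rotary_pos_emb(q, k, cos, sin, position_ids)


llama_modeling.apply_rotary_pos_emb = _device_aligned_apply_rotary_pos_emb
```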
transformers | 24,467 | closed | Pipeline cannot be initialized with the "state_dict" parameter | ### System Info
- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.10 (gpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante, @Narsil, @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
state_dict = torch.load(path_to_bin, map_location=torch.device('cpu'))
base_model = LlamaForCausalLM.from_pretrained(
None, config=config, state_dict=state_dict,
torch_dtype=torch.float16,
device_map="auto")
```
### Expected behavior
When I try to initialize the pipeline model with `state_dict`, I get an error:
`AttributeError: 'NoneType' object has no attribute 'endswith'`
from `transformers/modeling_utils.py:448` in `load_state_dict`
I thought that if you specified `pretrained_model_name_or_path=None`, the path to the model would be ignored, and all necessary parameters and weights themselves would be taken from `config` and `state_dict`. Isn't that how it's done? How do I do initialization using `state_dict`? | 06-25-2023 18:24:56 | 06-25-2023 18:24:56 | You cannot use the `state_dict` argument with `device_map="auto"`, we only support the base case
```py
base_model = LlamaForCausalLM.from_pretrained(None, config=config, state_dict=state_dict)
```<|||||>Thanks @sgugger. Your advice really helped. I don't have enough memory to load the model even on fp16, so I try to initialize the model with `load_in_4bit=True` and get the same error:
`AttributeError: 'NoneType' object has no attribute 'endswith'`
from `transformers/modeling_utils.py:448` in `load_state_dict`
There is a way to fix this, isn't there?
P.S. And another question: I have not found an option to load tokenizer weights from RAM. In, for example, LlamaTokenizer can I feed the config and state_dict?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,466 | open | Confusing behavior of push_to_hub + from_pretrained + safetensors | ### System Info
- `transformers` version: 4.30.1
- Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@Narsil @sgugger @stevhliu @MKhalusova
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
While actively developing my model [malper/taatiknet](https://huggingface.co/malper/taatiknet) I've encountered some confusing behavior, **bolded below**:
* Using pipeline API (`pipe = pipeline(...)` etc.)
* Updating model on hub with `pipe.model.push_to_hub('malper/taatiknet', create_pr=1)` and merging PR
* Must also push tokenizer with `pipe.tokenizer.push_to_hub` (**undocumented** in current docs)
* On first update, I get an automated [PR from SFconvertbot](https://huggingface.co/malper/taatiknet/discussions/3) and merge it without thinking
* On future updates with `push_to_hub`, only `pytorch_model.bin` updates **but model.safetensors does not**
* Loading model elsewhere with `.load_pretrained('malper/taatiknet')` **loads old safetensors model and does not get new weights**
It took quite a while to figure out why my model was not updating when loaded and **not documented how to update safetensors version on HF model hub**.
### Expected behavior
At least one of the following would be desirable:
* documentation on how to update the safetensors (ST) model weights on the hub
* push_to_hub automatically pushing ST weights or printing a warning that they are not updated
* SFconvertbot checking for updated base weights & providing a PR with converted weights
* documentation that from_pretrained pulls ST weights when available
* from_pretrained providing a warning when base weights & ST weights don't match (i.e. from different git commits) | 06-25-2023 17:42:42 | 06-25-2023 17:42:42 | FYI, the ad-hoc solution I used now is to delete `model.safetensors` from the model hub repo and then use the convert space [here](https://huggingface.co/spaces/safetensors/convert).
But it's not obvious to do this and not convenient to have to do this every time I update the model.
IMO the behavior of `push_to_hub` and `from_pretrained` should be aligned.<|||||>Would `push_to_hub(..., safe_serialization=True)` work ? Means everything would be in safetensors directly ? (Then the PT files would not be up-to-date, but you can discard them at this point).
We're only keeping PT files by default for old `transformers` users which might not have `safetensors` dependency yet (and their version might not be `safetensors` aware).
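A hedged sketch of the workflow that suggestion implies, using the repository named earlier in this issue:

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", model="malper/taatiknet")
# ... fine-tune / update pipe.model ...

# safe_serialization=True uploads model.safetensors directly, so the file that
# from_pretrained prefers stays in sync with the updated weights.
pipe.model.push_to_hub("malper/taatiknet", safe_serialization=True)
pipe.tokenizer.push_to_hub("malper/taatiknet")
```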
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,465 | closed | squad_convert_examples_to_features does not work with tensorflow | ### System Info
- `transformers` version: 4.30.2
- Platform: macOS-13.4-x86_64-i386-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import tensorflow as tf
from transformers import DistilBertTokenizer, TFDistilBertForQuestionAnswering
from transformers import squad_convert_examples_to_features
from transformers.data.processors.squad import SquadV1Processor
import tensorflow_datasets as tfds
if __name__ == "__main__":
tfds_examples = tfds.load("squad")
evaluate = False
examples = SquadV1Processor().get_examples_from_dataset(tfds_examples, evaluate=evaluate)
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
a, b = squad_convert_examples_to_features(
examples=examples[:3],
tokenizer=tokenizer,
max_seq_length=384,
doc_stride=128,
max_query_length=64,
is_training=not evaluate,
return_dataset="tf"
)
```
This exception occurs when I use return_dataset="tf", otherwise it's fine.
Error:
```
2023-06-25 19:09:04.152047: W tensorflow/core/framework/op_kernel.cc:1818] INVALID_ARGUMENT: TypeError: `generator` yielded an element that did not match the expected structure. The expected structure was (
{'input_ids': tf.int32,
'attention_mask': tf.int32,
'feature_index': tf.int64,
'qas_id': tf.string}, {...}),
but the yielded element was (
{'input_ids': [101, 2054, ..., 0],
'attention_mask': [1, 1, ..., 0],
'token_type_ids': [0, 0, ..., 0],
'feature_index': 0,
'qas_id': '57306bf68ab72b1400f9c4dc'}, {...})
```
As you can see, for some reason there is an unwanted key in generator.
https://github.com/huggingface/transformers/blob/8e164c5400b7b413c7b8fb32e35132001effc970/src/transformers/data/processors/squad.py#L437
Full error:
2023-06-25 19:09:04.152047: W tensorflow/core/framework/op_kernel.cc:1818] INVALID_ARGUMENT: TypeError: `generator` yielded an element that did not match the expected structure. The expected structure was ({'input_ids': tf.int32, 'attention_mask': tf.int32, 'feature_index': tf.int64, 'qas_id': tf.string}, {'start_positions': tf.int64, 'end_positions': tf.int64, 'cls_index': tf.int64, 'p_mask': tf.int32, 'is_impossible': tf.int32}), but the yielded element was ({'input_ids': [101, 2054, 2003, 2028, 2224, 2008, 2052, 5478, 2019, 13438, 2000, 4374, 7755, 1999, 2536, 3971, 2012, 2320, 1029, 102, 1996, 4489, 1999, 1996, 2682, 5876, 2005, 1996, 2553, 1997, 1162, 1027, 1014, 2003, 1996, 3114, 2008, 2087, 5062, 1006, 21670, 3832, 2005, 1996, 2270, 1007, 3594, 7471, 11508, 3989, 1012, 2005, 19278, 2379, 1996, 2598, 1010, 23190, 11508, 3550, 21670, 9015, 16990, 1012, 2005, 2190, 7684, 1996, 4909, 26315, 2005, 2122, 7755, 2024, 10655, 20018, 11508, 3550, 1012, 1999, 2070, 5097, 2073, 1996, 4909, 13438, 2442, 2147, 1999, 2151, 2597, 1010, 2004, 1999, 4684, 11640, 1010, 1996, 2918, 2276, 26315, 2224, 3816, 11508, 3989, 1010, 2107, 2004, 7399, 11508, 3989, 2012, 2019, 6466, 1006, 2007, 2119, 7471, 1998, 9876, 6177, 1007, 2030, 8206, 11508, 3989, 1012, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'feature_index': 0, 'qas_id': '57306bf68ab72b1400f9c4dc'}, {'start_positions': 94, 'end_positions': 95, 'cls_index': 0, 'p_mask': [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'is_impossible': False}).
Traceback (most recent call last):
File "/Users/user/Desktop/transformers_test/venv/lib/python3.10/site-packages/tensorflow/python/data/ops/from_generator_op.py", line 204, in generator_py_func
flattened_values = nest.flatten_up_to(output_types, values)
File "/Users/user/Desktop/transformers_test/venv/lib/python3.10/site-packages/tensorflow/python/data/util/nest.py", line 377, in flatten_up_to
assert_shallow_structure(shallow_tree, input_tree)
File "/Users/user/Desktop/transformers_test/venv/lib/python3.10/site-packages/tensorflow/python/data/util/nest.py", line 304, in assert_shallow_structure
assert_shallow_structure(shallow_branch, input_branch,
File "/Users/user/Desktop/transformers_test/venv/lib/python3.10/site-packages/tensorflow/python/data/util/nest.py", line 289, in assert_shallow_structure
raise ValueError(
ValueError: The two structures don't have the same sequence length. Input structure has length 5, while shallow structure has length 4.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/user/Desktop/transformers_test/venv/lib/python3.10/site-packages/tensorflow/python/ops/script_ops.py", line 267, in __call__
ret = func(*args)
File "/Users/user/Desktop/transformers_test/venv/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 642, in wrapper
return func(*args, **kwargs)
File "/Users/user/Desktop/transformers_test/venv/lib/python3.10/site-packages/tensorflow/python/data/ops/from_generator_op.py", line 206, in generator_py_func
raise TypeError(
### Expected behavior
There would be no error. | 06-25-2023 16:34:31 | 06-25-2023 16:34:31 | This is not a maintained part of the library anyore, we use `datasets` for the preprocessing, you can check the QA example [here](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/question-answering/run_qa.py). |
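For reference, a rough sketch of the `datasets`-based preprocessing the maintained QA example uses instead (simplified; the linked `run_qa.py` additionally derives answer start/end positions from the offset mapping):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

squad = load_dataset("squad")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")


def preprocess(examples):
    return tokenizer(
        examples["question"],
        examples["context"],
        truncation="only_second",
        max_length=384,
        stride=128,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )


tokenized = squad["train"].map(
    preprocess, batched=True, remove_columns=squad["train"].column_names
)
```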
transformers | 24,464 | open | force_download for pipelines | ### Feature request
Add `force_download=True/False` argument to the pipeline API to allow for re-downloading a model and ignoring local cache.
### Motivation
[PreTrainedModel](https://huggingface.co/docs/transformers/main_classes/model#transformers.PreTrainedModel) has the very useful argument `force_download` argument for ignoring local cache and downloading a model.
However, this does not work with the pipeline API:
```
from transformers import pipeline
pipe = pipeline("text2text-generation", model='t5-small', force_download=True)
pipe('test')
```
yields error: `ValueError: The following `model_kwargs` are not used by the model: ['force_download'] (note: typos in the generate arguments will also show up in this list)`
This is an issue since I am working with a pipeline using a model that is updating and would like to easily re-download it as needed.
### Your contribution
could add this to docs if implemented | 06-25-2023 15:44:01 | 06-25-2023 15:44:01 | cc @Narsil for information. You can already pass this in the `model_kwargs` @morrisalp , I don't think it's worth surfacing it more than this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
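A sketch of the `model_kwargs` route mentioned in the reply above; `force_download` is forwarded to the underlying `from_pretrained` call (the tokenizer may still come from the local cache):

```python
from transformers import pipeline

pipe = pipeline(
    "text2text-generation",
    model="t5-small",
    model_kwargs={"force_download": True},  # re-download the model weights
)
print(pipe("test"))
```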
transformers | 24,463 | closed | when resume from peft checkpoint, the model should be trainable | model.training is false after model.load_adapter of peft. see https://github.com/huggingface/peft/blob/main/src/peft/peft_model.py#L402, default value of is_trainable is False
- trainer: @sgugger
| 06-25-2023 10:19:38 | 06-25-2023 10:19:38 | @younesbelkada please help review<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,462 | closed | 'NoneType' object has no attribute 'flush' | When I use PyInstaller to package transformers program, if I choose windowless mode, I get the following error, but I don't want console mode, I want to build the program based on windowless mode.
` File "transformers\utils\import_utils.py", line 37, in <module>
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "transformers\utils\logging.py", line 124, in get_logger
_configure_library_root_logger()
File "transformers\utils\logging.py", line 88, in _configure_library_root_logger
_default_handler.flush = sys.stderr.flush
^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'flush'` | 06-25-2023 10:00:05 | 06-25-2023 10:00:05 | cc @LysandreJik <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
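A hypothetical workaround for this situation (an assumption, not an officially documented fix): in PyInstaller's windowed builds `sys.stdout`/`sys.stderr` are `None`, so giving them dummy streams before importing transformers avoids the failing `sys.stderr.flush` lookup:

```python
import io
import sys

if sys.stdout is None:
    sys.stdout = io.StringIO()
if sys.stderr is None:
    sys.stderr = io.StringIO()

import transformers  # noqa: E402  # import only after the streams exist
```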
transformers | 24,461 | closed | transformers.LlamaTokenizer.from_pretrained does not work after transformers are packaged with Pyinstaller | When I use the following code in my python project, the model will load and work properly, but when I package my project using Pyinstaller, I found by debugging that this code to load the model, does not work properly: tokenizer = transformers.LlamaTokenizer.from_ pretrained(LLM_CHECKPOINT). This complete function is shown below, where LLM_CHECKPOINT = 'D:/LLM/alpaca-llama-7b-fp16' and device = torch.device('cpu'):
import torch
import transformers
def llm_initial(LLM_CHECKPOINT, device):
tokenizer = transformers.LlamaTokenizer.from_pretrained(LLM_CHECKPOINT)
model = transformers.LlamaForCausalLM.from_pretrained(LLM_CHECKPOINT).to(device)
model.eval()
generation_config = transformers.GenerationConfig(
# max_new_tokens: This is the maximum number of tokens to generate.
# The generated sequence will not exceed this length.
max_new_tokens=256,
# temperature: This is a parameter for controlling the randomness of predictions
# by scaling the logits before applying softmax. Higher values (greater than 1)
# increase randomness, while lower values make the model more deterministic.
temperature=1,
# top_k: This parameter controls the number of highest probability predictions
# to consider for the next token. It's used to introduce some randomness and
# creativity into the model's outputs.
top_k=40,
# top_p: This parameter is also known as nucleus sampling and is used to ensure
# that the cumulative probability of the considered tokens is at least top_p.
# This parameter also introduces randomness into the model's outputs.
top_p=0.9,
# repetition_penalty: This parameter is used to control for repetitive behavior
# in the model's outputs. If the value is greater than 1.0, the model is
# penalized for generating the same token repeatedly.
repetition_penalty=1.15
)
return [device, tokenizer, model, generation_config]
def llm_response(device, tokenizer, model, generation_config, prompt):
prompt = generate_prompt(prompt)
input_ids = tokenizer.encode(prompt, return_tensors='pt').to(device)
with torch.enable_grad():
output_ids = model.generate(input_ids=input_ids, generation_config=generation_config)
LLM_RESPONSE = tokenizer.decode(output_ids[0], skip_special_tokens=True)
return LLM_RESPONSE.replace(prompt, '').strip() | 06-25-2023 08:25:23 | 06-25-2023 08:25:23 | The Pypi package does not have access to your local drive where `LLM_CHECKPOINT` points to, that's why you get different behavior.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,460 | closed | use accelerate autocast in jit eval path, since mix precision logic is… | … in accelerator currently
Fixes # (issue)
mix precision logic is all moved to accelerator, so autocast_smart_context_manager does not take effect any longer. use accelerate autocast instead
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- trainer: @sgugger
| 06-25-2023 07:22:47 | 06-25-2023 07:22:47 | @yao-matrix<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,459 | closed | Add FlaxMinNewTokensLengthLogitsProcessor | # What does this PR do?
PyTorch's options of logit processors has both `MinLengthLogitsProcessor` and `MinNewTokensLengthLogitsProcessor`. See [code](https://github.com/huggingface/transformers/blob/v4.30.2/src/transformers/generation/utils.py#L886C1-L901C14).
However, Flax only has [FlaxMinLengthLogitsProcessor](https://github.com/huggingface/transformers/blob/v4.30.2/src/transformers/generation/flax_utils.py#L500C1-L507C14), and does not account for the case where `min_new_tokens` is passed into the GenerationConfig (which is what PyTorch's `MinNewTokensLengthLogitsProcessor` handles). As a result, when passing the same config that contains `min_new_tokens` to both the PyTorch and Flax model, I see different generated outputs.
Changes:
- Add a `FlaxMinNewTokensLengthLogitsProcessor` class, which is the Flax version of PyTorch's `MinNewTokensLengthLogitsProcessor`, and add an if-statement to select `FlaxMinNewTokensLengthLogitsProcessor` when `min_new_tokens` is passed (a rough sketch of the idea is shown after this list).
- Change the conditional statement for `FlaxMinLengthLogitsProcessor`. I believe it's a bug, where it checks for `generation_config.min_length > -1`. However, the [default value is 0](https://github.com/huggingface/transformers/blob/v4.30.2/src/transformers/generation/configuration_utils.py#L228), so this expression will always be true and won't allow us to reach the if statement for `FlaxMinNewTokensLengthLogitsProcessor`. Checking that `generation_config.min_length > 0` is also how the PyTorch logic [does it](https://github.com/huggingface/transformers/blob/v4.30.2/src/transformers/generation/utils.py#L889).
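A rough sketch of the idea behind the new processor (assumed shapes and names, not the code merged by this PR): suppress the EOS logit until at least `min_new_tokens` tokens have been generated past the prompt.

```python
import jax.numpy as jnp


class FlaxMinNewTokensLengthLogitsProcessorSketch:
    def __init__(self, prompt_length_to_skip, min_new_tokens, eos_token_id):
        self.prompt_length_to_skip = prompt_length_to_skip
        self.min_new_tokens = min_new_tokens
        self.eos_token_id = eos_token_id

    def __call__(self, input_ids, scores, cur_len):
        new_tokens_length = cur_len - self.prompt_length_to_skip
        # 1 while fewer than min_new_tokens have been generated, 0 afterwards
        apply_penalty = 1 - jnp.clip(new_tokens_length - self.min_new_tokens + 1, 0, 1)
        return jnp.where(
            apply_penalty, scores.at[:, self.eos_token_id].set(-float("inf")), scores
        )
```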
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-24-2023 22:31:53 | 06-24-2023 22:31:53 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,458 | closed | audio classification example script RuntimeError on evaluation | ### System Info
accelerate==0.21.0.dev0
OS: Ubuntu 20.04
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.15.0-57-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: RTX 2080TI, also happened on A100 (colab)
- Using distributed or parallel set-up in script?: no
- The rest are from the latest hugging face pytorch docker image
### Who can help?
@sanchit-gandhi @sgugger @albertvillanova
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
` 693/6930 [08:55<1:18:59, 1.32it/s][INFO|trainer.py:3074] 2023-06-24 18:44:28,008 >> ***** Running Evaluation *****
[INFO|trainer.py:3076] 2023-06-24 18:44:28,008 >> Num examples = 5888
[INFO|trainer.py:3079] 2023-06-24 18:44:28,008 >> Batch size = 1
{'eval_loss': 2.8346564769744873, 'eval_accuracy': 0.2105978260869565, 'eval_runtime': 70.8074, 'eval_samples_per_second': 83.155, 'eval_steps_per_second': 83.155, 'epoch': 1.0}
>Saving model checkpoint to wav2vec2-base-lang-id/checkpoint-693
[INFO|configuration_utils.py:458] 2023-06-24 18:45:38,818 >> Configuration saved in wav2vec2-base-lang-id/checkpoint-693/config.json
[INFO|modeling_utils.py:1845] 2023-06-24 18:45:39,166 >> Model weights saved in wav2vec2-base-lang-id/checkpoint-693/pytorch_model.bin
[INFO|feature_extraction_utils.py:377] 2023-06-24 18:45:39,166 >> Feature extractor saved in wav2vec2-base-lang-id/checkpoint-693/preprocessor_config.json
Traceback (most recent call last):
File "run_audio_classification.py", line 418, in <module>
main()
File "run_audio_classification.py", line 392, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/transformers/src/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/transformers/src/transformers/trainer.py", line 1850, in _inner_training_loop
self.accelerator.clip_grad_norm_(
File "/usr/local/lib/python3.8/dist-packages/accelerate/accelerator.py", line 1913, in clip_grad_norm_
self.unscale_gradients()
File "/usr/local/lib/python3.8/dist-packages/accelerate/accelerator.py", line 1876, in unscale_gradients
self.scaler.unscale_(opt)
File "/usr/local/lib/python3.8/dist-packages/torch/cuda/amp/grad_scaler.py", line 275, in unscale_
raise RuntimeError("unscale_() has already been called on this optimizer since the last update().")
RuntimeError: unscale_() has already been called on this optimizer since the last update().
`
### Expected behavior
I'd expect the script to train the model and finish successfully.
Thank you very much :) | 06-24-2023 18:53:17 | 06-24-2023 18:53:17 | Hey @adamkatav - do you have a reproducible code snippet we could use to emulate this behaviour?<|||||>> Hey @adamkatav - do you have a reproducibl
e code snippet we could use to emulate this behaviour?
Sure, it's a copy-paste from [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification).
I'm also attaching the output file from my terminal. [hugging_face_error.txt](https://github.com/huggingface/transformers/files/11885751/hugging_face_error.txt)
I re-ran the script after a reboot on a fresh official hugging face container with a RTX 2080 TI GPU.<|||||>Hi, @adamkatav I was not reproduce the error you mentioned in the above messages, I'm on the same versions (for the most part) as mentioned.<|||||>> Hi, @adamkatav I was not reproduce the error you mentioned in the above messages, I'm on the same versions (for the most part) as mentioned.
Thank you for the reply, might it be a docker issue? Because I got the same error on colab, both environments are in a docker container.<|||||>Yes, the issue you encountered with the RuntimeError could potentially be related to the Docker environment. If you are experiencing the same error in Colab as well, it suggests that the issue might be related to the specific Docker setup or configuration you are using. Maybe try testing it outside the Docker container. <|||||>Hey @adamkatav - I'm also not able to reproduce the error using the example script. It trains for me as expected. Just as a precautionary measure, could you try installing `accelerate` / `transformers` at the last stable PyPi versions?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,457 | closed | Generate: deprecation timeline for parameterization though the model config | # What does this PR do?
We had a warning about deprecating the parameterization of `.generate()` through the model config, but there was no specific date. This PR makes it clear it will go away in `v4.32`.
The extra parameters and code to allow the old and the new way of parameterizing `.generate()` to live together are causing a myriad of issues, so let's get rid of them :) | 06-24-2023 18:52:32 | 06-24-2023 18:52:32 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24457). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,456 | closed | Generate: `group_beam_search` requires `diversity_penalty>0.0` | # What does this PR do?
While revisiting `group_beam_search` to review https://github.com/huggingface/transformers/pull/24407, I noticed that we do not require `diversity_penalty` to be `>0.0`. If it is not `>0.0`, then `group_beam_search` degenerates to `beam_search` with `num_beams=num_beams/num_beam_groups` -- users exploring the method risk not seeing its potential.
With this exception, we ensure the degeneration case does not happen (and possibly nudge the users towards the docs) | 06-24-2023 17:37:44 | 06-24-2023 17:37:44 | _The documentation is not available anymore as the PR was closed or merged._ |
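An illustrative call (the checkpoint is chosen only for the example): `num_beam_groups > 1` triggers group beam search, and with this PR a `diversity_penalty` of 0.0 here would raise instead of silently degenerating into plain beam search.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
inputs = tokenizer("translate English to German: How are you?", return_tensors="pt")

outputs = model.generate(**inputs, num_beams=4, num_beam_groups=2, diversity_penalty=1.0)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```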
transformers | 24,455 | open | Trajectory Transformer - NameError: name 'self' is not defined | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import TrajectoryTransformerModel
import torch
import numpy as np
device = "cuda"
from transformers import TrajectoryTransformerModel
import torch
model = TrajectoryTransformerModel.from_pretrained(
"CarlCochet/trajectory-transformer-halfcheetah-medium-v2"
)
model.to(device)
model.eval()
observations_dim, action_dim, batch_size = 17, 6, 256
seq_length = observations_dim + action_dim + 1
trajectories = torch.LongTensor([np.random.permutation(self.seq_length) for _ in range(batch_size)]).to(
device
)
targets = torch.LongTensor([np.random.permutation(self.seq_length) for _ in range(batch_size)]).to(device)
outputs = model(
trajectories,
targets=targets,
use_cache=True,
output_attentions=True,
output_hidden_states=True,
return_dict=True,
)
### Expected behavior

| 06-24-2023 17:33:10 | 06-24-2023 17:33:10 | `self` is indeed not defined in your code. Please use the [forums](https://discuss.huggingface.co/) to debug your code :-)<|||||>Yeah, but I was using the code directly from the tutorial, so that's the reason I raised this issue...<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
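Following the reply above, the correction to the reproduction snippet is simply to drop the `self.` prefix, since `seq_length` is a local variable (this reuses the variables defined earlier in that snippet):

```python
trajectories = torch.LongTensor(
    [np.random.permutation(seq_length) for _ in range(batch_size)]
).to(device)
targets = torch.LongTensor(
    [np.random.permutation(seq_length) for _ in range(batch_size)]
).to(device)
```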
transformers | 24,454 | closed | v4.30-release run_ner gives datasets.builder.DatasetGenerationError | ### System Info
transformers==v4.30-release
pytorch latest
datasets latest
Ubuntu-20.0
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python run_ner.py \
--model_name_or_path bert-base-uncased \
--dataset_name conll2003 \
--output_dir /tmp/test-ner \
--do_train \
--do_eval
Error: _raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset_
### Expected behavior
Evaluation should run on pre-trained model on conll2003 dataset. | 06-24-2023 14:53:36 | 06-24-2023 14:53:36 | That sounds like an issue for `datasets`, not Transformers :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,453 | closed | Generate: `min_tokens_to_keep` has to be `>= 1` | # What does this PR do?
As pointed out by @njhill in [this comment](https://github.com/huggingface/transformers/pull/24111#issuecomment-1601824441), `min_tokens_to_keep` has to be `>=1`. When it is not, the sampling step will lead to numerical exceptions, as all tokens have `-float("inf")` as logits.
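A tiny standalone illustration of that failure mode (not code from the PR): once every logit has been filtered to `-inf`, the softmax used for sampling only produces NaNs.
```python
import torch

logits = torch.tensor([[2.0, 1.0, 0.5]])
filtered = torch.full_like(logits, float("-inf"))  # what a warper could produce with min_tokens_to_keep=0
probs = torch.softmax(filtered, dim=-1)
print(probs)  # tensor([[nan, nan, nan]]) -> sampling from this raises an error
```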
This PR updates some of the checks, which were checking that it was `>=0`, and fixes the typical_p logits processor, which has the exact same issue as the one fixed in https://github.com/huggingface/transformers/pull/24111 | 06-24-2023 09:55:25 | 06-24-2023 09:55:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @gante! |
transformers | 24,452 | closed | Fix tpu_metrics_debug | # What does this PR do?
Adding the `--tpu_metrics_debug` argument causes an error. Quick fix before the argument is deprecated.
In `training_args.py` check if `self.debug` is None before appending the string since `self.debug` is now initialized to None instead of the empty string. Related to https://github.com/huggingface/transformers/pull/24033.
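A rough sketch of that guard (illustrative only, not the literal diff):
```python
# Inside TrainingArguments.__post_init__ (sketch):
if self.tpu_metrics_debug:
    if self.debug is None:
        self.debug = " tpu_metrics_debug"
    else:
        self.debug += " tpu_metrics_debug"
```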
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts
| 06-24-2023 00:07:41 | 06-24-2023 00:07:41 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,451 | closed | is_torch_bf16_gpu_available does not check for AMD GPUs | ### System Info
transformer 4.30.2
python 3.9.13.1
torch 2.0.1
rocm 5.4.2
AMD mi250x
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
AMD GPUs like the mi250 and mi250x support bf16. See https://www.amd.com/en/products/server-accelerators/instinct-mi250x
`transformers.utils.import_utils.is_torch_bf16_gpu_available` returns `False` with mi250x. Additional inspection shows that the function does not check AMD gpus at all. The problem occurs because `torch.version.cuda=None` when using hip.
Steps to reproduce:
1. have AMD GPU that supports bf16
2. The problem arises when calling `transformers.TrainingArguments` and using something like `--bf16 True`. You'll see something like this...
```python
File lib/python3.9/site-packages/transformers/training_args.py:1297, in TrainingArguments.__post_init__(self)
1294 raise ValueError("Your setup doesn't support bf16/(cpu, tpu, neuroncore). You need torch>=1.10")
1295 elif not self.no_cuda and torch.cuda.is_available() and not is_torch_bf16_gpu_available():
1296 # gpu
-> 1297 raise ValueError(
1298 "Your setup doesn't support bf16/gpu. You need torch>=1.10, using Ampere GPU with cuda>=11.0"
1299 )
1301 if self.fp16 and self.bf16:
1302 raise ValueError("At most one of fp16 and bf16 can be True, but not both")
ValueError: Your setup doesn't support bf16/gpu. You need torch>=1.10, using Ampere GPU with cuda>=11.0
```
3. It's a bit easier to call the function directly.
```python
import transformers
transformers.utils.import_utils.is_torch_bf16_gpu_available()
```
which returns `False` instead of `True`.
### Expected behavior
`transformers.utils.import_utils.is_torch_bf16_gpu_available()` should check more than just CUDA when determining whether bf16 is available.
A quick workaround is to add
```python
if torch.version.hip is not None:
return True
```
to src/transformers/utils/import_utils.py. However, I don't know which AMD GPUs actually support bf16. It looks like mi200 does as well. | 06-23-2023 22:48:27 | 06-23-2023 22:48:27 | We don't officially support AMD GPUs yet. This is coming soon as we get AMD GPUs to run our CI and check everything runs smoothly, so stay tuned!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,450 | closed | Update AlbertModel type annotation | # What does this PR do?
Fixes type annotations.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 06-23-2023 18:34:59 | 06-23-2023 18:34:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,449 | closed | RuntimeError: unscale_() has already been called on this optimizer since the last update(). | ### System Info
- `transformers` version: 4.31.0.dev0
- `accelerate` version: 0.21.0.dev0
- `peft` version: 0.4.0.dev0
- Platform: Linux-6.3.9-zen1-1-zen-x86_64-with-glibc2.37
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?:Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Run LoRA training
```py
trainer = Trainer(
model=model,
train_dataset=train_data,
eval_dataset=val_data,
args=TrainingArguments(
per_device_train_batch_size=4,
auto_find_batch_size=True,
gradient_accumulation_steps=32,
warmup_steps=100,
num_train_epochs=EPOCHS,
learning_rate=LEARNING_RATE,
fp16=True,
logging_steps=1,
evaluation_strategy="steps" if VAL_SET_SIZE > 0 else "no",
save_strategy="steps",
eval_steps=50 if VAL_SET_SIZE > 0 else None,
save_steps=500,
output_dir=OUTPUT_DIR, #output_dir=repository_id,
save_total_limit=3,
load_best_model_at_end=True if VAL_SET_SIZE > 0 else False,
ddp_find_unused_parameters=False if ddp else None,
),
data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False
old_state_dict = model.state_dict
model.state_dict = (
lambda self, *_, **__: get_peft_model_state_dict(self, old_state_dict())
).__get__(model, type(model))
if torch.__version__ >= "2" and sys.platform != 'win32':
model = torch.compile(model)
trainer.train(resume_from_checkpoint = False)
```
2. Sometime after 1st epoch I run into the following error
```sh
{'loss': 2.2014, 'learning_rate': 0.0002538461538461538, 'epoch': 0.99}
{'loss': 2.24, 'learning_rate': 0.0002492307692307692, 'epoch': 1.0}
{'loss': 2.2383, 'learning_rate': 0.0002446153846153846, 'epoch': 1.01}
raceback (most recent call last):███████████████████████████████████▏ | 112/333 [42:21<1:21:32, 22.14s/it]
File "/home/kunal/ml/train.py", line 234, in <module>
trainer.train(resume_from_checkpoint = False)
File "/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/transformers/trainer.py", line 1530, in train
return inner_training_loop(
File "/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/accelerate/utils/memory.py", line 132, in decorator
return function(batch_size, *args, **kwargs)
File "/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/transformers/trainer.py", line 1843, in _inner_training_loop
self.accelerator.clip_grad_norm_(
File "/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/accelerate/accelerator.py", line 1913, in clip_grad_norm_
self.unscale_gradients()
File "/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/accelerate/accelerator.py", line 1876, in unscale_gradients
self.scaler.unscale_(opt)
File "/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py", line 275, in unscale_
raise RuntimeError("unscale_() has already been called on this optimizer since the last update().")
RuntimeError: unscale_() has already been called on this optimizer since the last update().
```
This training works fine on `transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9`.
### Expected behavior
Training should succeed. | 06-23-2023 18:18:07 | 06-23-2023 18:18:07 | cc @pacman100 and @muellerzr <|||||>Hello, this is a duplicate issue. Please search the already existing ones. This is fixed via PR https://github.com/huggingface/transformers/pull/24415<|||||>Yes this is fixed. Thanks. |
transformers | 24,448 | closed | Improved keras imports | A sneaky bug was hiding in our Keras imports, where an import for `call_context` appeared to succeed on some TF versions, but actually got an older, unusable version of the function. This caused `build()` to behave improperly in some cases.
I went on a quest to fix this, and generally clean up our version-specific imports for TensorFlow to stop things like this from happening in future. I also bumped the minimum version for TF to 2.6 (2.6 should be 2 years old by the time of our next release), and eliminated the version cap in our dependency table because TF 2.13 should also be fully supported now.
This involved a partial rewrite of some code, where we checked for `KerasTensor` in a lot of places. However, this is an internal class for Keras and they keep moving it around, so trying to import it feels like a bad idea. Instead, I'm relying on `tf.is_tensor()`, which returns `True` for anything tensor-y, including symbolic tensors and Keras `Input` placeholders.
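A quick check of the behaviour relied on here (a sketch; the expected result follows from the description above rather than from a documented guarantee):
```python
import tensorflow as tf

eager = tf.constant([1.0, 2.0])
symbolic = tf.keras.Input(shape=(4,))  # a symbolic Keras placeholder

# Per the description above, both should be recognised without importing Keras internals.
print(tf.is_tensor(eager), tf.is_tensor(symbolic))
```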
Fixes #24437 | 06-23-2023 15:13:44 | 06-23-2023 15:13:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Keras and they keep moving it around
It's tensor-flow ... that's probably the reason.<|||||>I tested this with a spread of TF versions going back to 2.6. Even at 2.6, it was pretty hard to get an environment that worked with modern `transformers` - old TensorFlow keeps trying to use NumPy features that have been deprecated and deleted, but our modern libraries need a minimum version of NumPy to run at all, so there's actually only a very narrow window of NumPy versions that can even run both at once! I think going back to 2.5 or earlier would be very difficult, so I'm pretty comfortable with bumping our minimum version at this point.
In all versions I tested with this patch, our test suite runs well and the issue identified in #24437 is fixed, so this should be ready to go after it's reviewed! |
transformers | 24,447 | closed | Error when trying to install transformers with Conda | ### System Info
- Platform: Linux-5.14.21-150400.24.41-default-x86_64-with-glibc2.10
- Python version: 3.8.17
- Huggingface_hub version: 0.15.1
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
# Install PyTorch According to Documentation for System
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
#install transformers according to documentation
conda install -c huggingface transformers
```
Error message when trying to import transformers:
```
Python 3.8.17 | packaged by conda-forge | (default, Jun 16 2023, 07:06:00)
[GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoImageProcessor, ConvNextModel, ConvNextConfig, ViTFeatureExtractor
Traceback (most recent call last):
File "/fs/home/maedler/.local/miniconda3/envs/ConvNext_pip/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1110, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/fs/home/maedler/.local/miniconda3/envs/ConvNext_pip/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/fs/home/maedler/.local/miniconda3/envs/ConvNext_pip/lib/python3.8/site-packages/transformers/models/__init__.py", line 19, in <module>
from . import (
File "/fs/home/maedler/.local/miniconda3/envs/ConvNext_pip/lib/python3.8/site-packages/transformers/models/mt5/__init__.py", line 40, in <module>
from ..t5.tokenization_t5_fast import T5TokenizerFast
File "/fs/home/maedler/.local/miniconda3/envs/ConvNext_pip/lib/python3.8/site-packages/transformers/models/t5/tokenization_t5_fast.py", line 24, in <module>
from ...tokenization_utils_fast import PreTrainedTokenizerFast
File "/fs/home/maedler/.local/miniconda3/envs/ConvNext_pip/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 25, in <module>
import tokenizers.pre_tokenizers as pre_tokenizers_fast
File "/fs/home/maedler/.local/miniconda3/envs/ConvNext_pip/lib/python3.8/site-packages/tokenizers/__init__.py", line 79, in <module>
from .tokenizers import (
ImportError: libssl.so.10: cannot open shared object file: No such file or directory
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
File "/fs/home/maedler/.local/miniconda3/envs/ConvNext_pip/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1100, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/fs/home/maedler/.local/miniconda3/envs/ConvNext_pip/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1112, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.auto because of the following error (look up to see its traceback):
libssl.so.10: cannot open shared object file: No such file or directory
```
If I first install tokenizers from conda-forge and then install transformers with --no-update-deps, I can use the package:
```
# Install PyTorch According to Documentation for System
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
#install transformers with workaround
conda install tokenizers -c conda-forge
conda install -c huggingface 'transformers==4.26.0' --no-update-deps
```
This results in:
```
Python 3.8.17 | packaged by conda-forge | (default, Jun 16 2023, 07:06:00)
[GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoImageProcessor, ConvNextModel, ConvNextConfig, ViTFeatureExtractor
>>>
```
### Expected behavior
A working installation of transformers with the default tokenizers installation from conda. | 06-23-2023 11:59:34 | 06-23-2023 11:59:34 | cc @LysandreJik <|||||>Hey, that seems to be a problem with a missing SSL package.
Could you check if following these instructions fixes your issue? https://github.com/huggingface/transformers/issues/21805#issuecomment-1478161530<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,446 | closed | fixes issue when saving fsdp via accelerate's FSDP plugin | # What does this PR do?
1. Fixes https://github.com/huggingface/transformers/issues/24057#issuecomment-1595152783
2. When using Accelerate's integration for FSDP, the FSDP plugin now properly saves the optimizer state under the various configs such as Full_Dict, Sharded_Dict, ... For other cases, such as FSDP-XLA, the trainer's behaviour is unchanged.
| 06-23-2023 10:26:11 | 06-23-2023 10:26:11 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks again for this issue! I just finished training a largeish (7b param) model using this fix and have some questions.
I noticed the directory where the model was meant to save looks a bit odd. While the training code seems to have finished without error, the directory contains some checkpoint directories and a directory called pytorch_model_0 (which has a bunch of .distcp files), but none of the files I previously would see in my trained model directories, like the config.json and .bin files. Is this expected save behavior? <|||||>Hello @amartino1, this is because it is now using the FSDP's recommended way of saving ckpts, see this: https://github.com/pytorch/pytorch/blob/e71ab214226af1f9dbded944e939c6447e0e8f09/torch/distributed/checkpoint/examples/fsdp_checkpoint_example.py#L59
You will only notice that if you are using `SHARDED_STATE_DICT` as the `fsdp_state_dict_type`.
With PR https://github.com/huggingface/transformers/pull/24591, it should save the whole model in the Transformers format as well as the FSDP checkpoint, following what you have chosen as `fsdp_state_dict_type`.
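For anyone who hits the `.distcp` layout described above, a hedged sketch of how such a `SHARDED_STATE_DICT` checkpoint can be read back with `torch.distributed.checkpoint`, mirroring the linked PyTorch example. It assumes `model` is already FSDP-wrapped inside an initialised process group, and the directory name is taken from the report above:
```python
import torch.distributed.checkpoint as dist_cp
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, StateDictType

with FSDP.state_dict_type(model, StateDictType.SHARDED_STATE_DICT):
    state_dict = model.state_dict()
    dist_cp.load_state_dict(
        state_dict=state_dict,
        storage_reader=dist_cp.FileSystemReader("output_dir/pytorch_model_0"),  # path from the report above
    )
    model.load_state_dict(state_dict)
```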
transformers | 24,445 | open | LoRA is incompatible with DeepSpeed ZeRO3 | ### System Info
`pytorch==2.0.0, transformers==4.28.0, peft==0.2.0`
When I use LoRA to wrap the model in `__init__` and enable DeepSpeed ZeRO3, I get the following errors:
```
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │
│ te-packages/peft/peft_model.py:287 in __getattr__ │
│ │
│ 284 │ def __getattr__(self, name: str): │
│ 285 │ │ """Forward missing attributes to the wrapped module.""" │
│ 286 │ │ try: │
│ ❱ 287 │ │ │ return super().__getattr__(name) # defer to nn.Module's l │
│ 288 │ │ except AttributeError: │
│ 289 │ │ │ return getattr(self.base_model, name) │
│ 290 │
│ │
│ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │
│ te-packages/torch/nn/modules/module.py:1614 in __getattr__ │
│ │
│ 1611 │ │ │ modules = self.__dict__['_modules'] │
│ 1612 │ │ │ if name in modules: │
│ 1613 │ │ │ │ return modules[name] │
│ ❱ 1614 │ │ raise AttributeError("'{}' object has no attribute '{}'".form │
│ 1615 │ │ │ type(self).__name__, name)) │
│ 1616 │ │
│ 1617 │ def __setattr__(self, name: str, value: Union[Tensor, 'Module']) │
╰──────────────────────────────────────────────────────────────────────────────╯
AttributeError: 'PeftModelForCausalLM' object has no attribute
'_ds_child_entered'
During handling of the above exception, another exception occurred:
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │
│ te-packages/peft/peft_model.py:287 in __getattr__ │
│ │
│ 284 │ def __getattr__(self, name: str): │
│ 285 │ │ """Forward missing attributes to the wrapped module.""" │
│ 286 │ │ try: │
│ ❱ 287 │ │ │ return super().__getattr__(name) # defer to nn.Module's l │
│ 288 │ │ except AttributeError: │
│ 289 │ │ │ return getattr(self.base_model, name) │
│ 290 │
│ │
│ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │
│ te-packages/torch/nn/modules/module.py:1614 in __getattr__ │
│ │
│ 1611 │ │ │ modules = self.__dict__['_modules'] │
│ 1612 │ │ │ if name in modules: │
│ 1613 │ │ │ │ return modules[name] │
│ ❱ 1614 │ │ raise AttributeError("'{}' object has no attribute '{}'".form │
│ 1615 │ │ │ type(self).__name__, name)) │
│ 1616 │ │
│ 1617 │ def __setattr__(self, name: str, value: Union[Tensor, 'Module']) │
╰──────────────────────────────────────────────────────────────────────────────╯
AttributeError: 'PeftModelForCausalLM' object has no attribute 'base_model'
```
It seems that DeepSpeed begins to partition parameters before `PeftModelForCausalLM` finishes its `__init__`, since it cannot get the attribute `base_model`.
It's also notable that this error leads to an infinite recursion: `PeftModel` catches the AttributeError when trying to get the attribute `base_model`, but since that attribute does not exist, the AttributeError is raised again and again.
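A tiny standalone illustration of that recursion (not the real `PeftModel`); the traceback below shows the same pattern inside PEFT:
```python
class Wrapper:
    def __getattr__(self, name):
        # `base_model` was never assigned, so looking it up re-enters __getattr__ forever
        return getattr(self.base_model, name)

Wrapper().anything  # RecursionError: maximum recursion depth exceeded
```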
```
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /mnt/petrelfs/wangweiyun/projects/region_wise_model/main_clip_v6.py:120 in │
│ <module> │
│ │
│ 117 │
│ 118 │
│ 119 if __name__ == '__main__': │
│ ❱ 120 │ main() │
│ 121 │
│ │
│ /mnt/petrelfs/wangweiyun/projects/region_wise_model/main_clip_v6.py:42 in │
│ main │
│ │
│ 39 │ │
│ 40 │ if config.use_window_attn: │
│ 41 │ │ state_dict = preprocess_state_dict(model_args.model_name_or_pa │
│ ❱ 42 │ │ model = HuskyForCLIP.from_pretrained(model_args.model_name_or_ │
│ 43 │ else: │
│ 44 │ │ model = HuskyForCLIP.from_pretrained(model_args.model_name_or_ │
│ 45 │
│ │
│ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │
│ te-packages/transformers/modeling_utils.py:2629 in from_pretrained │
│ │
│ 2626 │ │ │ init_contexts.append(init_empty_weights()) │
│ 2627 │ │ │
│ 2628 │ │ with ContextManagers(init_contexts): │
│ ❱ 2629 │ │ │ model = cls(config, *model_args, **model_kwargs) │
│ 2630 │ │ │
│ 2631 │ │ # Check first if we are `from_pt` │
│ 2632 │ │ if use_keep_in_fp32_modules: │
│ │
│ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │
│ te-packages/deepspeed/runtime/zero/partition_parameters.py:382 in wrapper │
│ │
│ 379 │ │ │ │ │ is_child_module = True │
│ 380 │ │ │ │ │ setattr(module, "_ds_child_entered", True) │
│ 381 │ │ │ │ │
│ ❱ 382 │ │ │ │ f(module, *args, **kwargs) │
│ 383 │ │ │ │ │
│ 384 │ │ │ │ if is_child_module: │
│ 385 │ │ │ │ │ # child's __init__ is done, now we can run a sing │
│ │
│ /mnt/petrelfs/wangweiyun/projects/region_wise_model/custom_models/husky_clip │
│ _ablate.py:1472 in __init__ │
│ │
│ 1469 # shared align token + Both flatten + soft prompt (best) │
│ 1470 class HuskyForCLIPV6(WindowRegionHusky): │
│ 1471 │ def __init__(self, config: WindowRegionHuskyConfig): │
│ ❱ 1472 │ │ super().__init__(config) │
│ 1473 │ │ │
│ 1474 │ │ self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0 │
│ 1475 │ │ self.text_projection = nn.Parameter(torch.empty(self.language │
│ │
│ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │
│ te-packages/deepspeed/runtime/zero/partition_parameters.py:382 in wrapper │
│ │
│ 379 │ │ │ │ │ is_child_module = True │
│ 380 │ │ │ │ │ setattr(module, "_ds_child_entered", True) │
│ 381 │ │ │ │ │
│ ❱ 382 │ │ │ │ f(module, *args, **kwargs) │
│ 383 │ │ │ │ │
│ 384 │ │ │ │ if is_child_module: │
│ 385 │ │ │ │ │ # child's __init__ is done, now we can run a sing │
│ │
│ /mnt/petrelfs/wangweiyun/projects/region_wise_model/custom_models/husky_wind │
│ ow.py:47 in __init__ │
│ │
│ 44 │ │ │ │ │ self.vision_model.encoder.layers[idx] = WindowBLIP │
│ 45 │ │ │ │
│ 46 │ │ │ if self.config.lora: │
│ ❱ 47 │ │ │ │ self.wrap_lora() │
│ 48 │ │ │ if self.config.lora_vision: │
│ 49 │ │ │ │ self.wrap_lora_vision() │
│ 50 │ │ self.post_init() │
│ │
│ /mnt/petrelfs/wangweiyun/projects/region_wise_model/custom_models/husky_src/ │
│ husky_chat.py:436 in wrap_lora │
│ │
│ 433 │ │ │ lora_dropout=lora_dropout, │
│ 434 │ │ │ target_modules=target_modules │
│ 435 │ │ ) │
│ ❱ 436 │ │ self.language_model = get_peft_model(self.language_model, peft │
│ 437 │ │ self.config.lora = True │
│ 438 │ │ self.language_model.print_trainable_parameters() │
│ 439 │
│ │
│ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │
│ te-packages/peft/mapping.py:145 in get_peft_model │
│ │
│ 142 │ │ peft_config = _prepare_lora_config(peft_config, model_config) │
│ 143 │ else: │
│ 144 │ │ peft_config = _prepare_prompt_learning_config(peft_config, mod │
│ ❱ 145 │ return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](mod │
│ 146 │
│ │
│ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │
│ te-packages/deepspeed/runtime/zero/partition_parameters.py:377 in wrapper │
│ │
│ 374 │ │ │ │ print_rank_0(f'Before initializing {module.__class__. │
│ 375 │ │ │ │ │
│ 376 │ │ │ │ is_child_module = False │
│ ❱ 377 │ │ │ │ if not hasattr(module, "_ds_child_entered"): │
│ 378 │ │ │ │ │ # child's __init__ was called, since parents all │
│ 379 │ │ │ │ │ is_child_module = True │
│ 380 │ │ │ │ │ setattr(module, "_ds_child_entered", True) │
│ │
│ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │
│ te-packages/peft/peft_model.py:289 in __getattr__ │
│ │
│ 286 │ │ try: │
│ 287 │ │ │ return super().__getattr__(name) # defer to nn.Module's l │
│ 288 │ │ except AttributeError: │
│ ❱ 289 │ │ │ return getattr(self.base_model, name) │
│ 290 │ │
│ 291 │ def forward(self, *args, **kwargs): │
│ 292 │ │ """ │
│ │
│ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │
│ te-packages/peft/peft_model.py:289 in __getattr__ │
│ │
│ 286 │ │ try: │
│ 287 │ │ │ return super().__getattr__(name) # defer to nn.Module's l │
│ 288 │ │ except AttributeError: │
│ ❱ 289 │ │ │ return getattr(self.base_model, name) │
│ 290 │ │
│ 291 │ def forward(self, *args, **kwargs): │
│ 292 │ │ """ │
│ │
│ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │
│ te-packages/peft/peft_model.py:289 in __getattr__ │
│ │
│ 286 │ │ try: │
│ 287 │ │ │ return super().__getattr__(name) # defer to nn.Module's l │
│ 288 │ │ except AttributeError: │
│ ❱ 289 │ │ │ return getattr(self.base_model, name) │
│ 290 │ │
│ 291 │ def forward(self, *args, **kwargs): │
│ 292 │ │ """ │
│ │
│ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │
│ te-packages/peft/peft_model.py:289 in __getattr__ │
│ │
│ 286 │ │ try: │
│ 287 │ │ │ return super().__getattr__(name) # defer to nn.Module's l │
│ 288 │ │ except AttributeError: │
│ ❱ 289 │ │ │ return getattr(self.base_model, name) │
│ 290 │ │
│ 291 │ def forward(self, *args, **kwargs): │
│ 292 │ │ """ │
```
### Who can help?
@pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
environments: `pytorch==2.0.0, transformers==4.28.0, peft==0.2.0`
slurm launch command: `srun --gres=gpu:8 --ntasks=8 --ntasks-per-node=8 --cpus-per-task=8 python -u bug_unit_test.py --output_dir ./outputs/debug --deepspeed ./configs/default_offload_opt_param_zero3.json`
deepspeed config to reproduce:
```json
{
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
code to reproduce:
```python
import os
import subprocess
import torch
from transformers import (
HfArgumentParser, TrainingArguments,
PreTrainedModel, LlamaModel, LlamaConfig
)
from peft import LoraConfig, TaskType, get_peft_model
class BugModel(PreTrainedModel):
config_class = LlamaConfig
def __init__(self, config):
super().__init__(config)
self.model = LlamaModel(config)
self.wrap_lora()
# init code for other modules, which is not important to reproduce this bug
pass
def wrap_lora(
self,
r=16,
lora_alpha=32,
lora_dropout=0.05,
target_modules=("q_proj", "k_proj", "v_proj", "o_proj"),
):
peft_config = LoraConfig(
task_type=TaskType.CAUSAL_LM,
inference_mode=False,
r=r,
lora_alpha=lora_alpha,
lora_dropout=lora_dropout,
target_modules=target_modules
)
self.model = get_peft_model(self.model, peft_config)
self.model.print_trainable_parameters()
def init_distributed_mode():
if 'SLURM_PROCID' in os.environ:
rank = int(os.environ['SLURM_PROCID'])
local_rank = rank % torch.cuda.device_count()
world_size = int(os.environ["SLURM_NTASKS"])
local_size = int(os.environ["SLURM_NTASKS_PER_NODE"])
if "MASTER_PORT" not in os.environ:
port = 22110
print(f'MASTER_PORT = {port}')
os.environ["MASTER_PORT"] = str(port)
node_list = os.environ["SLURM_NODELIST"]
addr = subprocess.getoutput(f"scontrol show hostname {node_list} | head -n1")
if "MASTER_ADDR" not in os.environ:
os.environ["MASTER_ADDR"] = addr
os.environ['RANK'] = str(rank)
os.environ['LOCAL_RANK'] = str(local_rank)
os.environ['LOCAL_WORLD_SIZE'] = str(local_size)
os.environ['WORLD_SIZE'] = str(world_size)
parser = HfArgumentParser(TrainingArguments)
init_distributed_mode()
training_args = parser.parse_args_into_dataclasses()
model_name_or_path = '/mnt/petrelfs/share_data/wangweiyun/share_hf/vicuna-7b'
model = BugModel.from_pretrained(model_name_or_path) # Error!
print('finish')
```
### Expected behavior
I expect to wrap the model with LoRA during `__init__` successfully when I enable ZeRO3. | 06-23-2023 10:09:17 | 06-23-2023 10:09:17 | Hello, please refer to this doc for the correct way of using PEFT + DeepSpeed: https://huggingface.co/docs/peft/accelerate/deepspeed-zero3-offload<|||||>> Hello, please refer to this doc for the correct way of using PEFT + DeepSpeed: https://huggingface.co/docs/peft/accelerate/deepspeed-zero3-offload
Thank you for your response!
I note that this doc is based on `accelerate`. However, my code is based on `transformers.Trainer`. Can you provide an example of how to use PEFT + DeepSpeed with `transformers.Trainer` correctly?<|||||>The following steps work for me (a minimal sketch is given after the notes below):
1. Create `TrainingArguments(..., deepspeed="ds_config_zero3.json")`
2. Load model with `from_pretrained()`
3. Wrap it with `get_peft_model()`
4. Run `Trainer.train()`
Few important notes:
1. You have to create `TrainingArguments` before initialising the model with Zero3 partitioning.
2. If you use the `TaskType.SEQ_CLS` task, `get_peft_model` will break the forward path. A quick workaround is to recreate the unpartitioned classification head after the model is initialised with `deepspeed.zero.Init()`, i.e. after `from_pretrained()`.
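Putting the steps above together, a minimal sketch only (launched with `deepspeed`/`torchrun` as usual); it assumes a causal-LM checkpoint and an already tokenized `train_dataset`, and the checkpoint name, output dir, and LoRA values are illustrative:
```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

# 1. Create TrainingArguments (with the DeepSpeed config) BEFORE loading the model,
#    so that ZeRO-3 partitioning is active during from_pretrained().
training_args = TrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=1,
    deepspeed="ds_config_zero3.json",
)

# 2. Load the base model.
model = AutoModelForCausalLM.from_pretrained("model_name_or_path")

# 3. Wrap it with PEFT.
peft_config = LoraConfig(task_type=TaskType.CAUSAL_LM, r=16, lora_alpha=32, lora_dropout=0.05)
model = get_peft_model(model, peft_config)

# 4. Train.
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```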
transformers | 24,444 | closed | [`Trainer`] Fix `.to` call on 4bit models | # What does this PR do?
Currently, the Trainer fails when initializing it in some scenarios with 4-bit models.
The device placement is correctly skipped for 8-bit models, but it also needs to be skipped for 4-bit models, since the `to` operation is not supported for them either.
This PR adds a patch for that case
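A hedged sketch of the kind of guard this describes (attribute names are illustrative of the flags set in `modeling_utils` — see the link in the comments below — and this is not the literal diff):
```python
# Sketch of the check before device placement in the Trainer:
if getattr(model, "is_loaded_in_8bit", False) or getattr(model, "is_loaded_in_4bit", False):
    pass  # already placed on the right device(s); calling .to() would raise
else:
    model = model.to(device)
```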
cc @amyeroberts @lewtun
| 06-23-2023 09:57:05 | 06-23-2023 09:57:05 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts thanks!
Absolutely yes, for reference, here is how we set that attribute: https://github.com/huggingface/transformers/blob/ea91c2adca842da3d2f87e094504fa7d66a7008a/src/transformers/modeling_utils.py#L2922 <|||||>I can confirm that this fix resolves the error I was hitting with 4-bit models:
```
Traceback (most recent call last):
File "/fsx/lewis/git/h4/scripts/evaluation/run_rm_eval.py", line 275, in <module>
main()
File "/fsx/lewis/git/h4/scripts/evaluation/run_rm_eval.py", line 164, in main
trainer = RewardTrainer(
File "/fsx/lewis/git/h4/src/h4/training/trainer.py", line 26, in __init__
super().__init__(*args, **kwargs)
File "/fsx/lewis/miniconda/envs/h4/lib/python3.10/site-packages/transformers/trainer.py", line 506, in __init__
self._move_model_to_device(model, args.device)
File "/fsx/lewis/miniconda/envs/h4/lib/python3.10/site-packages/transformers/trainer.py", line 747, in _move_model_to_device
model = model.to(device)
File "/fsx/lewis/miniconda/envs/h4/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1889, in to
raise ValueError(
ValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
```
Thanks for the fast fix @younesbelkada !! |
transformers | 24,443 | closed | Update `JukeboxConfig.from_pretrained` | # What does this PR do?
The `from_pretrained` of `JukeboxConfig` needs an update (too) after #24306 to avoid the error `TypeError: __init__() got an unexpected keyword argument 'token'`. | 06-23-2023 08:59:20 | 06-23-2023 08:59:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,442 | closed | Wrong special tokens using XLM-RoBERTa's tokenizer for question answering | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-6.2.0-20-generic-x86_64-with-glibc2.37
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Tokenizing a question and its context with XLM-RoBERTa's tokenizer:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
question = "This is a question?"
context = "This is the context."
inputs = tokenizer(question, context)
tokenizer.decode(inputs["input_ids"])
```
returns something like this:
```
<s> This is a question?</s></s> This is the context.</s>
```
i.e. with _two_ SEP tokens between question and context. Is this expected behavior? Shouldn't it be separated by only one `</s>` or even `</s><s>`?
### Expected behavior
I'd expect the tokenizer to return:
```
<s> This is a question?</s><s> This is the context.</s>
``` | 06-23-2023 07:28:34 | 06-23-2023 07:28:34 | Hey, I am not sure where your expectations come from, but in the task of `sequence_classification`, you are not simply concatenating two sentences. If you have a look [here](https://github.com/facebookresearch/XLM/blob/cd281d32612d145c6742b4d3f048f80df8669c30/generate-embeddings.ipynb#L130), an original colab formats the sentences the same way. <|||||>No that's indeed expected behaviour, RoBERTa models use 2 special tokens in between context and question, unlike BERT.
See here: https://github.com/huggingface/transformers/blob/8e164c5400b7b413c7b8fb32e35132001effc970/src/transformers/models/roberta/tokenization_roberta.py#L346<|||||>Got it, thanks a lot for clarifying :) 🙏 |
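For completeness, a small sketch connecting the linked method to the decoded output seen above (the expected string is the one reported in the issue, not an independently verified value):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
ids = tok.build_inputs_with_special_tokens(
    tok.convert_tokens_to_ids(tok.tokenize("This is a question?")),
    tok.convert_tokens_to_ids(tok.tokenize("This is the context.")),
)
print(tok.decode(ids))  # <s> This is a question?</s></s> This is the context.</s>
```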
transformers | 24,441 | closed | Calling the tokenizer modifies the tokenizer object | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-6.1.31_1-x86_64-with-glibc2.36
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Very simple reproduction here
```python
from transformers import AutoTokenizer
from datasets.utils.py_utils import dumps # Huggingface datasets
t = AutoTokenizer.from_pretrained('bert-base-uncased')
t.save_pretrained("tok1")
th1 = hash(dumps(t))
text = "This is an example text"
ttext = t(text, max_length=512, padding="max_length", truncation=True)
t.save_pretrained("tok2")
th2 = hash(dumps(t))
assert th1 == th2 # Assertion Error
```
The actual difference can be found if you try to save the tokenizer after calling it. Diff the tokenizer.json and you can see that the keys "padding" and "truncation" got updated.
`diff tok1/tokenizer.json tok2/tokenizer.json` produces an actual diff
### Expected behavior
The tokenizer object should not change just because it was called with the padding parameters. | 06-23-2023 04:31:57 | 06-23-2023 04:31:57 | Most likely the issue lies in Fast Tokenizers here:
https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/tokenization_utils_fast.py#L319-L388
I don't actually see anywhere that the original strategy is restored.
For example, in this snippet:
```python
from transformers import AutoTokenizer
t = AutoTokenizer.from_pretrained('bert-base-uncased')
p1 = t._tokenizer.padding
text = "This is an example text"
ttext = t(text, max_length=256, padding="max_length", truncation=True)
p2 = t._tokenizer.padding
```
p1 and p2 are different, i.e. the value of padding changes.
<|||||>Hey! This is actually expected. When you save a tokenizer, the `init_kwargs` that were last used are saved along.
If you initialize the model with `from_slow = True`, then this will be saved. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,440 | closed | Fix typo | # What does this PR do?
Fix typo (funcionality -> functionality)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-23-2023 04:11:13 | 06-23-2023 04:11:13 | Thanks for the fix! |
transformers | 24,439 | closed | AttributeError: 'QuantLinear' object has no attribute 'weight' | ### System Info
Python = 3.9.10
Transformers = 4.30.0.dev0
PyTorch = 2.0.1
Model = Google/flan-ul2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Model quantized using `qwopqwop200/GPTQ-for-Llama` on the `t5` branch, using the following command:
```
python t5.py ../full-models/flan-ul2 wikitext2 --nsamples 256 --wbits 4 --act-order --groupsize 128 --save ../gptq-models/flan-ul2-gptq/flan-ul2-4bit-128g-gptq.pt
```
When performing benchmark using the following command (also applies to inference):
```
python t5.py ../full-models/flan-ul2 wikitext2 --load ../gptq-models/flan-ul2-gptq/flan-ul2-4bit-128g-gptq.pt --wbits 4 --groupsize 128 --benchmark --benchmark_mode mmlu
```
The following error occurs:
```
Traceback (most recent call last):
File "/mnt/Storage/ai-dev/t5-gptq/t5.py", line 752, in <module>
mmlu_benchmark(model, tokenizer, args)
File "/mnt/Storage/ai-dev/t5-gptq/t5.py", line 542, in mmlu_benchmark
cors, acc, probs = mmlu_eval(args, subject, model, tokenizer, dev_df, test_df, (idx,len(subjects)))
File "~/anaconda3/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/mnt/Storage/ai-dev/t5-gptq/t5.py", line 473, in mmlu_eval
logits = model(
File "~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "~/anaconda3/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1683, in forward
encoder_outputs = self.encoder(
File "~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "~/anaconda3/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1090, in forward
layer_outputs = layer_module(
File "~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "~/anaconda3/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 753, in forward
hidden_states = self.layer[-1](hidden_states)
File "~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "~/anaconda3/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward
forwarded_states = self.DenseReluDense(forwarded_states)
File "~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "~/anaconda3/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 319, in forward
isinstance(self.wo.weight, torch.Tensor)
File "~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'QuantLinear' object has no attribute 'weight'
```
According to my limited understanding, `QuantLinear` is a PyTorch class, and the error is occurring in `transformers`.
### Expected behavior
Successfully performing benchmark, inference, etc. of the 4-bit GPTQ flan-ul2 model. | 06-23-2023 03:32:56 | 06-23-2023 03:32:56 | Hi there. The script you are using is not one we have in Transformers, and we also do not have an object `QuantLinear`, so I'm really unsure why you are reporting this here?<|||||>Forgive me, I do not understand Python object mechanisms well, but I thought the following line was the error:
`File "~/anaconda3/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 319, in forward
isinstance(self.wo.weight, torch.Tensor)`
I come from a C++ background, so my logic was that transformers is at fault because it's trying to access a variable on a class that does not exist. But of course, a Python object is nothing like a C++ class.
I am told that QuantLinear should have a `weight` attribute. So I am thinking maybe the model object is malformed? In that case I should report it to the original project. Forgive my misunderstanding.<|||||>As I said before, we do not use that class (`QuantLinear`) anywhere in Transformers. So this comes from your script making modifications to the model that do not work. You should raise the issue in the repo where you found that script.
transformers | 24,438 | closed | Problem with Deepspeed integration | ### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I am using the WizardCoder [training script](https://github.com/nlpxucan/WizardLM/blob/main/WizardCoder/src/train_wizardcoder.py) to further fine-tune the model on some examples that I have using DeepSpeed integration. I have followed their instructions given [here](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0#fine-tuning) to fine-tune the model and I am getting the following error:
```
datachat_env) [email protected]:~/Custom-LLM$ sh train.sh
[2023-06-23 00:36:25,039] [WARNING] [runner.py:191:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2023-06-23 00:36:25,077] [INFO] [runner.py:541:main] cmd = /root/anaconda3/envs/datachat_env/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMSwgMiwgM119 --master_addr=127.0.0.1 --master_port=29500 --enable_each_rank_log=None /root/Custom-LLM/WizardLM/WizardCoder/src/train_wizardcoder.py --model_name_or_path /root/Custom-LLM/WizardCoder-15B-V1.0 --data_path /root/Custom-LLM/data.json --output_dir /root/Custom-LLM/WC-Checkpoint --num_train_epochs 3 --model_max_length 512 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --gradient_accumulation_steps 4 --evaluation_strategy no --save_strategy steps --save_steps 50 --save_total_limit 2 --learning_rate 2e-5 --warmup_steps 30 --logging_steps 2 --lr_scheduler_type cosine --report_to tensorboard --gradient_checkpointing True --deepspeed /root/Custom-LLM/Llama-X/src/configs/deepspeed_config.json --fp16 True
[2023-06-23 00:36:26,992] [INFO] [launch.py:229:main] WORLD INFO DICT: {'localhost': [0, 1, 2, 3]}
[2023-06-23 00:36:26,993] [INFO] [launch.py:235:main] nnodes=1, num_local_procs=4, node_rank=0
[2023-06-23 00:36:26,993] [INFO] [launch.py:246:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1, 2, 3]})
[2023-06-23 00:36:26,993] [INFO] [launch.py:247:main] dist_world_size=4
[2023-06-23 00:36:26,993] [INFO] [launch.py:249:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3
[2023-06-23 00:36:29,650] [INFO] [comm.py:622:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[2023-06-23 00:36:55,124] [INFO] [partition_parameters.py:454:__exit__] finished initializing model with 15.82B parameters
[2023-06-23 00:37:12,845] [WARNING] [cpu_adam.py:84:__init__] FP16 params for CPUAdam may not work on AMD CPUs
[2023-06-23 00:37:12,968] [WARNING] [cpu_adam.py:84:__init__] FP16 params for CPUAdam may not work on AMD CPUs
[2023-06-23 00:37:12,969] [WARNING] [cpu_adam.py:84:__init__] FP16 params for CPUAdam may not work on AMD CPUs
[2023-06-23 00:37:12,970] [WARNING] [cpu_adam.py:84:__init__] FP16 params for CPUAdam may not work on AMD CPUs
Using /root/.cache/torch_extensions/py311_cu118 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /root/.cache/torch_extensions/py311_cu118/cpu_adam/build.ninja...
Building extension module cpu_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/3] c++ -MMD -MF cpu_adam.o.d -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/torch/csrc/api/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/TH -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /root/anaconda3/envs/datachat_env/include/python3.11 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++14 -g -Wno-reorder -L/usr/local/cuda/lib64 -lcudart -lcublas -g -march=native -fopenmp -D__AVX256__ -D__ENABLE_CUDA__ -c /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/adam/cpu_adam.cpp -o cpu_adam.o
FAILED: cpu_adam.o
c++ -MMD -MF cpu_adam.o.d -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/torch/csrc/api/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/TH -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /root/anaconda3/envs/datachat_env/include/python3.11 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++14 -g -Wno-reorder -L/usr/local/cuda/lib64 -lcudart -lcublas -g -march=native -fopenmp -D__AVX256__ -D__ENABLE_CUDA__ -c /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/adam/cpu_adam.cpp -o cpu_adam.o
In file included from /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/includes/cpu_adam.h:19,
from /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/adam/cpu_adam.cpp:6:
/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/includes/custom_cuda_layers.h:12:10: fatal error: curand_kernel.h: No such file or directory
12 | #include <curand_kernel.h>
| ^~~~~~~~~~~~~~~~~
compilation terminated.
Using /root/.cache/torch_extensions/py311_cu118 as PyTorch extensions root...
Using /root/.cache/torch_extensions/py311_cu118 as PyTorch extensions root...
Using /root/.cache/torch_extensions/py311_cu118 as PyTorch extensions root...
[2/3] /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/torch/csrc/api/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/TH -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /root/anaconda3/envs/datachat_env/include/python3.11 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 --use_fast_math -std=c++14 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_80,code=compute_80 -c /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/common/custom_cuda_kernel.cu -o custom_cuda_kernel.cuda.o
FAILED: custom_cuda_kernel.cuda.o
/usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/torch/csrc/api/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/TH -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /root/anaconda3/envs/datachat_env/include/python3.11 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 --use_fast_math -std=c++14 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_80,code=compute_80 -c /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/common/custom_cuda_kernel.cu -o custom_cuda_kernel.cuda.o
In file included from /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/common/custom_cuda_kernel.cu:6:
/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/includes/custom_cuda_layers.h:12:10: fatal error: curand_kernel.h: No such file or directory
12 | #include <curand_kernel.h>
| ^~~~~~~~~~~~~~~~~
compilation terminated.
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1893, in _run_ninja_build
subprocess.run(
File "/root/anaconda3/envs/datachat_env/lib/python3.11/subprocess.py", line 571, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/Custom-LLM/WizardLM/WizardCoder/src/train_wizardcoder.py", line 247, in <module>
train()
File "/root/Custom-LLM/WizardLM/WizardCoder/src/train_wizardcoder.py", line 241, in train
trainer.train()
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1664, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1741, in _inner_training_loop
deepspeed_engine, optimizer, lr_scheduler = deepspeed_init(
^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/deepspeed.py", line 378, in deepspeed_init
deepspeed_engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/__init__.py", line 165, in initialize
engine = DeepSpeedEngine(args=args,
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 308, in __init__
self._configure_optimizer(optimizer, model_parameters)
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 1162, in _configure_optimizer
basic_optimizer = self._configure_basic_optimizer(model_parameters)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 1218, in _configure_basic_optimizer
optimizer = DeepSpeedCPUAdam(model_parameters,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/adam/cpu_adam.py", line 94, in __init__
self.ds_opt_adam = CPUAdamBuilder().load()
^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/op_builder/builder.py", line 445, in load
return self.jit_load(verbose)
^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/op_builder/builder.py", line 480, in jit_load
    op_module = load(name=self.name,
                ^^^^^^^^^^^^^^^^^^^^
  File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1284, in load
    return _jit_compile(
           ^^^^^^^^^^^^^
  File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1509, in _jit_compile
    _write_ninja_file_and_build_library(
  File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1624, in _write_ninja_file_and_build_library
    _run_ninja_build(
  File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1909, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension 'cpu_adam'
(The other worker processes print the same frames, interleaved with this traceback in the output; each of their runs ends with the ImportError reported below.)
Loading extension module cpu_adam...
Traceback (most recent call last):
File "/root/Custom-LLM/WizardLM/WizardCoder/src/train_wizardcoder.py", line 247, in <module>
train()
File "/root/Custom-LLM/WizardLM/WizardCoder/src/train_wizardcoder.py", line 241, in train
trainer.train()
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1664, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1741, in _inner_training_loop
deepspeed_engine, optimizer, lr_scheduler = deepspeed_init(
^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/deepspeed.py", line 378, in deepspeed_init
deepspeed_engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/__init__.py", line 165, in initialize
engine = DeepSpeedEngine(args=args,
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 308, in __init__
self._configure_optimizer(optimizer, model_parameters)
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 1162, in _configure_optimizer
basic_optimizer = self._configure_basic_optimizer(model_parameters)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 1218, in _configure_basic_optimizer
optimizer = DeepSpeedCPUAdam(model_parameters,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/adam/cpu_adam.py", line 94, in __init__
self.ds_opt_adam = CPUAdamBuilder().load()
^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/op_builder/builder.py", line 445, in load
return self.jit_load(verbose)
^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/op_builder/builder.py", line 480, in jit_load
op_module = load(name=self.name,
^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1284, in load
return _jit_compile(
^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1535, in _jit_compile
return _import_module_from_library(name, build_directory, is_python_module)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1929, in _import_module_from_library
module = importlib.util.module_from_spec(spec)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 573, in module_from_spec
File "<frozen importlib._bootstrap_external>", line 1233, in create_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
ImportError: /root/.cache/torch_extensions/py311_cu118/cpu_adam/cpu_adam.so: cannot open shared object file: No such file or directory
Exception ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7fcaec4a89a0>
Traceback (most recent call last):
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/adam/cpu_adam.py", line 102, in __del__
self.ds_opt_adam.destroy_adam(self.opt_id)
^^^^^^^^^^^^^^^^
AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam'
Exception ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7fbf4e6409a0>
Traceback (most recent call last):
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/adam/cpu_adam.py", line 102, in __del__
self.ds_opt_adam.destroy_adam(self.opt_id)
^^^^^^^^^^^^^^^^
AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam'
Exception ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7f9ce61b09a0>
Traceback (most recent call last):
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/adam/cpu_adam.py", line 102, in __del__
AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam'
Exception ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7f6c2bf109a0>
Traceback (most recent call last):
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/adam/cpu_adam.py", line 102, in __del__
self.ds_opt_adam.destroy_adam(self.opt_id)
^^^^^^^^^^^^^^^^
AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam'
```
### Expected behavior
Expect the model to use the deepspeed config file and run training | 06-23-2023 00:43:46 | 06-23-2023 00:43:46 | cc @pacman100 <|||||>Hello, this isn't an issue with DeepSpeed integration. The issue is this:
```
ImportError: /root/.cache/torch_extensions/py311_cu118/cpu_adam/cpu_adam.so: cannot open shared object file: No such file or directory
...
RuntimeError: Error building extension 'cpu_adam'
```<|||||>Hi, @karths8
You can try `rm -rf ~/.cache/torch_extensions/` first.
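As an additional check, it can help to verify which CUDA toolkit PyTorch uses for JIT extension builds and whether the cuRAND headers are actually there — a rough diagnostic sketch (the header path is an assumption and depends on the install):
```python
import os

from torch.utils.cpp_extension import CUDA_HOME

# CUDA_HOME is the toolkit root torch compiles extensions (like DeepSpeed's
# cpu_adam) against; it is None if no toolkit was detected.
print("CUDA_HOME:", CUDA_HOME)

if CUDA_HOME is not None:
    header = os.path.join(CUDA_HOME, "include", "curand_kernel.h")
    # False here matches the "curand_kernel.h: No such file or directory"
    # failure above, i.e. the toolkit ships without the cuRAND dev headers.
    print("curand_kernel.h present:", os.path.exists(header))
```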
Related discussion: #14520<|||||>> rm -rf ~/.cache/torch_extensions/
This does not seem to work for me. The root of the problem lies in `fatal error: curand_kernel.h: No such file or directory`. If there are any insights on how to solve this issue please let me know. Any help is greatly appreciated!<|||||>This isn't an integration issue like pacman100 said. See this: https://github.com/microsoft/DeepSpeed/issues/1846
Looks like an issue with the DeepSpeed pip package, I recommend installing it via conda<|||||>> This isn't an integration issue like pacman100 said. See this: [microsoft/DeepSpeed#1846](https://github.com/microsoft/DeepSpeed/issues/1846) Looks like an issue with the DeepSpeed pip package, I recommend installing it via conda
Thanks! I fixed it using [this](https://github.com/microsoft/DeepSpeed/issues/3794#issuecomment-1616059430) |
transformers | 24,437 | closed | TFPreTrainedModel.build breaks pytorch PreTrainedModel.from_pretrained(from_tf=True) | ### System Info
- `transformers` version: 4.30.2
- Platform: macOS-13.4-x86_64-i386-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): 2.10.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help?
@Rocketknight1
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The `build()` override introduced recently breaks `PreTrainedModel.from_pretrained(from_tf=True)`. This can be reproduced with the following line of code:
```python
model = RobertaForSequenceClassification.from_pretrained(path, from_tf=True)
```
console prints out:
```
All TF 2.0 model weights were used when initializing RobertaForSequenceClassification.
Some weights of RobertaForSequenceClassification were not initialized from the TF 2.0 model and are newly initialized: ['roberta.embeddings.word_embeddings.weight', 'roberta.embeddings.position_embeddings.weight', 'roberta.embeddings.token_type_embeddings.weight', ... all 6 layers...
```
After some digging, I believe the recursive call to `__call__` in `build()` causes the names of all TF weights to be prefixed twice with the model's Keras name (instead of once), e.g. `tf_roberta_for_sequence_classification/tf_roberta_for_sequence_classification/...`. `from_pretrained(from_tf=True)` works according to the following steps:
1. create a PT model
2. create a TF model
3. build the TF model by calling `tf_model(tf_mode.dummy_inputs, training=False)`
1. calling `tf_model._call_`
2. enter name scope of the model, e.g. `tf_roberta_for_sequence_classification`
3. figure it has not been built because `built=False`
4. call `tf_model.build`
5. the overridden `TFPreTrainedModel.build` then set `built=True`
6. calls `self._call__` (i.e. `tf_model.__call__`) again
7. **enter name scope of the model again**, e.g. => `tf_roberta_for_sequence_classification/tf_roberta_for_sequence_classification`
8. proceed to call the `tf_model.call`
9. call the layer’s `_call_`
10. add variables with name e.g. `tf_roberta_for_sequence_classification/tf_roberta_for_sequence_classification/roberta/...`
4. load TF weights to TF models
5. map TF weight names to PT weight names by removing the **first** prefix of the TF variable names and copy over the weights. => fail to map TF names to PT names due to the double prefix e.g. => `tf_roberta_for_sequence_classification/tf_roberta_for_sequence_classification`
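A tiny sketch of why the doubled scope breaks step 5 — this is not the actual conversion code, just an illustration of the name mapping it performs:
```python
def strip_model_scope(tf_name: str) -> str:
    # The TF->PT loader drops the first (model-level) scope before matching
    # the remaining path against the PyTorch parameter names.
    return "/".join(tf_name.split("/")[1:])

# Single prefix: the stripped name lines up with a PyTorch weight.
print(strip_model_scope("tf_roberta_for_sequence_classification/roberta/embeddings/word_embeddings/weight"))

# Doubled prefix: one stray scope survives, nothing matches, and the weights
# end up reported as "newly initialized".
print(strip_model_scope(
    "tf_roberta_for_sequence_classification/tf_roberta_for_sequence_classification/roberta/embeddings/word_embeddings/weight"
))
```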
The hacky workaround is to override `build` ourselves so that it does not call `__call__`, e.g.:
```python
class _FixedTFRobertaForSequenceClassification(transformers.TFRobertaForSequenceClassification):
def build(self, input_shape=None):
self.built = True
if id(transformers.TFRobertaForSequenceClassification) != id(_FixedTFRobertaForSequenceClassification):
print('fixing TFRobertaForSequenceClassification to', _FixedTFRobertaForSequenceClassification.__name__)
transformers.TFRobertaForSequenceClassification = _FixedTFRobertaForSequenceClassification
model = RobertaForSequenceClassification.from_pretrained(language_model_path, from_tf=True)
#### console prints out
All TF 2.0 model weights were used when initializing RobertaForSequenceClassification.
All the weights of RobertaForSequenceClassification were initialized from the TF 2.0 model
```
The double naming can also be seen by just creating the submodel:
```python
config = RobertaConfig.from_pretrained(language_model_path)
tf_model = TFRobertaForSequenceClassification(config)
print(tf_model._name)
tf_model.weights[10]
# => notice the double prefix in the weight variable names
```
There should be a better fix.
CC @Rocketknight1
### Expected behavior
1. TF submodel variables/weights names should not be double-prefixed
2. `from_pretrained(from_tf=True)` should work. | 06-22-2023 23:50:01 | 06-22-2023 23:50:01 | @winston-zillow thanks for the bug report! We're investigating now.<|||||>Hi @winston-zillow, I did some investigation and I can't reproduce the issue! At my end `RobertaModel.from_pretrained("roberta-base", from_tf=True)` or `RobertaForSequenceClassification.from_pretrained("roberta-base", from_tf=True)` both work correctly.
This might have something to do with the specific checkpoint you're using. Can you give me some code that reproduces the issue using a checkpoint on the HuggingFace Hub, or if not, can you upload the checkpoint you're using so I can try to figure this one out?<|||||>Wait, I was able to trigger the bug by switching my TensorFlow version! Investigating this now.<|||||>Update: The bug is not caused by `build()`, but by faulty imports from the deprecated `tf.python.keras` repo. As a workaround for now, you can update your version of TensorFlow to 2.11 or newer, which should solve the bug for you. I'm working on a PR which should fix this issue for all TF versions >= 2.6, and bump our minimum supported TF version to 2.6 as well.<|||||>@winston-zillow A PR is in that should resolve this! If you want to try it before it's merged and report your experiences, you can use
```
pip install git+https://github.com/huggingface/transformers.git@improved_keras_imports
```<|||||>PR has now been merged! You can now get it just by installing from `main`:
```
pip install git+https://github.com/huggingface/transformers.git
```
It'll also be included in the next release of transformers. Thanks again for filing the issue, and please feel free to comment or reopen it if the PR doesn't resolve your problem! Our test suite normally catches things like this, but the specific combination of older TF versions and TF -> PT crossloading slipped through, so the bug report is greatly appreciated.<|||||>@Rocketknight1 Thanks for the quick fix! |
transformers | 24,436 | closed | [llama] Fix comments in weights converter | Explain the reason to clone tensor
The original comment doesn't explain much about why we need to clone the tensors.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @younesbelkada @sgugger
| 06-22-2023 22:48:07 | 06-22-2023 22:48:07 | |
transformers | 24,435 | closed | 🌐 [i18n-KO] Translated `tflite.mdx` to Korean | <!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" -->
# What does this PR do?
Translated the `tflite.mdx` file of the documentation to Korean 😄
Thank you in advance for your review!
Part of https://github.com/huggingface/transformers/issues/20179
<!-- This leaves a record in the main issue! If you are practicing with the PseudoLab repo, please remove this comment. :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. Please only reveal the comment below, which requests a review from the PseudoLab team, once all the checks above are complete! -->
<!-- Team PseudoLab, may you please review this PR? -->
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Please only reveal the comment below, which requests a review from the Hugging Face staff, once the review with the PseudoLab team members is finished! -->
May you please review this PR?
@sgugger, @ArthurZucker, @eunseojo | 06-22-2023 22:22:39 | 06-22-2023 22:22:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>All the translated sentences look good. No additional comments beyond what has already been addressed! 👍 <|||||>May you please review this PR? 😄
@sgugger, @ArthurZucker, @eunseojo |
transformers | 24,434 | closed | Replace python random with torch.rand to enable dynamo.export | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Related and Fixes https://github.com/pytorch/pytorch/issues/102794
TL;DR: dynamo graph-breaks on Python's `random.uniform(0, 1)`. The graph break can be prevented by replacing it with `torch.rand([])`.
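The graph break comes from the LayerDrop-style check in the encoder/decoder layer loops; the sketch below only shows the shape of the change, not the exact diff:
```python
import random

import torch

layerdrop = 0.1

# Before: plain Python randomness, which dynamo cannot keep inside the traced graph.
skip_layer = random.uniform(0, 1) < layerdrop

# After: draw the value with torch so the comparison stays traceable.
skip_layer = bool(torch.rand([]) < layerdrop)
```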
Example repro script
```python
import torch
import torch._dynamo
from transformers import AutoTokenizer, BartForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = BartForCausalLM.from_pretrained("facebook/bart-base", add_cross_attention=False)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
torch._dynamo.export(model, return_dict=False, **inputs)
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-22-2023 21:04:18 | 06-22-2023 21:04:18 | > You are touching multiple Flax model which shouldn't depend on torch, could you revert that?
Nice catch! Done.<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,433 | closed | Decoding error while using DataCollatorForSeq2Seq | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When I decode data from DataCollatorForSeq2Seq, I get OverflowError with fast tokenizer and TypeError with default tokenizer.
Code example:
```
model_name = "facebook/bart-base"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
data_collator = DataCollatorForSeq2Seq(tokenizer, model=None)
texts = ["text" * 5, "text" * 10]
labels = ["label" * 5, "label" * 10]
features = [
{
"input_ids": tokenizer(i)['input_ids'],
"labels": tokenizer(j)['input_ids']
}
for i,j in zip(texts, labels)
]
result = data_collator(features)
print(result["labels"][0])
print(tokenizer.decode(result["labels"][0], skip_special_tokens=True))
```
Stack trace in case `use_fast=False`:
```
TypeError Traceback (most recent call last)
[<ipython-input-7-14a1328c21fe>](https://localhost:8080/#) in <cell line: 18>()
16 result = data_collator(features)
17 print(result["labels"][0])
---> 18 print(tokenizer.decode(result["labels"][0], skip_special_tokens=True))
2 frames
[/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, **kwargs)
3507 token_ids = to_py_obj(token_ids)
3508
-> 3509 return self._decode(
3510 token_ids=token_ids,
3511 skip_special_tokens=skip_special_tokens,
[/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils.py](https://localhost:8080/#) in _decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, spaces_between_special_tokens, **kwargs)
947 current_sub_text.append(token)
948 if current_sub_text:
--> 949 sub_texts.append(self.convert_tokens_to_string(current_sub_text))
950
951 if spaces_between_special_tokens:
[/usr/local/lib/python3.10/dist-packages/transformers/models/bart/tokenization_bart.py](https://localhost:8080/#) in convert_tokens_to_string(self, tokens)
305 def convert_tokens_to_string(self, tokens):
306 """Converts a sequence of tokens (string) in a single string."""
--> 307 text = "".join(tokens)
308 text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors)
309 return text
TypeError: sequence item 9: expected str instance, NoneType found
```
Stack trace in case `use_fast=True`:
```
OverflowError Traceback (most recent call last)
[<ipython-input-8-d0724246272d>](https://localhost:8080/#) in <cell line: 18>()
16 result = data_collator(features)
17 print(result["labels"][0])
---> 18 print(tokenizer.decode(result["labels"][0], skip_special_tokens=True))
1 frames
[/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, **kwargs)
3507 token_ids = to_py_obj(token_ids)
3508
-> 3509 return self._decode(
3510 token_ids=token_ids,
3511 skip_special_tokens=skip_special_tokens,
[/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_fast.py](https://localhost:8080/#) in _decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, **kwargs)
544 if isinstance(token_ids, int):
545 token_ids = [token_ids]
--> 546 text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
547
548 clean_up_tokenization_spaces = (
OverflowError: out of range integral type conversion attempted
```
Also, if I use facebook/m2m100_418M there are no errors, but the result of decoding looks like (though I use `skip_special_tokens=True`) this:
```
labellabellabellabellabel<unk><unk><unk><unk><unk><unk><unk><unk><unk><unk>
```
### Expected behavior
Hello! I have expected no errors and that skip_special_tokens would work normally. Seems like the label padding in DataCollatorForSeq2Seq with using -100 is leading to error.
I think this issue is related to these:
#22634
[this](https://github.com/huggingface/transformers/issues/3853#issuecomment-770417239) | 06-22-2023 20:13:16 | 06-22-2023 20:13:16 | Yes, you need to replace the labels indices that are at -100 to be able to decode them. The -100 indicates to PyTorch the corresponding token should be ignored during the loss computation, this is not a bug.
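For example, something along these lines (a rough sketch building on the snippet above) makes the batch decodable again:
```python
labels = result["labels"][0].clone()
# Swap the loss-masking value back to a real token id before decoding.
labels[labels == -100] = tokenizer.pad_token_id
print(tokenizer.decode(labels, skip_special_tokens=True))
```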
Also please use the [forums](https://discuss.huggingface.co/) for questions like this.<|||||>OK, thank you so much. |
transformers | 24,432 | open | [GPT-2] Add docs | # What does this PR do?
Lots of people don't seem to know about batched generation with GPT-2 and friends. Hence this PR adds a section to the docs, similar to the T5 docs.
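For reference, the kind of snippet such a section typically contains — a minimal sketch, not necessarily the exact text added by this PR:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# GPT-2 has no pad token, so reuse EOS and pad on the left for generation.
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"

inputs = tokenizer(["Hello, my dog", "Today the weather"], padding=True, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```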
It also fixes an issue in the T5 docs. | 06-22-2023 19:54:13 | 06-22-2023 19:54:13 | cc @stevhliu <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24432). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@stevhliu can we proceed by adding this, or would you like to have it on a separate page?<|||||>I think it'll be best to put this in its own section on the [Text generation strategies](https://huggingface.co/docs/transformers/generation_strategies#text-generation-strategies) page :) |
transformers | 24,431 | closed | bug in trainer with accelerate prepare of GPT2LMHeadModel using fp16 | ### System Info
```
- `transformers` version: 4.30.2
- Platform: Linux-4.15.0-192-generic-x86_64-with-glibc2.27
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes, model parallelism
```
### Who can help?
@sgugger ~@ pacma~ oops
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
import os
import sys
import numpy as np
from itertools import chain
import torch
from datasets import load_dataset
from transformers import (
GPT2TokenizerFast,
GPT2LMHeadModel,
DataCollatorForLanguageModeling,
Trainer,
TrainingArguments,
set_seed,
)
seed = 42
torch.manual_seed(seed)
set_seed(seed)
np.random.seed(seed)
tok = GPT2TokenizerFast.from_pretrained("gpt2")
tok.pad_token = tok.eos_token
tok.pad_token_id = tok.eos_token_id
test_size = 0.1
_chunk_size = 256
text_col = "text"
num_workers = min(os.cpu_count(), 2)
max_seq_length = min(_chunk_size, tok.model_max_length)
ds = load_dataset("wikitext", "wikitext-2-v1")
tokenized_ds = ds.map(
lambda x: tok(x["text"], padding=True, pad_to_multiple_of=max_seq_length),
remove_columns=[text_col],
batched=True,
num_proc=num_workers,
)
def chunk_text(examples, max_seq_length):
concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}
tot_len = len(concatenated[list(examples.keys())[0]])
if tot_len >= max_seq_length:
tot_len = (
tot_len // max_seq_length
) * max_seq_length
result = {
k: [t[i : i + max_seq_length] for i in range(0, tot_len, max_seq_length)]
for k, t in concatenated.items()
}
return result
chunked_ds = tokenized_ds.map(
lambda x: chunk_text(x, max_seq_length), batched=True, num_proc=num_workers
)
model = GPT2LMHeadModel.from_pretrained(
"gpt2",
device_map="auto",
)
data_collator = DataCollatorForLanguageModeling(tok, mlm=False)
args = TrainingArguments(
output_dir="delete-me",
per_device_train_batch_size=6,
logging_steps=500,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
num_train_epochs=1,
weight_decay=0.1,
warmup_steps=50,
lr_scheduler_type="cosine",
learning_rate=5e-6,
save_steps=10_000,
fp16=True, # fp16 bug with GPT2 models in huggingface?
dataloader_pin_memory=True,
dataloader_num_workers=2,
optim="adafactor",
)
trainer = Trainer(
model=model,
tokenizer=tok,
args=args,
data_collator=data_collator,
train_dataset=chunked_ds["train"],
)
trainer.train()
trainer.save_model("temp")
```
### Expected behavior
Seems like there were some changes to trainer between v4.29.2 and v4.30.0 to utilize accelerate to prepare the model ([here's the git blame](https://github.com/huggingface/transformers/blame/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/trainer.py#L1751-L1752)). With a GPT2LMHeadModel using fp16 precision for training, these changes to trainer lead to the following error from the above script:
```
Traceback (most recent call last):
File "[...]/min-reproducible.py", line 93, in <module>
trainer.train()
File "[...]/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "[...]/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/trainer.py", line 1756, in _inner_training_loop
model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/miniconda3/envs/llm/lib/python3.11/site-packages/accelerate/accelerator.py", line 1182, in prepare
result = tuple(
^^^^^^
File "[...]/miniconda3/envs/llm/lib/python3.11/site-packages/accelerate/accelerator.py", line 1183, in <genexpr>
self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/miniconda3/envs/llm/lib/python3.11/site-packages/accelerate/accelerator.py", line 1022, in _prepare_one
return self.prepare_model(obj, device_placement=device_placement)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/miniconda3/envs/llm/lib/python3.11/site-packages/accelerate/accelerator.py", line 1308, in prepare_model
model.forward = MethodType(torch.cuda.amp.autocast(dtype=torch.float16)(model.forward.__func__), model)
^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'function' object has no attribute '__func__'. Did you mean: '__doc__'?
```
Seems like the `model.forward` object is a `function` rather than a `method` so `__func__` isn't defined. `model` is an instance of GPT2LMHeadModel so I would've expected `model.forward` to be a method on the instance but maybe it's modified somewhere else. ~Overall, I'm not sure if this is a bug of trainer or accelerate or the model.~ Seems like actually this might be an issue on `accelerate` as the folks in the linked issue below are running into it when manually preparing the model (as opposed to letting trainer prepare as I did) - I can reopen this issue in the `accelerate` repo if that's better?
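The method-vs-function distinction the traceback hinges on can be reproduced in isolation — a toy sketch, not the actual accelerate internals:
```python
class Toy:
    def forward(self):
        return 1

t = Toy()
# Attribute lookup through the class gives a bound method, which has __func__.
print(type(t.forward).__name__, hasattr(t.forward, "__func__"))  # method True

def patched_forward():
    return 2

# Assigning a plain function onto the *instance* (roughly what hook-based
# dispatching does to model.forward) leaves a bare function without __func__.
t.forward = patched_forward
print(type(t.forward).__name__, hasattr(t.forward, "__func__"))  # function False
```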
Interestingly, if not using fp16, it runs fine. Ideally, I'd be able to use fp16 with a GPT2LMHeadModel using the trainer.
Seems like someone else has also run into this issue using a LLaMA model: https://github.com/OpenAccess-AI-Collective/axolotl/issues/195#issuecomment-1589657199
Would appreciate any help/fix! | 06-22-2023 19:08:01 | 06-22-2023 19:08:01 | cc @pacman100 and @muellerzr <|||||>Having the same issue with QLoRA style PEFT (also when setting FP16 to off).<|||||>Hello @StevenSong,
Thank you for the minimal reproducible example. The culprit in the above example is `device_map="auto",`. The below code runs fine:
```diff
import os
import sys
import numpy as np
from itertools import chain
import torch
from datasets import load_dataset
from transformers import (
GPT2TokenizerFast,
GPT2LMHeadModel,
DataCollatorForLanguageModeling,
Trainer,
TrainingArguments,
set_seed,
)
seed = 42
torch.manual_seed(seed)
set_seed(seed)
np.random.seed(seed)
tok = GPT2TokenizerFast.from_pretrained("gpt2")
tok.pad_token = tok.eos_token
tok.pad_token_id = tok.eos_token_id
test_size = 0.1
_chunk_size = 256
text_col = "text"
num_workers = min(os.cpu_count(), 2)
max_seq_length = min(_chunk_size, tok.model_max_length)
ds = load_dataset("wikitext", "wikitext-2-v1")
tokenized_ds = ds.map(
lambda x: tok(x["text"], padding=True, pad_to_multiple_of=max_seq_length),
remove_columns=[text_col],
batched=True,
num_proc=num_workers,
)
def chunk_text(examples, max_seq_length):
concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}
tot_len = len(concatenated[list(examples.keys())[0]])
if tot_len >= max_seq_length:
tot_len = (
tot_len // max_seq_length
) * max_seq_length
result = {
k: [t[i : i + max_seq_length] for i in range(0, tot_len, max_seq_length)]
for k, t in concatenated.items()
}
return result
chunked_ds = tokenized_ds.map(
lambda x: chunk_text(x, max_seq_length), batched=True, num_proc=num_workers
)
model = GPT2LMHeadModel.from_pretrained(
"gpt2",
- device_map="auto",
)
data_collator = DataCollatorForLanguageModeling(tok, mlm=False)
args = TrainingArguments(
output_dir="delete-me",
per_device_train_batch_size=6,
logging_steps=500,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
num_train_epochs=1,
weight_decay=0.1,
warmup_steps=50,
lr_scheduler_type="cosine",
learning_rate=5e-6,
save_steps=10_000,
fp16=True, # fp16 bug with GPT2 models in huggingface?
dataloader_pin_memory=True,
dataloader_num_workers=2,
optim="adafactor",
)
trainer = Trainer(
model=model,
tokenizer=tok,
args=args,
data_collator=data_collator,
train_dataset=chunked_ds["train"],
)
trainer.train()
trainer.save_model("temp")
```
Hello @sgugger, seems like `device_map` changes the `model.forward` to `function` rather than preserving it as a `method`.<|||||>Hello @imarquart, please open a new issue on PEFT wrt issue you are facing with a minimal reproducible example.<|||||>The above PR should fix this<|||||>Thank you! |
transformers | 24,430 | closed | Clarify batch size displayed when using DataParallel | # What does this PR do?
As pointed out in #24345, the batch size displayed when using `DataParallel` is unclear, this PR fixes that.
Fixes #24345 | 06-22-2023 17:30:10 | 06-22-2023 17:30:10 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,429 | closed | Add support for for loops in python interpreter | # What does this PR do?
For loops are safe to execute in our restricted Python interpreter, this PR adds support for it and adds `range` in the list of base Python tools allowed.
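In spirit, supporting `for` in an AST-walking evaluator looks roughly like the sketch below — a simplified stand-in, not the actual interpreter in Transformers:
```python
import ast

def evaluate(node, state):
    # Tiny illustrative subset of node types.
    if isinstance(node, ast.Module):
        result = None
        for stmt in node.body:
            result = evaluate(stmt, state)
        return result
    if isinstance(node, ast.Expr):
        return evaluate(node.value, state)
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.Name):
        return state[node.id]
    if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
        allowed = {"range": range}  # whitelist of base Python tools
        return allowed[node.func.id](*(evaluate(a, state) for a in node.args))
    if isinstance(node, ast.Assign) and isinstance(node.targets[0], ast.Name):
        state[node.targets[0].id] = evaluate(node.value, state)
        return None
    if isinstance(node, ast.AugAssign) and isinstance(node.target, ast.Name):
        # Only "+=" is handled in this sketch.
        state[node.target.id] = state[node.target.id] + evaluate(node.value, state)
        return None
    if isinstance(node, ast.For) and isinstance(node.target, ast.Name):
        # The new case: bind the loop variable and evaluate the body per item.
        for item in evaluate(node.iter, state):
            state[node.target.id] = item
            for stmt in node.body:
                evaluate(stmt, state)
        return None
    raise ValueError(f"Unsupported node: {type(node).__name__}")

state = {}
tree = ast.parse("total = 0\nfor i in range(3):\n    total += i\ntotal")
print(evaluate(tree, state))  # 3
```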
Fixes #24362 | 06-22-2023 16:20:29 | 06-22-2023 16:20:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,428 | closed | Fix some `TFWhisperModelIntegrationTests` | # What does this PR do?
Probably since the introduction of `GenerationConfig`, some TF Whisper integration tests fail with the error
```bash
ValueError: The following `model_kwargs` are not used by the model: ['language', 'task'] (note: typos in the generate arguments will also show up in this list)
```
From my understanding, we should pass some arguments via `generation_kwargs`.
Note that `WhisperForConditionalGeneration` has its custom `generate` but `TFWhisperForConditionalGeneration` doesn't.
⚠️ When I try to apply the same changes to PyTorch Whisper test methods, it fails as the output is different.
We somehow have an inconsistency between PT and TF here.
(Not sure if we should overwrite `generate` in `TFWhisperForConditionalGeneartion`.) | 06-22-2023 15:34:07 | 06-22-2023 15:34:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> LGTM- thanks for fixing!
>
> Regarding the PyTorch tests failing with the same inputs - how do they fail?
It gives different outputs <|||||>I am very sorry, **I was doing something stupid and made wrong claims**. The `generation_kwargs` is an argument to the `__init__` of `GenerationConfig` instead of the `generate` method.
(Some of) The previous failing tests pass as the 2 problematic arguments to `generate` are not passed, but this is wrong logic.
**I will look in more depth.**<|||||>Well, I finally decided to overwrite `generate` for `TFWhisperForConditionalGeneartion`. |
transformers | 24,427 | open | Make it possible to customize generation config for Trainer's training loop evaluation | ### Feature request
When using `predict_with_generate` to compute generation-based metrics during the evaluation that happens during training, it would be good if the model's generation config were used and/or if we could pass the intended generation config into the `train` method so that it can be passed on to `evaluate`.
As it is, the generation done is using the default parameters only.
### Motivation
The current way GenerationConfigs are used is pretty inconsistent and muddled IMO. You can set it at the model level but it's only used sometimes. You can pass it directly to evaluate, predict or generate but it's not clear if you should pass it as kwargs or as a full GenerationConfig.
Would be great to clean this up so that it's super clear on how to use it and have a very consistent way to use it, as in python. My suggestion would be to set it at the Trainer level and be able to override it in the evaluate, predict, generate methods with a simple generation_config: GenerationConfig parameter.
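For context, the pattern that already works (see the discussion below) is to attach a standalone `GenerationConfig` to the model so that `predict_with_generate` picks it up — a rough sketch, assuming a seq2seq checkpoint:
```python
from transformers import AutoModelForSeq2SeqLM, GenerationConfig

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# A GenerationConfig built directly (not derived from the model config) is not
# flagged as "from model config", so generate() will not silently reset it.
model.generation_config = GenerationConfig(max_new_tokens=64, num_beams=4)
```
(Recent versions also accept a `generation_config` on `Seq2SeqTrainingArguments`; that is version-dependent, so treat it as an assumption to verify.)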
### Your contribution
Happy to discuss different possibilities and see where I could help. | 06-22-2023 15:00:08 | 06-22-2023 15:00:08 | You can customize the model `generation_config` as you want and it will be the one being used. cc @gante to make sure I'm not saying something untrue.<|||||>Maybe it's because I was setting the Generation Config in the model without putting _from_model_config = False and it was triggering the following, which resets the config:
```python
if generation_config is None:
# legacy: users may modify the model configuration to control generation -- update the generation config
# model attribute accordingly, if it was created from the model config
if self.generation_config._from_model_config:
new_generation_config = GenerationConfig.from_model_config(self.config)
if new_generation_config != self.generation_config:
warnings.warn(
"You have modified the pretrained model configuration to control generation. This is a"
" deprecated strategy to control generation and will be removed soon, in a future version."
" Please use a generation configuration file (see"
" https://huggingface.co/docs/transformers/main_classes/text_generation)"
)
self.generation_config = new_generation_config
generation_config = self.generation_config
```<|||||>Yeah, that was it. Though it is a bit confusing because that message says it's a deprecated strategy but doesn't say it will revert the configs.
So, maybe docs should be improved to say when Generation Config is used where, with what priority between the default one, the model one, the gen_kwargs, etc...
For example, in Seq2SeqTrainer.evaluate and predict there is this code which doesn't take into consideration if there is a generation config set in the model.
```python
gen_kwargs = gen_kwargs.copy()
if gen_kwargs.get("max_length") is None and gen_kwargs.get("max_new_tokens") is None:
gen_kwargs["max_length"] = self.args.generation_max_length
gen_kwargs["num_beams"] = (
gen_kwargs["num_beams"] if gen_kwargs.get("num_beams") is not None else self.args.generation_num_beams
)
```
Furthermore, in Seq2SeqTrainer.prediction_step there is this code that, again, doesn't take into account the model generation config and goes for the gen_kwargs (which aren't passed in the training loop).
```python
gen_kwargs = self._gen_kwargs.copy()
if gen_kwargs.get("max_length") is None and gen_kwargs.get("max_new_tokens") is None:
gen_kwargs["max_length"] = self.model.config.max_length
gen_kwargs["num_beams"] = (
gen_kwargs["num_beams"] if gen_kwargs.get("num_beams") is not None else self.model.config.num_beams
)
default_synced_gpus = True if is_deepspeed_zero3_enabled() else False
gen_kwargs["synced_gpus"] = (
gen_kwargs["synced_gpus"] if gen_kwargs.get("synced_gpus") is not None else default_synced_gpus
)
```
If you do have a Generation Config set in the model or passed as generation_config parameter in evaluate/predict, where you set `max_new_tokens` this will yield a warning: "Both `max_new_tokens` and `max_length` seem to have been set. It was the code above that set max_length because it didn't see the passed GenerationConfig.
So it seems if I set a GenerationConfig in the model, with max_new_tokens, then I will always get this warning because the training loop doesn't pass anything directly to evaluate/predict.
Let me know if I should close this.<|||||>Also, the shape[1] of outputs.predictions coming out of Seq2SeqTrainer.predict is 20 and it does not respect max_new_tokens passed in the generation config.<|||||>Hi @antonioalegria 👋
We do support parameterizing `Seq2SeqTrainer` with a `GenerationConfig` object. You seem to be hitting an issue due to `_from_model_config` being `True`, which simply means that you've issued a sequence of commands that we did not account for :)
Most of the issues you described are intentional temporary workarounds -- when a new feature is introduced, we have to ensure our library goes through a deprecation cycle, in which the old way of doing things take precedence. That's why you see strange patterns like the use of `_from_model_config` or "duplicated" pieces of code to control the same thing. Due to the large number of possibilities within `transformers`, sometimes we simply miss a few combinations.
That being said, let's start with the basics (which you haven't done 😉 ): what version of transformers are you using, and how can I reproduce your issue?<|||||>Hi, I have removed the `_from_model_config` from the duplicated and altered config and the first issue no longer happens.
I still get those "Both max_new_tokens and max_length seem to have been set." warnings though.
transformers: 4.30.2
To reproduce use this code you just need to call `Seq2SeqTrainer.evaluate` with `generation_config=your_gen_config`, or set the generation_config in the model. In any case, you have to set max_new_tokens.
You will then see those warnings, which shouldn't happen.
Let me know if you'd like me to provide a running script.
<|||||>So what is the correct way to parametrize the generation (e.g. to use contrastive search) during the model training? [The documentation](https://huggingface.co/docs/transformers/main_classes/text_generation) misses this point. |
transformers | 24,426 | closed | TF CI fix for Segformer | This very small PR rewrites a couple of reshapes so the TF compiler can figure out the channels dim for Segformer even when some input dimensions are undefined. Should fix any CI issues the model has been having. | 06-22-2023 14:24:32 | 06-22-2023 14:24:32 | Ah, I know, I just thought you might still be unconscious!<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,425 | closed | LayoutXLM / LayoutLMv2: error when doing export to TorchScript | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-4.15.0-212-generic-x86_64-with-glibc2.27
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@NielsRogge
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run the following code:
```python
import torch
from transformers import AutoModelForTokenClassification
DEVICE = "cuda"
model = AutoModelForTokenClassification.from_pretrained(
"microsoft/layoutxlm-base", torchscript=True
).to(DEVICE)
input_ids = torch.tensor([[6, 7, 8, 9, 10]], device=DEVICE)
bboxes = torch.tensor(
[
[
(0.7605, 0.0071, 0.8343, 0.0215),
(0.834388, 0.0071817, 0.855485, 0.0215431),
(0.855485, 0.0071817, 0.866033, 0.0215431),
(0.26377, 0.02427, 0.34743, 0.0421),
(0.347483, 0.024237, 0.369816, 0.04219),
]
],
device=DEVICE,
)
bboxes = (bboxes * 1000).to(int)
image = torch.randn((1, 3, 256, 256)).to(torch.uint8).to(DEVICE)
attention_mask = torch.tensor([[1, 1, 1, 1, 1]], device=DEVICE)
torch.jit.trace(model, [input_ids, bboxes, image, attention_mask]
```
### Expected behavior
I expect to get a traced model as a result of the last line of code. But instead I get huge error traceback saying that "graphs differ across invocations". I see that @NielsRogge already did some changes to the LayoutLMv2 code to fix model tracing. AFAIK LayoutXLM just uses LayoutLMv2 model code under the hood so I was expecting to get a traced model with no problem.
But it looks like the focus of the change was on different errors (#15254). I haven't found any mentions about my problem anywhere here except for the #17476 where it helped to just disable trace checking. If I try to disable trace checking, the model works at the first inference but predictions start to deviate seriously after that. Every prediction on the same image is *very* different. So I guess it's not the solution in my case.
Am I doing something wrong here or is this model not really compatible with PyTorch tracing functionality? I'm pretty carefully following the official guide about exporting the model to TorchScript. | 06-22-2023 13:43:51 | 06-22-2023 13:43:51 | [Error message](https://github.com/huggingface/transformers/files/11834624/error.txt)
It's gigantic, so I'm attaching it as a file
<|||||>Hi @sudoandros , we investigated with @ArthurZucker and found out that the issue comes from this line: https://github.com/huggingface/transformers/blob/8e164c5400b7b413c7b8fb32e35132001effc970/src/transformers/models/layoutlmv2/modeling_layoutlmv2.py#L591 . It calls `detectron2` external library, so there is not much we can do on our side. Feel free to open an issue in their repo.
Note that the log `First diverging operator` from pytorch is wrong.
Note: `torch.are_deterministic_algorithms_enabled()` does not help.<|||||>Hello @fxmarty. Thank you for taking time and looking into it! As I see in the code, FPN model of detectron2 is the backbone here. So this is the model I should report to its authors about. Do I get it right? <|||||>@sudoandros The detectron2 config hints that `'META_ARCHITECTURE': 'GeneralizedRCNN'`. Maybe this is the bit responsible. https://github.com/facebookresearch/detectron2/issues/46 may be a good read |
transformers | 24,424 | closed | Save `site-packages` as cache in CircleCI job | # What does this PR do?
Currently, we save `~/.cache/pip` as cache. Take the `check_repository_consistency` job as an example:
- it installs `[all, quality]`
- loading cache takes ~ 45 seconds
- `pip install` takes ~ 3-4 minutes
If we save `.pyenv/versions` as cache too:
- loading this new extra cache takes ~ 2 min
- `pip install` takes 20 ~ 30 seconds
We gain 30 ~ 90 seconds (depending on CircleCI's state). Not a big absolute improvement, but for this job, whose total runtime is ~ `5m30s`, that is a > 20% reduction. As `check_repository_consistency` and `check_code_quality` are run for every push of every PR, it's probably nice to have such a reduction.
WDYT?
| 06-22-2023 13:13:27 | 06-22-2023 13:13:27 | _The documentation is not available anymore as the PR was closed or merged._<|||||>>If we save .pyenv/versions as cache too:
>loading this new extra cache takes ~ 2 min
pip install takes 20 ~ 30 seconds
@ydshieh Wait - are the timings of these in the right order? <|||||>> Do we still get updates if there is a release of one of the libraries?
@sgugger Yes if we use -U flag. I can do that if you are ok<|||||>> > If we save .pyenv/versions as cache too:
>
> > loading this new extra cache takes ~ 2 min
> > pip install takes 20 ~ 30 seconds
>
> @ydshieh Wait - are the timings of these in the right order?
Hmm, yes. But could you explain why you have doubts so I can reply in more details?<|||||>> @sgugger Yes if we use -U flag. I can do that if you are ok
YEs, please!<|||||>> Hmm, yes. But could you explain why you have doubts so I can reply in more details?
@ydshieh I just realised my mistake 🙃 I thought it was saying that it takes 2 mins to load with the cache and 30-40s to install by pip. Whereas it's (45 secs + 3-4 mins ) -> (2 mins + 20-30s). My bad! <|||||>> > Hmm, yes. But could you explain why you have doubts so I can reply in more details?
>
> @ydshieh I just realised my mistake 🙃 I thought it was saying that it takes 2 mins to load with the cache and 30-40s to install by pip. Whereas it's (45 secs + 3-4 mins ) -> (2 mins + 20-30s). My bad!
The new one should be (45 secs + 2 mins + 20-30s): The first part of cache (in `.cache/pip`) is not changed.
But we still have a little gain overall.
<|||||>Although it has already been approved - FYI: I just added -U everywhere
transformers | 24,423 | closed | False truncation of generated sequences when calling Seq2SeqTrainer.predict with num_return_sequences>1 | ### System Info
- `transformers` version: 4.30.0
- Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger @ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Define `compute_metrics` function to inspect predictions' shape
```
def compute_metrics(eval_preds):
preds, labels = eval_preds
print(preds.shape)
```
2. Create training arguments `args = TrainingArguments(do_predict=True)`
3. Instantiate `trainer = Seq2SeqTrainer(model=model, args=args, tokenizer=tokenizer, compute_metrics=compute_metrics)`
4. Call predict method `trainer.predict(test_dataset, num_return_sequences=2, max_new_tokens=32, do_sample=True)`
I expect the predictions to be of shape `8 * 2 * k` or `16 * k` where k is the generated sequence length. However, it is always `8 * k`. I find out the `generated_tokens` [here](https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/trainer_seq2seq.py#L276) is `16 * k` while it is truncated to `8 * k` [here](https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/trainer.py#L3330), which is incorrect to me.
### Expected behavior
I think when `num_return_sequences > 1`, the output tensor (`generated_tokens`) should be of shape `2 * 8 * k` instead of `16 * k`. Maybe you can add a simple parameter (e.g. `batch_size_alone=True/False`) to determine how the output is aligned.
By the way, I think there are currently too many ways to set generation configurations for prediction (kwargs, model configs, and default configs). Maybe they should be simplified. | 06-22-2023 10:53:14 | 06-22-2023 10:53:14 | It's true that `num_return_sequences>1` is not supported by the `Seq2SeqTrainer`. If you have an idea of a fix, I'm happy to look at a PR!
As for setting generation parameters, the recommended way is to use the generation config (using the model config is deprecated).<|||||>@sgugger Wow I find opening a pull request is quite complicated as there are so many tests. I pushed a simple fix in my forked repo [here](https://github.com/namespace-Pt/transformers/commit/fbcbda33522186a84a073a43fef864eecc0a29f2). Hope that helps :)<|||||>@sgugger What is the recommended way of using `Seq2SeqTrainer` with `predict_with_generate=True` and `num_return_sequences>1` in distributed inference setup, for example with `trainer.predict()`?
Currently, I have the following solution.
I am passing a custom function to `preprocess_logits_for_metrics`. Since I predict with generate, I actually do not get logits but the generated tokens as input to this function. The input shape is `(number_of_samples*num_return_sequences, sequence_length)`, which would get truncated. Therefore, I reshape the tensor to shape `(number_of_samples, seq_len * num_return_sequences)`, as sketched below.
It works, but is there a better way you would recommend?
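A minimal sketch of that workaround (the function body and the constant are illustrative, not my exact code):
```python
import torch

NUM_RETURN_SEQUENCES = 2

def preprocess_logits_for_metrics(generated_tokens, labels):
    # With predict_with_generate=True this receives generated token ids, not logits.
    # Fold the extra sequences into the feature dimension so the Trainer's
    # truncation to the dataset length keeps every returned sequence:
    # (num_samples * num_return_sequences, seq_len) -> (num_samples, seq_len * num_return_sequences)
    num_samples = generated_tokens.shape[0] // NUM_RETURN_SEQUENCES
    return generated_tokens.reshape(num_samples, -1)
```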
transformers | 24,422 | closed | Update RayTune doc link for Hyperparameter tuning | Link to RayTune search space API docs was outdated - have provided correct new link for docs.
# What does this PR do?
Updates broken link to RayTune search space API docs for the Transformers Hyperparameter tuning function.
<!-- Remove if not applicable -->
Fixes #24135
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@richardliaw , @amogkam
| 06-22-2023 10:43:59 | 06-22-2023 10:43:59 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24422). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,421 | closed | GPT2 and -100 in input_ids | ### System Info
I post all detail maybe it is expected behavior.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Hi Folks,
I remember it used to work, but I may be missing something. I noticed if I set the value in input_ids to -100.
i.e., standard ignore value. GPT-2 crash is essentially used as an index value where it should ignore it.
Is there anything changed in GPT-2 model recently in the code base? I'm not 100% sure is a bug or not.
This led to a crash.
```
attention_mask = batch['attention_mask']
mask_inverted = ~attention_mask.bool()
input_ids = batch["input_ids"]
input_ids = input_ids.masked_fill(mask_inverted == 1, -100).contiguous()
```
Thank you.
### Expected behavior
Expected behavior
value in the input_ids should [id_n, id_n+1 etc, eos_id, -100] | 06-22-2023 09:26:15 | 06-22-2023 09:26:15 | You can put -100 in the `labels` so they are ignored by the loss, but in the inputs as it's not a valid index for the embedding matrix.
Also this kind of questions is better suite for the [forums](https://discuss.huggingface.co/) as we keep GitHub issues for bugs and feature requests only.<|||||>Thank you very much for confirming, strange ... I have an old code that did work before I think it was masking -100. anyway looks like im a bit off here. : ) thank you! |
transformers | 24,420 | closed | Revert "Fix gradient checkpointing + fp16 autocast for most models" | Reverts huggingface/transformers#24247
This PR reverts #24247
The investigation initially started with the failing test in https://github.com/huggingface/peft/actions/runs/5340918925/jobs/9686171926 - a training setup that was taking 7GB now takes 15GB and OOM. I looked back at each commit and can confirm this commit caused it.
Instead of patching the initial issue on our side, I propose for now to revert the PR and just wait for the fix in PT side as doubling down the memory requirements is a lot for PEFT users.
Can confirm the training doesn't OOM before the commit 285a48011da3145ae77c5b22bcfbe77d367e5173 hence the PR that reverts the commit
cc @sgugger @pacman100 @amyeroberts
Putting it as draft as I need to deep dive a bit before making sure this is the right thing to do | 06-22-2023 08:46:55 | 06-22-2023 08:46:55 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I'd be pro reverting until we manage to resolve or find another solution.
cc @ydshieh too as the testing master :) Looking at our daily CI, it seems this hasn't affected our "normal" models - is this right? Are there any tests we should be running to verify this? <|||||>@amyeroberts
That PR #24247 is merged yesterday. The daily CI is triggered this morning and not finished yet. So we don't know what that PR brings.<|||||>There is also a push CI (only non-slow tests and not a complete CI).
From the screenshot, it does look good though.
<img width="490" alt="Screenshot 2023-06-22 114431" src="https://github.com/huggingface/transformers/assets/2521628/7db602ba-3b9b-43aa-a822-08167d1e6a6b">
<|||||>Ok I just did some benchmarks by observing the peak memory usage of different training setups and it seems to affect most of the models regardless of the modality:
| Model | Quantization method | Use Rentrant (i.e. #24247 included) | Peak memory usage |
| -------- | ------- | ------- | ------- |
| `openai/whisper-large` | 8bit | Yes | OOM |
| `openai/whisper-large` | 8bit | No | 7.5GB |
| `openai/whisper-large` | 4bit | No | 5.1GB |
| `openai/whisper-large` | 4bit | Yes | 14.5GB |
| `facebook/opt-6.7b` | 8bit | Yes | 14.1GB |
| `facebook/opt-6.7b` | 8bit | no | 9.8GB |
| `facebook/opt-1.3b` | 16bit | Yes | 12.1GB |
| `facebook/opt-1.3b` | 16bit | no | 12.1GB |
| `google/flan-t5-large` | 16bit | Yes | 12.7GB |
| `google/flan-t5-large` | 16bit | no | 12.7GB |
| `facebook/opt-1.3b` | 8bit | Yes | 5.1GB |
| `facebook/opt-1.3b` | 8bit | no | 4.1GB |
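(For reference, a minimal way to record peak GPU memory figures like the ones above — this is an assumption about the measurement, not necessarily the exact method used:)
```python
import torch

torch.cuda.reset_peak_memory_stats()
# ... run one forward/backward training step here ...
print(f"peak memory: {torch.cuda.max_memory_allocated() / 1024**3:.1f}GB")
```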
Note that before #24420 the last PEFT layer always had a None grad and therefore never got updated. But the surprising thing is that the last layer shouldn't cause a 2x memory increase; in the worst case it should cause a x(1 + 1/num_layers) increase
I will investigate further and keep updates here <|||||>@younesbelkada Thanks for investigating and sharing! Could you also add a model with no quantization for reference in the table? <|||||>Sure yes! Will update the table soon <|||||>From the updated observations above
1- it seems to affect the quantized models only
2- Larger models get more affected<|||||>we can merge this PR and revert the change as it is leading to a **huge** increase in VRAM usage for quantized models. The below minimal example doesn't lead to the final layer having `None` grads.
Please note the way Accelerate does the Mixed Precision handling which is now used in Trainer too. Don't know why this works and why using autocast as a context manager fails (results in `None` grads for final layer).
```diff
import torch
from transformers import AutoModelForCausalLM
from types import MethodType
from accelerate.utils import convert_outputs_to_fp32
model_id = "facebook/opt-350m"
model = AutoModelForCausalLM.from_pretrained(model_id).to(0)
model.gradient_checkpointing_enable()
model.train()
+ model.forward = MethodType(torch.cuda.amp.autocast(dtype=torch.bfloat16)(model.forward.__func__), model)
+ model.forward = MethodType(convert_outputs_to_fp32(model.forward.__func__), model)
assert model.training and model.is_gradient_checkpointing
optimizer = torch.optim.Adam(model.parameters(), lr=1e-7)
- with torch.cuda.amp.autocast(True, dtype=torch.float16):
dummy_input = torch.LongTensor([[0, 1, 0, 1]]).to(0)
model.train()
logits = model(dummy_input).logits
loss = logits.mean()
loss.backward()
optimizer.step()
for n, param in model.named_parameters():
if param.grad is None:
print(n)
``` <|||||>Perfect, let's revert the PR then
I can also confirm I don't have any None-grad for lora layers using llama (as posted in original issue), I believe the recent accelerate integration silently fixed the bug and the user was using a former version of transfomers
cc @amyeroberts @sgugger this is ready for review<|||||>Thanks very much for the support and quick feedback! @amyeroberts and big kudos to @pacman100 as well ! |
transformers | 24,419 | closed | Fix `save_cache` version in `config.yml` | # What does this PR do?
In #22204, I changed the `restore_cache` to `v0.6` but forgot to change `save_cache`. In consequence, no cache is saved/loaded, and the 2 jobs spend 5 minutes to install things:
<img width="716" alt="Screenshot 2023-06-22 101905" src="https://github.com/huggingface/transformers/assets/2521628/c796d958-ef9c-4066-bb46-3009be7a8fcc">
This PR fixes this and save money/credit we spend on CircleCI .... Please don't punish me 😭 | 06-22-2023 08:21:46 | 06-22-2023 08:21:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,418 | closed | Review added | ### Thank you for building **transformers**!
@AuroraCoboAguilera created a review titled:
*Hugging face, the accelerator to develop your own NLP*
on [repo-reviews.github.io](https://repo-reviews.github.io) to share their experience using **transformers**.
[link to review](https://repo-reviews.github.io//reviews/2023-06-21_AuroraCoboAguilera_huggingface_transformers)
If you would like to help your super-users share their experiences using your repo, add a [badge](https://github.com/repo-reviews/repo-reviews.github.io#add-badges) to your README.md.
We hope that sharing these experiences helps your users **increase their productivity**.
--
Please be kind,
I’m a human!
| 06-22-2023 08:03:56 | 06-22-2023 08:03:56 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,417 | closed | Skip `test_conditional_generation_pt_pix2struct` in Past CI (torch < 1.11) | # What does this PR do?
Same as in #24270, but for a test inside pipeline test in `ImageToTextPipelineTests`. | 06-22-2023 07:38:16 | 06-22-2023 07:38:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,416 | closed | [`bnb`] Fix bnb serialization issue with new release | # What does this PR do?
Fixes the failing bnb tests in: https://github.com/huggingface/transformers/actions/runs/5329457026/jobs/9655258851
The recent release of bitsandbytes slightly broke the serialization mechanism for int8 models. In the new release, bitsandbytes has introduced a new way of serializing int8 weights that is more memory efficient and avoids OOM issues when saving, for instance, PEFT models.
https://github.com/TimDettmers/bitsandbytes/pull/503
That PR introduced a new paradigm, when saving int8 state dict, [that state dict will contain some string values](https://github.com/TimDettmers/bitsandbytes/pull/503/files#diff-4d235c7e595546c6656c229dfa139298ce6602b356c2d0bafcb2352eb2cfae79R360-R363) to store some metadata information related to the quantized format.
Therefore the fix is to slightly adapt the `shard_checkpoint` method (which is called regardless of whether the model is sharded or not) by adding a new argument `state_dict_contains_metadata` that skips manipulating the `weight` entries that are no longer tensors but strings. We constrain `state_dict_contains_metadata` to the int8 case only, to ensure we don't break anything else.
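A minimal sketch of the idea (illustrative only, not the actual `shard_checkpoint` implementation):
```python
import torch

def split_state_dict(state_dict, max_shard_size):
    # String metadata entries (e.g. the quantized format info) are kept in the
    # current shard untouched and never counted as tensor storage.
    shards, current, current_size = [], {}, 0
    for key, value in state_dict.items():
        if not isinstance(value, torch.Tensor):
            current[key] = value
            continue
        size = value.numel() * value.element_size()
        if current and current_size + size > max_shard_size:
            shards.append(current)
            current, current_size = {}, 0
        current[key] = value
        current_size += size
    shards.append(current)
    return shards
```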
cc @amyeroberts | 06-22-2023 06:59:45 | 06-22-2023 06:59:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24416). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,415 | closed | fix the grad_acc issue at epoch boundaries | # What does this PR do?
Should solve accumulating via epoch when using Accelerate (seen in https://github.com/huggingface/transformers/issues/23935#issuecomment-1588134562). Requires https://github.com/huggingface/accelerate/pull/1624
Fixes # (issue) | 06-22-2023 06:06:08 | 06-22-2023 06:06:08 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I believe this PR also fixes #24245<|||||>@amyeroberts we coordinate Accelerate releases to be a day or two before `transformers`, so there shouldn't be an issue there :)
(Though @pacman100 we should do the version check like we've done before with these fixes 😬 )<|||||>Hello,
How do i install this? I expected that PR means some kind of transformers update, in this case there should be install link such as git+https://github.com/huggingface/transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9
<|||||>Hello @Oxi84, you can install this once it gets merged via `pip install git+https://github.com/huggingface/transformers` and `pip install git+https://github.com/huggingface/accelerate`<|||||>Just completed a training run with this PR and can confirm that the issue didn't occur. Thanks for the fix! |
transformers | 24,414 | open | Trouble fine-tuning zero-shot image classification model | ### System Info
transformers 4.30.2
python 3.11.4
### Who can help?
@amyeroberts @sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to fine-tune a **zero-shot** image classifier, following this example for reference: https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb
What I changed from the notebook:
The checkpoint I am starting from: `laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K`
For the model I am using `AutoModelForZeroShotImageClassification` like this:
```
model = AutoModelForZeroShotImageClassification.from_pretrained(
model_checkpoint,
label2id=label2id,
id2label=id2label,
ignore_mismatched_sizes = True,
)
```
When I run `trainer.train()`, I get this error:
> TypeError: CLIPModel.forward() got an unexpected keyword argument 'labels'
### Expected behavior
With my changes to the notebook, it should fine-tune for zero-shot pre-trained model `laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K` | 06-22-2023 05:55:12 | 06-22-2023 05:55:12 | Hi @moon001light, thanks for raising this issue.
Models that can be loaded with the `AutoXxx` API will have a shared input and output structure. In the example notebook, the model is loaded using `AutoModelForImageClassification`. Models loaded with `AutoModelForZeroShotImageClassification` have a different set of expected inputs, in particular, they don't accept `labels` and expect `input_ids`. Here's a guide on performing the zero-shot image classification task using transformers: https://huggingface.co/docs/transformers/tasks/zero_shot_image_classification.
Add more example scripts is on my to-do list. I'll make sure to include this task! <|||||>Thanks for the quick reply @amyeroberts , the link you gave seems to be a high level overview. Is there anything else besides replacing `labels` with `input_ids`? Could you point me to the code so that I can see the full list of expected inputs for `AutoModelForZeroShotImageClassification`? Appreciate your patience thanks :)<|||||>@moon001light If all you want is to train this particular checkpoint, then I would look directly at the architecture it loads, which in this [case is CLIPModel](https://github.com/huggingface/transformers/blob/ea91c2adca842da3d2f87e094504fa7d66a7008a/src/transformers/models/clip/modeling_clip.py#L1082).
More generally, the auto groups are [defined in `modeling_auto.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/modeling_auto.py). For [AutoModelForZeroShotClassification](https://github.com/huggingface/transformers/blob/ea91c2adca842da3d2f87e094504fa7d66a7008a/src/transformers/models/auto/modeling_auto.py#L1256C7-L1256C46) the model architectures that can be used are listed under [MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES](https://github.com/huggingface/transformers/blob/ea91c2adca842da3d2f87e094504fa7d66a7008a/src/transformers/models/auto/modeling_auto.py#L980).
<|||||>A bit of a newbie here, after looking into it I am still not sure how to properly set `input_ids`. This is what I tried:
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
def collate_fn(examples):
pixel_values = torch.stack([example["pixel_values"] for example in examples])
input_ids = tokenizer(examples["label"], padding="max_length", truncation=True)
return {"pixel_values": pixel_values, "input_ids": input_ids}
```
But my `input_ids` is wrong :( Not really sure what `input_ids` should be. Looking forward to that example script of yours so I can learn @amyeroberts . Appreciate any help I can get :)<|||||>@moon001light I suggest inspecting the objects at each step to see what they are and contain. For example, the output of
```python
tokenizer(examples["label"], padding="max_length", truncation=True)
```
is not `input_ids`, but rather a dictionary that contains `input_ids`. |
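For reference, a minimal sketch of a CLIP-style collate function (assuming `id2label` maps the integer label to a class name; the prompt wording is just an illustration):
```python
import torch

def collate_fn(examples):
    pixel_values = torch.stack([example["pixel_values"] for example in examples])
    texts = [f"a photo of a {id2label[example['label']]}" for example in examples]
    text_inputs = tokenizer(texts, padding=True, return_tensors="pt")
    return {
        "pixel_values": pixel_values,
        "input_ids": text_inputs["input_ids"],
        "attention_mask": text_inputs["attention_mask"],
        "return_loss": True,  # CLIPModel only computes its contrastive loss when asked
    }
```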
transformers | 24,413 | closed | Create Pretrained module | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-22-2023 04:06:24 | 06-22-2023 04:06:24 | |
transformers | 24,412 | closed | Removed @torch.no_grad() and in-place operations in optimizers for backwards | # What does this PR do?
In #23417, two``@torch.no_grad()`` lines were added before
```python
def step(self, closure: Callable = None):
```
in class `AdamW` and `Adafactor`.
However, I think these decorators should not be there, because they cause errors in the backward pass.
Other optimizers in PyTorch have an ``@_use_grad_for_differentiable`` line before `def step`:
```python
@_use_grad_for_differentiable
def step(self, closure=None):
```
- Examples
- https://github.com/pytorch/pytorch/blob/430cb3e1600e0aca742105a2cdf4a01d901955dd/torch/optim/adam.py#L122-L123
- https://github.com/pytorch/pytorch/blob/430cb3e1600e0aca742105a2cdf4a01d901955dd/torch/optim/adamw.py#L149-L150
I also replaced the in-place operations with assignments so that the backward pass works.
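A schematic contrast of the two update styles (a standalone toy example, not the exact lines from `optimization.py`):
```python
import torch

p = torch.nn.Parameter(torch.randn(4))
update = torch.randn(4)
lr = 1e-3

# In-place style: mutates the existing tensor storage directly.
p.data.add_(update, alpha=-lr)

# Assignment style (the spirit of this PR): build a new tensor instead,
# so a backward pass run inside the optimizer's closure is not disturbed.
p.data = p.data - lr * update
```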
## Question
Should the following line also be replaced?
https://github.com/huggingface/transformers/blob/6ce6d62b6f20040129ec9831e7c4f6576402ea42/src/transformers/optimization.py#L728
## Context
I faced this problem when using PyTorch Lightning 2.0.3 with `transformers.optimization.Adafactor` as an optimizer.
With 3cf01b206 (one previous commit), this error did not occur.
```txt
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 225, in optimizer_step
return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 114, in optimizer_step
return optimizer.step(closure=closure, **kwargs)
File "/path/to/lib/python3.10/site-packages/torch/optim/optimizer.py", line 280, in wrapper
out = func(*args, **kwargs)
File "/path/to/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/path/to/lib/python3.10/site-packages/transformers/optimization.py", line 649, in step
loss = closure()
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 101, in _wrap_closure
closure_result = closure()
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 140, in __call__
self._result = self.closure(*args, **kwargs)
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 135, in closure
self._backward_fn(step_output.closure_loss)
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 232, in backward_fn
call._call_strategy_hook(self.trainer, "backward", loss, optimizer)
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 287, in _call_strategy_hook
output = fn(*args, **kwargs)
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 200, in backward
self.precision_plugin.backward(closure_loss, self.lightning_module, optimizer, *args, **kwargs)
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 67, in backward
model.backward(tensor, *args, **kwargs)
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/core/module.py", line 1046, in backward
loss.backward(*args, **kwargs)
File "/path/to/lib/python3.10/site-packages/torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File "/path/to/lib/python3.10/site-packages/torch/autograd/__init__.py", line 200, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Epoch 0: 0%| | 0/2852 [00:01<?, ?it/s]
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- PyTorch: @sgugger
| 06-22-2023 03:01:58 | 06-22-2023 03:01:58 | I consider this pull request insufficient because tests have failed.
I welcome comments on the fix!<|||||>Transformers is primarily a library of models, not optimizers. I would recommend not using the AdamW/Adafactor from the library (which are going to be removed in the next major version) and use another implementation :-)<|||||>Thank you for the comment!
All right, I found an implementation `fairseq.optim.adafactor.Adafactor` and I will use it.
https://github.com/pytorch/pytorch/issues/30446 |
transformers | 24,411 | closed | GPTJForCausalLM with instruction provided on tutorial doesn't load on 4090 | ### System Info
GPU 4090
- `transformers` version: 4.30.2
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): GPU 2.0.1+cu117 (True)
- Using GPU in script?: <4090>
- Using distributed or parallel set-up in script?: <no>
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction



### Expected behavior
from the official tutorial, it said it should work with 16 GB of Ram, but 4090 has 24G of ram.
the kernel is died.
Can anyone help me out?
https://huggingface.co/docs/transformers/model_doc/gptj
error:
Kernel Restarting
The kernel for gpt-j-6b/1.ipynb appears to have died. It will restart automatically. | 06-21-2023 23:28:57 | 06-21-2023 23:28:57 | Hi @km5ar
Thanks for the issue
This is because google colab instances have a relatively low CPU RAM (<24GB) and the GPT-J model stored on the Hub at `EleutherAI/gpt-j-6b` are actually in float32 (24GB). Therefore from_pretrained will try to download that large file and crashed. Moreover, for large models it is recommended to load a model either with `low_cpu_mem_usage=True` or `device_map="auto"` as by default from_pretrained will initialize a random model with the same number of paramaters then try to populate the model with the dowloaded weights. Using `low_cpu_mem_usage=True` will avoid that step and not create a dummy random model at the beginning of the from_pretrained call.
To run gpt-j 6B on google colab consider using this repo: [`ybelkada/gpt-j-6b-sharded-bf16`](https://huggingface.co/ybelkada/gpt-j-6b-sharded-bf16) if you want to load the model in bf16 (by passing `torch_dtype=torch.bfloat16` ) or this repo: [`philschmid/gpt-j-6B-fp16-sharded`](https://huggingface.co/philschmid/gpt-j-6B-fp16-sharded) if you want to run the model in `float16` (by passing `torch_dtype=torch.float16`).<|||||>@younesbelkada
Thanks for the answer
However, my question is
if you see code, I did use torch.float16.
and please see the following screenshot,
I was copy the code from the tutorial from official guide, which clearly said "The model should fit on 16GB GPU for inference."
I was using a 4090 in my local machine.


<|||||>@km5ar
Thanks !
The statement "The model should fit on 16GB GPU" is still true. As explained above, the culprit is that `low_cpu_mem_uage` is set by default to `False` therefore blows up the Google Colab's CPU memory that is relatively low for that model due to the reasons I have detailed. Also, loading a checkpoint that is sharded helps to not getting those errors as the shards are processed one by one and deleted afterwards.
<img width="560" alt="Screenshot 2023-06-22 at 14 06 53" src="https://github.com/huggingface/transformers/assets/49240599/d919490b-e915-431b-8d64-08c78762adb9">
As you can see from the screen shot above, `low_cpu_mem_usage=False` combined with that checkpoint will force the program to allocate 12+12=24GB CPU memory before moving the weights on GPU, hence your error.
Can you try to call `from_pretrained` with `low_cpu_mem_usage=True`, and use [philschmid/gpt-j-6B-fp16-sharded](https://huggingface.co/philschmid/gpt-j-6B-fp16-sharded) instead of the original repository?
Thanks<|||||>Thank you!!! that's very helpful! |
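For anyone landing here later, a minimal loading sketch following the suggestion above:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "philschmid/gpt-j-6B-fp16-sharded"  # sharded fp16 copy of GPT-J 6B
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,  # skip materialising a random fp32 copy first
).to("cuda")
```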
transformers | 24,410 | closed | Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:3! | ### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31
- Python version: 3.11.4
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to finetune an [InstructCodeT5+](https://huggingface.co/Salesforce/instructcodet5p-16b) model on some training data using a multi-GPU setup. The same code (given further below) seems to work in a single-GPU setting (when i set `CUDA_VISIBLE_DEVICES=0`):
```
CUDA_VISIBLE_DEVICES=0,1,2,3 python training.py
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cuda118.so
CUDA SETUP: CUDA runtime path found: /root/anaconda3/envs/datachat_env/lib/libcudart.so.11.0
CUDA SETUP: Highest compute capability among GPUs detected: 8.0
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cuda118.so...
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Package punkt is already up-to-date!
The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device` function.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:24<00:00, 4.92s/it]
/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/optimization.py:407: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
0%| | 0/17409 [00:00<?, ?it/s]You're using a CodeGenTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
Traceback (most recent call last):
File "/root/Custom-LLM/training.py", line 364, in <module>
main()
File "/root/Custom-LLM/training.py", line 336, in main
trainer.train()
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1664, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1940, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 2735, in training_step
loss = self.compute_loss(model, inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 2767, in compute_loss
outputs = model(**inputs)
^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.cache/huggingface/modules/transformers_modules/instructcodet5p-16b/modeling_codet5p.py", line 904, in forward
encoder_hidden_states = self.enc_to_dec_proj(encoder_hidden_states)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:3! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
```
Code for the above error is given below:
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import pandas as pd
import os
import torch
from peft import TaskType
from transformers import DataCollatorForSeq2Seq, Seq2SeqTrainer, Seq2SeqTrainingArguments, BitsAndBytesConfig
import evaluate
import nltk
import numpy as np
from nltk.tokenize import sent_tokenize
from datasets import Dataset, DatasetDict
import argparse
import pickle
import json
import statistics
import ast
from copy import deepcopy
device = 'cuda'
parser = argparse.ArgumentParser(description='Options')
parser.add_argument('--dataset_dir', default='data', type=str, help="folder in which the dataset is stored")
parser.add_argument('--output_dir', default="lora-instructcodet5p", type=str, help="output directory for the model")
parser.add_argument('--results_dir', default="results", type=str, help="where the results should be stored")
args = parser.parse_args()
tokenized_dataset = DatasetDict.load_from_disk(args.dataset_dir)
pad_tok = 50256
token_id="Salesforce/instructcodet5p-16b"
tokenizer = AutoTokenizer.from_pretrained(token_id)
def main():
# huggingface hub model id
model_id="instructcodet5p-16b"
if not os.path.exists(model_id):
model_id=token_id
# load model from the hub
model = AutoModelForSeq2SeqLM.from_pretrained(model_id,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True, decoder_start_token_id=1, pad_token_id=pad_tok, device_map="auto").to(device)
# we want to ignore tokenizer pad token in the loss
label_pad_token_id = pad_tok
# Data collator
data_collator = DataCollatorForSeq2Seq(
tokenizer,
model=model,
label_pad_token_id=label_pad_token_id,
pad_to_multiple_of=8
)
output_dir=args.output_dir
training_args = Seq2SeqTrainingArguments(
output_dir=output_dir,
per_device_train_batch_size=1,
predict_with_generate=True,
weight_decay=0.05,
warmup_steps=100,
fp16=False,
learning_rate=1e-3,
num_train_epochs=3,
logging_dir=f"{output_dir}/logs",
logging_strategy="epoch",
save_strategy="no",
report_to="tensorboard",
push_to_hub=False,
generation_max_length=200,
include_inputs_for_metrics = True,
lr_scheduler_type = 'cosine'
)
# Create Trainer instance
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=tokenized_dataset["train"]
)
# train model
trainer.train()
if __name__ == '__main__':
main()
```
### Expected behavior
Expected behavior is that the model should train in a multi-GPU setting without throwing any errors. The same script works in single-GPU setting but throws the above error in a multi-GPU setting | 06-21-2023 23:19:35 | 06-21-2023 23:19:35 | You cannot use `device_map="auto"` with the `to` method afterward as you do. The model will be split up on the GPUs already.
Also, how are you launching your training script after?<|||||>Getting the same error when i remove the `to` method. Traceback given below. Also, I launch the training script using `CUDA_VISIBLE_DEVICES=0,1,2,3 python training.py`
```
Traceback (most recent call last):
File "/root/Custom-LLM/new_train.py", line 91, in <module>
main()
File "/root/Custom-LLM/new_train.py", line 88, in main
trainer.train()
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1938, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 2759, in training_step
loss = self.compute_loss(model, inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 2784, in compute_loss
outputs = model(**inputs)
^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.cache/huggingface/modules/transformers_modules/instructcodet5p-16b/modeling_codet5p.py", line 932, in forward
loss = loss_fct(logits.reshape(-1, self.decoder.config.vocab_size), labels.view(-1))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/nn/modules/loss.py", line 1174, in forward
return F.cross_entropy(input, target, weight=self.weight,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/nn/functional.py", line 3029, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:3 and cuda:0! (when checking argument for argument target in method wrapper_CUDA_nll_loss_forward)
0%| | 0/17409 [00:02<?, ?it/s]
```<|||||>Ah, the problem lies in the custom code of this model. You need to move the `labels` to the device of the logits [here](https://huggingface.co/Salesforce/instructcodet5p-16b/blob/main/modeling_codet5p.py#L930) by adding `labels = labels.to(logits.device)`. Your logits are on the last GPU but your labels are still on the first one.<|||||>I made a [PR](https://huggingface.co/Salesforce/instructcodet5p-16b/discussions/4) with this suggestion on the repo. You can check it out locally by adding the `revision="pr_4"` argument when loading the model.<|||||>Thanks a lot! That solved the issue |
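For reference, a minimal sketch of the suggested one-line change inside the remote modeling code (the surrounding lines are an assumption based on the traceback above, not the verbatim file):
```python
# inside the model's forward, just before the loss computation:
if labels is not None:
    labels = labels.to(logits.device)  # logits sit on the last GPU under device_map="auto"
    loss_fct = CrossEntropyLoss()
    loss = loss_fct(logits.reshape(-1, self.decoder.config.vocab_size), labels.view(-1))
```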
transformers | 24,409 | open | wandb metric argument is weird | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.7.19-050719-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <yes>
- Using distributed or parallel set-up in script?: <no>
### Who can help?
@AyushExel
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
There are a few bugs/weirdness when specifying the `metric` in `wandb` hyperparameter search.
1. If a `metric` is specified via the `hp_space` argument but there is not a `metric` argument to `hyperparameter_search`, the metric in the `hp_space` is ignored and changed to `eval/loss`. See: https://github.com/huggingface/transformers/blame/6ce6d62b6f20040129ec9831e7c4f6576402ea42/src/transformers/integrations.py#L497
2. If a custom `hp_space` is provided that does not define `metric` at all, and the `metric` argument is specified to `hyperparameter_search`, this code throws an exception: https://github.com/huggingface/transformers/blame/6ce6d62b6f20040129ec9831e7c4f6576402ea42/src/transformers/integrations.py#L500
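A minimal sketch of the fallback behaviour I would expect instead (names are illustrative, not the current implementation):
```python
def resolve_sweep_metric(sweep_config, default_metric="eval/loss"):
    # Respect a metric the user already defined in hp_space;
    # only fall back to the default when none is present.
    if "metric" not in sweep_config:
        sweep_config["metric"] = {"name": default_metric, "goal": "minimize"}
    return sweep_config
```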
### Expected behavior
1. If a `hp_space` defines a metric, use that instead of overwriting it.
2. Don't throw an exception if the `hp_space` lacks a `metric` key. | 06-21-2023 19:29:46 | 06-21-2023 19:29:46 | Thanks @edmcman for the ping.
@ayulockin @morganmcg1 in case this is relevant to the integrations team. tbh, I don't remember much about this integration but I implemented it so happy to help in case there's a blocker.<|||||>Thanks for the detailed issue @edmcman; we are taking a look at this. Will connect with you @AyushExel in case of blocker.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Ping
On Mon, Jul 31, 2023, 11:03 AM github-actions[bot] -
***@***.*** <github.edmcman.99c9f1b9d0.notifications#
***@***.***> wrote:
> This issue has been automatically marked as stale because it has not had
> recent activity. If you think this still needs to be addressed please
> comment on this thread.
>
> Please note that issues that do not follow the contributing guidelines
> <https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md>
> are likely to be ignored.
>
> —
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/24409#issuecomment-1658554442>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AAHYKZPH3FQDLIUC7DSYYRLXS7CKXANCNFSM6AAAAAAZPGOQMA>
> .
> You are receiving this because you were mentioned.Message ID:
> ***@***.***>
>
|
transformers | 24,408 | closed | [WIP] Add Restormer | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds Restormer to HF and closes #22372
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-21-2023 17:02:21 | 06-21-2023 17:02:21 | cc @amyeroberts for information.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,407 | closed | 🚨🚨 Fix group beam search | # What does this PR do?
Diverse beam search is a method that generates `num_beams//num_beam_groups` sentences for each group independently. However, the current code uses a single `BeamHypotheses` instance shared by all groups, so group A can end up producing two finished sentences before group B outputs any. So I created a `BeamHypotheses` instance per group, allowing each group to be finalized independently.
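As a rough illustration of the idea (simplified stand-in bookkeeping, not the actual `BeamHypotheses` implementation in `transformers`):

```python
# Simplified stand-in for the per-group bookkeeping: each group keeps its own
# pool of finished hypotheses, so one group can never overwrite another's.
num_beams = 2
num_beam_groups = 2
group_size = num_beams // num_beam_groups

# Before: a single shared pool let the first group fill every returned slot.
# After: one independent pool per group.
per_group_pools = [[] for _ in range(num_beam_groups)]

def add_hypothesis(group_idx, tokens, score):
    pool = per_group_pools[group_idx]      # only this group's pool is touched
    pool.append((score, tokens))
    pool.sort(key=lambda item: item[0], reverse=True)  # best score first
    del pool[group_size:]                  # keep at most `group_size` entries

add_hypothesis(0, ["sentence", "from", "group", "A"], -0.7)
add_hypothesis(1, ["sentence", "from", "group", "B"], -1.2)
print(per_group_pools)
```

The actual fix applies the same per-group principle inside the beam scorer, including the `finalize()` step.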
Changes are as follows.
inference code:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-xsum")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-xsum")
text = "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration."
outputs = model.generate(
tokenizer.encode(text, return_tensors="pt", max_length=512),
num_beam_groups=2,
num_beams=2,
diversity_penalty=1000000.0,
num_return_sequences=2,
)
print("\n".join(tokenizer.batch_decode(outputs, skip_special_tokens=True)))
```
before:
```
A number Of research projects have investigated the role of the brain's encoder and decoder in the control of the encoded sequences.
A number Of research projects have investigated the role of the brain's encoder and decoder in the control of the encoded sequences..
```
after:
```
The study of the activity of the brain's encoders and decoders has revealed a range of different models of how the brain processes information.
A number Of research projects have investigated the role of the brain's encoder and decoder in the control of the encoded sequences.
```
Fixes #24369
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante
| 06-21-2023 16:41:22 | 06-21-2023 16:41:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>## Summary of the problem and corresponding fix (for the core maintainer and our future selves)
### Problem
The generation loop in `group_beam_search` is correct, and it builds `num_beam_groups` distinct groups of sequences. However, the [`beam_scorer.finalize()` step](https://github.com/huggingface/transformers/blob/8e164c5400b7b413c7b8fb32e35132001effc970/src/transformers/generation/utils.py#L3711) was not taking `num_beam_groups` into consideration and the beam selection therein, when appending the last tokens, was free to write across groups. This should not happen at all, and it could entirely flush out the diversity in the different groups (when `num_beam_groups >= num_beams/2`), as we see in the example in the PR header.
### Fix
Two different paths were possible: a) add logic to `finalize` to handle groups correctly; b) treat each group as an independent set of hypotheses. From the [paper](https://arxiv.org/pdf/1610.02424.pdf), we can read "we divide the beam budget B into G groups and greedily optimize each group using beam search", so option b), kindly implemented by @hukuda222, is closer to the reference. <|||||>@gante
Thanks for the review, CI now passes, and I confirmed that `RUN_SLOW=1 py.test tests/generation/test_utils.py -vv` also passes.<|||||>@sgugger [this comment](https://github.com/huggingface/transformers/pull/24407#issuecomment-1605624032) summarizes the problem and the fix<|||||>@sgugger the breaking changes here are in the generated outputs from `group_beam_search`, and they are inevitable given the bug fix. The method was underperforming (measured in log scores AND beam diversity, which is the point of the method) before these changes.
Since it is a bug fix, there is no need to ensure retro compatibility, correct? |
transformers | 24,406 | closed | Potential Memory Leakage during inference using DistilBert/Bert | ### System Info
transformer: 4.24.0
python: 3.8.13
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hello,
I am potentially facing a memory leak when using DistilBert or Bert for inference; the following is my code:
```
@torch.no_grad()
def inference(self, hidden_state, mask, id):
self.eval()
print("begin", torch.cuda.max_memory_allocated("cuda:4")/ 1024 / 1024)
distilbert_output = self.bert(inputs_embeds=hidden_state, attention_mask=mask, return_dict=False)
print("middle", torch.cuda.max_memory_allocated("cuda:4")/ 1024 / 1024)
hidden_state = distilbert_output[0]
pooled_output = hidden_state[:, 0]
x = pooled_output
x = F.dropout(x, p=args.dropout, training=self.training)
del hidden_state, pooled_output
print("final", torch.cuda.max_memory_allocated("cuda:4")/ 1024 / 1024)
for i, lin in enumerate(self.lins[:-1]):
x = lin(x)
x = F.relu(x)
x = F.dropout(x, p=args.dropout, training=self.training)
x = self.lins[-1](x)
print("final2", torch.cuda.max_memory_allocated("cuda:4")/ 1024 / 1024)
torch.cuda.empty_cache()
return x
```
And the output of each memory-usage print is:
begin : 4200
middle : 38000
final : 38000
final 2 : 38000
It seems that `distilbert_output = self.bert(inputs_embeds=hidden_state, attention_mask=mask, return_dict=False)` does not release memory the way the MLP does: there is a huge increase between "begin" and "middle", but no increase between "final" and "final 2". Could I get some ideas about this issue? Thanks.
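For reference, a minimal self-contained check that separates cached-but-releasable memory from a true leak (the toy `nn.Linear` module and the `cuda:0` device are only assumptions for this sketch; any CUDA setup works):

```python
import gc

import torch
import torch.nn as nn

device = "cuda:0"
model = nn.Linear(4096, 4096).to(device).eval()
x = torch.randn(256, 4096, device=device)

torch.cuda.reset_peak_memory_stats(device)  # start peak tracking from here

with torch.no_grad():
    y = model(x)

mb = 1024 * 1024
print("allocated after forward:", torch.cuda.memory_allocated(device) / mb)
print("peak since reset:", torch.cuda.max_memory_allocated(device) / mb)

# Drop references first, then collect and release cached blocks.
del y, x
gc.collect()
torch.cuda.empty_cache()
print("allocated after cleanup:", torch.cuda.memory_allocated(device) / mb)
# max_memory_allocated() keeps reporting the historical peak until
# reset_peak_memory_stats() is called again.
```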
### Expected behavior
GPU should release memory after transformer inference | 06-21-2023 15:57:57 | 06-21-2023 15:57:57 | cc @ArthurZucker and @younesbelkada <|||||>Hi @TOP-RX
Can you try to add `torch.cuda.empty_cache()` before `"final"`? Also, you might need to combine it with
```python
import gc
gc.collect()
```
After that call. See this comment from a previous issue for reference: https://github.com/huggingface/transformers/issues/21094#issuecomment-1396951333<|||||>Hello @sgugger @younesbelkada @ArthurZucker,
Thanks for your reply! I tried including `torch.cuda.empty_cache()` before "final", and also tried `gc.collect()` both before and after `torch.cuda.empty_cache()`, but the issue still happened. I also used `print(torch.cuda.memory_allocated("cuda:4")/ 1024 / 1024)` to check the allocated memory:
```
@torch.no_grad()
def inference(self, hidden_state, mask, id):
self.eval()
print("begin_max", torch.cuda.max_memory_allocated("cuda:4")/ 1024 / 1024)
print("begin", torch.cuda.memory_allocated("cuda:4")/ 1024 / 1024)
distilbert_output = self.bert(inputs_embeds=hidden_state, attention_mask=mask, return_dict=False)
print("middle_max", torch.cuda.max_memory_allocated("cuda:4")/ 1024 / 1024)
print("middle", torch.cuda.memory_allocated("cuda:4")/ 1024 / 1024)
gc.collect()
torch.cuda.empty_cache()
gc.collect()
hidden_state = distilbert_output[0]
pooled_output = hidden_state[:, 0]
x = pooled_output
x = F.dropout(x, p=args.dropout, training=self.training)
del hidden_state, pooled_output
print("final_max", torch.cuda.max_memory_allocated("cuda:4")/ 1024 / 1024)
print("final", torch.cuda.memory_allocated("cuda:4")/ 1024 / 1024)
for i, lin in enumerate(self.lins[:-1]):
x = lin(x)
#x = self.bns[i](x)
x = F.relu(x)
x = F.dropout(x, p=args.dropout, training=self.training)
x = self.lins[-1](x)
self.z_mlp = self.z_mlp.to(device)
self.z_mlp[id] = x.clone().detach()
print("final2", torch.cuda.max_memory_allocated("cuda:4")/ 1024 / 1024)
torch.cuda.empty_cache()
return x
```
Here are the results:
begin_max 4217.28662109375
begin 4217.28662109375
middle_max 39844.28662109375
middle 7967.28662109375
final_max 39844.28662109375
final 7996.58349609375
there is also a big gap between `max_memory_allocated` and `memory_allocated`; could I have some further advice? Thanks.<|||||>I have tried several different approaches that I found, but the problem still occurs. Is this normal?<|||||>So one thing to note: you are calling `gc.collect();torch.cuda.empty_cache()`, but the `del hidden_state, pooled_output` comes after. You should first delete, then call `gc.collect();torch.cuda.empty_cache()`. <|||||>Note: regarding ` max_memory_allocated`
> By default, this returns the peak allocated memory since the beginning of this program. [reset_peak_memory_stats()](https://pytorch.org/docs/stable/generated/torch.cuda.reset_peak_memory_stats.html#torch.cuda.reset_peak_memory_stats) can be used to reset the starting point in tracking this metric.
I don't see a problem with those larger values for `max_memory_allocated`. They are just the peak values.<|||||>Hello @ydshieh @ArthurZucker,
Thanks for your help! Before I call the function as shown above, I use:
```
num_layers = 1 # Number of encoder layers to keep (the last num_layers layers are kept)
encoder_layers = model.bert.transformer.layer[-num_layers:]
model.bert.transformer.layer = nn.ModuleList(encoder_layers)
```
to control the number of transformer layers in my model. However, if I include `@torch.no_grad()` in the function I showed above, here are the results:
> 1 transformer layer: allocated memory: 800MB, max allocated: 2300MB
> 2 transformer layers: allocated memory: 840MB, max allocated: 2560MB
If I just comment out the `@torch.no_grad()` in the above code for comparison:
> 1 transformer layer: allocated memory: 4107MB, max allocated: 4299MB
> 2 transformer layers: allocated memory: 7564MB, max allocated: 7756MB
For the first case with `@torch.no_grad()`, we don't need to store the intermediate values for the backward pass, so it's reasonable that the GPU memory is lower than in the second case. In the second case, the GPU usage is proportional to the number of layers I used, which is consistent with my intuition. What confuses me is that no matter how many transformer layers I use in the first case (with `@torch.no_grad()`), the GPU memory usage is almost the same. I am wondering if I misunderstand something?
Any help would be appreciated!<|||||>Hi @TOP-RX
It's necessary to provide a self-contained code snippet. With only the definition of `def inference` and not the inputs you passed to it, there is nothing we can do to help. Also, please import everything necessary in the code snippet.
> What makes me confused is no matter how many transformer layers I used in the first case(with @torch.no_grad()),
Could you let us know where you put the print statements in the function `inference` to get:
> transformer layer: allocated memory
> transformer layers: allocated memory<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,405 | closed | Fix accumulation by epoch with Accelerate | # What does this PR do?
Should solve accumulating via epoch when using Accelerate (seen in https://github.com/huggingface/transformers/issues/23935#issuecomment-1588134562). Requires https://github.com/huggingface/accelerate/pull/1624
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@pacman100 | 06-21-2023 15:01:48 | 06-21-2023 15:01:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Solved the problem for me with a small naming fix in trainer.py:
```python
accumulation_plugin = GradientAccumulationPlugin(
num_steps=self.args.gradient_accumulation_steps, sync_with_dataloader=False
)
```
(the field name is `num_steps`, not `gradient_accumulation_steps`)<|||||>We can close this one, as I have added a missing edge case to #24415
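For context, a standalone sketch of how such a plugin is typically handed to `Accelerator` outside the Trainer (assumes a recent `accelerate` release that supports `sync_with_dataloader`; the model and optimizer are toy placeholders):

```python
import torch
from accelerate import Accelerator
from accelerate.utils import GradientAccumulationPlugin

# The plugin field is `num_steps`; `sync_with_dataloader=False` keeps gradient
# sync decisions out of the dataloader so epoch boundaries behave as expected.
plugin = GradientAccumulationPlugin(num_steps=4, sync_with_dataloader=False)
accelerator = Accelerator(gradient_accumulation_plugin=plugin)

model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = accelerator.prepare(model, optimizer)

for step in range(8):
    inputs = torch.randn(2, 8, device=accelerator.device)
    targets = torch.randn(2, 1, device=accelerator.device)
    with accelerator.accumulate(model):
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```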
transformers | 24,404 | closed | TF safetensors reduced mem usage | When we load safetensors files in TF, the entire safetensors state dict is materialized on GPU alongside the randomly initialized weights. This inflates our memory usage during loading a lot, up to about 2X - 2.5X the amount we actually need.
This PR grabs tensors iteratively from the underlying safetensors archives and assigns them. ~It's working for TF-formatted safetensors archives right now, and I'll add torch-format support next.~ Now supports PT and TF formatted archives! Load times still seem very quick for me locally, so I don't think this negatively impacts anything!
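A rough sketch of the per-tensor pattern (illustrative only; the real cross-loading code also has to map weight names and transpose some PyTorch kernels):

```python
from safetensors import safe_open

def assign_weights_lazily(weights_by_name, checkpoint_path):
    """Assign checkpoint tensors one at a time instead of building a full state dict.

    `weights_by_name` is assumed to map weight names to tf.Variable objects.
    """
    unexpected_keys = []
    with safe_open(checkpoint_path, framework="np") as archive:
        for name in archive.keys():
            if name not in weights_by_name:
                unexpected_keys.append(name)  # real code tracks missing/unexpected keys
                continue
            # Only this tensor is resident at once; TF converts the NumPy array
            # during assignment, so peak memory stays close to the model size.
            weights_by_name[name].assign(archive.get_tensor(name))
    return unexpected_keys
```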
Fixes #24393 | 06-21-2023 14:16:06 | 06-21-2023 14:16:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This looks good in my testing! cc @Narsil for review and @amyeroberts for core maintainer review to save Sylvain from having to read code while he's distracted by the hallucinatory dental anaesthetic goblins
To explain one detail: In general, all TF ops can take NumPy inputs. Often the "idiomatic" way to write TF code is to do preprocessing/once-off stuff in NumPy and just pass the result to TF. Since Safetensors have very efficient Numpy loading, I generally open them in `"np"` format and let TF handle any necessary conversions.
The one exception is when loading a PyTorch state dict archive. The reason here is that PyTorch weights often need to be transposed to load them into TF, and NumPy transposes on CPU are much slower than TF transposes on GPU. Model loading was several seconds slower when I passed `"np"` format archives to the PyTorch state dict crossloading function, but with GPU transposes this PR has almost no impact on performance, while hugely reducing peak memory usage.<|||||>Yeah, actually, I thought about it a little more - using `np` is idiomatic for TF, but it's a bit messy if the PyTorch crossloading uses `tf` for fast GPU transposes and the others use `np`, especially when it doesn't really matter. I'll use `tf` globally for consistency!<|||||>Just ran the slow tests for a few models that load safetensor checkpoints and this still seems good<|||||>Finished slow testing - performance and memory usage look good!<|||||>Everything looks good at this point - any objections to merging? |
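For reference, a hedged sketch of the np-vs-tf trade-off described above (illustrative only; `needs_transpose` stands in for the real per-weight decision, and the safetensors `framework` argument only controls the returned tensor type, not how the file was written):

```python
import tensorflow as tf
from safetensors import safe_open

def load_tensor(archive_path, name, needs_transpose):
    if needs_transpose:
        # PyTorch-style kernel: fetch a tf.Tensor so the transpose runs as a
        # TF op (on GPU when available) rather than a slower NumPy CPU transpose.
        with safe_open(archive_path, framework="tf") as f:
            return tf.transpose(f.get_tensor(name))
    # Plain case: a NumPy array is fine; TF converts it on assignment.
    with safe_open(archive_path, framework="np") as f:
        return f.get_tensor(name)
```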