repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---
transformers | 22,878 | closed | [tensorflow] Add support for the `is_symbolic_tensor` predicate | # What does this PR do?
This PR adds support for the `is_symbolic_tensor` predicate in TensorFlow. This predicate will become available starting with version 2.14.
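For context, here is a minimal sketch of the kind of version-guarded check this enables (assuming the predicate is exposed as `tf.is_symbolic_tensor`; this is illustrative, not the PR's actual diff):
```python
import tensorflow as tf

def is_symbolic(tensor) -> bool:
    # Prefer the official predicate when the installed TF exposes it, and fall
    # back to the older exact-type check otherwise.
    if hasattr(tf, "is_symbolic_tensor"):
        return tf.is_symbolic_tensor(tensor)
    return type(tensor) == tf.Tensor
```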
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-20-2023 02:23:10 | 04-20-2023 02:23:10 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @Rocketknight1 <|||||>This looks clean! Do you have any links to documentation/discussion about `is_symbolic_tensor` in TF, though? This is the first I've heard of it - I can see the relevant code in the TF codebase, but are we guaranteed that the behaviour won't change before the 2.14 release? Also, given that it's in the codebase already, won't it be in the 2.13 release rather than 2.14?<|||||>I wrote the `is_symbolic_tensor` code in question. The 2.14 was a guess from me, I wasn't sure if the original PR was going to make the TF branch cut, but it looks like it will make 2.13. The intention of `is_symbolic_tensor` is actually to provide more stability:
We're looking into breaking the current inheritance setup in TF. EagerTensor inherits from symbolic Tensor, and this adds a lot of weird complications in the TF codebase, along with awkward checks like `type(t) == Tensor`. This method was introduced to avoid churning the few users who need to distinguish between eager and symbolic tensors.
We don't have a proposal yet for the split, but we're front-running some of this prep/cleanup.<|||||>LGTM in that case, and thanks for the clarification! |
transformers | 22,877 | closed | Llama fast tokenizer `train_new_from_iterator` returns `TypeError: 'NoneType' object is not subscriptable` | ### System Info
accelerate==0.18.0
aiohttp==3.8.4
aiosignal==1.3.1
anyio==3.6.2
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
arrow==1.2.3
asttokens==2.2.1
async-timeout==4.0.2
attrs==23.1.0
backcall==0.2.0
beautifulsoup4==4.12.2
bitsandbytes==0.38.1
bleach==6.0.0
certifi==2022.12.7
cffi==1.15.1
charset-normalizer==3.1.0
cmake==3.26.3
comm==0.1.3
datasets==2.11.0
debugpy==1.6.7
decorator==5.1.1
defusedxml==0.7.1
dill==0.3.6
evaluate==0.4.0
executing==1.2.0
fastjsonschema==2.16.3
filelock==3.12.0
fqdn==1.5.1
frozenlist==1.3.3
fsspec==2023.4.0
huggingface-hub==0.13.4
idna==3.4
importlib-metadata==6.5.0
importlib-resources==5.12.0
ipykernel==6.22.0
ipython==8.12.0
ipython-genutils==0.2.0
isoduration==20.11.0
jedi==0.18.2
Jinja2==3.1.2
jsonpointer==2.3
jsonschema==4.17.3
jupyter-events==0.6.3
jupyter_client==8.2.0
jupyter_core==5.3.0
jupyter_server==2.5.0
jupyter_server_terminals==0.4.4
jupyterlab-pygments==0.2.2
lit==16.0.1
MarkupSafe==2.1.2
matplotlib-inline==0.1.6
mistune==2.0.5
mpmath==1.3.0
multidict==6.0.4
multiprocess==0.70.14
nbclassic==0.5.5
nbclient==0.7.3
nbconvert==7.3.1
nbformat==5.8.0
nest-asyncio==1.5.6
networkx==3.1
notebook==6.5.4
notebook_shim==0.2.2
numpy==1.24.2
nvidia-cublas-cu11==11.10.3.66
nvidia-cuda-cupti-cu11==11.7.101
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cudnn-cu11==8.5.0.96
nvidia-cufft-cu11==10.9.0.58
nvidia-curand-cu11==10.2.10.91
nvidia-cusolver-cu11==11.4.0.1
nvidia-cusparse-cu11==11.7.4.91
nvidia-nccl-cu11==2.14.3
nvidia-nvtx-cu11==11.7.91
packaging==23.1
pandas==2.0.0
pandocfilters==1.5.0
parso==0.8.3
pexpect==4.8.0
pickleshare==0.7.5
pkgutil_resolve_name==1.3.10
platformdirs==3.2.0
prometheus-client==0.16.0
prompt-toolkit==3.0.38
protobuf==3.20.0
psutil==5.9.5
ptyprocess==0.7.0
pure-eval==0.2.2
pyarrow==11.0.0
pycparser==2.21
Pygments==2.15.1
pyrsistent==0.19.3
python-dateutil==2.8.2
python-dotenv==1.0.0
python-json-logger==2.0.7
pytz==2023.3
PyYAML==6.0
pyzmq==25.0.2
regex==2023.3.23
requests==2.28.2
responses==0.18.0
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
Send2Trash==1.8.0
sentencepiece==0.1.98
six==1.16.0
sniffio==1.3.0
soupsieve==2.4.1
stack-data==0.6.2
sympy==1.11.1
terminado==0.17.1
tinycss2==1.2.1
tokenizers==0.13.3
torch==2.0.0
tornado==6.3
tqdm==4.65.0
traitlets==5.9.0
-e git+https://github.com/huggingface/transformers.git@474bf508dfe0d46fc38585a1bb793e5ba74fddfd#egg=transformers
triton==2.0.0
typing_extensions==4.5.0
tzdata==2023.3
uri-template==1.2.0
urllib3==1.26.15
wcwidth==0.2.6
webcolors==1.13
webencodings==0.5.1
websocket-client==1.5.1
xxhash==3.2.0
yarl==1.8.2
zipp==3.15.0
### Who can help?
@ArthurZucker , @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Convert llama weights to hf format
```
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /path/to/downloaded/llama/weights --model_size tokenizer_only --output_dir /output/path
```
2. Train new tokenizer from old.
```
from transformers import AutoTokenizer
old_tokenizer = AutoTokenizer.from_pretrained("/output/path")  # the --output_dir from step 1
old_tokenizer.train_new_from_iterator(["I love huggingface!"], 50)
```
### Expected behavior
## Behavior
I ran into the error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[3], line 5
3 old_tokenizer = AutoTokenizer.from_pretrained(PATH_TO_LLAMA_DIR,)
----> 5 old_tokenizer.train_new_from_iterator(["I love huggingface!"], 50)
File ~/transformers/src/transformers/tokenization_utils_fast.py:709, in PreTrainedTokenizerFast.train_new_from_iterator(self, text_iterator, vocab_size, length, new_special_tokens, special_tokens_map, **kwargs)
[707](file:///home/jovyan/transformers/src/transformers/tokenization_utils_fast.py?line=706) if tokenizer_json["model"]["type"] == "Unigram" and unk_token is not None:
[708](file:///home/jovyan/transformers/src/transformers/tokenization_utils_fast.py?line=707) kwargs["unk_token"] = unk_token
--> [709](file:///home/jovyan/transformers/src/transformers/tokenization_utils_fast.py?line=708) if tokenizer_json["pre_tokenizer"]["type"] == "ByteLevel":
[710](file:///home/jovyan/transformers/src/transformers/tokenization_utils_fast.py?line=709) kwargs["initial_alphabet"] = pre_tokenizers_fast.ByteLevel.alphabet()
[712](file:///home/jovyan/transformers/src/transformers/tokenization_utils_fast.py?line=711) trainer_class = MODEL_TO_TRAINER_MAPPING[tokenizer_json["model"]["type"]]
TypeError: 'NoneType' object is not subscriptable
```
## Analysis
Inspecting my `tokenizer.json` file ([tokenizer.zip](https://github.com/huggingface/transformers/files/11279412/tokenizer.zip)), I realised my `"pre_tokenizer": null,` which led to the error.
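For reference, a minimal sketch of the kind of null-guard that would avoid the crash (it reuses the `tokenizer_json`/`kwargs` names from the traceback; illustrative only, not the fix that was eventually merged):
```python
from tokenizers import pre_tokenizers as pre_tokenizers_fast

def maybe_set_bytelevel_alphabet(tokenizer_json: dict, kwargs: dict) -> None:
    # Tolerate a serialized tokenizer whose "pre_tokenizer" is null, as in the
    # attached tokenizer.json.
    pre_tokenizer = tokenizer_json.get("pre_tokenizer") or {}
    if pre_tokenizer.get("type") == "ByteLevel":
        kwargs["initial_alphabet"] = pre_tokenizers_fast.ByteLevel.alphabet()
```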
I'm not sure if it helps, but I had an issue converting the llama weights to hf format (step 1) due to the protobuf version bug described [here](https://github.com/huggingface/transformers/issues/21128). I fixed it by downgrading my protobuf to version 3.20. | 04-20-2023 02:07:07 | 04-20-2023 02:07:07 | Same problem here. The code appears to be looking for a ByteLevel pretokenizer, but the json.load(_tokenizer) at line 644 of tokenization_utils_fast.py is initializing one with pretokenizer equal to None<|||||>Hey! Thanks for reporting! I can reproduce this, indeed it's a bug, I will investigate<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Should have been fixed by #22959 |
transformers | 22,876 | closed | Remove broken test_data symlink in legacy s2s examples | # What does this PR do?
This PR removes the broken `test_data` symlink in `examples/legacy/seq2seq/test_data`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-20-2023 01:04:38 | 04-20-2023 01:04:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,875 | closed | Generation: only search for eos_token if set | In generation, the current check for `unfinished_sequences.max()`, which is to find sequences that have ended early via `eos_token_id`, creates a synchronization point even when there is no `eos_token`, which slows inference down.
This pull request moves that calculation to inside the condition checking for an `eos_token`, so that such slowdown may be removed by disabling this token.
On my old system with `iommu=soft`, this change is saving me 6 seconds per token on a large model by setting `model.config.eos_token_id = None`.
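A minimal sketch of the idea, assuming the variable names from `generate`'s internals (this is not the literal diff):
```python
from typing import Optional, Tuple
import torch

def update_unfinished(
    unfinished_sequences: torch.LongTensor,
    next_tokens: torch.LongTensor,
    eos_token_id: Optional[int],
) -> Tuple[torch.LongTensor, bool]:
    # The .max() reduction forces a GPU -> CPU synchronization, so only pay for
    # it when an EOS token can actually end sequences early.
    done = False
    if eos_token_id is not None:
        unfinished_sequences = unfinished_sequences * next_tokens.ne(eos_token_id).long()
        if unfinished_sequences.max() == 0:  # the synchronizing check is now guarded
            done = True
    return unfinished_sequences, done
```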
@gante | 04-20-2023 00:20:12 | 04-20-2023 00:20:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> 1. If you rebase with `main`, you'll see that there is another decoding method with the same pattern. Would you be able to rebase and push the change there as well?
Done. I have not reviewed the rest of the function for synchronization points, but I observe that the impact of this change will be much smaller with the nested assistant loop. Nice new feature.
> 2. How's the system where you noticed the big speed change? How are you running `.generate()`? I'd be interested in knowing more about it :)
This is an old Asus KGPE-D16 motherboard. They've been popular with some independent developers as higher-end hackable hardware. It has two old K80s in it and is running the Dasharo third-party bios firmware, which adds support for newer PCI cards.
The firmware is not fully polished and I was observing corruption when transferring data between cards; the solution from nvidia's forums was to pass `iommu=soft` to the kernel. This fixes the issue in a pinch but makes data transfer very slow, and points where data is transferred became the biggest bottlenecks.
The generation call I'm presently using, from the cuda branch of the gptq llama repository, is roughly:
```python
model.generate(input_ids,
               do_sample=False,
               min_length=10,
               max_length=50,
               top_p=0.95,
               temperature=0.0)
```
The model approaches 40GB in size and is spread across all 4 logical cards using huggingface accelerate `device_map="auto"`. (I've made another small patch to transformers, not submitted yet (it would affect every model, hard to test), to additionally reduce the need to transfer data between the cards inside the model layer loop, when running with accelerate. The attention mask and position ids are not properly moved off the first card to prepare for layers on other cards, with the vanilla code.)<|||||>@xloem thank you for the explanation! π |
transformers | 22,874 | closed | ddp fixes for training | # What does this PR do?
While trying to train Stable LM or even Llama, I ran into a couple of issues with multi-gpu and DDP.
I've added a check to skip the DDP wrapping in this case, since torch doesn't support it: see https://github.com/pytorch/pytorch/blob/main/torch/nn/parallel/distributed.py#L686-L694
```
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1633, in train
return inner_training_loop(
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1720, in _inner_training_loop
model = self._wrap_model(self.model_wrapped)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1545, in _wrap_model
model = nn.parallel.DistributedDataParallel(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 571, in __init__
self._log_and_throw(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 769, in _log_and_throw
raise err_type(err_msg)
RuntimeError: DistributedDataParallel is not needed when a module doesn't have any parameter that requires a gradient.
```
Added another check for the method `no_sync`
```
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1634, in train
return inner_training_loop(
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1900, in _inner_training_loop
with model.no_sync():
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'GPTNeoXForCausalLM' object has no attribute 'no_sync'
```
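Both fixes follow the same guard-first pattern; a hedged sketch of what they amount to (illustrative, not the exact Trainer diff, and the DDP case assumes `torch.distributed` is already initialized):
```python
import contextlib
from torch import nn

def wrap_for_ddp(model: nn.Module) -> nn.Module:
    # Only wrap when at least one parameter requires grad; DDP raises otherwise
    # (first traceback above).
    if any(p.requires_grad for p in model.parameters()):
        return nn.parallel.DistributedDataParallel(model)
    return model

def maybe_no_sync(model: nn.Module):
    # `no_sync` only exists on the DDP wrapper, so fall back to a null context
    # for plain modules (second traceback above).
    return model.no_sync() if hasattr(model, "no_sync") else contextlib.nullcontext()
```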
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-19-2023 22:05:13 | 04-19-2023 22:05:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @sgugger, solution is exactly what we have in Accelerate, and would be a good way to keep it working until the Accelerate integration is fully finished :) |
transformers | 22,873 | closed | While weight conversion of llama-13b getting this error: RuntimeError: Internal: unk is not defined. | ### System Info
OS: Ubuntu
Virtual Env :
**accelerate==0.18.0
certifi==2022.12.7
charset-normalizer==3.1.0
cmake==3.26.3
filelock==3.12.0
huggingface-hub==0.13.4
idna==3.4
Jinja2==3.1.2
lit==16.0.1
MarkupSafe==2.1.2
mpmath==1.3.0
networkx==3.1
numpy==1.24.2
nvidia-cublas-cu11==11.10.3.66
nvidia-cuda-cupti-cu11==11.7.101
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cudnn-cu11==8.5.0.96
nvidia-cufft-cu11==10.9.0.58
nvidia-curand-cu11==10.2.10.91
nvidia-cusolver-cu11==11.4.0.1
nvidia-cusparse-cu11==11.7.4.91
nvidia-nccl-cu11==2.14.3
nvidia-nvtx-cu11==11.7.91
packaging==23.1
psutil==5.9.5
PyYAML==6.0
regex==2023.3.23
requests==2.28.2
sentencepiece==0.1.98
sympy==1.11.1
tokenizers==0.13.3
torch==2.0.0
tqdm==4.65.0
transformers==4.28.1
triton==2.0.0
typing_extensions==4.5.0
urllib3==1.26.15**
### Who can help?
@ArthurZucker
@younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Used the following command to convert the llama-13B weights to HF format.
`python src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir /home/unconveretd-weights --model_size 13B --output_dir /home/test-converted`
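(Side note, not part of the original run: a quick sanity check to see whether the `tokenizer.model` file itself is at fault, independently of transformers:)
```python
import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.Load("/home/unconveretd-weights/tokenizer.model")  # same --input_dir as above
print(sp.vocab_size(), sp.unk_id())  # fails with "unk is not defined" if the file is bad
```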
### Expected behavior
**It should generate the converted weights. But instead it is generating this error:**
Loading the checkpoint in a Llama model.
Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 41/41 [00:17<00:00, 2.35it/s]
Saving in the Transformers format.
Saving a LlamaTokenizerFast to /home/test-converted.
Traceback (most recent call last):
File "/home/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 278, in <module>
main()
File "/home/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 274, in main
write_tokenizer(args.output_dir, spm_path)
File "/home/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 248, in write_tokenizer
tokenizer = tokenizer_class(input_tokenizer_path)
File "/home/myenv/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama_fast.py", line 89, in __init__
super().__init__(
File "/home/myenv/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 117, in __init__
slow_tokenizer = self.slow_tokenizer_class(*args, **kwargs)
File "/home/myenv/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama.py", line 96, in __init__
self.sp_model.Load(vocab_file)
File "/home/myenv/lib/python3.10/site-packages/sentencepiece/__init__.py", line 905, in Load
return self.LoadFromFile(model_file)
File "/home/myenv/lib/python3.10/site-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: unk is not defined. | 04-19-2023 19:18:30 | 04-19-2023 19:18:30 | facing the same issue.<|||||>Hey! Thanks for reporting I'll investigate this! <|||||>I have the same issue when I use the latest version of torch.<|||||>I did not find the solution. but if someone wants to download the weights.
The following link has all the versions.
https://huggingface.co/elinas<|||||>Okay, We update the conversion script, which should have fixed most issues. I downloaded the tokenizer model, and re-tried the conversion, and I did not have any issue. Make sure you are using the latest transformers version.<|||||>I tried with the latest code from the main branch, but still getting the same issue
<img width="1400" alt="image" src="https://github.com/huggingface/transformers/assets/12937285/bea4eb23-ee9b-4acf-b2a5-60df5411cd24">
<|||||>I am getting the same error message when running the conversion for the 7B model. Tried installing the latest version (4.29.2) but the error persists. Same traceback as @dittops but mine has a nicer formatting.<|||||>Again, the issue is most probably with the tokenizer file that you are using, which is outdated. Yes you need to upgrade to the latest transformers version, but you also need to use the original sentencepiece model in order for the conversion to properly work! <|||||>Thanks for following up. I have the llama weights/tokenizer that were updated on 3/26/23. Isn't that the latest version of the tokenizer?
Also I'm not sure what you mean by the original sentencepiece model (unless you mean the model from prior to the 3/26 update).<|||||>When you say:
> I have the llama weights/tokenizer that were updated on 3/26/23
do you mean the META weights and tokenizer?
Otherwise can you share a notebook with a reproducer? The issue with llama is that a PR was made too early and thus lots of checkpoints and previous tokenizers (meaning hf tokenizers json) are incorrect.<|||||>@ArthurZucker I have the META weights and tokenizer. The issue share is with that. For sentencepiece, is there a specific version to be used?<|||||>>
> > I have the llama weights/tokenizer that were updated on 3/26/23
>
> do you mean the META weights and tokenizer? Otherwise can you share a notebook with a reproducer? The issue with llama is that a PR was made too early and thus lots of checkpoints and previous tokenizers (meaning hf tokenizers json) are incorrect.
Ah I see. The llama weights I have come from [Meta's torrent PR](https://github.com/facebookresearch/llama/pull/73). I did not get them from HuggingFace, if you are referring to [this](https://github.com/facebookresearch/llama/pull/109) PR.<|||||>Ok ππ» I'll give it another go, but I remember trying with those exact weights and getting a correct conversion.
Will get back to you soon! <|||||>Would you mind sending me the file via google drive? The torrent link seems down<|||||>The torrent is showing as up for me right now, but if it isn't working for you I am happy to send you a copy of the 7B folder I am using. The entire folder for the 7B model is ~13-14GB. I'm trying to compress it right now but it will take a little bit to finish.<|||||>Just the tokenizer files are enough! <|||||>Email sent!<|||||>@egoetz where you able to solve this issue?<|||||>@egoetz told me that installing GIT LFS + using the tokenizer at `huggyllama/llama-7b` worked.
I received the email but could not access files as they were not shared using drive but a private mail provider π
If you are trying to convert the original model (by that I mean going from the spm model to transformers) make sure you have the latest version of `transformers` <|||||>I was able to resolve it by replacing `tokenizer.model `with one from hugging face. Thank you/<|||||>I'm not sure I understand. If you are trying to **convert** a checkpoint/tokenizer, then you don't need to use an already converted one. The script is to go from the original tokenizer to the HF format. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,872 | closed | moved labels to the same device as logits for OTP, CODEGEN ,gptj and pixel2struct model | # What does this PR do?
As suggested in [#22561](https://github.com/huggingface/transformers/issues/22561), moved labels to the same device as logits for the OPT model and the CodeGen model.
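For context, the pattern being applied looks roughly like this (a hedged sketch of the idea, not the exact diff from this PR):
```python
import torch
from torch.nn import CrossEntropyLoss

def causal_lm_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Under `device_map="auto"` the lm_head (and its logits) may sit on a
    # different GPU than the labels, so move the labels first.
    labels = labels.to(logits.device)
    shift_logits = logits[..., :-1, :].contiguous()
    shift_labels = labels[..., 1:].contiguous()
    loss_fct = CrossEntropyLoss()
    return loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
```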
@sgugger can u pls review and merge this pr??
@sgugger I am really sorry for the mess of this pr, I will not repeat this in the future .... pls once review and merge this...I should have created multiple branches for each pr but sorry this will not be repeated...
| 04-19-2023 19:13:46 | 04-19-2023 19:13:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Glad to help! |
transformers | 22,871 | closed | moved labels to the same device as logits for opt, GPTJ , codeine and pix2struct models | # What does this PR do?
As suggested in [#22561](https://github.com/huggingface/transformers/issues/22561), moved labels to the same device as logits for OPT, GPT-J, CodeGen and Pix2Struct models.
I am new to open source, so please give me suggestions for making better PRs.
| 04-19-2023 18:55:07 | 04-19-2023 18:55:07 | ### there are some things to be changed i will keep pr after that changes<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22871). All of your documentation changes will be reflected on that endpoint. |
transformers | 22,870 | closed | Fix to removing ESM special tokens | This is a followup to PR #22770 - I forgot that because of the way the ESM tokenizer is structured that the EOS token would come back after it was saved and reloaded. By making the special tokens arguments to the tokenizer, we can set them using `init_kwargs` and ensure that they stay changed permanently. Sorry for overlooking this in the last PR! | 04-19-2023 18:23:06 | 04-19-2023 18:23:06 | that took like 45 seconds how do you see your notifications that quickly<|||||>Ah ah, I was on my GitHub already that's all.<|||||>I'm intimidated nonetheless!<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,869 | closed | Fixup multigpu local_rank | # What does this PR do?
The `local_rank` wasn't being properly set when using the `PartialState`, causing failures on the nightlies. This PR fixes it.
Fixes # (issue)
Failing nightly tests
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 04-19-2023 18:15:37 | 04-19-2023 18:15:37 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,868 | closed | Update modeling_opt.py | # What does this PR do?
As suggested in [#22561](https://github.com/huggingface/transformers/issues/22561), moved labels to the same device as logits for OPT, CodeGen and GPT-J.
@sgugger can u pls review this once?? | 04-19-2023 18:01:48 | 04-19-2023 18:01:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22868). All of your documentation changes will be reflected on that endpoint. |
transformers | 22,867 | open | `push_to_hub` with `branch` or `revision` keyword argument | ### Feature request
In `datasets`, you can upload a dataset to a `branch`. In the `transformers` package, it doesn't seem like `branch` or `revision` [are supported](https://huggingface.co/docs/transformers/v4.28.1/en/main_classes/model#transformers.PreTrainedModel.push_to_hub)
### Motivation
Pushing a model to the hub at a specific revision seems a little harder. It seems like I would need to find the cache directory of the model and use `upload_folder` from `huggingface_hub` to upload to the correct revision.
I could very well be missing the right documentation, but I can't seem to figure out how/where to do this.
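For what it's worth, a workaround sketch using `huggingface_hub` directly (an assumption on my part about how it could be done today; the repo id, branch name and local directory are placeholders, and `model` is any already-loaded `PreTrainedModel`):
```python
from huggingface_hub import HfApi

api = HfApi()
api.create_branch(repo_id="user/my-model", branch="experiment-1")  # branch must exist before uploading

model.save_pretrained("local-model-dir")
api.upload_folder(
    repo_id="user/my-model",
    folder_path="local-model-dir",
    revision="experiment-1",  # target the branch instead of main
)
```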
### Your contribution
Maybe a PR? | 04-19-2023 17:54:26 | 04-19-2023 17:54:26 | cc @sgugger <|||||>If you want to contribute a PR to add this, it would be welcome! |
transformers | 22,866 | closed | Flax Refactor v2 | # What does this PR do?
Alternative to #22627. Instead of making `FlaxPretrainedModel` a Flax `Module`, this PR aims to make all inner `.module`s usable in a standalone way by moving any pre/posprocessing done by its `*Model` container into the Module itself. | 04-19-2023 16:50:19 | 04-19-2023 16:50:19 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22866). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @cgarciae - we're nearly finished with this PR far as I can tell. Do you have the bandwidth to see this through to completion? Happy to help with the last stages of integration here!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,865 | closed | Remove some pipeline skip cases | # What does this PR do?
Remove some pipeline skip cases after #22428: the real tokenizers avoid a lot of failing cases - except for QA with slow tokenizers.
P.S. As discussed once on Slack: the QA pipeline with slow tokenizer uses some methods in `src/transformers/data/processors/squad.py`, and we plan not to make any change to this file. | 04-19-2023 15:32:29 | 04-19-2023 15:32:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,864 | closed | Add perf_train_gpu_one.mdx italian translation | See issue #17459
Good evening.
I didn't translate technical terms; I preferred to keep them in English. So I hope it's all OK.
Good bye. | 04-19-2023 15:25:36 | 04-19-2023 15:25:36 | @Baelish03 Thanks for adding this! To resolve the failing quality checks, you'll need to run `make fixup` locally and push any changes. |
transformers | 22,863 | closed | GPT-2 trained on 24 yrs of US patent grants | ### Model description
Hi,
I have a GPT2 model available that was made from scratch and has been trained on 1976-2000 US patent grants ~>1M docs. I think it is a cool/useful example of Huggingface's gpt2 implementation. I continue to update this model as I get new data. I have a local streamlit implementation with greedy and beam searches. Top-k, top_p and temperature variables are randomized within optimized ranges . Should you accept to implement it, I will give you the model, and a streamlit implementation to be open-source.
What this model can and cannot do:
Can:
-Given an invention idea e.g. (totally made up)"an ultrasonic toothbrush with sensors to detect dental caries..." it gives intriguing results
- it is not limited to one tech. The training data was randomly sampled so as to cover a large tech space.
-It gives even more intriguing results when prompted by a preamble (first ~ 7-10 words) of an existing granted patent. In a non-scientific or exhaustive testing of this model, I have gotten generated text similar to inventions that were applied for and granted several years later than 2000. This observation may or may not be generalizable to other fields of tech.
Cannot:
-Prompts not grounded in the laws of physics as we currently understand them will generate unsatisfying results. So, no perpetual machines, flying carpets, or unicorns. Additionally, there are certain subject matters that cannot be patented by law in the US. Prompts about these subject matters will not give meaningful responses.
-it is limited by its training to those ideas that have been patented until 2000.
Best regards,
Gojeb Frehywot
[email protected]
P.S. Should you implement this model as you have done for other gpt2 models, I have no interest in any possible patentable invention a user might generate/come up with
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
My name is Gojeb Frehywot. I have a streamlit implementation of this text generating model on my local machine. I do not use a repository on GITHUB.
I can upload it to GitHub if you are interested in looking into this request further.
Thank you.
Gojeb Frehywot
[email protected] | 04-19-2023 15:00:47 | 04-19-2023 15:00:47 | Hi @goji-patai - cool model!
This kind of post is best shared in places like our forum's [Show and Tell](https://discuss.huggingface.co/c/show-and-tell/65) section, or the `i-made-this` channel in our [discord](https://t.co/1n75wi976V?amp=1). We try to reserve the issues for feature requests and bug reports.
Anyone can add their [model to the hub](https://huggingface.co/docs/hub/models-uploading). There you should be able to build a demo which you can share with others too.
<|||||>Thank you. Will do.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,862 | closed | Generate: assisted decoding with sample | # What does this PR do?
This PR expands the previous [assisted generation PR](https://github.com/huggingface/transformers/pull/22211) so as to work with sampling.
Two important notes to review the PR:
1. I'd suggest starting the review with the docs, so you understand what's going on at a high level. Sampling adds an additional (controllable) heuristic, so the user can trade off between speed and pure sampling behavior (roughly sketched after these notes).
2. In terms of implementation, I've decided to overload the assisted generation function with a few extra lines to handle the sample case. This is to avoid adding a close copy of a 500-line function.
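The heuristic can be sketched roughly as follows (my paraphrase of the mechanics based on the description and the results below, not the PR's actual code — `assisted_keep_proba` is the new knob):
```python
import torch

def n_candidates_to_keep(main_probs: torch.Tensor, candidates: torch.Tensor, keep_proba: float) -> int:
    """Walk the assistant's candidate tokens left to right and keep each one as
    long as the main model assigns it at least `keep_proba` probability; stop at
    the first rejection. keep_proba=0.0 keeps everything (fastest), while 1.0
    keeps almost nothing (closest to pure sampling)."""
    kept = 0
    for t, token in enumerate(candidates.tolist()):
        if main_probs[t, token].item() >= keep_proba:
            kept += 1
        else:
            break
    return kept
```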
_____________________________________________________________________________
Below are some results, so you can understand the balancing act. Execution time obtained on a 3090.
<details>
<summary>Script</summary>
```py
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
import torch
import time
model_id = "EleutherAI/pythia-6.9b-deduped"
assistant_id = "EleutherAI/pythia-160m-deduped"
tokenizer = AutoTokenizer.from_pretrained(model_id)
assistant_model = AutoModelForCausalLM.from_pretrained(assistant_id)
assistant_model = assistant_model.to("cuda")
model_kwargs = {
"pretrained_model_name_or_path": model_id,
"device_map": "auto",
"max_memory": {0: "20GiB", "cpu": "50GiB"},
"torch_dtype": torch.float16,
}
model = AutoModelForCausalLM.from_pretrained(**model_kwargs)
inputs = tokenizer("Here's how to cook a good ramen:", return_tensors="pt").to("cuda")
streamer = TextStreamer(tokenizer=tokenizer)
print("Greedy with assistance:")
start = time.time()
model.generate(**inputs, assistant_model=assistant_model, streamer=streamer, max_new_tokens=64)
print(f"Elapsed time: {time.time() - start:.2f} seconds")
for p in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
print(f"Sample with assistance (assisted_keep_proba = {p})")
torch.manual_seed(0)
start = time.time()
model.generate(
**inputs,
do_sample=True,
assistant_model=assistant_model,
assisted_keep_proba=p,
streamer=streamer,
max_new_tokens=64
)
print(f"Elapsed time: {time.time() - start:.2f} seconds")
print("Original sample")
torch.manual_seed(0)
start = time.time()
model.generate(**inputs, do_sample=True, streamer=streamer, max_new_tokens=64)
print(f"Elapsed time: {time.time() - start:.2f} seconds")
```
</details>
<details>
<summary>Sample results</summary>
Decoding strategy | Result | Execution time
:-------------------:|:-------:|:------:|
Greedy (w/assistance) | Here's how to cook a good ramen:<br><br>1. Make sure you have a good stock.<br><br>2. Make sure you have a good broth.<br><br>3. Make sure you have a good ramen.<br><br>4. Make sure you have a good ramen.<br><br>5. Make sure you have a good ramen. | 1.44 seconds
Sample (w/assistance<br>`assisted_keep_proba=0.0`) | Here's how to cook a good ramen:<br><br>1. Get a noodle.<br><br>2. Get a stock.<br><br>3. Get a packet of dried ingredients.<br><br>4. Cook the noodles.<br><br>5. Cook the stock.<br><br>6. Cook the packet of dried ingredients.<br><br>7. Enjoy!<br><br>And | 1.44 seconds
Sample (w/assistance<br>`assisted_keep_proba=0.2`) | Here's how to cook a good ramen:<br><br>1. Get a noodle vendor.<br><br>The noodle vendor makes the noodles. Japanese restaurants often have the noodle vendor on-site.<br><br>2. Get a pot.<br><br>The pot is used to cook ramen.<br><br>3. Get a pot of boiling water. | 1.59 seconds
Sample (w/assistance<br>`assisted_keep_proba=0.4`) | Here's how to cook a good ramen:<br><br>Step 1: Collect your ingredients.<br><br>For this recipe you need a big stock pot. That's good.<br><br>And some water.<br><br>Step 2: Peel the eggs.<br><br>Yes, that's it. Four eggs.<br><br>Step 3: Separate the yolks. | 1.71 seconds
Sample (w/assistance<br>`assisted_keep_proba=0.6`) | Here's how to cook a good ramen:<br><br>Nothing much to take out of the packet. Just a big block of pork fat, some Chinese chilli paste and seasonings.<br><br>Preheat the oven to 210ΒΊC (410ΒΊF/Gas 6).<br><br>Place the pork fat, chilli paste and seasoning into a mixing bowl and | 2.08 seconds
Sample (w/assistance<br>`assisted_keep_proba=0.8`) | Here's how to cook a good ramen:<br><br>**You'll need:** A large pot for boiling noodles<br>A small saucepan for cooking the noodles<br>BBQ chicken or roasted fish, or any grilled healthy protein<br>A box of ramen noodles, noodles that come in<br>shapes and sizes<br>Soups or broth, | 2.32 seconds
Sample (w/assistance<br>`assisted_keep_proba=1.0`) | Here's how to cook a good ramen:<br><br>You take your pre-scalloped noodles, pour boiling water (or your preferred water-to-noodle ratio) over them, and leave them alone for four to five minutes. Once that's done, drain them, season with salt, and heat them up on the stove (microwave won | 2.56 seconds
Original Sample) | Here's how to cook a good ramen:<br><br>You take your pre-scalloped noodles, pour boiling water (or your preferred cooking liquid) over it, and after that you go get your ramen broth, add-ins, and other condiments. You make your seasoning sauce, and heat that up. Mix it all together, and put | 2.05 seconds
As can be seen above, there is a trade-off between time and quality. This will certainly be application specific: factual applications will be able to get the most out of assisted decoding. In my brief experiments, `assisted_keep_proba=0.3` seems like a sensible default.
</details>
| 04-19-2023 14:42:36 | 04-19-2023 14:42:36 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I'm closing this PR because I found a much much better way to handle the sample case π§
Stay tuned! |
transformers | 22,861 | closed | LLaMA `generate` output changes depending on batch size | ### System Info
```
- `transformers` version: 4.29.0.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.8 (gpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
```
### Who can help?
@younesbelkada @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
These [cells](https://colab.research.google.com/drive/1nAz2MUphzg5ifWW3CfRCJdYUgSYN5ynH?usp=sharing) should reproduce the error.
### Expected behavior
**Concise version:** I was expecting the results to not change whether or not the inference was batched.
Long version: Basically when I run `generate` with just one tokenized sequence, I get a certain result, and when I process the same sequence but inside a batch instead, the result changes. To make sure it wasn't any tokenization shenanigans, I tokenized the whole batch and took just the parts that related to the weird sequence (basically just `{k: v[1:] for k, v in tokenized_inputs.items()}` where the batch is just two items and the weird one is the second item).
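(A minimal sketch of that comparison — `model`, `tokenizer` and the prompts are placeholders rather than the notebook's actual objects, and batching assumes a pad token is configured:)
```python
single = tokenizer(weird_prompt, return_tensors="pt").to(model.device)
batch = tokenizer([other_prompt, weird_prompt], return_tensors="pt", padding=True).to(model.device)

out_single = model.generate(**single, max_new_tokens=50)
out_batch = model.generate(**batch, max_new_tokens=50)

print(tokenizer.decode(out_single[0], skip_special_tokens=True))
print(tokenizer.decode(out_batch[1], skip_special_tokens=True))  # same prompt, now inside a batch
```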
For clarity, when I pass just
```python3
"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Combine the question and answer into an image caption as succinctly as possible. Be sure to include the phrase "a photo of". Do not draw false conclusions.
### Input:
Is this a baseball game? no
### Response:
"""
```
to the generate function of a `LlamaForCausalLM` wrapped in a `PeftModel`, I get
```
This is not a baseball game.
```
However, if I put it in a two-item batch, the output for some reason is instead
```
A photo of people playing a game.
```
For more details, the base model was 8-bit quantized and the LoRA weights should be at half-precision. The weights were taken from `decapoda-research/llama-7b-hf` and `tloen/alpaca-lora-7b` respectively.
Edit: forgot to make the notebook public, should be fixed now. | 04-19-2023 14:05:41 | 04-19-2023 14:05:41 | Hey @ryan-caesar-ramos π
The particular base checkpoint you're using (`decapoda-research/llama-7b-hf`) is not compatible with transformers, so we do not provide support for related problems :)
If you have access to the original Meta weights, you can use other checkpoints as a starting point (e.g. [these](https://huggingface.co/huggyllama)). If the issue you're seeing still persists after updating the checkpoint, I'd be happy to take a look!<|||||>Hi @gante ! Thanks, I'll try to look into this, but I'm unable to use repos like `huggyllama/llama-7b` that have shard sizes around 10GB since my CPU ram can't handle it. Any way I can tell if a checkpoint is supported or not? Maybe there's a sharded one out there that is compatible<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,860 | closed | Remove 'main' from doc links | # What does this PR do?
There are a bunch of models added to the readme which have `main` in their doc link. This shows up in a lot of contributors' PR diffs, which is a bit annoying. This resolves that.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 04-19-2023 13:47:52 | 04-19-2023 13:47:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,859 | closed | use `Accelerate@main` | # What does this PR do?
We love `Accelerate@main` for CI. | 04-19-2023 11:52:43 | 04-19-2023 11:52:43 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,858 | open | Finetuned Donut model taking too much time on local machine for inference , around 5 minutes. | Finetuned Donut model is taking **4 minutes 37 seconds** for inference on my local windows laptop which has **16GB RAM and 4 cores**. However, inference time is under **5 seconds on a google colab CPU machine**, it has 32GB RAM. On Colab GPU, the inference time is under a second.
**_Why is it taking so much time on my local Windows machine?_** It doesn't seem like normal behavior. Could someone help and guide me on what could be wrong here?
I am using **Transformers version 4.28.1**; it's the same on my Windows machine as well.
Also, below is the prediction function I am using; it's the `model.generate` method which is taking the time.
```
def run_prediction(image):
pixel_values = processor(image, return_tensors="pt").pixel_values
outputs = model.generate(
pixel_values.to(device),
decoder_input_ids=decoder_input_ids.to(device),
max_length=model.decoder.config.max_position_embeddings,
early_stopping=True,
pad_token_id=processor.tokenizer.pad_token_id,
eos_token_id=processor.tokenizer.eos_token_id,
use_cache=True,
num_beams=1,
bad_words_ids=[[processor.tokenizer.unk_token_id]],
return_dict_in_generate=True)
sequence = processor.batch_decode(outputs.sequences)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() # remove first task start token
return processor.token2json(sequence)
``` | 04-19-2023 10:32:49 | 04-19-2023 10:32:49 | Hi @shubh1608, thanks for raising this issue.
Could you share the following information, so that we can best help you:
* Checkpoint of the Donut model being used
* Environment being run (locally and on Colab). Copy-paste the output of `transformers-cli env` run in the terminal
* Expanded snippet to allow for full reproduction. In particular showing how the model and processor are loaded and how the code is being timed.
<|||||>Hi @amyeroberts, please find below the requested details:
* model checkpoint - shubh1608/donut_pdf_ocr
* Colab CPU environment
- `transformers` version: 4.28.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.8 (cpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
* Local Windows environment
- `transformers` version: 4.28.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.16
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
* Expanded code snippet. NOTE: I have cloned the model repo locally and loaded weights from there.
```python
# (imports added here for completeness)
import re
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_processor_path = '../model-weights/donut/donut_pdf_ocr'
processor = DonutProcessor.from_pretrained(model_processor_path)
model = VisionEncoderDecoderModel.from_pretrained(model_processor_path)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# prepare decoder inputs
task_prompt = "<s>"
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids

def run_prediction(file):
    image = Image.open(file).convert('RGB')
    pixel_values = processor(image, return_tensors="pt").pixel_values
    outputs = model.generate(
        pixel_values.to(device),
        decoder_input_ids=decoder_input_ids.to(device),
        max_length=model.decoder.config.max_position_embeddings,
        early_stopping=True,
        pad_token_id=processor.tokenizer.pad_token_id,
        eos_token_id=processor.tokenizer.eos_token_id,
        use_cache=True,
        num_beams=1,
        bad_words_ids=[[processor.tokenizer.unk_token_id]],
        return_dict_in_generate=True)
    sequence = processor.batch_decode(outputs.sequences)[0]
    sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
    sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # remove first task start token
    return processor.token2json(sequence)
```
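On the timing question above, a rough helper one could drop in (my sketch, not something that was actually run in this thread):
```python
import time

def timed(fn, *args, **kwargs):
    # Crude wall-clock timing to confirm that model.generate dominates the
    # several minutes observed on the Windows machine.
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    print(f"{fn.__name__}: {time.perf_counter() - start:.1f}s")
    return result

# usage: prediction = timed(run_prediction, "path/to/page.png")
```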
Let me know if you need any more information for debugging.
Thanks.<|||||>Guys, any update on this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,857 | closed | fix: Correct small typo in docstring | # What does this PR do?
Fixes #22855
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 04-19-2023 10:25:21 | 04-19-2023 10:25:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for the fix and quick PR! π
>
> For the quality checks, you'll need to run `make fixup` locally and push any changes.
@amyeroberts Thank you very much!
Done! |
transformers | 22,856 | closed | Type hinting Inconsistency in beam_search.py | ### System Info
Main branch
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi,
In the `beam_search.py` file, the `process` function (in the concrete subclasses) has the following signature:
```
def process(
self,
input_ids: torch.LongTensor,
next_scores: torch.FloatTensor,
next_tokens: torch.LongTensor,
next_indices: torch.LongTensor,
pad_token_id: Optional[int] = None,
eos_token_id: Optional[Union[int, List[int]]] = None,
beam_indices: Optional[torch.LongTensor] = None,
) -> Tuple[torch.Tensor]:
```
even though it actually returns a `UserDict`. Is there any reason not to annotate it with `Dict` or `Mapping` rather than `Tuple`?
This type of mismatch might exist in other places as well, but I am not sure.
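For illustration, here is a minimal sketch of what an aligned annotation could look like. It assumes the method keeps returning a `UserDict` of tensors keyed by names such as `"next_beam_scores"`; the exact typing choice is of course up to the maintainers.
```python
from typing import Dict, List, Optional, Union

import torch


def process(
    self,
    input_ids: torch.LongTensor,
    next_scores: torch.FloatTensor,
    next_tokens: torch.LongTensor,
    next_indices: torch.LongTensor,
    pad_token_id: Optional[int] = None,
    eos_token_id: Optional[Union[int, List[int]]] = None,
    beam_indices: Optional[torch.LongTensor] = None,
) -> Dict[str, torch.Tensor]:  # a UserDict mapping names like "next_beam_scores" to tensors
    ...
```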
### Expected behavior
IMO, it should return Dict, or Mapping | 04-19-2023 10:23:06 | 04-19-2023 10:23:06 | Hey @mert-kurttutan π
That is absolutely correct. Would you like to open a PR to fix it? :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello, is anyone working on this issue? If not I can take it on @gante.<|||||>Hey @jprivera44 -- AFAIK no one is working on it, feel free to take it π <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,855 | closed | Small typo in `conversation.py` docstring. | ## Reproduction
There is a small typographical error in the docstring of the `Conversation` class in `conversational.py`.
```python
class Conversation:
"""
Utility class containing a conversation and its history. This class is meant to be used as an input to the
[`ConversationalPipeline`]. The conversation contains a number of utility function to manage the addition of new
user input and generated model responses. A conversation needs to contain an unprocessed user input before being
passed to the [`ConversationalPipeline`]. This user input is either created when the class is instantiated, or by
calling `conversational_pipeline.append_response("input")` after a conversation turn.
...
```
## Proposed Solution
This could be fixed by just rewriting this to:
```python
class Conversation:
"""
Utility class containing a conversation and its history. This class is meant to be used as an input to the
[`ConversationalPipeline`]. The conversation contains several utility functions to manage the addition of new
user inputs and generated model responses. A conversation needs to contain an unprocessed user input before being
passed to the [`ConversationalPipeline`]. This user input is either created when the class is instantiated, or by
calling `conversational_pipeline.append_response("input")` after a conversation turn.
...
```
| 04-19-2023 09:50:30 | 04-19-2023 09:50:30 | @oscar-defelice Good spot! Would you like to open a PR with the amendment? <|||||>Yes, I can open a PR and add a fixing commit.
thank you very much! |
transformers | 22,854 | closed | fix SpeechT5 doc comments | # What does this PR do?
Forgot to run the documentation tests on the SpeechT5 TTS changes.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 04-19-2023 09:18:06 | 04-19-2023 09:18:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,853 | closed | Add an efficient vision transformer backbone in ICLR 2022: CrossFormer | ### Model description
The CrossFormer has three new components that do not exist in other ViTs (such as Swin):
1. The cross-scale embedding layer (CEL), which generates cross-scale embeddings as the ViT's input (see the sketch after this list).
2. The long-short distance attention (LSDA) mechanism, an efficient replacement for vanilla self-attention that shows better performance than Swin.
3. A dynamic relative position bias, a kind of relative position bias that supports dynamic group sizes.
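As a rough, unofficial sketch of the cross-scale embedding idea in point 1 (parallel convolutions with different kernel sizes whose outputs are concatenated), the snippet below uses illustrative kernel sizes and channel splits; the real CrossFormer implementation differs in its details.
```python
import torch
import torch.nn as nn

class CrossScaleEmbedding(nn.Module):
    def __init__(self, in_channels=3, dims=(32, 32, 32, 32), kernel_sizes=(4, 8, 16, 32), stride=4):
        super().__init__()
        # one projection per kernel size, all producing the same spatial resolution
        self.projs = nn.ModuleList(
            nn.Conv2d(in_channels, d, kernel_size=k, stride=stride, padding=(k - stride) // 2)
            for d, k in zip(dims, kernel_sizes)
        )

    def forward(self, x):
        # concatenate the per-scale embeddings along the channel dimension
        return torch.cat([proj(x) for proj in self.projs], dim=1)

emb = CrossScaleEmbedding()
print(emb(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 128, 56, 56])
```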
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
The open source website: https://github.com/cheerss/CrossFormer
The paper was accepted in ICLR 2022: https://openreview.net/forum?id=_PHymLIxuI | 04-19-2023 08:38:27 | 04-19-2023 08:38:27 | I'm going to close this as it's a repeat of #22852 |
transformers | 22,852 | open | Add an efficient vision transformer backbone in ICLR 2022: CrossFormer | ### Model description
The CrossFormer has three new components that do not exist in other ViTs (such as Swin):
1. The cross-scale embedding layer (CEL), which generates cross-scale embeddings as the ViT's input.
2. The long-short distance attention (LSDA) mechanism, an efficient replacement for vanilla self-attention that shows better performance than Swin.
3. A dynamic relative position bias, a kind of relative position bias that supports dynamic group sizes.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
The open source website: https://github.com/cheerss/CrossFormer
The paper was accepted in ICLR 2022: https://openreview.net/forum?id=_PHymLIxuI | 04-19-2023 08:37:49 | 04-19-2023 08:37:49 | I can pick this up. |
transformers | 22,851 | closed | Deadlock condition in layoutlmv2 using OMP library | I tried to fork the layoutlmv2 model using kserve workers with the OMP library variables set, but it leads to a deadlock. The interesting fact is that it works well without the OMP library variables, but then gives really high inference time.
Is there a way to use layoutlmv2 with multithreading and forking?
Sharing the values used for OpenMP:
os.environ['OMP_NUM_THREADS'] = '4'
os.environ['OMP_PROC_BIND'] = 'false'
os.environ['OMP_SCHEDULE'] = 'STATIC'
os.environ['KMP_AFFINITY']='granularity=fine,compact,1,0' | 04-19-2023 07:15:28 | 04-19-2023 07:15:28 | Hi @Agarwal-Saurabh, thanks for reporting this issue.
So that we can best help, could you follow the issue template and give information about the running environment (from running `transformers-cli env`) and a reproducible code snippet? <|||||>@amyeroberts here is the details
- `transformers` version: 4.28.1
- Platform: Linux-5.4.0-139-generic-x86_64-with-glibc2.2.5
- Python version: 3.8.5
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 1.8.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: yes<|||||>@Agarwal-Saurabh Thank you. Could you also share a minimal code snippet to reproduce the issue? <|||||>Its not in the model rather in the preprocessor that we are loading to
tokenize for the layoutlmv2 model.
<|||||>@Agarwal-Saurabh Without knowing what code you're running and more information about the deadlock behaviour, I'm unable to understand or help with this issue.<|||||>@amyeroberts similar issue caught in other processors as well. Here is the code snippet requested https://gist.github.com/harshyadav17/149f1c990c17111d8340fcf2e89a5b88
reference issue : https://github.com/huggingface/transformers/issues/22978<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This is not solved yet
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,850 | closed | feat(model parallelism): move labels to the same device as logits for M2M100 | # What does this PR do?
Moves labels to the same device as logits for M2M100.
Related to #22561
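As an illustrative sketch of the general pattern applied across the #22561 family of PRs (this is not the exact M2M100 diff; the tensor names and sizes below are made up for the example):
```python
import torch
from torch.nn import CrossEntropyLoss

# With model parallelism, the logits and the labels can live on different devices,
# so the labels are moved to the logits' device before computing the loss.
vocab_size = 10
lm_logits = torch.randn(2, 5, vocab_size)        # pretend these came from the model
labels = torch.randint(0, vocab_size, (2, 5))    # possibly on another device in a real run

labels = labels.to(lm_logits.device)             # the one-line fix
loss = CrossEntropyLoss()(lm_logits.view(-1, vocab_size), labels.view(-1))
print(loss)
```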
@sgugger hello please review. | 04-19-2023 06:09:56 | 04-19-2023 06:09:56 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks a lot!
Thank you π |
transformers | 22,849 | closed | Fine-tuning wav2vec 2.0 with `torch.compile` | ### System Info
- `transformers` version: 4.28.1
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.0
- Huggingface_hub version: 0.13.3
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```diff
python run_audio_classification.py \
--model_name_or_path facebook/wav2vec2-base \
--dataset_name superb \
--dataset_config_name ks \
--output_dir wav2vec2-base-ft-keyword-spotting \
--overwrite_output_dir \
--remove_unused_columns False \
--do_train \
--do_eval \
--fp16 \
--learning_rate 3e-5 \
--max_length_seconds 1 \
--attention_mask False \
--warmup_ratio 0.1 \
--num_train_epochs 5 \
--per_device_train_batch_size 32 \
--gradient_accumulation_steps 4 \
--per_device_eval_batch_size 32 \
--dataloader_num_workers 4 \
--logging_strategy steps \
--logging_steps 10 \
--evaluation_strategy epoch \
--save_strategy epoch \
--load_best_model_at_end True \
--metric_for_best_model accuracy \
--save_total_limit 3 \
--seed 0 \
+ --torch_compile True
```
### Expected behavior
I followed the example to fine-tune wav2vec 2.0 for [audio classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification#single-gpu), with the exception of using `torch.compile`, aiming to get faster training. However, I ran into an issue, as follows:
<details>
<summary> Error Log </summary>
```
[INFO|trainer.py:1769] 2023-04-19 05:28:50,832 >> ***** Running training *****
[INFO|trainer.py:1770] 2023-04-19 05:28:50,832 >> Num examples = 51,094
[INFO|trainer.py:1771] 2023-04-19 05:28:50,832 >> Num Epochs = 5
[INFO|trainer.py:1772] 2023-04-19 05:28:50,832 >> Instantaneous batch size per device = 32
[INFO|trainer.py:1773] 2023-04-19 05:28:50,832 >> Total train batch size (w. parallel, distributed & accumulation) = 128
[INFO|trainer.py:1774] 2023-04-19 05:28:50,833 >> Gradient Accumulation steps = 4
[INFO|trainer.py:1775] 2023-04-19 05:28:50,833 >> Total optimization steps = 1,995
[INFO|trainer.py:1776] 2023-04-19 05:28:50,834 >> Number of trainable parameters = 90,371,212
0%| | 0/1995 [00:00<?, ?it/s]/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/feature_extraction_utils.py:165: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
tensor = as_tensor(value)
/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/feature_extraction_utils.py:165: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
tensor = as_tensor(value)
/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/feature_extraction_utils.py:165: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
tensor = as_tensor(value)
/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/feature_extraction_utils.py:165: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
tensor = as_tensor(value)
[2023-04-19 05:28:54,741] torch._inductor.utils: [WARNING] using triton random, expect difference from eager
Traceback (most recent call last):
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 670, in call_user_compiler
compiled_fn = compiler_fn(gm, self.fake_example_inputs())
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/debug_utils.py", line 1055, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/__init__.py", line 1390, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 455, in compile_fx
return aot_autograd(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/backends/common.py", line 48, in compiler_fn
cg = aot_module_simplified(gm, example_inputs, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 2805, in aot_module_simplified
compiled_fn = create_aot_dispatcher_function(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 2498, in create_aot_dispatcher_function
compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1713, in aot_wrapper_dedupe
return compiler_fn(flat_fn, leaf_flat_args, aot_config)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 2087, in aot_dispatch_autograd
fx_g = make_fx(joint_forward_backward, aot_config.decompositions)(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 714, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 443, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 778, in trace
(self.create_arg(fn(*args)),),
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 652, in flatten_fn
tree_out = root_fn(*tree_args)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1156, in traced_joint
return functionalized_f_helper(primals, tangents)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1108, in functionalized_f_helper
f_outs = flat_fn_no_input_mutations(fn, f_primals, f_tangents, meta, keep_input_mutations)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1076, in flat_fn_no_input_mutations
outs = flat_fn_with_synthetic_bases_expanded(fn, primals, primals_after_cloning, maybe_tangents, meta, keep_input_mutations)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1048, in flat_fn_with_synthetic_bases_expanded
outs = forward_or_joint(fn, primals_before_cloning, primals, maybe_tangents, meta, keep_input_mutations)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1017, in forward_or_joint
backward_out = torch.autograd.grad(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/autograd/__init__.py", line 269, in grad
return handle_torch_function(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/overrides.py", line 1534, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_inductor/overrides.py", line 38, in __torch_function__
return func(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/autograd/__init__.py", line 303, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 487, in __torch_dispatch__
return self.inner_torch_dispatch(func, types, args, kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 512, in inner_torch_dispatch
out = proxy_call(self, func, args, kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 345, in proxy_call
out = func(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_ops.py", line 287, in __call__
return self._op(*args, **kwargs or {})
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 987, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 1162, in dispatch
op_impl_out = op_impl(self, func, *args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 453, in index_tensor
check_no_bool_index_tensors(func, *args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 432, in check_no_bool_index_tensors
raise DynamicOutputShapeException(func)
torch._subclasses.fake_tensor.DynamicOutputShapeException: aten.index.Tensor
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/wilson_bookbotkids_com/run_audio_classification.py", line 418, in <module>
main()
File "/home/wilson_bookbotkids_com/run_audio_classification.py", line 392, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 1662, in train
return inner_training_loop(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 1929, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 2699, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 2731, in compute_loss
outputs = model(**inputs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 82, in forward
return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1817, in forward
outputs = self.wav2vec2(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1316, in forward
hidden_states = self._mask_hidden_states(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1249, in _mask_hidden_states
if not getattr(self.config, "apply_spec_augment", True):
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1259, in <graph break in _mask_hidden_states>
mask_time_indices = _compute_mask_indices(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1266, in <graph break in _mask_hidden_states>
mask_time_indices = torch.tensor(mask_time_indices, device=hidden_states.device, dtype=torch.bool)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 337, in catch_errors
return callback(frame, cache_size, hooks)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 404, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 104, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert
return _compile(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 445, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 311, in transform
tracer.run()
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1726, in run
super().run()
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 576, in run
and self.step()
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 540, in step
getattr(self, inst.opname)(inst)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1792, in RETURN_VALUE
self.output.compile_subgraph(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 517, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 588, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 675, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: debug_wrapper raised DynamicOutputShapeException: aten.index.Tensor
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
```
</details>
I suspect that wav2vec 2.0 is not yet supported in PyTorch 2.0 and needs some modification to ensure compatibility when running `torch.compile`. The same error occurred when fine-tuning for automatic speech recognition. | 04-19-2023 05:33:26 | 04-19-2023 05:33:26 | cc @sanchit-gandhi <|||||>Hi @w11wo, thanks for raising this issue!
Please note that whilst we aim to support a wide variety of use cases with our examples, `torch_compile` is an experimental flag and not one we guarantee will work for for all of our models as the support is progressively rolled in in PyTorch. <|||||>Hi @amyeroberts, no worries and thanks for the heads up. Looking forward to seeing wav2vec 2.0 supported. Cheers.<|||||>Hey @w11wo! Sorry for the late reply here and thanks for the detailed issue description! I had a quick look, and the issue seems to reside with the `_compute_mask_indices` function:
https://github.com/huggingface/transformers/blob/4baa34c18f18274fe028ad5a5511ea3fba9eeece/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L132
The function is both dynamic and in NumPy - we'd need to make the function static (fixed shapes) for it to be compatible with torch compile. I sadly won't have time to look into this myself, but feel free to open a PR if you want to take a stab at updating this!
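For reference, here is a minimal sketch of the workaround described just below (disabling SpecAugment via the config so the dynamic masking path is skipped). It assumes `facebook/wav2vec2-base` as a stand-in checkpoint and does not guarantee that the rest of the model compiles cleanly.
```python
import torch
from transformers import AutoModelForAudioClassification

# apply_spec_augment=False skips the dynamic _compute_mask_indices path during training
model = AutoModelForAudioClassification.from_pretrained(
    "facebook/wav2vec2-base", num_labels=12, apply_spec_augment=False
)
compiled_model = torch.compile(model)

dummy_input = torch.randn(1, 16000)  # 1 second of 16 kHz audio
with torch.no_grad():
    logits = compiled_model(input_values=dummy_input).logits
print(logits.shape)
```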
In the meantime, you can set SpecAug to 0 to avoid calling this dynamic function - you'll loose regularisation in the feature encoder outputs, but you should be able to torch compile the model. To do this, you simply need to set `apply_spec_augment` to False in the config: https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self/blob/54074b1c16f4de6a5ad59affb4caa8f2ea03a119/config.json#L4<|||||>cc @hollance <|||||>Hey @w11wo - any luck here? Did it work with specaug set to 0?<|||||>Hi @sanchit-gandhi, unfortunately I haven't been able to test it out without SpecAugment, since my use case requires it to be used. I will try and test it out when I can.<|||||>Hey @w11wo - sure, sounds good! The offer still stands for opening a PR to fix this if you feel like having a go at re-working the SpecAug logic in the modelling file, think this could make for a nice PR :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Extending the offer of opening a PR to fix the SpecAug logic in the modelling file to the community! Would be a nice PR addition to re-work the SpecAug function so that it's compatible with torch compile (note that `torch.compile` is not guaranteed for the transformers library, but is a nice feature if it can be done without backwards breaking changes)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,848 | open | Add LLaVA model | ### Model description
[LLaVA](https://llava-vl.github.io/) is a multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding, "achieving impressive chat capabilities mimicking spirits of the multimodal GPT-4".
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://github.com/haotian-liu/LLaVA
| 04-19-2023 04:18:47 | 04-19-2023 04:18:47 | @sgugger and @youssefadr, I want to work on this issue. I am new to open source and hugging face, can u pls provide me some guidance to work on this issue. Any reference issue that helps on getting an idea on this..pls help me out.<|||||>@sushmanthreddy It's great you want to contribute a model!
There's a [detailed guide in the docs](https://huggingface.co/docs/transformers/add_new_model) outlining important information about the model class, how it fits in the library and the steps to take to add a model. Let us know if there's anything which is unclear or you hit a blocker. Looking forward to seeing the PR! π€ <|||||>@sushmanthreddy I don't know if you are still planning to work on this model, but if not, I would be glad to contribute on my side π€!
Please let me know if working on this issue is still in your plans π!<|||||>@youssefadr Sorry, I am busy with my google summer of code work...couldn't contribute much you can go ahead and contribute to it<|||||>Hello @youssefadr are you going to take on the work of adding this model in? I'd be happy to collaborate or take this task on <|||||>@sushmanthreddy Okay, thank you and good luck with your Google program!
@jprivera44 Hello! Yes, I am going to open a draft PR this week, do not hesitate to collaborate!<|||||>That's fantastic @youssefadr, do you mind adding me as a collaborator on your branch so we can plan there on which sections of LLava we are going to tackle? I've got time today to create a branch and add you there if you prefer. Excited for this :)
@amyeroberts any other suggestions on the best way to collaborate with peers on a new model such as this? I read through the suggestions and I appreciate the philosophy of transformers.<|||||>@jprivera44 @youssefadr - great to hear that you're both keen to work on this model!
The main piece of advice I have if you're both collaborating on a PR is to make sure that it's clear who is working on what and when - you don't want to find out that one piece has been implemented twice! If working on the same branch, make sure not to force push as well :)
<|||||>Thanks @amyeroberts, I'm waiting for the approval for the LLaMA weights from Meta at the moment. Do you know if there is any way to speed up that process?
@youssefadr hey nice job with the pr! I noticed you added a lot of changes, are you working with the 7B, 13B, or 65B parameter count?<|||||>@jprivera44 I am planning to work with 7B parameter checkpoint. I think it would be better if we could communicate directly to better collaborate on this model together. What do you think of discussing through Discord ? Here is my username 'Youssef Adarrab#3595'<|||||>Fantastic, I'll reach out to you on discord. |
transformers | 22,847 | closed | Creating XLNetTokenizer from Custom ByteLevelBPETokenizer Throws OSError | ### System Info
- `transformers` version: 4.28.1
- Platform: Linux-3.10.0-1160.76.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.12
- Huggingface_hub version: 0.13.3
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @ArthurZucker @younesbelkada
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hello,
I'm running into some issues using a custom tokenizer with XLNet. I have a ByteLevelBPETokenizer (located in `./tokenizer`) that I already trained, but when trying to load it with XLNetTokenizer, I get an OSError.
```
>>> from transformers import XLNetTokenizer
>>> tokenizer = XLNetTokenizer.from_pretrained("tokenizer", local_files_only=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/hiekense/.local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1795, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for 'tokenizer'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'tokenizer' is the correct path to a directory containing all relevant files for a XLNetTokenizer tokenizer.
```
I've read in a few places that it's likely due to a missing `tokenizer_config.json`, so I tried dropping in the default one from `xlnet-base-cased`.
```
{
"additional_special_tokens": [
"<eop>",
"<eod>"
],
"bos_token": "<s>",
"clean_up_tokenization_spaces": true,
"cls_token": "<cls>",
"do_lower_case": false,
"eos_token": "</s>",
"keep_accents": false,
"mask_token": {
"__type": "AddedToken",
"content": "<mask>",
"lstrip": true,
"normalized": true,
"rstrip": false,
"single_word": false
},
"model_max_length": 1000000000000000019884624838656,
"pad_token": "<pad>",
"remove_space": true,
"sep_token": "<sep>",
"sp_model_kwargs": {},
"tokenizer_class": "XLNetTokenizer",
"unk_token": "<unk>"
}
```
... which led to an even stranger error:
```
>>> from transformers import XLNetTokenizer
>>> tokenizer = XLNetTokenizer.from_pretrained("tokenizer", local_files_only=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/hiekense/.local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1811, in from_pretrained
return cls._from_pretrained(
File "/home/hiekense/.local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1965, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/hiekense/.local/lib/python3.9/site-packages/transformers/models/xlnet/tokenization_xlnet.py", line 179, in __init__
self.sp_model.Load(vocab_file)
File "/home/hiekense/.local/lib/python3.9/site-packages/sentencepiece/__init__.py", line 905, in Load
return self.LoadFromFile(model_file)
File "/home/hiekense/.local/lib/python3.9/site-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
TypeError: not a string
```
Also, there's nothing wrong with the tokenizer itself - testing it with GPT2's tokenizer (`GPT2Tokenizer.from_pretrained("tokenizer", local_files_only=True)`) yielded no errors.
Thank you.
### Expected behavior
N/A | 04-19-2023 04:04:33 | 04-19-2023 04:04:33 | UPDATE: According to [this summary](https://huggingface.co/docs/transformers/tokenizer_summary#sentencepiece), XLNet uses SentencePiece tokenization; so, I tried swapping in a `SentencePieceBPETokenizer` instead of a `ByteLevelBPETokenizer` (you should really include `SentencePieceBPETokenizer` in the docs by the way... the only mention I could find of it was [here](https://discuss.huggingface.co/t/training-sentencepiece-from-scratch/3477/2)). I'm receiving the exact same issue though.
Also, looks like I didn't include the code for training the tokenizer above, so I'll drop it down here:
```
def batch_iterator(dataset):
for i in dataset:
yield i["text"]
def getTokenizer(train=True, train_dataset=None):
tokenizer = None
if train:
tokenizer = SentencePieceBPETokenizer()
print("Training tokenizer...")
tokenizer.train_from_iterator(batch_iterator(train_dataset), show_progress=True, vocab_size=VOCAB_SIZE, min_frequency=2, special_tokens=[
"<s>",
"<pad>",
"</s>",
"<unk>",
"<mask>",
NEWLINE
])
print("Training complete. Saving tokenizer...")
tokenizer.save_model("tokenizer")
```
... and my dataset:
```
Dataset({
features: ['text'],
num_rows: 2080000
})
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,846 | closed | NameError: name 'PartialState' is not defined. | I am using the following versions of transformers, datasets and huggingface_hub.

I am running into the following error:
```sh
NameError: name 'PartialState' is not defined.
```
How can I resolve this issue while keeping my versions of transformers, datasets and huggingface_hub?
My code was running just fine until yesterday.
| 04-19-2023 01:23:53 | 04-19-2023 01:23:53 | Closing, as this issue is a duplicate of a comment on #22816, where it it being followed up on. |
transformers | 22,845 | closed | CodeGenAttention does not work with defaults in forward pass | ### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.15.0-1034-azure-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0a0+1767026 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
The current version of models.codegen.modeling_codegen.CodeGenAttention forward throws error on line 193 when position_ids are not specified and defaults to None. This can be fixed by defining default position_ids as self.position_ids in the init. The issue was an issue introduced in commit 4e94c6c.
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers.models.codegen.modeling_codegen import CodeGenAttention
from transformers import AutoConfig, AutoModelForCausalLM
config = AutoConfig.from_pretrained("Salesforce/codegen-350M-nl")
model = CodeGenAttention(config)
x= torch.randn(4, config.n_ctx, config.n_embd)
model(x)
```
### Expected behavior
The block should instantiate the codegenattention with default position ids as torch.arange(seq_offset:seqlen) | 04-18-2023 23:03:03 | 04-18-2023 23:03:03 | Hi @sgunasekar
Thanks for the issue
As per my understanding, since the class `CodeGenAttention` is not a public class, it should be only used by `CodeGenModel`. In the modeling script if position ids is `None` we indeed manually create `position_ids` based on past length and sequence length. I personally don't think we should do this inside `CodeGenAttention`, but if you want to use that class as a standalone class you should manually create position ids and pass it in the forward pass.
I also want to hear from @ArthurZucker, @sgugger @amyeroberts to make sure we are aligned on this<|||||>ππ» on @younesbelkada 's answer, on almost all of our attention modules, the attention should be passed and there are not reason to give them a default value because this is handled in the modelling file. <|||||>Completely in line with the comments above.<|||||>+1 - As @younesbelkada says, `CodeGenAttention` isn't a public class and this is easily resolved by passing in `position_ids` directly to the layer.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,844 | closed | Make ClipSeg compatible with model parallelism | # What does this PR do?
Add model parallelism for `ClipSeg`.
Related to #22561
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 04-18-2023 22:52:44 | 04-18-2023 22:52:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>My pleasure, I'm glad I could help! |
transformers | 22,843 | closed | Fix default position_ids in CodeGenAttention module | CodeGenAttention forward throws error when position_ids are not specified.
# What does this PR do?
The forward pass of the current models.codegen.modeling_codegen.CodeGenAttention throws an error on line 193 when position_ids are not specified and default to None. This PR adds a default behavior by defining default position_ids as self.position_ids in the init.
<!-- Remove if not applicable -->
Fixes # (models.codegen.modeling_codegen.CodeGenAttention forward throws error on line 193 when position_ids are not specified and defaults to None)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 04-18-2023 22:16:39 | 04-18-2023 22:16:39 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22843). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,842 | closed | None check for encoder | In the case that BartForConditionalGeneration decoder is being used without an encoder, this change maintains the ability to resize embeddings.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [Not Necessary] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [Not Necessary] Did you write any new necessary tests?
@ArthurZucker and @younesbelkada
| 04-18-2023 19:27:44 | 04-18-2023 19:27:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey! As you can see from the red tests, this cannot be merged as it is breaking a lot of the API π
|
transformers | 22,841 | closed | Raise err if minimum Accelerate version isn't available | # What does this PR do?
This PR will raise an explicit `ImportError` during `TrainingArguments` if `Accelerate` isn't installed (or isn't the required minimal version) and Accelerate is going to be utilized
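A rough sketch of the kind of guard this describes (not the actual diff; the minimum version below is a placeholder):
```python
import importlib.metadata
from packaging import version

MIN_ACCELERATE = "0.19.0"  # hypothetical minimum version, for illustration only

def require_accelerate(min_version: str = MIN_ACCELERATE) -> None:
    # raise an explicit ImportError if accelerate is missing or too old
    try:
        installed = importlib.metadata.version("accelerate")
    except importlib.metadata.PackageNotFoundError as exc:
        raise ImportError(
            f"TrainingArguments requires accelerate>={min_version}: run `pip install -U accelerate`."
        ) from exc
    if version.parse(installed) < version.parse(min_version):
        raise ImportError(
            f"TrainingArguments requires accelerate>={min_version}, but found {installed}."
        )
```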
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 04-18-2023 17:40:21 | 04-18-2023 17:40:21 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,840 | closed | Add `automatic-mask-generation` pipeline for Segment Anything Model (SAM) | # What does this PR do?
This needs the SAM model and a rebase once it is merged.
```python
from transformers import pipeline
import matplotlib.pyplot as plt
from PIL import Image
import numpy as np
import time
generator = pipeline("automatic-mask-generation", device = 0)
image_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
dog_url = "/home/arthur_huggingface_co/transformers/Arthur/dog.jpg"
raw_image = Image.open(dog_url).convert("RGB")
start = time.time()
outputs = generator(raw_image, points_per_batch = 256, pred_iou_thresh=1)
print(f"point_batch_size : {256}, {time.time() - start}")
def show_mask(mask, ax, random_color=False):
if random_color:
color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0)
else:
color = np.array([30 / 255, 144 / 255, 255 / 255, 0.6])
h, w = mask.shape[-2:]
mask_image = mask.reshape(h, w, 1) * color.reshape(1, 1, -1)
ax.imshow(mask_image)
plt.imshow(np.array(raw_image))
ax = plt.gca()
for mask in outputs["masks"]:
show_mask(mask, ax=ax, random_color=True)
plt.axis("off")
plt.show()
plt.savefig("dog_results_2.png")
```


<img width="621" alt="image" src="https://user-images.githubusercontent.com/48595927/232853562-9858cdc5-dc1c-41b3-b067-1ea013c63e0f.png">
| 04-18-2023 17:05:41 | 04-18-2023 17:05:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you all for your reviews! <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22840). All of your documentation changes will be reflected on that endpoint. |
transformers | 22,839 | closed | Fix weight tying in TF-ESM | TF ESM cloned weights instead of tying, which worked when loading from PT but broke when loading from safetensors. This resolves the issue by correctly tying weights when this is enabled in the config. Fixes an ongoing CI error raised by @ydshieh | 04-18-2023 16:57:40 | 04-18-2023 16:57:40 | Also cc @gante in case he hates how I handled weight tying here, I don't want to break TF convention too much!<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@Rocketknight1 I'm cool with this :D |
transformers | 22,838 | closed | π [i18n-KO] Translated `tasks/masked_language_modeling.mdx` to Korean | <!-- PRμ μ λͺ©μ "π [i18n-KO] Translated `<your_file>.mdx` to Korean" μΌλ‘ λΆνλ립λλΉ -->
# What does this PR do?
Translated the `tasks/masked_language_modeling.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- λ©μΈ μ΄μμ κΈ°λ‘μ΄ λ¨μμ! κ°μ§μ°κ΅¬μ 리ν¬λ₯Ό μ¬μ©ν΄ μ°μ΅νμ€λλ μ κ±°ν΄μ£Όμλ©΄ κ°μ¬νκ² μ΅λλ€! :smile: -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
<!-- This is the pre-submission checklist. It might be even better to wrap PseudoLab's own checklist in a <details> block. -->
## Who can review?
<!-- Please uncomment the line below to request a review from the Hugging Face staff only after the PseudoLab team members have finished their review! -->
<!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? -->
Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
May you please review this PR?
@sgugger, @ArthurZucker, @eunseojo | 04-18-2023 16:44:48 | 04-18-2023 16:44:48 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,837 | closed | Fix from_pretrained when model is instantiated on the meta device | # What does this PR do?
#22437 broke the `from_pretrained` method whenever the model is instantiated on the meta device and the state dict passed is not complete (see [this issue](https://github.com/huggingface/accelerate/issues/1333) for one example).
Basically the check will remove all keys from `missing_keys` since all parameters on the meta device share the same data pointer. I had advocated using another solution in that PR, but the contributor did not listen.
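For illustration, a quick way to see why de-duplicating by data pointer collapses everything on the meta device (a standalone sketch, not part of the PR):
```python
import torch

a = torch.empty(10, device="meta")
b = torch.empty(5, 5, device="meta")

# Meta tensors carry no storage, so they all report the same (null) data pointer.
print(a.data_ptr(), b.data_ptr())  # 0 0
```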
Since we rely on those `missing_keys` later on to re-initialize the weights that are not in the state dict, the model ends up with weights on the meta device. | 04-18-2023 16:04:36 | 04-18-2023 16:04:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,836 | closed | Neptune fix bug init run | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
We realised that the `init_run` function embedded in the integration was accepting a deprecated kwarg `run`, which was replaced with `with_id` some time ago. Without this fix there might be cases where the NeptuneCallback does not run correctly and throws an error saying that the function received an unexpected argument.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 04-18-2023 15:39:50 | 04-18-2023 15:39:50 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger Do you know more or less when will that be released?<|||||>@AleksanderWWW It was released this week in v4.29.0<|||||>Ah yes, my bad. I didn't realize that I had a bug in my own tests :smile: Thank you @amyeroberts! |
transformers | 22,835 | closed | Include decoder_attention_mask in T5 model inputs | # What does this PR do?
This PR includes decoder_attention_mask as an argument in the prepare_inputs_for_generation function, helping enable the use of custom attention masks in the decoder.
Duplicate of https://github.com/huggingface/transformers/pull/22819
@gante @amyeroberts | 04-18-2023 15:33:23 | 04-18-2023 15:33:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts Could you merge this PR please? |
transformers | 22,834 | closed | fix CLAP integration tests | # What does this PR do?
I noticed that the CLAP feature extractor tests were not being run, and that once enabled, they fail.
This PR fixes these tests.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 04-18-2023 14:46:30 | 04-18-2023 14:46:30 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I don't have merge rights, so if all is good, feel free to merge. :-) |
transformers | 22,833 | closed | Update accelerate version + warning check fix | # What does this PR do?
This PR bumps the accelerate version, and flips the logic for the warning to be accurate on the distributed mode check
Fixes # (issue)
Solves https://github.com/huggingface/transformers/issues/22816
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @pacman100
| 04-18-2023 14:39:59 | 04-18-2023 14:39:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,832 | closed | WER = 100% !! (Whisper medium) | Hello everyone,
I am having an issue when finetuning OpenAI's Whisper Medium on Mozilla's Common Voice 11 Dataset with the Arabic language.
The training and validation loss are both decreasing, but the WER becomes 100% after some steps (especially once the loss drops below 1). The model itself seems to perform well, so I suspect the WER is simply being miscalculated.

Notes :
- This error only happens with the medium model; other models (small, tiny, large-v2, etc.) work fine.
- I am following the famous blog about Whisper's finetuning (https://huggingface.co/blog/fine-tune-whisper).
@sanchit-gandhi | 04-18-2023 14:27:37 | 04-18-2023 14:27:37 | Hi @Seif-aber, thanks for raising this issue!
So that we can best help, could you share the running environment: run `transformers-cli env` in the terminal and copy-paste the output.
Have you looked at any of the inputs to / outputs of the model when this occurs? After the model has finished training, if you feed a single sample to the model to predict in eval mode, what does the prediction look like?
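For example, something along these lines would show whether the decoded predictions are sensible (the checkpoint path is a placeholder, and the dataset access assumes the same Common Voice setup as in the blog post):
```python
import torch
from datasets import Audio, load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Placeholder path: your fine-tuned checkpoint directory.
checkpoint = "./whisper-medium-ar-finetuned"
processor = WhisperProcessor.from_pretrained(checkpoint)
model = WhisperForConditionalGeneration.from_pretrained(checkpoint).eval()

ds = load_dataset("mozilla-foundation/common_voice_11_0", "ar", split="test", streaming=True)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
sample = next(iter(ds))

inputs = processor(sample["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    predicted_ids = model.generate(inputs.input_features)

print("reference :", sample["sentence"])
print("prediction:", processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```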
<|||||>Hey @Seif-aber! I believe I answered a duplicate of your question on the Hugging Face Hub earlier today: https://huggingface.co/spaces/openai/whisper/discussions/84#64466139e113660053727da7
My suggestions were similar to those of @amyeroberts - let's take a look at the predictions the model is making to work out what's going on.<|||||>Addressed on the HF Hub: https://huggingface.co/spaces/openai/whisper/discussions/84#644aa699af97dfd24c0e0767 |
transformers | 22,831 | closed | Seq2SeqTrainingArguments.generation_config not json serializable | ### System Info
π
Following #22323, the `Seq2SeqTrainingArguments` `generation_config` attribute can be a `GenerationConfig` object.
When saving a `Seq2SeqTrainingArguments` object as json (as done during training / with tensorboard), it is first converted to a dictionary. But a `GenerationConfig` is not serializable --> error
### Who can help?
cc @gante @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```Python
from transformers import GenerationConfig, Seq2SeqTrainingArguments

generation_config = GenerationConfig(
    max_new_tokens=64,
    top_k=20,
    top_p=0.9,
)
gen_training_args = Seq2SeqTrainingArguments(output_dir="out", generation_config=generation_config)
as_dict = gen_training_args.to_dict()
as_json_str = gen_training_args.to_json_string()  # error here
```
### Expected behavior
To fix this, two possible solutions are :
1. Modify the expected types of `generation_config` to `[str, Path, dict]`, possibly converting arguments passed as `GenerationConfig` to dictionaries in `__post_init__`, and modify the behavior of `Seq2SeqTrainer.load_generation_config` to handle dictionaries;
2. Make `Seq2SeqTrainingArguments` override [`to_dict()`](https://github.com/huggingface/transformers/blob/03462875cc2d6506eb66a74de7d19b93ce968596/src/transformers/training_args.py#L1833) (or directly modify in `TrainingArguments`), to recursively convert non-json-serializable attributes to dictionaries.
Which sounds better, do you think? Or maybe you have a better one.
In any case I can handle it, as I feel responsible for this error. | 04-18-2023 13:49:31 | 04-18-2023 13:49:31 | Hey @Natooz π
We're more responsible than you, since we are supposed to catch that sort of issue in advance during the review process ;) But that's okay! It's normal to create bugs while trying to move forward at a good pace π
I believe that option 2 (recursively converting attributes to dictionaries) would be preferable. @sgugger, WDYT?<|||||>Option 2 is fine, but just on `Seq2SeqTrainingArguments`, to replace the generation config by `generation_config.to_dict()`. |
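A minimal sketch of that fix, assuming we only touch `Seq2SeqTrainingArguments` as suggested (the names below are illustrative, not the final implementation):
```python
from transformers import GenerationConfig, Seq2SeqTrainingArguments


class PatchedSeq2SeqTrainingArguments(Seq2SeqTrainingArguments):
    def to_dict(self):
        # Start from the regular serialization, then replace any GenerationConfig
        # value with its own dict so that json.dumps() can handle it.
        d = super().to_dict()
        for k, v in d.items():
            if isinstance(v, GenerationConfig):
                d[k] = v.to_dict()
        return d


args = PatchedSeq2SeqTrainingArguments(
    output_dir="out",
    generation_config=GenerationConfig(max_new_tokens=64, top_k=20, top_p=0.9),
)
print(args.to_json_string())  # no longer raises
```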
transformers | 22,830 | closed | [i18n-KO] Translated `accelerate.mdx` to Korean | # What does this PR do?
Translated the `accelerate.mdx` file of the documentation to Korean.
Thank you in advance for your review:)
Part of https://github.com/huggingface/transformers/issues/20179
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
<!-- This is the pre-submission checklist. It might be even better to wrap PseudoLab's own checklist in a <details> block. -->
## Who can review?
<!-- Please uncomment the line below to request a review from the Hugging Face staff only after the PseudoLab team members have finished their review! -->
Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
May you please review this PR?
@sgugger, @ArthurZucker, @eunseojo
| 04-18-2023 13:46:42 | 04-18-2023 13:46:42 | _The documentation is not available anymore as the PR was closed or merged._<|||||>May you please review this PR?
@sgugger, @ArthurZucker, @eunseojo |
transformers | 22,829 | open | Add CLIP-ViP | ### Model description
[CLIP-ViP](https://github.com/microsoft/XPretrain/tree/main/CLIP-ViP) is a video-language model which is based on a pre-trained image-text model [CLIP](https://openai.com/blog/clip/) then further pre-trained (post-pretraining) on a large-scale video-text dataset [HD-VILA-100M](https://github.com/microsoft/XPretrain/tree/main/hd-vila-100m). This work is accepted by ICLR 2023.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
[The official implementation](https://github.com/microsoft/XPretrain/tree/main/CLIP-ViP)
This repo has model implementation and pre-trained weights.
@hellwayxue | 04-18-2023 13:31:36 | 04-18-2023 13:31:36 | Cool model, I've contributed X-CLIP in the past: https://huggingface.co/docs/transformers/model_doc/xclip which is an extension of CLIP for video-language pretraining. Looks like CLIP-ViP focuses more on retrieval.
Looks like a great candidate for a first model contribution as the implementation is already in HF format.<|||||>Hi, I'd like to help out getting this model integrated.
<|||||>I have a general question about unit testing. The implementation guidelines indicate that the HF implementation should align with the reference model to a tolerance of .001, but I don't see that tested in all models.
In my PR I'll include an integration test analogous to clip's [test](https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/tests/models/clip/test_modeling_clip.py#L709-L737).
But I've noticed some model's don't seem to do this kind of integration test. (For example, I don't see an analogous test in [gptneo](https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/tests/models/gpt_neo/test_modeling_gpt_neo.py#L4)
Out of curiosity, why do some models not have these kinds of integration tests?<|||||>Yes, ideally GPT-Neo also has integration tests that test exact logit values. However you can see [here](https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/tests/models/gpt_neo/test_modeling_gpt_neo.py#L519) that expected output IDs and generated texts are tested.
But in any case it's always best to have an expected slice of the logits in the integration test.<|||||>Thanks for the clarification!
I have another question, the reference implementation reuses CLIPConfig, CLIPTextConfig, and CLIPVisionConfig directly.
Can we reuse them (via importing) in the PR directly as well? Or should we copy-paste these files and comment "Copied from transformers.clip..." at the top?<|||||>In that case, you can copy the classes, call them `CLIPVipConfig`, `CLIPVipTextConfig`, etc. and add `Copied from` on top of them. If you then run `make fix-copies` from the root of the repository, all files will automatically be copied to make sure they stay consistent.
Note that you can copy entire classes, like so:
```
# Copied from transformers.models.clip.configuration_clip.CLIPConfig
class CLIPVipConfig(...)
```
but also place them on top of methods, in case only a method is the same but the class is different:
```
class CLIPVipConfig(...)
# Copied from transformers.models.clip.configuration_clip.CLIPConfig.__init__
    def __init__(config):
```<|||||>Got it, thanks for the quick response!
transformers | 22,828 | closed | XGLM: Fix left-padding (PT and TF) | # What does this PR do?
Fixes left-padding for XGLM, on PT and TF. It is the usual problem: `position_ids` was not being used/passed around, and the code assumed that position ids = past length + input length (which is not true when left-padding is present).
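For reference, the usual recipe (a sketch of the idea rather than the exact diff in this PR) is to build the position ids from the attention mask, so that left-padding does not shift the positions of the real tokens:
```python
import torch

attention_mask = torch.tensor([[0, 0, 1, 1, 1]])  # one left-padded sequence

# Cumulative sum over the mask numbers the real tokens 0..n-1; pad positions get a dummy value.
position_ids = attention_mask.long().cumsum(-1) - 1
position_ids.masked_fill_(attention_mask == 0, 1)

print(position_ids)  # tensor([[1, 1, 0, 1, 2]])
```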
Fixes #22707
While touching XGLM, other issues were sorted:
1. on PT, docs were duplicated (and one of the copies was wrong)
2. XGLM generate with sampling integration test was pinned on CPU (as always, GPU gives different results, which was making our slow CI report an error)
3. TF XLA was failing because of this (left padding support), so now we have TF XLA XGLM π | 04-18-2023 12:56:07 | 04-18-2023 12:56:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts now it is ready :)<|||||>Adding a new keyword argument is not considered breaking, so that's fine! |
transformers | 22,827 | closed | Generate method Time Series Transformer throws an error | ### System Info
- `transformers` version: 4.28.1
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.9
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker, @younesbelkada, @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerForPrediction
batch_size = 32
context_length = 100
prediction_length = 1
input_size = 25
num_time_features = 1
config = TimeSeriesTransformerConfig(prediction_length=prediction_length,
context_length=context_length,
input_size=input_size,
lags_sequence=[0],
num_time_features=num_time_features,
num_static_categorical_features=0,
num_static_real_features=0,
num_dynamic_real_features=0,
embedding_dimension=64,
encoder_ffn_dim=32,
decoder_ffn_dim=32,
encoder_attention_heads=2,
decoder_attention_heads=2,
encoder_layers=2,
decoder_layers=2,
is_encoder_decoder=True,
activation_function="gelu",
d_model=64,
dropout=0.1,
encoder_layerdrop=0.1,
decoder_layerdrop=0.1,
attention_dropout=0.1,
activation_dropout=0.1,
num_parallel_samples=100,
init_std=0.02
)
model = TimeSeriesTransformerForPrediction(config)
outputs = model.generate(past_values=torch.empty((batch_size, context_length, input_size)),
past_time_features=torch.empty((batch_size, context_length, num_time_features)),
past_observed_mask=torch.ones((batch_size, context_length, input_size)),
future_time_features=torch.empty((batch_size, prediction_length, input_size)),
)
print(outputs.keys())
```
```
File ".\venv\lib\site-packages\transformers\models\time_series_transformer\modeling_time_series_transformer.py", line 1807, in generate
decoder_input = torch.cat((reshaped_lagged_sequence, repeated_features[:, : k + 1]), dim=-1)
RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 100 but got size 1 for tensor number 1 in the list.
```
Error in this row:
```
decoder_input = torch.cat((reshaped_lagged_sequence, repeated_features[:, : k + 1]), dim=-1)
```
An attempt to concatenate tensors with dimensions `[3200, 100, 25]` and `[3200, 1, 75]`.
### Expected behavior
I expected to get the correct result of the model. | 04-18-2023 11:48:06 | 04-18-2023 11:48:06 | cc @gante <|||||>@amyeroberts -- `TimeSeriesTransformerForPrediction` has its own `generate` method, so I'm passing the tag to @kashif, who implemented it π€ <|||||>@yurkoff-mv the issue is that the `lags_sequence=[1]` and then the size of the past values and past time features needs to be larger. Let me paste an example below<|||||>```python
import torch
from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerForPrediction
batch_size = 32
context_length = 100
prediction_length = 1
input_size = 25
num_time_features = 1
lags_sequence = [1]
config = TimeSeriesTransformerConfig(prediction_length=prediction_length,
context_length=context_length,
input_size=input_size,
lags_sequence=lags_sequence,
num_time_features=num_time_features,
num_static_categorical_features=0,
num_static_real_features=0,
num_dynamic_real_features=0,
embedding_dimension=64,
encoder_ffn_dim=32,
decoder_ffn_dim=32,
encoder_attention_heads=2,
decoder_attention_heads=2,
encoder_layers=2,
decoder_layers=2,
is_encoder_decoder=True,
activation_function="gelu",
d_model=64,
dropout=0.1,
encoder_layerdrop=0.1,
decoder_layerdrop=0.1,
attention_dropout=0.1,
activation_dropout=0.1,
num_parallel_samples=100,
init_std=0.02
)
model = TimeSeriesTransformerForPrediction(config)
# input past seq length is context_length plus largest lag value:
outputs = model.generate(past_values=torch.randn((batch_size, context_length+max(lags_sequence), input_size)),
past_time_features=torch.randn((batch_size, context_length+max(lags_sequence), num_time_features)),
past_observed_mask=torch.ones((batch_size, context_length+max(lags_sequence), input_size)),
future_time_features=torch.randn((batch_size, prediction_length, num_time_features)),
)
print(outputs.keys())
outputs['sequences'].shape
# torch.Size([32, 100, 1, 25]) [batch_size, num_parallel_samples, prediction_length, input_size]
```<|||||>Thank you! It's Working for me! |
transformers | 22,826 | closed | Fix `test_eos_token_id_int_and_list_top_k_top_sampling` | # What does this PR do?
In #22204, I updated the expected value in `test_eos_token_id_int_and_list_top_k_top_sampling` to pass the `CircleCI`. However, the daily CI fails with that new value. It turns out that we need a seed that could give the same (generation) output (at minimum, the same output length) on both CPU/GPU machines. The difference is very likely coming from somehow larger numerical differences after certain generation steps.
### remark
With seed `0`, the output `generated_tokens[0]` is:
- `cpu`: `[40, 416, 79, 12, 230, 89, 231, 432, 301, 212, 933, 225, 33, 33, 846]`
- `gpu`: `[40, 416, 79, 12, 230, 89, 231, 432, 301, 212, 933, 225, 476, 682, 319, 832, 873, 853, 873, 832]` | 04-18-2023 10:07:16 | 04-18-2023 10:07:16 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 22,825 | closed | Not work cache_dir of AutoTokenizer.from_pretrained('gpt2') | ### System Info
My transformers is version 4.11.3, python version is 3.8.5, and Ubuntu 20.04.1.
I want to know the cache directory when downloading AutoTokenizer.from_pretrained('gpt2')
I run the below code
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('gpt2')
tokenizer.cache_dir
```
then, the result is `AttributeError` like
```python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'GPT2TokenizerFast' object has no attribute 'cache_dir'
```
When I run `tokenizer.cache_dir()`, the result is the same `AttributeError`.
The downloaded tokenizer is from CodeParrot.
CodeParrot lives in `transformers/examples/research_projects/codeparrot/`, and `codeparrot/scripts/bpe_training.py` downloads `AutoTokenizer.from_pretrained('gpt2')`.
How can I get the cache directory path of the tokenizer? What am I doing wrong?
I want to get it from a method or attribute of the tokenizer, not as a hard-coded path.
(I have already found that `~/.cache/huggingface/transformers` contains the cache files.)
If possible, I would also like to know how the tokenizer uses the three kinds of cached files: the `.json` file, the `.lock` file, and the file with no extension.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
<img width="659" alt="Screenshot 2023-04-18 at 6 37 54 PM" src="https://user-images.githubusercontent.com/62585026/232737073-b317ae95-5ce8-4545-99c8-9239ed39d29c.png">
The tokenizer code in CodeParrot.
My running code is
<img width="470" alt="Screenshot 2023-04-18 at 6 38 32 PM" src="https://user-images.githubusercontent.com/62585026/232737239-a7b51916-4d9b-4e9e-ae99-fba31abf0f1c.png">
and scripts/bpe_training.py code is
<img width="722" alt="Screenshot 2023-04-18 at 6 39 41 PM" src="https://user-images.githubusercontent.com/62585026/232737566-d70052e3-9ed4-40ab-9ef6-1030a80a36a4.png">
### Expected behavior
I want to get the cache directory path of the downloaded tokenizer.
I want to get it from a method or attribute of the tokenizer, not as a hard-coded path.
(I have already found that `~/.cache/huggingface/transformers` contains the cache files.)
Moreover, if possible, I would like to know how to use the three files, .json, .lock, and the last file with no extension. | 04-18-2023 09:55:10 | 04-18-2023 09:55:10 | Hi @irene622, thanks for raising this issue!
`cache_dir` isn't an attribute of the class, and so calling `tokenizer.cache_dir` will raise an error.
You can find the cache directory by importing it from utils:
```python
from transformers.utils import TRANSFORMERS_CACHE
```
When a tokenizer is created, it should have the `name_or_path` attribute set, which tells you which model repo or path it was loaded from.
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
>>> tokenizer.name_or_path
'xlm-mlm-en-2048'
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,824 | closed | Allow initializing HuggingFaceEmbeddings from the cached weight | ### Feature request
### Suggestion
The only change needed is a few lines in `__init__()`:
```python
class HuggingFaceEmbeddings(BaseModel, Embeddings):
"""Wrapper around sentence_transformers embedding models.
To use, you should have the ``sentence_transformers`` python package installed.
Example:
.. code-block:: python
from langchain.embeddings import HuggingFaceEmbeddings
model_name = "sentence-transformers/all-mpnet-base-v2"
hf = HuggingFaceEmbeddings(model_name=model_name)
"""
client: Any #: :meta private:
model_name: str = DEFAULT_MODEL_NAME
"""Model name to use."""
def __init__(self, cache_folder=None, **kwargs: Any):
"""Initialize the sentence_transformer."""
super().__init__(**kwargs)
try:
import sentence_transformers
self.client = sentence_transformers.SentenceTransformer(model_name_or_path=self.model_name, cache_folder=cache_folder)
except ImportError:
raise ValueError(
"Could not import sentence_transformers python package. "
"Please install it with `pip install sentence_transformers`."
)
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
def embed_documents(self, texts: List[str]) -> List[List[float]]:
"""Compute doc embeddings using a HuggingFace transformer model.
Args:
texts: The list of texts to embed.
Returns:
List of embeddings, one for each text.
"""
texts = list(map(lambda x: x.replace("\n", " "), texts))
embeddings = self.client.encode(texts)
return embeddings.tolist()
def embed_query(self, text: str) -> List[float]:
"""Compute query embeddings using a HuggingFace transformer model.
Args:
text: The text to embed.
Returns:
Embeddings for the text.
"""
text = text.replace("\n", " ")
embedding = self.client.encode(text)
return embedding.tolist()
```
### Usage
```python
embedding_model = HuggingFaceEmbeddings(model_name=model_name, cache_folder=cache_folder)
```
### Motivation
Right now, HuggingFaceEmbeddings doesn't support loading an embedding model's weights from a local cache folder; it downloads the weights every time. Fixing this would be low-hanging fruit.
### Your contribution
I can submit a PR if this request makes sense, and I've read `CONTRIBUTING.MD` | 04-18-2023 09:31:59 | 04-18-2023 09:31:59 | This issue should be fixed in `LangChain` sorry for the misreport. |
transformers | 22,823 | closed | Fix Past CI not running against the latest `main` | # What does this PR do?
Fix Past CI not running against the latest `main`. See comment in the changes. | 04-18-2023 08:58:01 | 04-18-2023 08:58:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,822 | closed | Size of saved model checkpoints after trainer.train() is much larger when using trainer with deepspeed stage2 | ### System Info
- `transformers` version: 4.28.0.dev0
- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.13.3
- Safetensors version: not installed
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@stas00 @sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm using Trainer with deepspeed integration to fine-tune a Llama model.
This is the stage 2 config I'm using:
```json
{
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto"
}
```
So I'm using ZeRO-2 with optimizer offload. I found that the size of the model checkpoints after `trainer.train()` becomes much larger than it should be.
Using official [run_clm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py) script as an example :
```bash
deepspeed --num_gpus=1 run_clm.py \
--num_train_epochs 0.01 \
--model_name_or_path decapoda-research/llama-7b-hf \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--per_device_train_batch_size 2 \
--do_train \
--output_dir /tmp/test-plm \
--deepspeed ds_config.json
```
I add these two save_model lines around `trainer.train()` for testing:
```python
trainer.save_model("test1")
train_result = trainer.train(resume_from_checkpoint=checkpoint)
trainer.save_model("test2")
```
Now check the size:
```bash
du -sh test1
26G test1
du -sh test2
76G test2
```
Note: I deleted the `global_step*` folder in `test2` before calculating the size.
I believe 26G is the correct size for an fp32 LLaMA 7B. So, after training with the trainer, the model size is wrong? Interestingly, the wrongly sized model still seems to work with `.from_pretrained`.
I have located the issue raised after this [line](https://github.com/huggingface/transformers/blob/dacd34568d1a27b91f84610eab526640ed8f94e0/src/transformers/deepspeed.py#L378), which changed the model assignment in trainer `_inner_training_loop` [here](https://github.com/huggingface/transformers/blob/dacd34568d1a27b91f84610eab526640ed8f94e0/src/transformers/trainer.py#L1733) afterward. After this the model saved by `trainer._save()` will have the wrong size.
Does the deepspeed engine add some extra things to `pytorch_model.bin`? Is this expected?
My current solution to this is always using `self.deepspeed.save_16bit_model()` in [trainer.save_model()](https://github.com/huggingface/transformers/blob/dacd34568d1a27b91f84610eab526640ed8f94e0/src/transformers/trainer.py#L2771) for zerostage2:
```python
elif self.deepspeed:
# this takes care of everything as long as we aren't under zero3
if self.args.should_save:
self._save(output_dir)
if is_deepspeed_zero3_enabled():
# It's too complicated to try to override different places where the weights dump gets
# saved, so since under zero3 the file is bogus, simply delete it. The user should
# either user deepspeed checkpoint to resume or to recover full weights use
# zero_to_fp32.py stored in the checkpoint.
if self.args.should_save:
file = os.path.join(output_dir, WEIGHTS_NAME)
if os.path.isfile(file):
# logger.info(f"deepspeed zero3: removing {file}, see zero_to_fp32.py to recover weights")
os.remove(file)
# now save the real model if stage3_gather_16bit_weights_on_model_save=True
# if false it will not be saved.
# This must be called on all ranks
if not self.deepspeed.save_16bit_model(output_dir, WEIGHTS_NAME):
logger.warning(
"deepspeed.save_16bit_model didn't save the model, since"
" stage3_gather_16bit_weights_on_model_save=false. Saving the full checkpoint instead, use"
" zero_to_fp32.py to recover weights"
)
self.deepspeed.save_checkpoint(output_dir)
else:
if self.args.should_save:
for filename in os.listdir(output_dir):
full_filename = os.path.join(output_dir, filename)
# If we have a shard file that is not going to be replaced, we delete it, but only from the main process
# in distributed settings to avoid race conditions.
weights_no_suffix = WEIGHTS_NAME.replace(".bin", "").replace(".safetensors", "")
# delete everything start with weights_no_suffix, usually are "pytorch_model".
if (
filename.startswith(weights_no_suffix)
and os.path.isfile(full_filename)
):
os.remove(full_filename)
self.deepspeed.save_16bit_model(output_dir, WEIGHTS_NAME)
```
### Expected behavior
Model checkpoint size should be unchanged after `trainer.train()` | 04-18-2023 08:46:00 | 04-18-2023 08:46:00 | cc @stas00 <|||||>deepspeed saves the optimizer states as well as fp32 master weights, so of course the checkpoint folder is larger. look at the contents of the saved checkpoint folder.
I'm not quite sure what the problem is.<|||||>@stas00 thanks for the reply. Are these states saved in the pytorch_model.bin file?<|||||>no, they are saved in their own files under `global_step*`. You might want to inspect the contents of the folder.
Please feel free report the full listing and their sizes here if you'd like to continue this discussion more specifically.<|||||>Hi, here are the file sizes in each folder:
```bash
du -a -h --max-depth=1 test1
496K test1/tokenizer.model
512 test1/config.json
32K test1/pytorch_model.bin.index.json
16K test1/training_args.bin
512 test1/tokenizer_config.json
512 test1/special_tokens_map.json
9.2G test1/pytorch_model-00001-of-00003.bin
9.3G test1/pytorch_model-00002-of-00003.bin
6.7G test1/pytorch_model-00003-of-00003.bin
512 test1/generation_config.json
26G test1
du -a -h --max-depth=1 test2
496K test2/tokenizer.model
512 test2/config.json
32K test2/pytorch_model.bin.index.json
16K test2/training_args.bin
512 test2/tokenizer_config.json
512 test2/special_tokens_map.json
26G test2/pytorch_model-00001-of-00003.bin
26G test2/pytorch_model-00002-of-00003.bin
26G test2/pytorch_model-00003-of-00003.bin
512 test2/generation_config.json
76G test2
```
So, the pytorch_model.bin files are much larger. Although there is a max file size of 10g that has been set for the second save, it still exceeds the file size. I guess something is wrong there?<|||||>> no, they are saved in their own files under `global_step*`. You might want to inspect the contents of the folder.
>
> Please feel free report the full listing and their sizes here if you'd like to continue this discussion more specifically.
I call trainer.save_model() manually and Im using stage2, so `global_step*` is not created. but indeed these folders will be created in checkpoints saving during training. Btw, is there any way to skip saving `global_step*` for stage2? this folder is extremely large and I think may not necessarily be needed for fine-tune cases.<|||||>oh, thank you! now that you're showing the actual file sizes, it's much easier to see what you're talking about. Indeed this looks wrong.
I have seen this happening in one situation where saving not updating the tensor's data structure. I wrote a script to fix that. Can you run this script and see if the shrink to a normal size?
https://github.com/stas00/toolbox/blob/master/pytorch/torch-checkpoint-shrink.py
Then we can look at the cause.<|||||>Hi @stas00 seems your tool can only support `.pt` files? can you give me more instructions on how to use it for transformer checkpoints folder? thanks!<|||||>
> Hi @stas00 seems your tool can only support `.pt` files? can you give me more instructions on how to use it for transformer checkpoints folder? thanks!
Never mind, I modified your script and it works now. Indeed it gets back to the correct size after shrinking:
```bash
python3 torch-checkpoint-shrink.py --checkpoint_dir test2/ --patterns "pytorch_model*.bin"
Processing zero checkpoint 'test2/'
-> test2/pytorch_model-00001-of-00003.bin
-> test2/pytorch_model-00002-of-00003.bin
-> test2/pytorch_model-00003-of-00003.bin
Done. Before 77115.10MB, after 25705.12MB, saved 51409.98MB
du -a -h --max-depth=1 test2
496K test2/tokenizer.model
512 test2/config.json
32K test2/pytorch_model.bin.index.json
16K test2/training_args.bin
512 test2/tokenizer_config.json
512 test2/special_tokens_map.json
9.2G test2/pytorch_model-00001-of-00003.bin
9.3G test2/pytorch_model-00002-of-00003.bin
6.7G test2/pytorch_model-00003-of-00003.bin
512 test2/generation_config.json
26G test2
```
So I bet the problem is this...<|||||>Wonderful. It was fixed in PP saving code in Deepspeed at https://github.com/microsoft/DeepSpeed/pull/1324 when I first seen this problem in Megatron-Deepspeed a year ago.
So probably need to do the same for ZeRO. Would you like to try replicating the above fix for ZeRO? Basically the need is to reclone the tensors, so they are recreated with the final actual size of the storage.
It should be pretty simple to do, by applying the same change of the PR above to this line:
https://github.com/microsoft/DeepSpeed/blob/036c5d6d7b6028853a4e15ef3f5df466ba335f33/deepspeed/runtime/checkpoint_engine/torch_checkpoint_engine.py#L20
and then test that your issue goes away, file a PR with Deepspeed and become a Deepspeed committer ;)
<|||||>actually, it will require a bit of efficiency changes to it. PP was already having small `state_dict` so it wasn't a problem to clone tensors in small groups. But here it'd be very expensive as it'd end up having 2 copies of the model, which can be huge. So I won't use dict comprehension and instead loop normally over the `state_dict` and clone and immediately overwrite the tensor - one tensor at a time. So the overhead will be one largest tensor and not 2x `state_dict`<|||||>hmm, but deepspeed doesn't do checkpoint sharding, those shards come from `transformers`:
```
32K test2/pytorch_model.bin.index.json
9.2G test2/pytorch_model-00001-of-00003.bin
9.3G test2/pytorch_model-00002-of-00003.bin
6.7G test2/pytorch_model-00003-of-00003.bin
```
So I am actually not sure that the suggestions I gave you is the right one. I looked at the code you shared, but that's not the code that HF Trainer runs. So we need to do that cloning there instead I think.
<|||||>Yeah, the code I shared is my temporary fix for this issue, using `self.deepspeed.save_16bit_model(output_dir, WEIGHTS_NAME)` gives the correct size `pytorch_model.bin` file, but indeed will save in a single file, not sharded.<|||||>I think `state_dict` should be re-cloned right after this line:
https://github.com/huggingface/transformers/blob/84a6570e7bce91ba7d18c0782186241c5f1fde75/src/transformers/trainer.py#L2872
Please check if I got to the right code branch, I'm doing it by reading the code - so possibly I got it wrong.
<|||||>> I think `state_dict` should be re-cloned here:
>
> https://github.com/huggingface/transformers/blob/84a6570e7bce91ba7d18c0782186241c5f1fde75/src/transformers/trainer.py#L2873
>
> Please check if I got to the right code branch, I'm doing it by reading the code - so possibly I got it wrong.
but I think here cannot solve for the `PreTrainedModel ` classes? Im afraid need to change `save_pretrained` here https://github.com/huggingface/transformers/blob/84a6570e7bce91ba7d18c0782186241c5f1fde75/src/transformers/modeling_utils.py#L1761 in `PreTrainedModel` if we want to fix for `transformers ` models<|||||>so I tried this in `save_pretrained ` and it works
```python
# Save the model
if state_dict is None:
# state_dict = model_to_save.state_dict()
orig_state_dict = model_to_save.state_dict()
state_dict = type(orig_state_dict)(
{k: v.clone()
for k,
v in orig_state_dict.items()})
```<|||||>Excellent, but we can't do that in `save_pretrained` since we don't want everybody paying a penalty because of a special case.
So let's go up the call stack and find where it needs to be called for the deepspeed case only. I think my suggestion should be around the right place. just need to add `if deepspeed`.
Actually, let's ping @tjruwase - Tunji any idea why we get the tensors bloated in the model during zero-2 w/ optim offload when they are saved? Remember we had that issue in PP in Megatron-Deepspeed and we had to re-clone the model's state dict? https://github.com/microsoft/DeepSpeed/pull/1324 So it seems @ArvinZhuang is hitting this same issue with ZeRO-2. Since the model is not sharded and the saving happens outside of Deepspeed, this is just `torch.save(module.model.state_dict())`, I am not sure how this can be fixed on the deepspeed side.
The bloating is about 2.5x times of the real size, you can see the good and the bad cases here: https://github.com/huggingface/transformers/issues/22822#issuecomment-1513853704
and my checkpoint shrinking post-processing workaround restores the normal size.
Does this perhaps have anything to do with offloading? But only the optimizer is offloaded here - so I don't see a connection.
@ArvinZhuang, could you try with a smaller model and test whether the bloating goes away if you don't use offload? And perhaps w/o deepspeed at all just to validate if the issue is indeed coming from deepspeed. But most likely it is.<|||||>Good point @stas00, I have tried several things already.
Using gpt-2 (a small model) with deepspeed does not have this problem.
LLaMa Without using deepspeed does not have this problem (was using fsdp).
Unfortunately, I don't have enough GPU memory to run without offloading, so I cannot test
I can confirm that for llama case the issue comes from here
https://github.com/huggingface/transformers/blob/84a6570e7bce91ba7d18c0782186241c5f1fde75/src/transformers/deepspeed.py#L378
After giving the model to deepspeed initial then the `model.save_pretrained()` will have the wrong size. Model savings before this line are correct.<|||||>@stas00 Probably we can change this line
https://github.com/huggingface/transformers/blob/84a6570e7bce91ba7d18c0782186241c5f1fde75/src/transformers/trainer.py#L2804
to
```python
if self.args.should_save:
state_dict = self.model.state_dict()
state_dict = type(state_dict)(
{k: v.clone()
for k,
v in state_dict.items()})
self._save(output_dir, state_dict=state_dict)
```
This will only affect saving behavior of deepspeed. and I tested it also works. <|||||>Excellent. That is the right place, @ArvinZhuang
But since the issue comes from Deepspeed, let's see if perhaps the cause can be removed there in the first place, since if we fix it directly in HF Trainer it'll still have this problem in any other training loop. Like Accelerate and any custom user training loop. Let's first wait for Tunji to respond.
The other option is to file your repro with saving before and after directly at https://github.com/microsoft/DeepSpeed/issues since clearly the issue is coming from there.
The shortest repro to send there is probably something like this (untested):
```
ds_config = {
"zero_optimization": {
"stage": 2,
},
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"train_batch_size": "1",
"train_micro_batch_size_per_gpu": "1"
}
model = ...from_pretrained("decapoda-research/llama-7b-hf")
model.save_pretrained("before")
deepspeed_engine, _* = deepspeed.initialize(model=model, config_params=ds_config)
deepspeed_engine.module.save_pretrained("after")
```
please fill in the missing bits, but I think that's all that is needed. I am not sure if optimizer/schedulers are even needed, but it'll assign the defaults.
I hope the above indeed reproduces the issue.<|||||>> oh, thank you! now that you're showing the actual file sizes, it's much easier to see what you're talking about. Indeed this looks wrong.
>
> I have seen this happening in one situation where saving not updating the tensor's data structure. I wrote a script to fix that. Can you run this script and see if the shrink to a normal size? https://github.com/stas00/toolbox/blob/master/pytorch/torch-checkpoint-shrink.py
>
> Then we can look at the cause.
I use the script, but the pt file not change
<img width="483" alt="image" src="https://user-images.githubusercontent.com/12690488/233274800-07a9b7e4-ab60-4fc0-8bba-aa6a050a9597.png">
<|||||>Hi @lw3259111 , what is your setting? like which model, deepspeed config, etc.<|||||>@ArvinZhuang
I use llama 33B model and the deepspeed config is :
```
{
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"total_num_steps": "auto",
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": false
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 5,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```<|||||>Please note the discussion continues here: https://github.com/microsoft/DeepSpeed/issues/3303#issuecomment-1516798523
We understand well the cause of the problem - explained at https://github.com/microsoft/DeepSpeed/issues/3303#issuecomment-1516801635
This impacts only z1/z2 models that are sharded.
Apparently, FSDP has the same issue.
So the 2 workarounds for now are:
1. edit `save_pretrained` call to do `save_pretrained(..., max_shard_size=100GB)` - this will create a single shard which won't have any bloat - just choose any `max_shard_size` bigger than the model size.
2. Use the full clone solution here https://github.com/huggingface/transformers/issues/22822#issuecomment-1514096667 you might want to move the cloned tensors to cpu - i.e. `v.clone().cpu()` as you are likely not to have enough memory of gpu
<|||||>@stas00 I remember I was using FSDP and it saves the correct size model shards. I feel the issue only happens with deepspeed.<|||||>I was just relaying a report from someone else reporting the same problem with FSDP. Perhaps it depends on circumstances.
But it doesn't matter who else has this problem. This one will get fixed as soon as the Deepspeed side provides a utility for shrinking the `state_dict` and makes a new release.
<|||||>> Please note the discussion continues here: [microsoft/DeepSpeed#3303 (comment)](https://github.com/microsoft/DeepSpeed/issues/3303#issuecomment-1516798523)
>
> We understand well the cause of the problem - explained at [microsoft/DeepSpeed#3303 (comment)](https://github.com/microsoft/DeepSpeed/issues/3303#issuecomment-1516801635)
>
> This impacts only z1/z2 models that are sharded.
>
> Apparently, FSDP has the same issue.
>
> So the 2 workarounds for now are:
>
> 1. edit `save_pretrained` call to do `save_pretrained(..., max_shard_size=100GB)` - this will create a single shard which won't have any bloat - just choose any `max_shard_size` bigger than the model size.
> 2. Use the full clone solution here [Size of saved model checkpoints after trainer.train() is much larger when using trainer with deepspeed stage2Β #22822 (comment)](https://github.com/huggingface/transformers/issues/22822#issuecomment-1514096667) you might want to move the cloned tensors to cpu - i.e. `v.clone().cpu()` as you are likely not to have enough memory of gpu
@stas00 when I cloned tensors to CPU, The saved model is only 400M, my code:
```
def safe_save_model_for_hf_trainer(trainer: transformers.Trainer,
output_dir: str):
"""Collects the state dict and dump to disk."""
state_dict = trainer.model.state_dict()
if trainer.args.should_save:
cpu_state_dict = {
key: value.cpu()
for key, value in state_dict.items()
}
del state_dict
trainer._save(output_dir, state_dict=cpu_state_dict) # noqa
```<|||||>please reread the comment you quoted - it says `clone` and then optionally move to cpu. Your code is missing the key operation.
<|||||>> please reread the comment you quoted - it says `clone` and then optionally move to cpu. Your code is missing the key operation.
I am using the following code, but I still cannot save the model properlyοΌcode:
```
def safe_save_model_for_hf_trainer_clone(trainer: transformers.Trainer,
output_dir: str):
"""Collects the state dict and dump to disk."""
state_dict = trainer.model.state_dict()
if trainer.args.should_save:
cpu_state_dict = type(state_dict)(
{k: v.cpu().clone()
for k,
v in state_dict.items()})
del state_dict
trainer._save(output_dir, state_dict=cpu_state_dict) # noqa
```
or
```
def safe_save_model_for_hf_trainer_clone(trainer: transformers.Trainer,
output_dir: str):
"""Collects the state dict and dump to disk."""
state_dict = trainer.model.state_dict()
if trainer.args.should_save:
cpu_state_dict = type(state_dict)(
{k: v.clone().cpu()
for k,
v in state_dict.items()})
del state_dict
trainer._save(output_dir, state_dict=cpu_state_dict) # noqa
```
the result:
<img width="508" alt="image" src="https://user-images.githubusercontent.com/12690488/234154680-d56eef0a-6358-41bd-b1b1-b574c1c458b2.png">
<img width="525" alt="image" src="https://user-images.githubusercontent.com/12690488/234154727-87d45ac5-6df5-44be-a7a6-592b44aa0abc.png">
<|||||>@lw3259111 this problem seems only occurs with deepspeed Zero1/2, and a large model saved with shared checkpoints. Your setting and model may not have this issue.<|||||>What @ArvinZhuang said.
What model size are you expecting? Let's do a simple math How many parameters is it and what regime is it trained in - half precision or fp32?
Reverse engineering it's probably one of these 2:
- fp16/bf16: `408/2` 204M params.
- fp32: `408/4` 102M params<|||||>Hi @stas00 is this issue fixed? I saw deepspeed has updated the utility function.<|||||>Hi @ArvinZhuang
Yes, they did, but we can't start using the new util function in `transformers` until a new release is made by Deepspeed.
So in this case we are waiting for 0.9.3 release by Deepspeed.
-----------------
Meanwhile, Accelerate took over HF Trainer internals, therefore I'm going to pass this issue to @pacman100 who is in charge of Deepspeed integration in Accelerate.
Sourab, https://github.com/microsoft/DeepSpeed/pull/3348 introduced this new util `clone_tensors_for_torch_save` which needs to be run before saving `state_dict` when using deepspeed zero3.
I hope it's OK that I have re-assigned this ticket to you. I'm here to help, so please don't hesitate to reach out if you have any questions. <|||||>Hi @pacman100 , any updates on this?
Edit: if you can point me in the right direction, i'd be happy to make a PR!<|||||>Hello, the above PRs should fix this |
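For reference, a minimal sketch of how that utility can be used around saving, assuming it is exposed as `deepspeed.checkpoint.utils.clone_tensors_for_torch_save` (the Trainer/Accelerate integration may end up wiring it differently):
```python
from deepspeed.checkpoint.utils import clone_tensors_for_torch_save


def save_slim_checkpoint(engine, output_dir):
    """Save the HF model wrapped by a DeepSpeed ZeRO-1/2 engine without storage bloat."""
    # Re-clone each tensor so it keeps only its own storage instead of a view into
    # DeepSpeed's large flattened buffers, then save (sharded) as usual.
    state_dict = clone_tensors_for_torch_save(engine.module.state_dict())
    engine.module.save_pretrained(output_dir, state_dict=state_dict)
```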
transformers | 22,821 | open | set fsdp and bf16 don't save memory | ### System Info
- `transformers` version: 4.28.0
- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script? yes
- Using distributed or parallel set-up in script? yes
### Who can help?
@ArthurZucker @sgu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. download the dataset
```
lang = "Python"
import subprocess
subprocess.call(["wget", f"https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/{lang}.zip"])
subprocess.call(["unzip", f"/content/{lang}.zip"])
!mkdir "log"
log_dir = "/content/log"
!mkdir "data"
data_dir = "/content/data"
!mkdir "model"
model_dir = "/content/model"
!mkdir "tokenizer"
tokenizer_dir = "/content/tokenizer"
```
2. data preprocess
```
import os
import json
import torch
from pathlib import Path
from transformers import (Trainer,
                          pipeline,
                          RobertaConfig,
                          TrainingArguments,
                          RobertaForMaskedLM,
                          RobertaTokenizerFast,
                          LineByLineTextDataset,
                          DataCollatorForLanguageModeling)
from tokenizers import ByteLevelBPETokenizer
from tokenizers.processors import BertProcessing
from tokenizers.implementations import ByteLevelBPETokenizer


def prepare_text(dir_path):
    for path in os.listdir(dir_path):
        os.system(f"gunzip -k {dir_path}/{path}")
    texts = ""
    for path in os.listdir(dir_path):
        if path.endswith(".jsonl"):
            with open(dir_path + "/" + path, 'r') as f:
                sample_file = f.readlines()
            for sample in sample_file:
                obj = json.loads(sample)
                texts += obj["original_string"].replace("\n", "").replace("\t", "") + "\n"
    return texts


train1_texts = prepare_text(f"/content/{lang}/final/jsonl/train")
train2_texts = prepare_text(f"/content/{lang}/final/jsonl/valid")
train_texts = train1_texts + "\n" + train2_texts
valid_texts = prepare_text(f"/content/{lang}/final/jsonl/test")

for path, text in zip(["train_texts.txt", "valid_texts.txt"],
                      [train_texts, valid_texts]):
    with open(f"{data_dir}/{path}","w") as f:
        f.write(text)
```
3. Train a tokenizer
```
paths = [str(x) for x in Path(f"{data_dir}/").glob("**/*.txt")]
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=paths, vocab_size=52_000, min_frequency=2, special_tokens=[
    "<s>",
    "<pad>",
    "</s>",
    "<unk>",
    "<mask>",
])
tokenizer.save_model(tokenizer_dir)

tokenizer = ByteLevelBPETokenizer(
    "tokenizer/vocab.json",
    "tokenizer/merges.txt",
)
tokenizer._tokenizer.post_processor = BertProcessing(
    ("</s>", tokenizer.token_to_id("</s>")),
    ("<s>", tokenizer.token_to_id("<s>")),
)
tokenizer.enable_truncation(max_length=512)
```
4. Build model
```
config = RobertaConfig(
    vocab_size=52_000,
    max_position_embeddings=514,
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1,
)
tokenizer = RobertaTokenizerFast.from_pretrained(tokenizer_dir, max_len=512)
model = RobertaForMaskedLM(config=config)
model.num_parameters()

train_dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path=f"{data_dir}/train_texts.txt",
    block_size=128,
)
test_dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path=f"{data_dir}/valid_texts.txt",
    block_size=128,
)
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
training_args = TrainingArguments(
    output_dir=model_dir,
    overwrite_output_dir=True,
    num_train_epochs=4,
    per_gpu_train_batch_size=64,
    save_steps=5000,
    do_eval=True,
    logging_dir=log_dir,
)
trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
    eval_dataset=test_dataset
)
trainer.train()
trainer.save_model(model_dir)
tokenizer.save_pretrained(tokenizer_dir)
```
### Expected behavior
before set fsdp and bf16:
```
training_args = TrainingArguments(
    output_dir=model_dir,
    overwrite_output_dir=True,
    num_train_epochs=4,
    per_gpu_train_batch_size=64,
    save_steps=5000,
    do_eval=True,
    logging_dir=log_dir,
)
```
<img width="417" alt="Snipaste_2023-04-18_15-42-22" src="https://user-images.githubusercontent.com/41561936/232707188-2579965b-92fd-4ba6-87de-b82ca948ec54.png">
after set fsdp and bf16:
```
training_args = TrainingArguments(
    output_dir=model_dir,
    overwrite_output_dir=True,
    num_train_epochs=4,
    per_gpu_train_batch_size=64,
    save_steps=5000,
    do_eval=True,
    logging_dir=log_dir,
    fsdp=True,
    bf16=True,
)
```
<img width="415" alt="Snipaste_2023-04-18_15-42-45" src="https://user-images.githubusercontent.com/41561936/232707483-2b89c658-172d-4a23-a7fc-fe40cd1dfe83.png">
The memory usage is not much different and does not achieve the desired effect. Why?
I also try to set `per_gpu_train_batch_size=4` when `fsdp=True, bf16=True`:
<img width="426" alt="Snipaste_2023-04-18_15-49-23" src="https://user-images.githubusercontent.com/41561936/232708818-efa676d9-4e6b-440a-b0e0-e66e54026da5.png">
Compared with the results of the previous set of experiments, the increase of memory usage is much greater than the increase of batch size. Why?
| 04-18-2023 07:53:15 | 04-18-2023 07:53:15 | cc @younesbelkada <|||||>cc @pacman100 as I am not really familiar with FSDP + Trainer yet<|||||>Hello @skye95git, you are using FSDP incorrectly, just setting `fsdp=True` won't reduce memory usage. Please refer:
1. the docs here if you want to use Trainer's arguments: https://huggingface.co/docs/transformers/main_classes/trainer#pytorch-fully-sharded-data-parallel
2. the docs here if you want to use the `accelerate launch` with trainer: https://huggingface.co/docs/transformers/main/en/main_classes/trainer#using-accelerate-launcher-with-trainer
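For reference, a rough sketch of option 1 (Trainer arguments). The option names and values are illustrative and depend on the transformers version, so check the linked docs; FSDP also needs a distributed launch (`torchrun` / `accelerate launch`) across GPUs to actually shard anything:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir=model_dir,
    num_train_epochs=4,
    per_device_train_batch_size=64,
    bf16=True,
    # A sharding strategy plus an auto-wrap policy, not just `fsdp=True`
    fsdp="full_shard auto_wrap",
    # Assumption: RobertaLayer is the transformer block class of the model trained above
    fsdp_config={"fsdp_transformer_layer_cls_to_wrap": "RobertaLayer"},
)
```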
<|||||>> Hello @skye95git, you are using FSDP incorrectly, just setting `fsdp=True` won't reduce memory usage. Please refer:
>
> 1. the docs here if you want to use Trainer's arguments: https://huggingface.co/docs/transformers/main_classes/trainer#pytorch-fully-sharded-data-parallel
> 2. the docs here if you want to use the `accelerate launch` with trainer: https://huggingface.co/docs/transformers/main/en/main_classes/trainer#using-accelerate-launcher-with-trainer
Hi @pacman100 thanks for the reply here. However, from https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/trainer.py#L1526C1-L1526C39
it seems that FSDP is only enabled when XLA is used. Is this correct? If `fsdp_config['xla']` is `None`, how is FSDP used in this version?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,820 | closed | Add MobileViTv2 | # What does this PR do?
Adds the MobileViTv2 model into transformers library (PS: Work in Progress)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # ([issue](https://github.com/huggingface/transformers/issues/22570))
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/huggingface/transformers/issues/22570
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 04-18-2023 07:09:56 | 04-18-2023 07:09:56 | @amyeroberts please note that this is still a work in progress. I'll let you know when it is ready for your review.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts , this is now ready for your review.<|||||>@amyeroberts , any updates?<|||||>Hi @amyeroberts, thanks for your review. I have applied the suggestions and pushed the updated code.<|||||>@amyeroberts, thanks for your feedback and I have now applied the suggested changes. |
transformers | 22,819 | closed | Include decoder_attention_mask in T5 model inputs | # What does this PR do?
This PR includes decoder_attention_mask as an argument in the `prepare_inputs_for_generation` function, helping enable the use of custom attention masks in the decoder.
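For illustration, a hedged sketch of the usage this enables (the tensors are placeholders built by the caller; the point is that the kwarg is no longer dropped on its way to the decoder):
```python
# Assumes `model` is a T5 seq2seq model and the tensors below were prepared by the user.
outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_mask,
    decoder_input_ids=decoder_input_ids,
    decoder_attention_mask=decoder_attention_mask,  # forwarded via prepare_inputs_for_generation
)
```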
@gante | 04-18-2023 06:48:20 | 04-18-2023 06:48:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,818 | closed | How to use Distill-BERT with different datasets? | ### System Info
- `transformers` version: 4.11.3
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada @sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I recently read [this](https://huggingface.co/docs/transformers/quicktour#train-with-tensorflow:~:text=The%20most%20important%20thing%20to%20remember%20is%20you%20need%20to%20instantiate%20a%20tokenizer%20with%20the%20same%20model%20name%20to%20ensure%20you%E2%80%99re%20using%20the%20same%20tokenization%20rules%20a%20model%20was%20pretrained%20with.) and was wondering how to use distill-BERT (which is pre-trained with imdb dataset) with a different dataset (for eg. [this](https://huggingface.co/datasets/yhavinga/imdb_dutch) dataset)?
### Expected behavior
Distill-BERT should work with different datasets. | 04-18-2023 06:22:43 | 04-18-2023 06:22:43 | Closing this issue as it's a repeat of #22817 |
transformers | 22,817 | closed | How to use distill-BERT with different datasets? | ### System Info
- `transformers` version: 4.11.3
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I recently read [this](https://huggingface.co/docs/transformers/quicktour#train-with-tensorflow:~:text=The%20most%20important%20thing%20to%20remember%20is%20you%20need%20to%20instantiate%20a%20tokenizer%20with%20the%20same%20model%20name%20to%20ensure%20you%E2%80%99re%20using%20the%20same%20tokenization%20rules%20a%20model%20was%20pretrained%20with.) and was wondering how to use distill-BERT (which is pre-trained with imdb dataset) with a different dataset (for eg. [this](https://huggingface.co/datasets/yhavinga/imdb_dutch) dataset)?
### Expected behavior
Distill-BERT should work with different datasets. | 04-18-2023 06:18:39 | 04-18-2023 06:18:39 | Hi, @sauravtii. Thanks for raising an issue!
In general, this is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.
I recommend looking at the [NLP course](https://huggingface.co/learn/nlp-course/) which will take you through using and training tokenizers, datasets, and models. <|||||>@amyeroberts Thanks for your response. I was able to use Distil-BERT with different datasets.
Now, I am trying out this [tutorial](https://flower.dev/docs/quickstart-huggingface.html) which basically trains distil-BERT with IMDB dataset (very similar to this [tutorial](https://huggingface.co/docs/transformers/main/tasks/sequence_classification)). But I don't know why my accuracy isn't increasing even after training for a significant amount of time and also by using the entire dataset. Below I have attached `client.py` file:
`client.py`:
```
from collections import OrderedDict
import warnings

import flwr as fl
import torch
import numpy as np
import random
from torch.utils.data import DataLoader

from datasets import load_dataset, load_metric
from transformers import AutoTokenizer, DataCollatorWithPadding
from transformers import AutoModelForSequenceClassification
from transformers import AdamW

warnings.filterwarnings("ignore", category=UserWarning)
DEVICE = "cuda:1"
CHECKPOINT = "distilbert-base-uncased"  # transformer model checkpoint


def load_data():
    """Load IMDB data (training and eval)"""
    raw_datasets = load_dataset("imdb")
    raw_datasets = raw_datasets.shuffle(seed=42)
    # remove unnecessary data split
    del raw_datasets["unsupervised"]
    tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)

    def tokenize_function(examples):
        return tokenizer(examples["text"], truncation=True)

    tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
    tokenized_datasets = tokenized_datasets.remove_columns("text")
    tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
    data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
    trainloader = DataLoader(
        tokenized_datasets["train"],
        shuffle=True,
        batch_size=32,
        collate_fn=data_collator,
    )
    testloader = DataLoader(
        tokenized_datasets["test"], batch_size=32, collate_fn=data_collator
    )
    return trainloader, testloader


def train(net, trainloader, epochs):
    optimizer = AdamW(net.parameters(), lr=5e-5)
    net.train()
    for i in range(epochs):
        print("Epoch: ", i+1)
        j = 1
        print("####################### The length of the trainloader is: ", len(trainloader))
        for batch in trainloader:
            print("####################### The batch number is: ", j)
            batch = {k: v.to(DEVICE) for k, v in batch.items()}
            outputs = net(**batch)
            loss = outputs.loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
            j += 1


def test(net, testloader):
    metric = load_metric("accuracy")
    loss = 0
    net.eval()
    for batch in testloader:
        batch = {k: v.to(DEVICE) for k, v in batch.items()}
        with torch.no_grad():
            outputs = net(**batch)
        logits = outputs.logits
        loss += outputs.loss.item()
        predictions = torch.argmax(logits, dim=-1)
        metric.add_batch(predictions=predictions, references=batch["labels"])
    loss /= len(testloader.dataset)
    accuracy = metric.compute()["accuracy"]
    return loss, accuracy


def main():
    net = AutoModelForSequenceClassification.from_pretrained(
        CHECKPOINT, num_labels=2
    ).to(DEVICE)
    trainloader, testloader = load_data()

    # Flower client
    class IMDBClient(fl.client.NumPyClient):
        def get_parameters(self, config):
            return [val.cpu().numpy() for _, val in net.state_dict().items()]

        def set_parameters(self, parameters):
            params_dict = zip(net.state_dict().keys(), parameters)
            state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict})
            net.load_state_dict(state_dict, strict=True)

        def fit(self, parameters, config):
            self.set_parameters(parameters)
            print("Training Started...")
            train(net, trainloader, epochs=1)
            print("Training Finished.")
            return self.get_parameters(config={}), len(trainloader), {}

        def evaluate(self, parameters, config):
            self.set_parameters(parameters)
            loss, accuracy = test(net, testloader)
            print({"loss": float(loss), "accuracy": float(accuracy)})
            return float(loss), len(testloader), {"loss": float(loss), "accuracy": float(accuracy)}

    # Start client
    fl.client.start_numpy_client(server_address="localhost:5040", client=IMDBClient())


if __name__ == "__main__":
    main()
```
Can I get any help, please?<|||||>Hi @sauravtii, glad to hear you were able to use a different dataset :)
As mentioned above, this is really a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.
As a side note, training time and performance is all relative. To help people help you in the forum, it's best to give as much information as possible e.g. how long the model was training for, logs of the accuracy observed and the behaviour you expect. In the shared script, it looks like the model is only training for a single epoch - I would start with increasing this first. <|||||>@amyeroberts Thanks for your reponse. I tried searching for the answer to my question in the forums but wasn't able to, therefore I would really appreciate if you can provide me the link to the answer (if you find one in the forums).
Also, I have trained the model for a large number of epochs (ranging from 500-1000), and the one mentioned in the script is just for the sake of an example :)<|||||>@sauravtii I don't know if there's an answer in the forums. What I'm suggesting is you post in the forums with your question and people in the community will be able to discuss with you there. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,816 | closed | Name Error: "Partial State" is not defined | # Partial State is not defined
- Your recent release 4.29.0.dev0 has an issue: the function or method "Partial State" is not defined. Today I am not able to train my model. I downloaded 4.28.0 to resolve this issue. Can you kindly check ASAP?
- This error I am getting in the "Training arguments" method.
- The training arguments script does not define or import the "Partial State" method or function.
# Solution:
- For now, install the previous stable version of transformers.
```pip install transformers==4.28.0``` | 04-18-2023 05:30:19 | 04-18-2023 05:30:19 | cc @muellerzr <|||||>@RAravindDS Thanks for reporting. I suspect the issue is coming from the version of accelerate in your environment. Could you:
* Share the running environment info: copy-paste the output from running `transformers-cli env` in your terminal
THEN
* Upgrade accelerate using `pip install --upgrade accelerate`
* Retry<|||||>As @amyeroberts mentions, please try following those steps. I'll also look at changing the min Accelerate needed/add a check.<|||||>@amyeroberts I ran the code on Colab, and while training the LLM (LMV3), I got the error, Then I downloaded the previous version of the transformer, and it worked fine.Β <|||||>@RAravindDS Yes, this is because the `PartialState` import was added as a dependency on the transformers development branch yesterday. `PartialState` was added in the 0.17.0 release in accelerate, and so for the development branch of transformers, accelerate >= 0.17.0 is required.
Downgrading the transformers version removes the code which is importing `PartialState`. <|||||>I am using the following version of transformer, datasets and huggingface_hub.

I am running into the following error:
```sh
NameError: name 'PartialState' is not defined.
```
How to resolve this issue to work with my versions of the transformer, datasets and huggingface_hub ?<|||||>@gli-mrunal please do `pip install git+https://github.com/huggingface/accelerate` to install the dev version, or `pip install accelerate -U` if you are not using multi-GPUs (such as in Colab). <|||||>
<|||||>@gli-mrunal sorry for the typo, there are two c's for accelerate :)<|||||>> 
Bro, you don't need to worry too much. Please downgrade the version. They are having stable version. Don't stress too much. Previous version working as usual. We changed all our requirements today. Hectic process :( <|||||>True. `!pip install transformers==4.28.0` for previous version is easier solution. The newer version runs into dependency issues. <|||||>I tried to run using the following training arguments in Colab.
`training_args = TrainingArguments(
output_dir=*,
num_train_epochs=num_train_epochs,
learning_rate=learning_rate,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
weight_decay=weight_decay,
evaluation_strategy="epoch",
disable_tqdm=False,
logging_steps=logging_steps,
push_to_hub=False,
log_level="error",
save_strategy="epoch",
load_best_model_at_end=True,
)`
Then the following error occurred.
`NameError: name 'PartialState' is not defined`
I attempted all of the above advice, but this error wasn't resolved.
Please tell me how to fix this error.<|||||>Hi @creek411 install version 4.28.0 of transformers by running this code `!pip install transformers==4.28.0`. Then restart and run all the code( if ur using colab).<|||||>Thank you for your reply.
I tried to install 4.28.0 and run the code. However, this error recurred.
In this code, I install and use `transformers datasets`.
So should I install `transformers datasets` of previsous version?<|||||>@creek411 the solution would be to do `pip install accelerate` (and as now we have a release, it works OOTB with the normal pypi install), however the fact you have the error means you still probably are installing from dev and there's some cache working in there. You can try `pip uninstall transformers -y`, run your code, make sure it fails because `transformers` isn't installed, then install `transformers` again, either 4.28.0 or 4.29.0 and do `pip install accelerate` as well<|||||>I attempted to do your solution and could avoid the error.
I appreciate for your advise.<|||||>> @creek411 the solution would be to do `pip install accelerate` (and as now we have a release, it works OOTB with the normal pypi install), however the fact you have the error means you still probably are installing from dev and there's some cache working in there. You can try `pip uninstall transformers -y`, run your code, make sure it fails because `transformers` isn't installed, then install `transformers` again, either 4.28.0 or 4.29.0 and do `pip install accelerate` as well
I get the same error with
```
Requirement already satisfied: accelerate in /usr/local/lib/python3.10/dist-packages (0.19.0)
Requirement already satisfied: transformers in /usr/local/lib/python3.10/dist-packages (4.29.1)
```
on Colab
I had to install accelerate manually.
`!pip install torch "argilla" datasets accelerate transformers setfit`<|||||>I'm getting the same error while using the Transfor4rec library from Nvidia. All the solutions proffered here didn't work for me.
I tried to provide training argument here
"train_args = T4RecTrainingArguments(local_rank = -1,..."<|||||>Esto me funciono en colab, pero es importante reiniciar el entorno de ejecuciΓ³n
!pip uninstall -y -r transformers accelerate
!pip install transformers==4.29.0
!pip install git+https://github.com/huggingface/accelerate<|||||>> This worked for me in colab, but it is important to restart the execution environment
>
> !pip uninstall -y -r transformers accelerate !pip install transformers==4.29.0 !pip install git+https://github.com/huggingface/accelerate
Gracias amigo<|||||>I came from the same error, but the previous is likeβ¦β¦Did this mean it's not set to "cuda" (I run my code with GPU
''' python
File ~/miniconda3/lib/python3.8/site-packages/transformers/training_args.py:1333, in TrainingArguments.__post_init__(self)
1327 if version.parse(version.parse(torch.__version__).base_version) == version.parse("2.0.0") and self.fp16:
1328 raise ValueError("--optim adamw_torch_fused with --fp16 requires PyTorch>2.0")
1330 if (
1331 self.framework == "pt"
1332 and is_torch_available()
-> 1333 and (self.device.type != "cuda")
1334 and (get_xla_device_type(self.device) != "GPU")
1335 and (self.fp16 or self.fp16_full_eval)
1336 ):
1337 raise ValueError(
1338 "FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation"
1339 " (`--fp16_full_eval`) can only be used on CUDA devices."
1340 )
1342 if (
1343 self.framework == "pt"
1344 and is_torch_available()
(...)
1349 and (self.bf16 or self.bf16_full_eval)
1350 ):
File ~/miniconda3/lib/python3.8/site-packages/transformers/training_args.py:1697, in TrainingArguments.device(self)
1693 """
1694 The device used by this process.
1695 """
1696 requires_backends(self, ["torch"])
-> 1697 return self._setup_devices
File ~/miniconda3/lib/python3.8/site-packages/transformers/utils/generic.py:54, in cached_property.__get__(self, obj, objtype)
52 cached = getattr(obj, attr, None)
53 if cached is None:
---> 54 cached = self.fget(obj)
55 setattr(obj, attr, cached)
56 return cached
File ~/miniconda3/lib/python3.8/site-packages/transformers/training_args.py:1631, in TrainingArguments._setup_devices(self)
1629 self._n_gpu = 1
1630 else:
-> 1631 self.distributed_state = PartialState(backend=self.ddp_backend)
1632 self._n_gpu = 1
1633 if not is_sagemaker_mp_enabled():
NameError: name 'PartialState' is not defined
''' <|||||>For those having issues, can you tell me more about if you are working in Jupyter, Colab, or in regular Python? Again the solution hasn't changed: in the correct environment you need to make sure that `accelerate` is installed and viewable. To test this in your environment you can try importing it `import accelerate`. If it fails, it's not installed correctly. <|||||>I'm using Jupyter (as well as the VS Code notebooks extension, which is essentially the same) on Python 3.11 with no venv and the interpreter provided by `asdf`.
On re-test, `accelerate` 0.19 _did_ work with `transformers` 4.29, as it turned out; I'm just not accustomed to notebooks and forgot that I needed to restart the kernel to freshen the dependencies. Classic n00b mistake.
I'm still a bit mystified as to why I had an older `accelerate`, as I had created my entire Python environment on the same day I commented. Possibly, it was a transitive dependency of something else I'd already installed.<|||||>Please also remember to restart the kernel ( Given you are using Colab/Jupyter ) ( I know it is silly but yes ) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,815 | closed | Mark auto models as important | # What does this PR do?
This PR marks the auto model as important so that the corresponding tests are not skipped. This is what caused a break on main after #22698 was merged.
The change in the Korean doc file is a change of line ending, which is currently making it impossible to do anything on main (the remote branch as CRLF line endings but GitHub really wants LF, this shouldn't be possible but there was a bug when merging the PR touching that file). | 04-17-2023 19:08:09 | 04-17-2023 19:08:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22815). All of your documentation changes will be reflected on that endpoint. |
transformers | 22,814 | closed | Use code on the Hub from another repo | # What does this PR do?
Continuation of #22698 with tests fixed (coming soon). | 04-17-2023 18:24:54 | 04-17-2023 18:24:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,813 | closed | Revert "Use code on the Hub from another repo" | Reverts huggingface/transformers#22698 as it broke three tests on main. | 04-17-2023 18:22:03 | 04-17-2023 18:22:03 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22813). All of your documentation changes will be reflected on that endpoint. |
transformers | 22,812 | closed | Ignore, testing CI | Disregard | 04-17-2023 18:18:15 | 04-17-2023 18:18:15 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22812). All of your documentation changes will be reflected on that endpoint. |
transformers | 22,811 | closed | Simplify update metadata job | # What does this PR do?
This should fix the issue with the update metadata job on main. Simplify the job execution by removing the cached and just doing a pip install dev. Since it's running on main, we don't really care about the 2-3 minutes the cache would make us gain. | 04-17-2023 17:06:48 | 04-17-2023 17:06:48 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,810 | closed | Add ALiBi Support for GPTNeoX - GPTNeoXALiBi | # What does this PR do?
The GPT NeoX library supports training with ALiBi positional embeddings; however, `GPTNeoXModel` only supports rotary embeddings. This PR creates a new `GPTNeoXALiBi` model that uses ALiBi positional embeddings.
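For context, a minimal sketch of how ALiBi biases are usually built (head-dependent slopes times key positions, added to the attention scores before the softmax). The exact integration in this PR may differ, and the slope formula below assumes a power-of-two number of heads:
```python
import torch

def build_alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # Geometric slopes from the ALiBi paper: 2^(-8/n), 2^(-16/n), ..., 2^(-8)
    slopes = torch.tensor([2.0 ** (-8.0 * (i + 1) / num_heads) for i in range(num_heads)])
    # Key positions 0..seq_len-1; since softmax is shift-invariant per query row,
    # slope * key_position is equivalent to slope * (key_position - query_position)
    positions = torch.arange(seq_len, dtype=torch.float32).view(1, 1, seq_len)
    return slopes.view(num_heads, 1, 1) * positions  # shape: (num_heads, 1, seq_len)
```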
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
- text models: @ArthurZucker and @younesbelkada
- Documentation: @sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 04-17-2023 16:44:51 | 04-17-2023 16:44:51 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22810). All of your documentation changes will be reflected on that endpoint.<|||||>Hi, @ArthurZucker @younesbelkada @sgugger ,
Please can you help review this PR. Thank you very much!<|||||>Hey! Given how similar this is to the already existing model, I would recommend sharing this on the hub following this [tutorial!](https://huggingface.co/docs/transformers/custom_models) Would that work alright for you? |
transformers | 22,809 | closed | Support identity normalizer in SentencePiece model | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
SentencePiece can train a model with a specified normalizer (for example `normalization_rule_name="nfkc"`).
https://github.com/google/sentencepiece/blob/master/doc/normalization.md
However, no normalization is done with `normalization_rule_name="identity"`, and `proto.normalizer_spec.precompiled_charsmap` in the SentencePiece model is empty.
Loading this model with `AlbertTokenizerFast.from_pretrained` occurres the following error:
```
>>> tokenizer = AlbertTokenizerFast.from_pretrained('.')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/nminami/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1804, in from_pretrained
return cls._from_pretrained(
File "/Users/nminami/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1959, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/Users/nminami/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/models/albert/tokenization_albert_fast.py", line 148, in __init__
super().__init__(
File "/Users/nminami/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 114, in __init__
fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
File "/Users/nminami/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/convert_slow_tokenizer.py", line 1162, in convert_slow_tokenizer
return converter_class(transformer_tokenizer).converted()
File "/Users/nminami/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/convert_slow_tokenizer.py", line 503, in converted
tokenizer.normalizer = self.normalizer(self.proto)
File "/Users/nminami/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/convert_slow_tokenizer.py", line 535, in normalizer
list_normalizers.append(normalizers.Precompiled(precompiled_charsmap))
Exception: Error while attempting to build Precompiled normalizer: Cannot parse precompiled_charsmap
```
This error is caused by passing empty bytes to `normalizers.Precompiled`. So, this PR prevents the problem to check `proto.normalizer_spec.name` before passing empty bytes.
## How to reproduce this problem
```
OS/Arch: macOS/Apple Silicon
Python 3.10.4 (main, Jun 26 2022, 22:29:49) [Clang 13.0.0 (clang-1300.0.27.3)] on darwin
protobuf==3.19.0
sentencepiece==0.1.97
transformers==4.28.1
```
Save SentencePiece model using [python/test/botchan.txt](https://github.com/google/sentencepiece/blob/master/python/test/botchan.txt).
```python
import sentencepiece as spm
spm.SentencePieceTrainer.train(input='python/test/botchan.txt', model_prefix='spiece', vocab_size=1000, normalization_rule_name='identity')
```
Read SentencePiece model using `AlbertTokenizerFast.from_pretrained`.
```python
from transformers import AlbertTokenizerFast
tokenizer = AlbertTokenizerFast.from_pretrained('.')
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- I think this is a bug fix. So, no documentation updates required.
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 04-17-2023 16:15:50 | 04-17-2023 16:15:50 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22809). All of your documentation changes will be reflected on that endpoint.<|||||>cc @ArthurZucker <|||||>> cc @Narsil if I am missing something (maybe the normalizers in rust should support identity type)
`normalizer: None` should do nothing.
Most likely a case not handled by our current code, we probably need to check that the spec is set to identity, and not even attempt to create the `precompiled_charsmap` (since it's invalid and we already have a mechanism for identity)<|||||>@ArthurZucker Thank you for reviewing!
I fixed all issues related to empty `precompiled_charsmap` referring to following code.
https://github.com/huggingface/transformers/blob/dc67da01829090ec92dfc24653242cf3f56d1a01/src/transformers/convert_slow_tokenizer.py#L625-L628<|||||>The current modification LGTM. I'm not sure why the test fail, maybe rebase ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing in favor of #24618 |
transformers | 22,808 | closed | Fix squeeze into torch 1.x compatible form in llama model | # What does this PR do?
Rewrite the squeeze into torch 1.x compatible form as squeeze accepting tuple as arg is a 2.0 only feature https://pytorch.org/docs/stable/generated/torch.squeeze.html, introduced in https://github.com/huggingface/transformers/pull/22785.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/22807
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 04-17-2023 16:06:44 | 04-17-2023 16:06:44 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,807 | closed | New Crash Using Llama | ### System Info
Seeing the following crash starting today when loading via accelerate.
I think maybe related to https://github.com/huggingface/transformers/pull/22785
CC @fpgaminer @gante @amyeroberts
```
File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/llama/modeling_llama.py", line 205, in forward
query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/llama/modeling_llama.py", line 135, in apply_rotary_pos_emb
cos = cos.squeeze((0, 1)) # [seq_len, dim]
TypeError: squeeze() received an invalid combination of arguments - got (tuple), but expected one of:
* ()
didn't match because some of the arguments have invalid types: (!tuple of (int, int)!)
* (int dim)
didn't match because some of the arguments have invalid types: (!tuple of (int, int)!)
* (name dim)
didn't match because some of the arguments have invalid types: (!tuple of (int, int)!)
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Load Llama on GPU with accelerate and try to generate text.
### Expected behavior
Text is generated | 04-17-2023 15:50:56 | 04-17-2023 15:50:56 | @sam-h-bean - Yes, sorry, you're right about the issue and the cause. We're just opening up a PR now to resolve. Thanks for reporting so quickly! <|||||>> @sam-h-bean - Yes, sorry, you're right about the issue and the cause. We're just opening up a PR now to resolve. Thanks for reporting so quickly!
My fault, I didn't see the note in the documentation that tuple inputs to `squeeze` is a new feature of PyTorch 2.0. If you'd like I can open a pull request to fix by replacing with two `squeeze`s.<|||||>No worries @fpgaminer - I should have caught it in the review. @DyeKuu is opening a PR as we type :) |
transformers | 22,806 | closed | 🌐 [i18n-KO] Translated `serialization.mdx` to Korean | <!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" -->
# What does this PR do?
Translated the `serialization.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- This leaves a record in the main issue! If you practiced on the PseudoLab repo, please remove this part. Thank you! :smile: -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
<!-- Pre-submission checklist: it may be better to wrap any PseudoLab-only checklist items in <details>. -->
## Who can review?
<!-- 1. Only reveal the comment below, which asks the PseudoLab team for a review, once the translation is complete! -->
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
<!-- 2. Only reveal the comment below, which asks the Hugging Face staff for a review, once the review with the PseudoLab team is finished! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR? | 04-17-2023 14:47:55 | 04-17-2023 14:47:55 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd<|||||>@sgugger, @ArthurZucker, @eunseojo May you please review this PR? |
transformers | 22,805 | closed | 🌐 [i18n-KO] Translated `tasks/translation.mdx` to Korean | <!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" -->
# What does this PR do?
Translated the `tasks/translation.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- This leaves a record in the main issue! If you practiced on the PseudoLab repo, please remove this part. Thank you! :smile: -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
<!-- Pre-submission checklist: it may be better to wrap any PseudoLab-only checklist items in <details>. -->
## Who can review?
<!-- Only reveal the comment below, which asks the Hugging Face staff for a review, once the review with the PseudoLab team is finished! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR? | 04-17-2023 14:31:20 | 04-17-2023 14:31:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,804 | closed | Fix sneaky torch dependency in TF example | Thanks to @muellerzr for uncovering this one - the TF image classification example sneakily depended on `torch` because it used `MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING` (which is a dummy if `torch` is unavailable) and called one of the `TrainingArguments` properties that requires `torch`. Made a quick PR to fix it! | 04-17-2023 11:48:20 | 04-17-2023 11:48:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,803 | closed | Skip `test_disk_offload` for `WhisperModelTest` | # What does this PR do?
Since #22486, `WhisperModelTest.test_disk_offload` to fail. I just blindly skip this test and guess it is ok.....?
| 04-17-2023 11:44:07 | 04-17-2023 11:44:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I am not able to use `openai/whisper-base` as I get GPU OOM. When I tried to use `openai/whisper-tiny.en`, I have a hard time to change the parameters to get a working input dict for the model.
If I just changed the values in model tester to get a larger model (but random), I get different kinds of error like `IndexError: list index out of range` in `dispatch_model` or `RuntimeError: Tensor on device meta is not on the expected device cuda:0!` in model forward.
But even if I revert the change in #22486, the above attempts to use larger (fake) models still have the same issue. I guess we will have to look into this.<|||||>Convert to draft to avoid being merged.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,802 | closed | fssdasdf | null | 04-17-2023 11:12:18 | 04-17-2023 11:12:18 | |
transformers | 22,801 | closed | Del model does not work with device_map!=None | ### System Info
- `transformers` version: 4.29.0.dev0
- Platform: Linux-4.18.0-348.7.1.el8_5.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.7
- Huggingface_hub version: 0.13.3
- Safetensors version: 0.3.0
- PyTorch version (GPU?): 2.1.0.dev20230411+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger @muellerzr
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
`del model` does not free the GPU memory if the model has been loaded with `device_map != None`.
```Python
import torch
from transformers import AutoModelForCausalLM, PreTrainedModel
import os
````
### Loading the model with device_map = None
```Python
model: PreTrainedModel = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path="EleutherAI/gpt-neo-125m",
    load_in_8bit=False,
    device_map=None,
    torch_dtype=None,
)
model = model.to("cuda")
torch.cuda.memory_allocated()
````
555601920
```Python
del model
torch.cuda.empty_cache()
torch.cuda.memory_allocated()
````
0 ✅
### Loading the model with device_map = Auto
```Python
model: PreTrainedModel = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path="EleutherAI/gpt-neo-125m",
    load_in_8bit=False,
    device_map="auto",
    torch_dtype=None,
)
torch.cuda.memory_allocated()
````
555601920
```Python
del model
torch.cuda.empty_cache()
torch.cuda.memory_allocated()
````
555077632 ❌
### Loading the model with device_map = {'': 0}
```Python
device_map = {"": int(os.environ.get("LOCAL_RANK") or 0)}
model: PreTrainedModel = AutoModelForCausalLM.from_pretrained(
pretrained_model_name_or_path="EleutherAI/gpt-neo-125m",
load_in_8bit=False,
device_map=device_map,
torch_dtype=None,
)
torch.cuda.memory_allocated()
````
555601920
```Python
del model
torch.cuda.empty_cache()
torch.cuda.memory_allocated()
````
555077632 ❌
### Rewriting models
```Python
for x in range(1,5):
    model: PreTrainedModel = AutoModelForCausalLM.from_pretrained(
        pretrained_model_name_or_path="EleutherAI/gpt-neo-125m",
        load_in_8bit=False,
        device_map=None,
        torch_dtype=None,
    )
    model = model.to("cuda")
    print(f"Iteration {x}: {torch.cuda.memory_allocated()}")
````
Iteration 1: 555601920
Iteration 2: 555601920
Iteration 3: 555601920
Iteration 4: 555601920
```Python
for x in range(1,5):
    model: PreTrainedModel = AutoModelForCausalLM.from_pretrained(
        pretrained_model_name_or_path="EleutherAI/gpt-neo-125m",
        load_in_8bit=False,
        device_map="auto",
        torch_dtype=None,
    )
    print(f"Iteration {x}: {torch.cuda.memory_allocated()}")
````
Iteration 1: 554553344
Iteration 2: 1107795968
Iteration 3: 1108058112
Iteration 4: 1109368832
### Using Garbage Collector
This workaround is useful to clean up the GPU memory, although it would be more appropriate to fix the deletion behavior itself. For now, it can be used as a way to avoid the memory leak.
```Python
model: PreTrainedModel = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path="EleutherAI/gpt-neo-125m",
    load_in_8bit=False,
    device_map="auto",
    torch_dtype=None,
)
del model
torch.cuda.empty_cache()
torch.cuda.memory_allocated()
````
555077632
```Python
import gc
torch.cuda.empty_cache()
gc.collect()
torch.cuda.empty_cache()
````
```Python
torch.cuda.memory_allocated()
````
0
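For convenience, the workaround above can be wrapped in a tiny helper (the function name is ours, not part of the library):
```python
import gc
import torch

def release_cuda_memory():
    """Call right after `del model` when the model was loaded with a device_map."""
    gc.collect()               # collect the references that `del` alone does not drop here
    torch.cuda.empty_cache()   # hand the freed blocks back to the CUDA driver

# usage:
# del model
# release_cuda_memory()
# torch.cuda.memory_allocated()  # -> 0
```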
### Expected behavior
The model should be deleted when calling 'del model'. This bug causes multiple issues. For example: if you want to evaluate multiple model checkpoints, the model is not correctly overwritten/deleted when loading the next one, causing a memory leak that eventually results in an OOM error.
| 04-17-2023 10:31:44 | 04-17-2023 10:31:44 | I'm afraid using `gc.collect()` is not a workaround but the only way around this that we know of. If you find a way to have Python directly reclaim the memory without calling it, we're completely game to have it merged. I suspect it all comes down to model being initialized on the meta device where we have to re-set the parameters afterward using [this function](https://github.com/huggingface/accelerate/blob/2106e87d585ae9a245c895c568fffeaa519dfb9a/src/accelerate/utils/modeling.py#L96) but not 100% sure.
Since there is a way to avoid this by just adding a line to your cleanup code, this is not high priority for us to investigate more.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,800 | closed | Fix `test_word_time_stamp_integration` for `Wav2Vec2ProcessorWithLMTest` | # What does this PR do?
Same as in #22474: caused by datasets version 2.10.1 -> 2.11, so just update the expected output values | 04-17-2023 10:16:33 | 04-17-2023 10:16:33 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,799 | closed | Can not import T5BiLDModel | ### System Info
I tried `from transformers.models.t5.modeling_t5 import T5BiLDModel`, but it doesn't work. I built the library from this repo. @ArthurZucker @younesbelkada
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers.models.t5.modeling_t5 import T5BiLDModel
### Expected behavior
It can import the model. | 04-17-2023 06:20:37 | 04-17-2023 06:20:37 | Hi @sufeidechabei, thanks for raising an issue.
So that we can best help you, could you follow the issue template and share information about the running environment (run `transformers-cli env` in your terminal and share what's printed out).
Could you elaborate on what you mean by ` I build the library from this repo`? Is this running from a fork of the repo or from code on the hub? <|||||>From the code on the hub @amyeroberts
<|||||>Here is the printed information: `ImportError: cannot import name 'T5BiLDModel' from 'transformers.models.t5.modeling_t5' (/nobackup/haozhang/venv/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py)`. I use Python 3.9.13 and PyTorch 2.0.<|||||>@sufeidechabei As requested, could you please share the information printed out when you run `transformers-cli env` in your terminal?
Could you also point to the code on the hub which has the model implementation? <|||||>- `transformers` version: 4.25.1
- Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.35
- Python version: 3.9.13
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
@amyeroberts <|||||>@sufeidechabei - it's not possible to directly import the model in the following way:
```
from transformers.models.t5.modeling_t5 import T5BiLDModel
```
as `T5BiLDModel` isn't a model in the `modeling_t5` module.
It's possible to use checkpoints from models defined on the hub using the `AutoModel` API. See documentation [here](https://huggingface.co/docs/transformers/custom_models#using-a-model-with-custom-code). <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,798 | closed | Show diff between 2 CI runs on Slack reports | # What does this PR do?
It has now become more difficult to identify the **new** CI failures on Slack reports, as the number of failures is in the range of `[100, 200]` + the number of rows in the reported table is kept at around `40`.
This PR adds a **diff** of the **(model) failure tables** reported by the latest run against those reported by the previous run.
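Purely to illustrate the idea (the structures below are made up for this example, not the actual CI notification code), the diff boils down to comparing which entries appear in one run's failure table but not in the other's:
```python
# Hypothetical failure tables, mapping a test identifier to its failure count.
previous_run = {"models/bert :: test_a": 1, "models/gpt2 :: test_b": 2}
latest_run = {"models/gpt2 :: test_b": 2, "models/t5 :: test_c": 1}

new_failures = sorted(k for k in latest_run if k not in previous_run)
fixed_failures = sorted(k for k in previous_run if k not in latest_run)

print("New failures since last run:", new_failures)
print("Failures no longer present:", fixed_failures)
```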
### The effect
<img width="720" alt="Screenshot 2023-04-17 060007" src="https://user-images.githubusercontent.com/2521628/232374887-66f9259b-9878-4ade-b337-553d3d34dc71.png">
| 04-17-2023 04:01:17 | 04-17-2023 04:01:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,797 | closed | Add RWKV-4 | # What does this PR do?
This PR is a draft and while there is a working implementation of the model, there is still a lot to do :-)
This PR adds the RWKV model from [BlinkDL/RWKV-LM](https://github.com/BlinkDL/RWKV-LM), which is an RNN-like Transformer: it has an attention layer and a feed-forward, but the attention is linear and can be expressed recurrently (more details coming in the doc page of the model).
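To give a rough idea of why this attention can be run as an RNN, here is a deliberately simplified, numerically naive sketch of the per-token WKV recurrence (the real implementation also tracks a running maximum for numerical stability, which is omitted here, and `time_decay` is assumed to already be the positive decay rate):
```py
import torch

def naive_wkv(time_decay, time_first, key, value):
    # key, value: (seq_len, hidden); time_decay, time_first: (hidden,)
    seq_len, hidden = key.shape
    num = torch.zeros(hidden)   # decayed running sum of exp(k_i) * v_i
    den = torch.zeros(hidden)   # decayed running sum of exp(k_i)
    out = torch.zeros_like(value)
    decay = torch.exp(-time_decay)
    for t in range(seq_len):
        e_uk = torch.exp(time_first + key[t])
        out[t] = (num + e_uk * value[t]) / (den + e_uk)
        e_k = torch.exp(key[t])
        num = decay * num + e_k * value[t]
        den = decay * den + e_k
    return out
```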
Here is a code snippet to play with the model:
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("sgugger/rwkv-7b-pile", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("sgugger/rwkv-7b-pile")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=400, top_p=0.8, do_sample=True)
print(tokenizer.decode(output[0].tolist()))
```
To use the chat models (called raven):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
model_id = "ybelkada/rwkv-raven-1b5"
model = AutoModelForCausalLM.from_pretrained(model_id).to(0)
tokenizer = AutoTokenizer.from_pretrained(model_id)
question = "Tell me about ravens"
prompt = f"### Instruction: {question}\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=100)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
Fixes #20737
Fixes #17230
TODO:
- [x] Write documentation of the model explaining the linear attention and the recurrent formulas in the code
- [x] Make the model compatible with generate
- [x] Add output_attentions/output_hidden_states API
- [ ] Convert more models and check the conversion script is compatible
- [x] Tweak CUDA kernels for state to use the state for init
- [x] Make tests that pass
- [ ] Add attention mask to be able to batch sentences (might be in a followup PR)
cc @ArthurZucker and @younesbelkada | 04-16-2023 22:01:46 | 04-16-2023 22:01:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>IMO the model is in a nice shape! Would love to have a round of review before I transfer the weights on the proper organization!<|||||>@younesbelkada In README.md
The name should be "Bo Peng" (Peng is the surname) instead of "Peng Bo" :)<|||||>hi @sgugger, thanks A TON for this merge! I am trying to train a new model of type and facing the following error:
```
Traceback (most recent call last):
File "train.py", line 229, in <module>
main(model_args, data_args, training_args)
File "train.py", line 193, in main
trainer.train()
File "transformers/src/transformers/trainer.py", line 1664, in train
return inner_training_loop(
File "transformers/src/transformers/trainer.py", line 1940, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "transformers/src/transformers/trainer.py", line 2753, in training_step
loss.backward()
File ".conda/envs/rwkv-eval-3.9/lib/python3.9/site-packages/torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File ".conda/envs/rwkv-eval-3.9/lib/python3.9/site-packages/torch/autograd/__init__.py", line 200, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File ".conda/envs/rwkv-eval-3.9/lib/python3.9/site-packages/torch/autograd/function.py", line 274, in apply
return user_fn(self, *args)
TypeError: backward() takes 2 positional arguments but 3 were given
```
From what I can see, the backward function of RwkvLinearAttentionBackward does not mention a g_state - should gradients be computed for the state, I guess not? Any pointers as to how I can resolve this will be very much appreciated!<|||||>I managed to get the code to run with some changes to the forward() and backward() functions:
```python
class RwkvLinearAttention(torch.autograd.Function):
@staticmethod
def forward(ctx, time_decay, time_first, key, value, state=None, return_state=False):
batch_size, seq_len, hidden_size = key.size()
if seq_len > rwkv_cuda_kernel.max_seq_length:
raise ValueError(
f"Cannot process a batch with {seq_len} tokens at the same time, use a maximum of "
f"{rwkv_cuda_kernel.max_seq_length} with this model."
)
if batch_size * hidden_size % min(hidden_size, 32) != 0:
raise ValueError(
f"The product of batch size ({batch_size}) and hidden size ({hidden_size}) needs to be a round "
f"multiple of {min(hidden_size, 32)}."
)
ctx.input_dtype = key.dtype
if (
time_decay.device.type != "cuda"
or time_first.device.type != "cuda"
or key.device.type != "cuda"
or value.device.type != "cuda"
):
raise ValueError("Calling the CUDA kernel for wkv attention requires all tensors to be on CUDA devices.")
time_decay = -torch.exp(time_decay.float().contiguous())
if key.dtype == torch.float16:
time_first = time_first.float()
key = key.float()
value = value.float()
time_first = time_first.contiguous()
key = key.contiguous()
value = value.contiguous()
# The CUDA kernel will fill this tensor.
output = torch.empty_like(key, memory_format=torch.contiguous_format)
if return_state or state is not None:
if state is None:
state = torch.zeros(
batch_size,
hidden_size,
3,
dtype=torch.float32,
device=key.device,
memory_format=torch.contiguous_format,
)
state[:, :, 2] -= 1e38
else:
state = torch.cat([s.unsqueeze(2) for s in state], dim=2).contiguous()
if key.dtype == torch.bfloat16:
forward_func = rwkv_cuda_kernel.forward_with_state_bf16
else:
forward_func = rwkv_cuda_kernel.forward_with_state
forward_func(time_decay, time_first.to(key.dtype), key, value, output, state)
else:
forward_func = rwkv_cuda_kernel.forward_bf16 if key.dtype == torch.bfloat16 else rwkv_cuda_kernel.forward
forward_func(time_decay, time_first.to(key.dtype), key, value, output)
ctx.save_for_backward(time_decay, time_first, key, value, output)
if state is not None:
state = [s.squeeze(2) for s in torch.chunk(state, 3, dim=2)]
return output.to(ctx.input_dtype), state
```
```python
def backward(ctx, g_output, g_state):
input_dtype = ctx.input_dtype
time_decay, time_first, key, value, output = ctx.saved_tensors
# The CUDA kernel will fill those tensors.
g_time_decay = torch.empty_like(
time_decay,
memory_format=torch.contiguous_format,
dtype=torch.bfloat16 if input_dtype == torch.bfloat16 else torch.float32,
)
g_time_first = torch.empty_like(
time_first,
memory_format=torch.contiguous_format,
dtype=torch.bfloat16 if input_dtype == torch.bfloat16 else torch.float32,
)
g_key = torch.empty_like(key, memory_format=torch.contiguous_format)
g_value = torch.empty_like(value, memory_format=torch.contiguous_format)
if input_dtype == torch.float16:
g_output = g_output.float()
backward_func = rwkv_cuda_kernel.backward_bf16 if input_dtype == torch.bfloat16 else rwkv_cuda_kernel.backward
backward_func(
time_decay,
time_first.to(key.dtype),
key,
value,
output,
g_output.contiguous(),
g_time_decay,
g_time_first,
g_key,
g_value,
)
#g_time_decay = torch.sum(g_time_decay, dim=0)
#g_time_first = torch.sum(g_time_first, dim=0)
return (
g_time_decay.to(input_dtype),
g_time_first.to(input_dtype),
g_key.to(input_dtype),
g_value.to(input_dtype),
None,
None
)
```
One problem I run into now is that although I'm trying to train a fairly small model (12 layers, 256 hidden size, 64 context size) I can only train with a very small batch size (16) on a 40GB A100 card. For comparison, a RoBERTa model with a similar size allows for a bs of 256. This seems counterintuitive to me, but I might be wrong.
Another issue I observed is instability: in some cases, within the first 3 steps of training the loss goes from something normal like 10 to 90543067814198.3 and then to 0.0. This seems to happen more when bf16 training is disabled and at higher batch sizes when bf16 training is enabled.
<|||||>@YovaKem Would you mind trying to change this
```python
# The CUDA kernel will fill those tensors.
g_time_decay = torch.empty_like(
time_decay,
memory_format=torch.contiguous_format,
dtype=torch.bfloat16 if input_dtype == torch.bfloat16 else torch.float32,
)
g_time_first = torch.empty_like(time_first, memory_format=torch.contiguous_format)
```
to
```python
# The CUDA kernel will fill those tensors.
g_time_decay = torch.empty(
key.shape[0], key.shape[2],
memory_format=torch.contiguous_format,
dtype=torch.bfloat16 if input_dtype == torch.bfloat16 else torch.float32,
)
g_time_first = torch.empty(k.shape[0], k.shape[2], memory_format=torch.contiguous_format)
```
I suspect there's an overflow in the current code, as mentioned above in the review comment but not tested yet. The binary distribution on PyPI does not include the cuda kernels XD
Also, the gradient of the state should be computed, but the current kernel is not doing it. Later after I setup the env I'll open the PR.<|||||>Thanks @Blealtan! I guess you meant `k` for `key`? I added bf16 support for `g_time_first` (I get an error otherwise) and put the tensors on CUDA
```python
# The CUDA kernel will fill those tensors.
g_time_decay = torch.empty(
key.shape[0], key.shape[2],
memory_format=torch.contiguous_format,
dtype=torch.bfloat16 if input_dtype == torch.bfloat16 else torch.float32,
).to(key.device)
g_time_first = torch.empty(
key.shape[0], key.shape[2],
memory_format=torch.contiguous_format,
dtype=torch.bfloat16 if input_dtype == torch.bfloat16 else torch.float32,
).to(key.device)
```
This seems to solve both the OOM issue and the instability!
One question re your comment of state gradients - I now saw this
> It will also match the _with_state variant of WKV forward.
In what cases is the _with_state variant used? As far as I can see the model I'm training is not passing states at all during the forward step. Is that something that only becomes relevant at inference time when the model is used like an RNN?
<|||||>Hey @sgugger how did you prepare the models? Could you point us how to convert original .pth or .safetensors model to your format? Thanks!
PS
Awesome RWKV joined transformers!<|||||>@lambdaofgod The logic used to convert the RWKV checkpoints from BlinkDL to HF format can be found in the [conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/rwkv/convert_rwkv_checkpoint_to_hf.py).<|||||>@YovaKem AFAIK, `with_state` is used only in inference now (in existing non-`transformers` implementations throughout the RWKV community). However, with proper implementation, this will allow more efficient training on long sequences, but it has not yet been implemented.<|||||>I have no idea why the CUDA kernels all disappeared from the package on PyPI (it's not just RWKV, but all models using custom kernels). Will investigate later today and post a patch release when I find a solution.<|||||>Normally custom kernels should be included in 4.29.2, sorry for the inconvenience. We added stronger checks to make sure they don't disappear again in a future release.<|||||>Hi, can I ask a simple question about the RWKV kernel? The RWKV model without the customized kernel uses a `for loop` here:
https://github.com/huggingface/transformers/blob/3658488ff77ff8d45101293e749263acf437f4d5/src/transformers/models/rwkv/modeling_rwkv.py#L223-L241
I am not familiar with CUDA kernels, so I am not sure whether the customized CUDA kernel still computes sequentially and just delivers a faster `for loop`, or whether it parallelizes the computation on the GPU.<|||||>Putting this here so it doesn't get lost.
I am trying to run microsoft guidance (https://github.com/microsoft/guidance) on RWKV through transformers and I am getting an error
`AttributeError: 'RwkvCausalLMOutput' object has no attribute 'past_key_values'`
which can be reproduced here: https://gist.github.com/fullstackwebdev/a6523374e6687825fcb92ca74048c12b<|||||>@fullstackwebdev
I don't think the fix should go inside `transformers` as this means we should always output `past_key_values=None` - which is quite misleading as by design RWKV does not rely on `past_key_values` for caching - as the tokens are processed one by one. I made https://github.com/microsoft/guidance/pull/91 that fixed the issue in my local env |
transformers | 22,796 | closed | π [i18n-KO] Fix anchor links for docs `auto_tutorial`, `training` | # What does this PR do?
Fixed anchor links for `auto_tutorial` and `training` docs
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
<!-- Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd --> | 04-16-2023 16:42:38 | 04-16-2023 16:42:38 | Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
I fixed anchor links for documents I translated
<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger, @ArthurZucker, @eunseojo
May you please review this PR? |
transformers | 22,795 | closed | add open-llama model with ckpt | This PR adds a new model called Open-Llama, which is based on Llama's implementation in Transformers.
In Open-Llama, memory-efficient attention has been added, resulting in a 30% improvement in training efficiency. Additionally, hidden dropout and attention dropout have been added for better generalization during training.
We have also added two optional features: stable embedding from Bloom and shared input-output vectors from PALM, which have been tested and found to improve training stability and performance.
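The shared input-output vectors option is standard weight tying; a minimal sketch of the idea (not the PR's actual code, and the sizes are arbitrary) looks like:
```python
import torch.nn as nn

vocab_size, hidden_size = 32000, 2048  # arbitrary sizes for illustration
embed_tokens = nn.Embedding(vocab_size, hidden_size)
lm_head = nn.Linear(hidden_size, vocab_size, bias=False)
# Reuse the same parameter matrix for the input embedding and the output projection.
lm_head.weight = embed_tokens.weight
```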
The following code snippet shows the implementation of memory-efficient attention:
```python
try:
from xformers import ops as xops
except ImportError:
xops = None
print("xformers is not installed correctly.")
if self.config.use_memorry_efficient_attention and xops is not None and self.training:
attn_weights = None
query_states = query_states.transpose(1, 2)
key_states = key_states.transpose(1, 2)
value_states = value_states.transpose(1, 2)
attn_output = xops.memory_efficient_attention(
query_states, key_states, value_states, attn_bias=xops.LowerTriangularMask(), p=self.dropout_prob
)
```
At the same time, for maximum compatibility, we have made xformers an optional dependency so that the original implementation can still be used for training and inference if it is not installed.
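For reference, the fallback path is just ordinary causal attention; a generic sketch (not the PR's exact fallback code) of what it computes:
```python
import math
import torch

def causal_attention(query_states, key_states, value_states, dropout_prob=0.0, training=False):
    # query/key/value: (batch, num_heads, seq_len, head_dim)
    head_dim = query_states.size(-1)
    scores = torch.matmul(query_states, key_states.transpose(-1, -2)) / math.sqrt(head_dim)
    seq_len = scores.size(-1)
    # Mask out future positions so each token only attends to itself and the past.
    causal_mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool, device=scores.device), diagonal=1)
    scores = scores.masked_fill(causal_mask, torch.finfo(scores.dtype).min)
    attn_weights = torch.softmax(scores, dim=-1)
    attn_weights = torch.nn.functional.dropout(attn_weights, p=dropout_prob, training=training)
    return torch.matmul(attn_weights, value_states)
```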
We implemented pre-training of the Llama model based on transformers + accelerate, incorporating the modifications described above.
[Open-Llama](https://github.com/Bayes-Song/Open-Llama/blob/main/README_en.md)
The pre-trained model has already been open-sourced on [s-JoL/Open-Llama-V1](https://huggingface.co/s-JoL/Open-Llama-V1).
ref: https://github.com/huggingface/transformers/pull/22386
cc: @sgugger | 04-16-2023 15:46:20 | 04-16-2023 15:46:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @ArthurZucker and @younesbelkada <|||||>Please help me review this pull request. @ArthurZucker @younesbelkada <|||||>Hey! Thanks will review now<|||||>Thanks a lot for your contribution!<|||||>> Thanks a lot for your contribution!
Hello, I have a question: why can't the Open-Llama model be found when searching the transformers documentation? Is there something I forgot to add?

<|||||>Hi @s-JoL, thanks for notifying.
There was an issue in the doc rendering (resolved with [1](https://github.com/huggingface/huggingface-meilisearch/pull/60/files), [2](https://github.com/huggingface/huggingface-meilisearch/pull/61)) leading to some pages not being retrievable in search. Should be working now! <|||||>@s-JoL I noticed that the links pertaining to Open-LLaMA are currently leading to 404 errors. Could you please provide some information on what might have happened?<|||||>@s-JoL Hi, I can't find an Open-LLaMA checkpoint and I noticed you deleted your original repo. What happened? How can I try Open-LLaMA?<|||||>@heya5 Possibly due to some controversies surrounding this project, the original author has closed the original project.
https://github.com/chenfeng357/open-Chinese-ChatLLaMA/issues/1 |
transformers | 22,794 | closed | LLaMA FastTokenizer does not add `eos_token_id` at the end. | ### System Info
- `transformers` version: 4.29.0.dev0
- Platform: Linux-4.18.0-305.19.1.el8_4.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.7
- Huggingface_hub version: 0.13.3
- Safetensors version: 0.3.0
- PyTorch version (GPU?): 2.1.0.dev20230411+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
As mentioned in the title, the LLaMA tokenizer does not add the `eos_token` at the end of the inputs. This only happens with the fast version (`use_fast=True`).
Steps to reproduce the behaviour:
1. Load the LLaMA tokenizer
```python
tokenizer = AutoTokenizer.from_pretrained(LLAMA_PATH, add_eos_token=True, use_fast=True)
```
2. Tokenize something
```python
simple_sentence = "This is a sentence to test if the tokenizer adds eos token."
simple_sentence_ids = tokenizer(
simple_sentence, add_special_tokens=True
).input_ids
```
3. Print the `input_ids` to check if the `eos_token_id` (`2`) is added at the end.
```python
print(simple_sentence_ids)
```
4. Output:
```python
[1, 910, 338, 263, 10541, 304, 1243, 565, 278, 5993, 3950, 12778, 321, 359, 5993, 29889]
```
### Expected behavior
Expected output
```python
[1, 910, 338, 263, 10541, 304, 1243, 565, 278, 5993, 3950, 12778, 321, 359, 5993, 29889, 2]
``` | 04-16-2023 14:40:41 | 04-16-2023 14:40:41 | Yes! Quick fix, use the slow tokenizer. Otherwise I'll open a PR to add template processing!
Thanks for reporting!<|||||>But it shouldn't add an `eos` token right? The LM is not trained to generate a token after the `eos` I believe.<|||||>> But it shouldn't add an `eos` token right? The LM is not trained to generate a token after the `eos` I believe.
By default, but if specified with `add_eos_token=True` it should. You can always fine-tune the model to make the model learn when to stop.<|||||>I guess they would set the `pad_token_id` using the `eos_token_id`?
`model.config.pad_token_id = model.config.eos_token_id`<|||||>Same here, doing add_eos_token=True doesn't do anything<|||||>This should have been fixed by #22959 <|||||>> I guess they would set the `pad_token_id` using the `eos_token_id`? `model.config.pad_token_id = model.config.eos_token_id`
I believe if you just set the `pad_token = eos_token` the model still is not learning to predict the `eos_token` because the corresponding `attn_mask` does not include the token and the `labels` ignores that token - i.e. no loss is computed for it. Not 100% sure about this, but that was what it seemed like from some self exploration.<|||||>The same is happening with Falcon...<|||||>When you say the same, what do you mean? <|||||>That it doesn't generate <|endoftext|> (token id 11) when calling generate, therefore it never stops generating. I have tried by setting `eos_token_id` to 193, which corresponds to `\n`, but I don't think that's a clean fix. I have noticed that when tokenizing the inputs with the Falcon-40b tokenizer, it's not adding `eos_token_id` at the end of input ids. <|||||>Few things here.
Llama has no official model so make sure the one you are using is up to date and has the same eos token id for the model.config / generation config and the tokenizer.
For Falcon, the code is on the hub, but the latest code of transformers adds the eos if you set `add_eos=True`. In the doc for llama you can find that initializing a model with `add_eos=True` will make it add the eos when tokenizing. <|||||>Actually I was talking about Falcon, not llama, because I'm facing an issue similar to the ones people are reporting with Llama. In fact I upgraded my transformers version to the latest version on the `main` branch, and the problem persists... The model never generates an EOS token, so it never stops generating...
I have tried to explicitly add a string "<|endoftext|>" at the end of the inputs for fine-tuning, but still doesn't work.
What can I do to make Falcon generate an eos token? <|||||>The issue is different, the model not stopping does not mean that it is not *adding* the `eos_token` but rather that it is not *predicting* it.
The problem with LLaMA has already been mentioned here: #23230 <|||||>I thought it could be related, my hypothesis was that Falcon wasn't generating the EOS token because it wasn't being included in the inputs when tokenizing, so when we train the model over inputs without the EOS token at the end, the model doesn't learn to generate the EOS token.<|||||>@avacaondata - I have noticed this same issue, where the model is not learning to predict the EOS token. After doing some digging through several examples and source code, I've noticed something a bit strange particularly related to the `DataCollatorForLanguageModeling`. A very typical pattern that I have seen suggested is the following:
```
from transformers import DataCollatorForLanguageModeling
tokenizer.pad_token = tokenizer.eos_token
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
```
However, the problem I see with this approach is that when the DataCollator overrides OR generates the `labels` field for the batch it sets all `tokens == pad_token` to be `-100`.
```
labels = batch["input_ids"].clone()
if self.tokenizer.pad_token_id is not None:
labels[labels == self.tokenizer.pad_token_id] = -100
batch["labels"] = labels
```
Since the `CrossEntropy` loss ignores tokens with `-100`, even if the tokenizer we are using properly adds the `eos_token`, the loss function will still ignore it whenever the pad token and the eos token share the same id.
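A tiny self-contained illustration of that interaction (the token ids are made up; `2` plays the role of an id shared by the eos and pad tokens):
```
import torch

shared_eos_pad_id = 2  # what `tokenizer.pad_token = tokenizer.eos_token` effectively gives you

input_ids = torch.tensor([[5, 8, 13, shared_eos_pad_id]])
labels = input_ids.clone()
labels[labels == shared_eos_pad_id] = -100  # the same masking the collator applies to pad tokens

print(labels)  # tensor([[   5,    8,   13, -100]]) -> the eos position is never trained on
```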
Ways that I have worked around this issue are either (1) to ensure that the `eos_token_id != pad_token_id` and make sure that the tokenizer includes the `eos_token` when tokenizing (some automatically do this such as the `T5 tokenizer`) OR (2) create the labels column myself when tokenizing - by cloning `input_ids` - and then using the `DataCollatorForSeq2Seq`. I actually really like the `DataCollatorForSeq2Seq` because it automatically pads the inputs and labels, but does not mess with tokens in unexpected ways, such as the eos_token.
Hope this is helpful!<|||||>@jonathangomesselman Thank you very much for the clear explanation, it makes much sense!
I will change the label for the eos token so that it's not ignored by cross entropy anymore.
Ideally I think that for instruction-tuning we shouldn't use `DataCollatorForLanguageModeling`, in this paper they did some experiments and found that only training over outputs typically works better: https://arxiv.org/pdf/2305.14314.pdf . However, I haven't found a way to make `DataCollatorForSeq2Seq` work for decoder-only models such as Llama or Falcon. Do you have any code on how to do that? <|||||>@avacaondata - You're welcome!
I have generally followed this practice as well - just fine-tuning over the `model outputs`, since generally I don't need the model to directly learn the statistical distribution over human instructions, but rather just how to "react" to them.
Continuing from above, to use the `DataCollatorForSeq2Seq` for decoder-only models we need to manually create the `labels` field when tokenizing our data - i.e. ensuring we have the fields `input_ids`, `attention_mask`, and `labels`. Since we create the `labels` ourselves we have control over what tokens we explicitly train over vs. which we want to ignore (using `-100` as a label). Here is the skeleton of some code you could use to tokenize the inputs:
```
from transformers import LlamaTokenizerFast
tokenizer = LlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")
# By default the bos_token is added and not the eos_token. For instruction tuning I often ignore bos_token.
tokenizer.add_bos_token = False
tokenizer.add_eos_token = True
def create_instruction_tuned_format(data_row):
return f"""<User Instruction>:{data_row["instruct"]}
<Agent Response>: {data_row['response']}
""".strip()
def tokenize(data_row):
"""Format and tokenize instruction tuning data
1) Combine the user input (instruction) and agent response
2) Create `labels` - ensuring we only fine tune over the
desired agent response
"""
model_input_text = create_instruction_tuned_format(data_row)
# Tokenize the full model input
model_input = tokenizer(
model_input_text,
truncation=True,
padding=False,
return_tensors=None
)
# Create `labels` - ignoring user input (instructions)
    agent_response = tokenizer(data_row['response']).input_ids
    num_tokens_ignore = len(model_input['input_ids']) - len(agent_response)
ignored_tokens = [-100] * (num_tokens_ignore)
# Copy over the ids for the desired agent response
model_input['labels'] = ignored_tokens \
+ model_input['input_ids'][-len(agent_response):]
# Just to demonstrate length equality
    assert len(model_input['labels']) == len(model_input['input_ids'])
return model_input
tokenized_ds = ds.map(tokenize, remove_columns=ds.column_names)
```
A couple of things to note/highlight:
1. We combine the user instruction and agent response using a very simple format. In the [LIMA paper](https://arxiv.org/pdf/2305.11206.pdf) for example they introduce a new EOT (end-of-turn) token to separate the instruction and the response (a sketch of adding such a token is shown right after this list).
2. We tokenize the response to figure out the number of fine-tuning tokens at the end of the full token sequence.
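As referenced in note 1, a minimal sketch of registering such a separator token (the token string is hypothetical, and `model`/`tokenizer` are assumed to already be loaded):
```
EOT_TOKEN = "<EOT>"  # hypothetical end-of-turn separator, not an official LLaMA token
num_added = tokenizer.add_special_tokens({"additional_special_tokens": [EOT_TOKEN]})
if num_added > 0:
    # Grow the embedding matrix so the new token id has a row to look up.
    model.resize_token_embeddings(len(tokenizer))
```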
Now that we have our data tokenized and formatted we can use the `DataCollatorForSeq2Seq` as follows:
```
from torch.utils.data import DataLoader
from transformers import DataCollatorForSeq2Seq

tokenizer.pad_token = tokenizer.eos_token
data_collator = DataCollatorForSeq2Seq(
tokenizer, return_tensors="pt", padding=True
)
batch_size = 8
train_dataloader = DataLoader(
tokenized_ds, shuffle=True, collate_fn=data_collator, batch_size=batch_size, pin_memory=True
)
```
Note that the LLaMA tokenizer by default does not have a `pad_token`, so we have to set it. Because we are using the `DataCollatorForSeq2Seq` it is okay for us to set the padding token to the `eos_token`, as the collator does not create the labels tensor but rather just pads our existing labels tensor with `-100` - i.e. the `eos_token` will not be ignored/replaced.
This may not be the most standard approach for doing this - but this is an example of what I have found to work / have seen some repos roughly follow. The main idea being that by creating the `labels` ourselves we are able to set `-100` for tokens that we don't want to fine-tune over + ensure that we learn to generate the `eos_token`. <|||||>Wow @jonathangomesselman Thank you so much for the so clear explanation... :heart_eyes:
I tried it and yes it works flawlessly. I will check the LIMA paper in detail too to check for that EOT special token, I think that's an interesting approach.
Again, thank you so much, you were extremely helpful!! :heart: <|||||>@avacaondata you're welcome! I had very similar questions to what you asked and found myself a bit surprised to *not* find many good resources. Thankfully the HuggingFace code repos are actually quite readable, especially in separating the complex model logic of the base pre-trained transformer models (encoder-decoder + decoder only) vs. adding the "language modeling" head (see sub-classes with `...ConditionalGeneration`, `...CausalLM`, `...LMHeadModel`).
If you're curious yourself, I would definitely recommend looking at the code to learn more. Each model has a slightly different naming convention but you will see that the logic is nearly identical. Some to check out are:
- [T5ForConditionalGeneration](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py#L1528) (encoder-decoder)
- [LlamaForCausalLM](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L613) (decoder-only)
- [GPT2LMHeadModel](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/gpt2/modeling_gpt2.py#L959) (decoder-only)
Have fun exploring!<|||||>@jonathangomesselman thanks a lot!
I was also running into this issue where the model was unable to output the eos_token after fine-tuning. I also followed examples where they set `tokenizer.pad_token = tokenizer.eos_token`. From your earlier comment, I made sure `tokenizer.pad_token != tokenizer.eos_token` by setting `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` and using `DataCollatorForLanguageModeling` as before, e.g.
```
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
```
Now the model finally outputs the eos_token as intended!<|||||>@georgesung Thanks for sharing this approach! Adding a new `[PAD]` token is a great way to differentiate between that and the `EOS` token - which as you say allows you to then use the native `DataCollatorForLanguageModeling`. It is very interesting / odd to me that this is such a common problem, given it seems sort of obvious that we want this behavior. But regardless it is exciting to see the model finally start outputting the `eos_token` π.
An interesting thing that I noticed is that this is generally not an issue with the encoder-decoder models such as T5. With these models the tokenizer generally adds the `eos_token` by default, and the collators used don't have this problem of ignoring the `eos_token` by treating it as a padding token.
@avacaondata We can use a similar approach to add the `EOT` token described in the LIMA paper for separating the `instruction` and the `response`.<|||||>I think this could be a great TIP addition to the documentation / blog! If anyone of you has time to open a PR, feel free to do so and ping me! π€ <|||||>@ArthurZucker - I would be happy to work on this! Where do you think it would be best to add this TIP?<|||||>Probably in the `llama.md`!<|||||>What is the correct code for Falcon? I'm still puzzled.
Related links:
- discord: https://discord.com/channels/879548962464493619/1126681170957045770/1126681170957045770
- hf: https://discuss.huggingface.co/t/why-does-the-falcon-qlora-tutorial-code-use-eos-token-as-pad-token/45954
- so: https://stackoverflow.com/questions/76633368/why-does-the-falcon-qlora-tutorial-code-use-eos-token-as-pad-token<|||||>@georgesung question:
> tokenizer.add_special_tokens({'pad_token': '[PAD]'})
But this assumes the model has a `pad_token`. I think an additional check has to be done that it does have an embedding for `pad_token` so that there are no run time errors (~type errors in the matrix extraction from the embedding "table"/matrix).
But if one does that some care might be needed to initialize the new token so that it dominates the generation: https://nlp.stanford.edu/~johnhew/vocab-expansion.html
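One common way to do that initialization (a sketch assuming `model` and `tokenizer` are already loaded; this is only one option, not the only correct one) is to start the new row at the mean of the existing embeddings:
```
import torch

tokenizer.add_special_tokens({'pad_token': '[PAD]'})
model.resize_token_embeddings(len(tokenizer))

with torch.no_grad():
    embeddings = model.get_input_embeddings().weight
    # The freshly added row is the last one; start it at the average of the pre-existing rows.
    embeddings[-1] = embeddings[:-1].mean(dim=0)
```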
<|||||>@brando90
> But this assumes the model has a pad_token
I haven't confirmed, but I think `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` is equivalent to `tokenizer.pad_token = '[PAD]'` (edit: might be wrong about that). So if there are runtime errors with `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` then there would also be runtime errors with `tokenizer.pad_token = tokenizer.eos_token` -- note `tokenizer.eos_token` is just a string. But I observed runtime errors with neither. I just observed that when I set `tokenizer.pad_token = tokenizer.eos_token` during training, the model won't stop generating during inference, since it was trained to not output the eos token (per discussions above).
Since I was working with open_llama_7b, I saw that even though the model's tokenizer didn't specify a pad token string in its [tokenizer_config.json](https://huggingface.co/openlm-research/open_llama_7b/blob/main/tokenizer_config.json), it still had a row in its token embedding matrix for the pad token. If you run `print(model)`, you can see its token embedding matrix has an index reserved for the pad token (idx 0 in this case):
```
> print(model)
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
..
```
You can also see the pad token's embedding itself: `model.state_dict()['model.embed_tokens.weight'][0]`. Although from discussions above and also [this discussion](https://stackoverflow.com/questions/73155719/do-weights-of-the-pad-token-have-a-function), it doesn't seem to matter what the actual embeddings are for the pad token.<|||||>@georgesung unfortunately I'm working with Falcon. It doesn't have a pad token to my surprise (I'm not sure how this even happens in the first place tbh):
```
Loading checkpoint shards: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 8/8 [00:10<00:00, 1.36s/it]
type(model)=<class 'transformers_modules.tiiuae.falcon-7b.2f5c3cd4eace6be6c0f12981f377fb35e5bf6ee5.modelling_RW.RWForCausalLM'>
type(tokenizer)=<class 'transformers.tokenization_utils_fast.PreTrainedTokenizerFast'>
Using pad_token, but it is not set yet.
tokenizer.pad_token=None
type(peft_config)=<class 'peft.tuners.lora.LoraConfig'>
model=RWForCausalLM(
(transformer): RWModel(
(word_embeddings): Embedding(65024, 4544)
(h): ModuleList(
(0-31): 32 x DecoderLayer(
(input_layernorm): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
(self_attention): Attention(
(maybe_rotary): RotaryEmbedding()
(query_key_value): Linear4bit(in_features=4544, out_features=4672, bias=False)
(dense): Linear4bit(in_features=4544, out_features=4544, bias=False)
(attention_dropout): Dropout(p=0.0, inplace=False)
)
(mlp): MLP(
(dense_h_to_4h): Linear4bit(in_features=4544, out_features=18176, bias=False)
(act): GELU(approximate='none')
(dense_4h_to_h): Linear4bit(in_features=18176, out_features=4544, bias=False)
)
)
)
(ln_f): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
)
(lm_head): Linear(in_features=4544, out_features=65024, bias=False)
)
---- start Print all special tokens
eos_token: <|endoftext|>
additional_special_tokens: ['>>TITLE<<', '>>ABSTRACT<<', '>>INTRODUCTION<<', '>>SUMMARY<<', '>>COMMENT<<', '>>ANSWER<<', '>>QUESTION<<', '>>DOMAIN<<', '>>PREFIX<<', '>>SUFFIX<<', '>>MIDDLE<<']
---- end Print all special tokens
model.get_input_embeddings().weight.size()=torch.Size([65024, 4544])
pad_embedding=tensor([[[-0.0179, 0.0201, -0.0273, ..., -0.0275, -0.0396, -0.0131],
[-0.0510, -0.0079, -0.0383, ..., -0.0481, 0.0581, 0.0282],
[-0.0217, -0.0216, -0.0064, ..., -0.0508, 0.0554, -0.0013],
...,
[ 0.0425, 0.0452, -0.0131, ..., 0.0019, 0.0476, 0.0342],
[-0.0170, -0.0085, 0.0449, ..., -0.0074, 0.0178, 0.0043],
[-0.0439, -0.0859, -0.0820, ..., 0.0130, 0.0669, 0.0884]]],
device='cuda:0', dtype=torch.float16, grad_fn=<UnsqueezeBackward0>)
Success!
/lfs/hyperturing1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/transformers/generation/utils.py:1259: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
warnings.warn(
Traceback (most recent call last):
File "/lfs/hyperturing1/0/brando9/ultimate-utils/ultimate-utils-proj-src/uutils/hf_uu/model_tokenizer/falcon_uu_mdl_tok.py", line 190, in <module>
example_test_model_already_has_pad_token()
File "/lfs/hyperturing1/0/brando9/ultimate-utils/ultimate-utils-proj-src/uutils/hf_uu/model_tokenizer/falcon_uu_mdl_tok.py", line 182, in example_test_model_already_has_pad_token
tokenizer.decode(model.generate(**tokenizer(sent, return_tensors='pt'), do_sample=True)[0])
File "/lfs/hyperturing1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/lfs/hyperturing1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/transformers/generation/utils.py", line 1271, in generate
self._validate_model_kwargs(model_kwargs.copy())
File "/lfs/hyperturing1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/transformers/generation/utils.py", line 1144, in _validate_model_kwargs
raise ValueError(
ValueError: The following `model_kwargs` are not used by the model: ['token_type_ids'] (note: typos in the generate arguments will also show up in this list)
```
code:
```
# qlora flacon7b
from uutils.hf_uu.model_tokenizer.falcon_uu_mdl_tok import get_model_tokenizer_qlora_falcon7b
model, tokenizer, peft_config = get_model_tokenizer_qlora_falcon7b()
print(f'{model=}')
sent = 'Dogs are great because they are '
print()
# print to see if pad tokens are present and if it ignores the tokens at the end
# encoded_input = tokenizer(sent, padding='max_length', max_length=10, return_tensors='pt')
# sys.exit()
# Print all special tokens
print('\n---- start Print all special tokens')
for token_name, token in tokenizer.special_tokens_map.items():
print(f"{token_name}: {token}")
print('\n---- end Print all special tokens')
# Get the ID for the '[PAD]' token
try:
pad_token_id = tokenizer.convert_tokens_to_ids('[PAD]')
except KeyError:
raise ValueError("Token [PAD] is not present in the tokenizer vocabulary.")
# Index into the model's embedding table
try:
print(f'{model.get_input_embeddings().weight.size()=}')
pad_embedding = model.get_input_embeddings().weight[pad_token_id]
except IndexError:
raise ValueError(f"Token ID {pad_token_id} is not present in the model's embedding matrix.")
print(f'{pad_embedding=}')
print('Success!')
# check it generates something sensible
tokenizer.decode(model.generate(**tokenizer(sent, return_tensors='pt'), do_sample=True)[0])
print('Success2!')
```<|||||>I think I just need to add it to the tokenizer and the model. Since during fine-tuning/training the pad token would be ignored anyway, adding a random set of weights to the embedding table matrix wouldn't matter anyway. It wouldn't be updated anyway.
Code:
```
# - Get falcon 4bit model
# todo, where is this being saved & how to download quicker
model = AutoModelForCausalLM.from_pretrained(
pretrained_model_name_or_path=pretrained_model_name_or_path,
quantization_config=bnb_config,
trust_remote_code=True # allows to execute custom code you download from the uploaded model code you are using
)
# this is here to save gpu vram. Likely only needed when using 40b or when oom issues happen ref: https://stackoverflow.com/questions/76633335/why-does-hugging-face-falcon-model-use-mode-config-use-cache-false-why-wouldn
model.config.use_cache = use_cache
print(f'{type(model)=}')
# - Get falcon tokenizer
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path,
trust_remote_code=True) # execs code downloaded from hf hub
# tokenizer.pad_token = tokenizer.eos_token # todo: why? https://stackoverflow.com/questions/76633368/why-does-the-falcon-qlora-tutorial-code-use-eos-token-as-pad-token
tokenizer.add_special_tokens({'pad_token': '[PAD]'}) # I think this is fine if during the training pad is ignored
model.resize_token_embeddings(len(tokenizer)) # todo: I think this is fine if during the training pad is ignored
print(f'{type(tokenizer)=}')
print(f'{tokenizer.pad_token=}')
```
----
So close....
```
Darn, this still doesn't work:
```
UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
```
code:
```
"""
sfttrainer (likely using peft) best practices:
https://huggingface.co/docs/trl/main/en/sft_trainer#best-practices
Best practices
Pay attention to the following best practices when training a model with that trainer:
- SFTTrainer always pads by default the sequences to the max_seq_length argument of the SFTTrainer. If none is passed, the trainer will retrieve that value from the tokenizer. Some tokenizers do not provide default value, so there is a check to retrieve the minimum between 2048 and that value. Make sure to check it before training.
- For training adapters in 8bit, you might need to tweak the arguments of the prepare_model_for_int8_training method from PEFT, hence we advise users to use prepare_in_int8_kwargs field, or create the PeftModel outside the SFTTrainer and pass it.
- For a more memory-efficient training using adapters, you can load the base model in 8bit, for that simply add load_in_8bit argument when creating the SFTTrainer, or create a base model in 8bit outside the trainer and pass it.
- If you create a model outside the trainer, make sure to not pass to the trainer any additional keyword arguments that are relative to from_pretrained() method.
todo: why trust_remote_code? I want more details.
"""
import sys
import torch
from peft import LoraConfig
from transformers.modeling_utils import PreTrainedModel
from pdb import set_trace as st
def test_bfloat16_int4(compute_dtype: torch.dtype,
use_4bit,
):
"""
python -c "import torch; print(torch.cuda.get_device_capability());"
todo: check other code test_bfloat16() do we need use_4bit?
"""
if compute_dtype == torch.float16 and use_4bit:
major, _ = torch.cuda.get_device_capability()
if major >= 8:
print("=" * 80)
print("Your GPU supports bfloat16, you can accelerate training with the argument --bfloat16")
print("=" * 80)
def get_model_tokenizer_qlora_falcon7b(
# -- mode args
# model_id = "tiiuae/falcon-7b"
pretrained_model_name_or_path: str = "ybelkada/falcon-7b-sharded-bf16",
use_cache: bool = True,
# -- lora args
lora_alpha=16, # todo
lora_dropout=0.1, # todo, evidence drop out really help? google, crfm, gpt4
lora_r=64, # todo
bnb_4bit_compute_dtype=torch.float16, # changed it from Guanaco hf
# -- training args
output_dir="./results",
per_device_train_batch_size=4,
gradient_accumulation_steps=4,
# paging so that the sudden mem gpu spikes don't cause the run to shut down
# (I think usually caused by too long seqs)
# todo: why 32 bit opt?
# todo: paged nadamw opt?
optim="paged_adamw_32bit",
save_steps=10,
logging_steps=10,
learning_rate=2e-4,
max_grad_norm=0.3,
max_steps=500,
warmup_ratio=0.03,
lr_scheduler_type="constant",
# -- quant. args (not recommended to be changed unless you know what your doing?)
load_in_4bit=True, # load (usually huge) base model in 4 bits
bnb_4bit_quant_type="nf4", # normal float 4 for the (large) base models qlora
) -> tuple:
"""
Load the Falcon 7B model, quantize it in 4bit and attach LoRA adapters on it.
bf16 = 1S, 7Exp, 8Mantissa
hypothesis: 7b trained due to 6.7 emergence rumour, I still don't think emergence is real.
Notes:
- ft a model is very specific to the model, tokenizer and training scheme. Thus we return
- model, tokenizer, ft config (peft config), training args
ref:
- https://colab.research.google.com/drive/1DOi8MFv4SWN9NImVornZ7t6BgmLoPQO-#scrollTo=AjB0WAqFSzlD
"""
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, AutoTokenizer
# - Get bnb config for bit-4 base model (bnb lib for using 4bit qlora quantization techniques by tim dettmers)
bnb_config = BitsAndBytesConfig(
load_in_4bit=load_in_4bit, # load (usually huge) base model in 4 bits
bnb_4bit_quant_type=bnb_4bit_quant_type, # normal float 4 for the (usually huge) base model
bnb_4bit_compute_dtype=bnb_4bit_compute_dtype, # if you can, during computation use bf16
)
# - Get falcon 4bit model
# todo, where is this being saved & how to download quicker
model = AutoModelForCausalLM.from_pretrained(
pretrained_model_name_or_path=pretrained_model_name_or_path,
quantization_config=bnb_config,
trust_remote_code=True # allows to execute custom code you download from the uploaded model code you are using
)
print(f'{type(model)=}')
print(f'{model=}')
# this is here to save gpu vram. Likely only needed when using 40b or when oom issues happen ref: https://stackoverflow.com/questions/76633335/why-does-hugging-face-falcon-model-use-mode-config-use-cache-false-why-wouldn
model.config.use_cache = use_cache
print(f'{type(model)=}')
# - Get falcon tokenizer
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path,
trust_remote_code=True) # execs code downloaded from hf hub
# tokenizer.pad_token = tokenizer.eos_token # ref: https://stackoverflow.com/questions/76633368/why-does-the-falcon-qlora-tutorial-code-use-eos-token-as-pad-token
# tokenizer.add_special_tokens({'pad_token': '[PAD]'}) # I think this is fine if during the training pad is ignored
tokenizer.add_special_tokens({'pad_token': '<|pad|>'}) # I think this is fine if during the training pad is ignored
# - Modify model
# add pad token embed
model.resize_token_embeddings(len(tokenizer)) # todo: I think this is fine if during the training pad is ignored
model.transformer.word_embeddings.padding_idx = len(tokenizer) - 1
model.config.max_new_tokens = len(tokenizer)
# model.config.min_length = 1
print(f'{model=}')
print(f'{type(tokenizer)=}')
print(f'{tokenizer.pad_token=}')
# data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) todo
# - Get falcon lora config
peft_config = LoraConfig(
lora_alpha=lora_alpha,
lora_dropout=lora_dropout,
r=lora_r,
bias="none",
task_type="CAUSAL_LM",
# model card for falcon tiiuae/falcon-7b: https://huggingface.co/tiiuae/falcon-7b/blob/main/modelling_RW.py
# does seem to include all trainable params as done by qlora on their own paper
target_modules=[
# word_embeddings,
"query_key_value",
"dense",
"dense_h_to_4h",
"dense_4h_to_h",
# "lm_head"
]
)
print(f'{type(peft_config)=}')
# todo: print the num params of the lora = D1*r + D2*r and num of bytes by prec. (bytes) * num params
return model, tokenizer, peft_config
# -- tests
def example_test_model_already_has_pad_token():
"""
if it already has pad token, it likely has a small prob, so we are done.
compare it's norm with other tokens to verify this is true.
python ~/ultimate-utils/ultimate-utils-proj-src/uutils/hf_uu/model_tokenizer/falcon_uu_mdl_tok.py
"""
# - the get datasets todo: preprocessing, padding, streaming
from uutils.hf_uu.data_hf.common import get_guanaco_datsets_add_splits_train_test_only
trainset, _, testset = get_guanaco_datsets_add_splits_train_test_only()
# qlora flacon7b
from uutils.hf_uu.model_tokenizer.falcon_uu_mdl_tok import get_model_tokenizer_qlora_falcon7b
model, tokenizer, peft_config = get_model_tokenizer_qlora_falcon7b()
model: PreTrainedModel = model
print(f'{model=}')
sent = 'Dogs are great because they are '
print()
# print to see if pad tokens are present and if it ignores the tokens at the end
encoded_input = tokenizer(sent, padding='max_length', max_length=10, return_tensors='pt')
print(f'{encoded_input=}')
# Print all special tokens
print('\n---- start Print all special tokens')
for token_name, token in tokenizer.special_tokens_map.items():
print(f"{token_name}: {token}")
print('\n---- end Print all special tokens')
# Get the ID for the '[PAD]' token
try:
pad_token_id = tokenizer.convert_tokens_to_ids('[PAD]')
except KeyError:
raise ValueError("Token [PAD] is not present in the tokenizer vocabulary.")
# Index into the model's embedding table
try:
print(f'{model.get_input_embeddings().weight.size()=}')
pad_embedding = model.get_input_embeddings().weight[pad_token_id]
except IndexError:
raise ValueError(f"Token ID {pad_token_id} is not present in the model's embedding matrix.")
print(f'{pad_embedding=}')
print('Success!\n')
# check it generates something sensible
# tokenizer.decode(model.generate(**tokenizer(sent, return_tensors='pt'), do_sample=True)[0])
input_ids, attention_mask = encoded_input['input_ids'], encoded_input['attention_mask']
predicted_tokens_ids_options = model.generate(input_ids=input_ids, attention_mask=attention_mask, do_sample=True)
predicted_tokens_ids = predicted_tokens_ids_options[0]
predicted_sent = tokenizer.decode(predicted_tokens_ids)
print(f'original sentence: {sent=}')
print(f'predicted sentence: {predicted_sent=}')
print('Success2!')
if __name__ == '__main__':
import time
start_time = time.time()
example_test_model_already_has_pad_token()
print(f"The main function executed in {time.time() - start_time} seconds.\a")
```
it doesn't like the modifications to the model:
```
model.transformer.word_embeddings.padding_idx = len(tokenizer) - 1
model.config.max_new_tokens = len(tokenizer)
```
<|||||>Hey @brando90 ! Thanks a lot for reporting and using `transformers`. This particular thread is not exactly the right place to post such huge chunks of code and discuss another issue. My best recommendation is:
- create a colab with your code, make it minimally reproducible. Use small models so that it's faster for everyone who wants to take a look π !
- share your colab and issue on the Hugging Face forum: https://discuss.huggingface.co/. If you don't get an answer from the community, try to ping me or anyone from the team!
- properly format the part of your code. In this case the previous message is pretty much unreadable! Would love to help you make this work, but make sure you convey it in a good format!
- summarise your issue! (A tokenizer not having a pad token is pretty common, GPT2 was pretty much the same. When training, inputs can often be truncated rather than padded, to have as much information as possible).
- check the documentation π ! Especially regarding how to modify generation parameters such as `pad_token`, `max_new_tokens` etc . You should have a look [here](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig). This will remove the warning that you were seeing.
Reading the post you created on the HF forum, you mention
> it doesn't like the modifications to the model:
But since there is no traceback, this is very vague! A colab will show the outputs you got, making it easier to understand.
Also, regarding the padding token vs. no padding token question, I believe this is a very important one, and if we should review how we resize the embedding, so be it! Some models' embeddings are usually bigger than the length of the tokenizer, to allow adding new tokens / to be a power of X to make it faster.<|||||>https://stackoverflow.com/questions/76633368/why-does-the-falcon-qlora-tutorial-code-use-eos-token-as-pad-token<|||||>As a temporary fix I was able to get the inference (of a Falcon 7b training) to stop correctly like this:
- In each row of my training data, at the end I added "*****" (without the quotes), which encoded into one token: 39735
- Then I do the normal training ( just using `tokenizer.pad_token = tokenizer.eos_token`)
- And in the inference run I set `eos_token_id=39735`
This makes the inference generate token ***** at the end of the answer (because it is in all the training examples), at which point it will stop because it is set as the ending token.
```
output_tokens = model.generate(
input_ids = batch.input_ids,
max_new_tokens=100,
temperature=0.001,
top_p=0.7,
num_return_sequences=1,
pad_token_id=39735, # *****
eos_token_id=39735, # *****
)
```<|||||>Finally found the correct way to do this here:
https://georgesung.github.io/ai/qlora-ift/
You need to do `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`
instead of `tokenizer.pad_token = tokenizer.eos_token`
And you need to add the `tokenizer.eos_token` at the end of EACH training example.<|||||>in my case for some reason `eos_token_id` and ... was not being added to model.generate configs<|||||>If you want help feel free to open an issue with more details π <|||||>@robertheessels your answer solved my problem. you save my life. thank you so much!!!
> You need to do tokenizer.add_special_tokens({'pad_token': '[PAD]'})
> instead of tokenizer.pad_token = tokenizer.eos_token
>
> And you need to add the tokenizer.eos_token at the end of EACH training example.
<|||||>Adding the `eos_token` at the end of each training example can be activated using
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b", add_eos_token = True)
```
Or simply:
```python
>>> tokenizer.add_eos_token = True
``` |
transformers | 22,793 | closed | 🌐 [i18n-KO] Translated `run_scripts.mdx` to Korean | <!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" -->
# What does this PR do?
Translated the `run_scripts.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- This gets recorded in the main issue! If you practiced on the PseudoLab repo, please remove this line. Thank you! :smile: -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
<!-- Pre-submission checklist; it may be better to wrap the PseudoLab-only checklist items in <details>. -->
## Who can review?
<!-- Only reveal the comment below, which requests a review from the Hugging Face staff, after the review with the PseudoLab team members is finished! -->
<!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? -->
Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd | 04-16-2023 14:23:44 | 04-16-2023 14:23:44 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,792 | closed | LLaMA 13B does not forward or generate properly after being converted to HuggingFace checkpoints | ### System Info
## System info
- Transformer Version: 4.28.0.dev0
- Platform: Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-194-generic x86_64)
- GPU: Nvidia Tesla V100 SXM2 32GB
- CUDA version: 11.0
- PyTorch version: 1.12.1
## Summary
The LLaMA 13B model converted to HuggingFace format seems not able to generate any text, which may be due to NaN attentions produced in the last layer.
## Bug description
- Downloaded LLaMA weights (7B and 13B) from facebook, and checked their generation with `llama/example.py` provided with them ✅
- Used `transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py` to convert the 7B and 13B models into huggingface format ✅
- Loaded the 13B model by `LlamaForCausalLM.from_pretrained(model_path, device_map="balanced", load_in_8bit=True)` and tried to generate text by `model.generate` and `tokenizer.batch_decode`, but the model simply outputs the given input and nothing else. ❌
- Loaded the 13B model by `LlamaModel.from_pretrained(model_path, device_map="balanced", load_in_8bit=True)` and tried to obtain its attentions by `attentions = model(**encoded_input, output_attentions=True).attentions`, and found that all the attention weights in the last layer (layer 39) are NaN's. ❌
## Trying to debug
- Loaded the 7B model by `LlamaForCausalLM.from_pretrained(model_path, device_map="balanced", load_in_8bit=True)`, and checked its generation by `model.generate` and `tokenizer.batch_decode` ✅
- Loaded the 7B model by `LlamaModel.from_pretrained(model_path, device_map="balanced", load_in_8bit=True)` and tried to obtain its attentions by `attentions = model(**encoded_input, output_attentions=True).attentions`, and the attentions are fine ✅
- I guessed there might be corruption in the model parameters, so I printed the parameters in layers 38 and 39, `model.norm.weight`, and `lm_head.weight` of the 13B model, but did not find any sign of corruption (a quick check of this kind is sketched below).
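A quick check of that kind could look roughly like this (a sketch for illustration, assuming the un-quantized 13B model is loaded as `model`):
```python
import torch
for name, param in model.named_parameters():
    if name.startswith(("model.layers.38", "model.layers.39", "model.norm", "lm_head")):
        bad = torch.isnan(param).any() or torch.isinf(param).any()
        print(name, tuple(param.shape), "NaN/Inf!" if bad else "ok")
```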
### Who can help?
@ArthurZucker @sgugger
### Information
- [x] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Step 1: Convert LLaMA weights to HuggingFace
```
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir downloaded_llama_weights --model_size 13B --output_dir llama-hf
```
Step 2: Try to generate text
```
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model_path = f'llama-hf/13B'
sentences = [
    'Hello!',
    'Translate this sentence into German: I love baseball.',
    'Tell me about New York.'
]
# model initialization
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(model_path, device_map= "balanced", load_in_8bit=True)
# iterate over sentences
for sentence in sentences:
    print('-----\nInput:\n' + sentence + '\n-----')
    encoded_input = tokenizer(sentence, return_tensors='pt').to('cuda')
    generate_ids = model.generate(encoded_input.input_ids, max_length=256)
    output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
    print('-----\nOutput:\n' + output + '\n-----')
```
Step 3: Try to obtain model attentions
```
import numpy as np  # needed for the NaN check below
# model initialization
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaModel.from_pretrained(model_path, device_map= "balanced", load_in_8bit=True, torch_dtype=torch.float32)
n_layer = model.config.num_hidden_layers
# iterate over sentences
attention_per_sentence = [[] for j in range(n_layer)] # n_layer * n_sent * (n_head, max_sent_len, max_sent_len)
for sentence in sentences:
    encoded_input = tokenizer(sentence, return_tensors='pt')
    attentions = model(**encoded_input, output_attentions=True).attentions  # n_layer * shape (1, n_head, L, L)
    for lyr in range(n_layer):
        attn_array = attentions[lyr].detach().cpu().numpy()
        assert not np.isnan(np.sum(attn_array)), f"layer {lyr} has NaN attentions"
```
### Expected behavior
- In step 1, everything works fine.
- In step 2, the outputs are the same as inputs, no token is generated
- In step 3, assertion fails in layer 39: "AssertionError: layer 39 has NaN attentions" | 04-16-2023 07:13:31 | 04-16-2023 07:13:31 | @RiverGao - could you retry running these steps with the most recent version of transformers? Llama was under a lot of development and has only be officially added to the library in the most recent release - v4.28.0 (4.28.0.dev is some commit between 4.27 and 4.28). This ensures that you have all the most recent updates to the model. <|||||>I've encountered the same problem(specifically step 3) with version 4.29.0.dev0.
A quick fix would be to not quantize the final 2 layers, by manually passing the `quantization_config`:
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
from transformers.utils.quantization_config import BitsAndBytesConfig
# Skip 8-bit quantization for the last two decoder layers and the LM head
quantization_config = BitsAndBytesConfig.from_dict({
    'load_in_8bit': True, 'llm_int8_skip_modules': ['model.layers.39', 'model.layers.38', 'lm_head']}, False)
llama_13B = './models/13B_hf'
tokenizer = LlamaTokenizer.from_pretrained(llama_13B)
model = LlamaForCausalLM.from_pretrained(llama_13B, torch_dtype=torch.float16,
                                         quantization_config=quantization_config, load_in_8bit=False, device_map='auto')
```<|||||>@aliwalker Thank you for the quick fix!<|||||>cc @younesbelkada <|||||>@aliwalker @amyeroberts I have retried with updated transformers (version 4.29.0dev0), and the result is:
- When using `BitsAndBytesConfig` mentioned above, the model is able to generate normal text, but it still produces NaN's in layer 39
- When `load_in_8bit` is set to `False` for all layers, no NaN value is produced.
So, in conclusion, it may be a problem with the 8-bit quantization, instead of a problem with the converting script.<|||||>Hi @RiverGao
This is probably related to the fact that you are using a V100. Could you share with us your `bitsandbytes` version? Or alternatively update `bitsandbytes` and let us know if you still face the issue
```bash
pip install --upgrade bitsandbytes
```<|||||>@younesbelkada Thank you for your helpful information, I updated `bitsandbytes` from 0.37.2 to 0.38.1, and LLaMA 13B produced no NaN attention when loaded with `load_in_8bit=True`. However, when applied with [Alpaca-LoRA tuned weights](https://github.com/tloen/alpaca-lora), the model produces NaN's in layer 39 again if it is loaded in int8.<|||||>Hi @RiverGao
Interesting, how do you apply the LoRA weights on that model? Can you share a repro script?<|||||>@younesbelkada I applied the LoRA weights using `export_hf_checkpoint.py` from [their repo](https://github.com/tloen/alpaca-lora) by running
`$ BASE_MODEL=path/to/converted/llama python export_hf_checkpoint.py`:
```
# export_hf_checkpoint.py
import os
import torch
import transformers
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer # noqa: F402
BASE_MODEL = os.environ.get("BASE_MODEL", None)
assert (
BASE_MODEL
), "Please specify a value for BASE_MODEL environment variable, e.g. `export BASE_MODEL=huggyllama/llama-7b`" # noqa: E501
tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
base_model = LlamaForCausalLM.from_pretrained(
BASE_MODEL,
load_in_8bit=False,
torch_dtype=torch.float16,
device_map={"": "cpu"},
)
first_weight = base_model.model.layers[0].self_attn.q_proj.weight
first_weight_old = first_weight.clone()
lora_model = PeftModel.from_pretrained(
base_model,
"Angainor/alpaca-lora-13b",
device_map={"": "cpu"},
torch_dtype=torch.float16,
)
lora_weight = lora_model.base_model.model.model.layers[
0
].self_attn.q_proj.weight
assert torch.allclose(first_weight_old, first_weight)
# merge weights - new merging method from peft
lora_model = lora_model.merge_and_unload()
lora_model.train(False)
# did we do anything?
assert not torch.allclose(first_weight_old, first_weight)
lora_model_sd = lora_model.state_dict()
deloreanized_sd = {
k.replace("base_model.model.", ""): v
for k, v in lora_model_sd.items()
if "lora" not in k
}
LlamaForCausalLM.save_pretrained(
base_model, "../alpaca-hf/13B", state_dict=deloreanized_sd, max_shard_size="400MB"
)
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,791 | closed | Able to load 'gpt_neox_reward_model' type models | ### Feature request
The new models from OpenAssistant are of type `gpt_neox_reward_model`; the latest version of the transformers library does not support them.
OpenAssistant/oasst-rm-2.1-pythia-1.4b-epoch-2.5
OpenAssistant/oasst-rm-2-pythia-6.9b-epoch-1
Getting the following error
KeyError: 'gpt_neox_reward_model'
- | 04-16-2023 06:11:27 | 04-16-2023 06:11:27 | @sann3, could you follow the issue template please and provide:
* Environment information printed out when running `transformers-cli env` in your terminal
* A reproducible code snippet
* Error and full traceback
* Information about the expected behaviour<|||||>Hi, to add to this (same issue). The new models from OpenAssistant have `gpt_neox_reward_model` defined as the model type in `config.json`. This doesn't map to any existing models as defined in `transformers\models\auto\configuration_auto.py`.
The traceback for the error is as follows:
```
Traceback (most recent call last):
  File "C:\dev\oobabooga-windows\text-generation-webui\server.py", line 85, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "C:\dev\oobabooga-windows\text-generation-webui\modules\models.py", line 186, in load_model
    model = LoaderClass.from_pretrained(checkpoint, **params)
  File "C:\dev\oobabooga-windows\installer_files\env\lib\site-packages\transformers\models\auto\auto_factory.py", line 441, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "C:\dev\oobabooga-windows\installer_files\env\lib\site-packages\transformers\models\auto\configuration_auto.py", line 937, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict['model_type']]
  File "C:\dev\oobabooga-windows\installer_files\env\lib\site-packages\transformers\models\auto\configuration_auto.py", line 643, in __getitem__
    raise KeyError(key)
KeyError: 'gpt_neox_reward_model'
```
I'm experimenting by changing the value in `config.json` for the model to `gpt_neox`.
I hope this helps.<|||||>@sann3 @sweetlilmre Could either of you share which checkpoint you're trying to use?
On one of the OpenAssistant [model repos](https://huggingface.co/OpenAssistant/oasst-rm-2-pythia-6.9b-epoch-1), they share a code snippet explaining how to use the checkpoint, including how to use their GPTNeoXRewardModel defined [here](https://github.com/LAION-AI/Open-Assistant/blob/main/model/model_training/models/reward_model.py).
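If `AutoModel`-style loading is wanted for this custom architecture, the custom classes can also be registered with the auto classes first; a sketch, with class names assumed from the linked `reward_model.py`:
```python
from transformers import AutoConfig, AutoModelForSequenceClassification
# GPTNeoXRewardModelConfig / GPTNeoXRewardModel come from the Open-Assistant file linked above
AutoConfig.register("gpt_neox_reward_model", GPTNeoXRewardModelConfig)
AutoModelForSequenceClassification.register(GPTNeoXRewardModelConfig, GPTNeoXRewardModel)
model = AutoModelForSequenceClassification.from_pretrained("OpenAssistant/oasst-rm-2-pythia-6.9b-epoch-1")
```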
<|||||>Hi @amyeroberts I'm using the model with [https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) so my low level knowledge is almost zero here. Thank you for the links, I'll check them out, but I think this is probably a capability mismatch between the text-generation-webui project and the new models from OpenAssistant. I really don't want to waste your time on this.<|||||>@amyeroberts I am trying the following models, the [link](https://github.com/LAION-AI/Open-Assistant/blob/main/model/model_training/models/reward_model.py) you have shared might help
OpenAssistant/oasst-rm-2.1-pythia-1.4b-epoch-2.5
OpenAssistant/oasst-rm-2-pythia-6.9b-epoch-1<|||||>@sann3 - yes, for those checkpoints the instructions on the [model repo](https://huggingface.co/OpenAssistant/oasst-rm-2.1-pythia-1.4b-epoch-2.5#how-to-use) should enable you to use them. <|||||>Able to load the model with https://github.com/LAION-AI/Open-Assistant/blob/main/model/model_training/models/reward_model.py code. |
transformers | 22,790 | open | DeBERTa models produce nonsense fill-mask output | ### System Info
Python version: 3.8.15
Transformers version: 4.24.0
### Who can help?
@ArthurZucker, @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Both on the HF website and using transformers in Python scripts/interpreter, the DeBERTa models seem to produce nonsense outputs in a fill-mask task. This is demonstrated below using a fill-mask pipeline for ease of reproduction, but the same thing happens even when calling the models manually and inspecting the logits. I demonstrate with one model, but the other `microsoft/deberta` masked language models appear to have the same issue (i.e., not the ones fine-tuned on mnli or whatever, which I wouldn't test against).
```python
>>> from transformers import pipeline
>>> test_sentence = 'Do you [MASK] the muffin man?'
# for comparison
>>> bert = pipeline('fill-mask', model = 'bert-base-uncased')
>>> print('\n'.join([d['sequence'] for d in bert(test_sentence)]))
do you know the muffin man?
do you remember the muffin man?
do you mean the muffin man?
do you see the muffin man?
do you recognize the muffin man?
>>> deberta = pipeline('fill-mask', model = 'microsoft/deberta-v3-large')
>>> print('\n'.join([d['sequence'] for d in deberta(test_sentence)]))
Do you Moisturizing the muffin man?
Do you Kagan the muffin man?
Do youULA the muffin man?
Do youι the muffin man?
Do you aplica the muffin man?
```
Here's a screenshot from the HF website for the same model (`microsoft/deberta-v3-large`):

Based on the paper and the documentation on the model cards, it seems like these should be able to be used for masked language modeling out of the box since they were pre-trained on it, but they're clearly not doing a good job of it. Am I missing something about why these models shouldn't be used for MLM without fine-tuning, or is there a bug with them?
### Expected behavior
I'd expect sensible predictions for masked token locations (assuming these models can indeed be used for that without additional fine-tuning). | 04-16-2023 03:41:36 | 04-16-2023 03:41:36 | Hey! Did you find a solution/cause yet? I am experiencing the same issues on debertav3-base even though I pretrained the model on my own training data...<|||||>No dice, but I discovered the problem is worse than than just mask filling; it doesn't even produce the right thing for given tokens.
```python
>>> import torch
>>> from transformers import AutoModelForMaskedLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained('deberta-v3-base')
>>> model = AutoModelForMaskedLM.from_pretrained('deberta-v3-base')
>>> text = 'Do you [MASK] the muffin man?'
>>> inputs = tokenizer(text, return_tensors='pt')
# double checking
>>> tokenizer.batch_decode(inputs['input_ids'])
# all good
['Do you [MASK] the muffin man?']
>>> with torch.no_grad():
>>> outputs = model(**inputs)
>>> tokenizer.batch_decode(torch.argmax(outputs.logits, dim=-1))
# ???
['Γ»t slimnatch Laughternatchilia ArrijailΓ»t']
```
I'd think it was something with the tokenizer, were it not for you saying you had the same issue with your pre-trained model. Do you know whether the same thing happens for all positions for your model?
Edit:
Found #18674 that references this. Looks like it's been around for a while and it's being worked on.<|||||>Hey! I just came back from holidays, will have a look when I can, note that Deberta should be refactored soon, follow #22105 if you want to know more. This will be looked at when fixing! <|||||>Hope to get to this by the end of the summer! |
transformers | 22,789 | closed | It's possible to create a chunk pipeline with an invalid stride/max-length combination | ### System Info
- `transformers` version: 4.29.0.dev0
- Platform: Darwin-21.6.0-x86_64-i386-64bit
- Python version: 3.7.16
- Huggingface_hub version: 0.12.1
- Safetensors version: 0.2.8
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When the user initializes a `ChunkPipeline` (e.g. for token classification) with a `stride` that is too high for the given tokenizer's `model_max_length`, the pipeline initializes without raising any errors. Ideally, it should not let the user initialize a pipeline with invalid parameters.
#### Concrete example
If we attempt to initialize a pipeline with a `stride` equal to its tokenizer's `model_max_length` (in the case of GPT-2, `1024`) it will initialize without any warnings or errors (which is bad):
```
from transformers import pipeline
token_classifier = pipeline('token-classification-sliding-window', model='nguyenkhoa2407/gpt2-NER-favsbot', aggregation_strategy='FIRST', stride=1024)
```
however, when we go to use the pipeline (in this case, on some sufficiently long nonsense text):
```
text = 2000 * 'hello, '
output = token_classifier(text)
```
it will finally inform the user that this is an invalid value for `stride` by giving them the following error:
```
PanicException: assertion failed: stride < max_len
```
Note that this error *won't* be triggered if the text is shorter than the tokenizer's `model_max_length`:
```
text = 20 * 'hello, '
output = token_classifier(text)
# this executes without raising any error or warning
```
thus allowing the bug to go undetected until the user inputs a text of sufficient length.
#### Complication: special tokens
Fixing this would unfortunately be more complicated than just checking `stride < tokenizer.model_max_length`. Since tokenizer's `stride`s account for special characters, the true value of `max_len` is `tokenizer.model_max_length - <num_special_tokens>`, where `<num_special_tokens>` is the number of special tokens added by the tokenizer to each chunk. E.g. the following:
```
token_classifier = pipeline('token-classification', model='dslim/bert-base-NER', aggregation_strategy='FIRST', stride=510)
text = 2000 * 'hello, '
output = token_classifier(text)
```
will also fail (with the same `PanicException` as above) because while BERT's `model_max_length` is 512 (longer than the stride of `510`), the tokenizer adds 2 special tokens (namely, `[CLS]` and `[SEP]`) to each chunk.
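For what it's worth, a sketch of the kind of early check this would need, assuming the tokenizer exposes `num_special_tokens_to_add` (most tokenizers do):
```python
def check_stride(tokenizer, stride):
    # usable window for real tokens in each chunk, after special tokens are added
    max_len = tokenizer.model_max_length - tokenizer.num_special_tokens_to_add(pair=False)
    if stride >= max_len:
        raise ValueError(
            f"stride ({stride}) must be strictly smaller than the usable window ({max_len}), "
            f"i.e. model_max_length ({tokenizer.model_max_length}) minus the special tokens per chunk"
        )
```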
### Expected behavior
`ChunkPipeline` should follow the philosophy of failing as early as possible with a clear message to the user. Therefore any initialization of a `ChunkPipeline` with a `stride` that is too long for its model's maximum length (plus the special tokens added by its tokenizer) should result in an exception. | 04-15-2023 23:31:28 | 04-15-2023 23:31:28 | > ChunkPipeline should follow the philosophy of failing as early as possible with a clear message to the user. Therefore any initialization of a ChunkPipeline with a stride that is too long for its model's maximum length (plus the special tokens added by its tokenizer) should result in an exception.
Fully agree with that statement.
> Fixing this would unfortunately be more complicated than just checking stride < tokenizer.model_max_length.
This is completely true, however I think if we can use it and already fix 90% of the issues by raising early that's already a huge win.
I think raising early during `_sanitize_parameters` makes complete sense to me.
@sgugger for a core maintainer's opinion. Wdyt ?
@boyleconnor would you be open to create a PR for it ?<|||||>Yes, the simple check should cover most use cases already and would be a nice addition.<|||||>So I take it there is no way to extract from the tokenizer how many special tokens it will add to each window (that you are aware of @sgugger and @Narsil)?
@Narsil I can open a PR some time this weekend |
transformers | 22,788 | closed | Feature to convert videomae huge and small finetuned on kinetics and ssv2 added to the videomae to pytorch converter | # What does this PR do?
Adds the feature to convert VideoMAE huge and small checkpoints to Hugging Face-compatible PyTorch models. The following models are added to the converter:
1. VideoMAE huge fine-tuned Kinetics
2. VideoMAE small fine-tuned Kinetics
3. VideoMAE small fine-tuned SSV2
## Who can review?
@NielsRogge | 04-15-2023 23:12:49 | 04-15-2023 23:12:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@NielsRogge Uploaded the huge-kinetics, small-kinetics, and small-ssv2 models to the HF model hub under the following names:
1. sandstorm12/videomae-huge-finetuned-kinetics
2. sandstorm12/videomae-small-finetuned-kinetics
3. sandstorm12/videomae-small-finetuned-ssv2
Let me know if anything else is needed. |
transformers | 22,787 | closed | Add `top_k` argument to post-process of conditional/deformable-DETR | # What does this PR do?
The current post-processing for object detection methods of deformable and conditional DETR assumes the number of classes * the number of object queries > 100. This reflects the original code in the [deformable-DETR repository](https://github.com/fundamentalvision/Deformable-DETR/blob/11169a60c33333af00a4849f1808023eba96a931/models/deformable_detr.py#L412). However, this limits the flexibility of training on datasets with fewer classes/object queries. This PR suggests updating the post process for object detection code not to break if n_classes * n_object_queries < 100.
This PR suggests adding a `top_k` argument to the post-process functions of conditional/deformable-DETR, defaulting to the previously hard-coded value (see the sketch below).
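Conceptually, the change amounts to bounding the selection by both the requested `top_k` and the number of available predictions, along these lines (a sketch, not the exact diff):
```python
import torch
def select_top_k(out_logits, top_k=100):
    prob = out_logits.sigmoid()                # (batch, num_queries, num_labels)
    prob = prob.view(out_logits.shape[0], -1)  # flatten queries x labels
    k = min(top_k, prob.shape[1])              # never request more than exists
    return torch.topk(prob, k, dim=1)          # values, indexes
```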
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts, as you added these models do you think this approach is a reasonable addition?
| 04-15-2023 21:43:11 | 04-15-2023 21:43:11 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@CreatlV Thanks for opening this PR!
Having the processor be compatible with the model is definitely something we want and updating the code to make it less brittle is a great initiative. At the moment, with `min`, if `num_queries *num_classes < 100`, then the model will return all of the boxes. I think we could adapt this further to make it scale `k` according to the model. Specifically, adding an argument e.g. `k` to the method, which defaults to `num_queries` or its current default. We can keep the `min` upper bound to keep it safe.
@NielsRogge For the number of boxes returned, the default `k` value for this model is 100 (rather than 300). Was there a reason for setting it to this value? (I'm guessing consistency with other detr models?)
<|||||>@amyeroberts it was done to reflect the original code, as linked in his message. The probabilities get reshaped to `(batch_size, num_queries*num_labels)` and then the top 100 values (highest scoring queries) are taken for each example in the batch. However, since Deformable DETR uses 300 queries by default, this will always be > 100. But when you train the model from scratch with a custom number of queries, this would indeed raise an error.
Making this more general makes sense. Note that we typically filter them based on a certain threshold; we first filter the 300 queries to get the top 100 recognized objects, and then set a threshold like 0.9 to only get the predictions with a score higher than 0.9. Both the `threshold` and the `top_k` value can both be seen as postprocessing hyperparameters. However I'm not sure `top_k` is general enough as it seems DETR-specific<|||||>I added `top_k` as an argument to the post-processing functions of conditional/deformable-DETR that used them. With the default value unchanged from previously. The top_k value for `post_process` of conditional DETR is 300, compared to 100 of the other functions, is this intentional @NielsRogge ? <|||||>Thanks again for adding this improvement and iterating! π |
transformers | 22,786 | open | Implement a decode method in transformers.BasicTokenizer | ### Feature request
Transformers has provided a nice BasicTokenizer for basic tokenizing when we don't need BPE tokenizers. For data processing (like data format converting), it is better to offer a decode method for basic use.
### Motivation
When converting data formats in some data processing tasks, we often need to recover a list of tokens back into continuous, readable text; a rough sketch of such a helper is given below.
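As a rough illustration only (this helper does not exist in transformers), such a decode could start from a whitespace join plus simple punctuation handling:
```python
import re
def basic_detokenize(tokens):
    text = " ".join(tokens)
    text = re.sub(r"\s+([.,!?;:%])", r"\1", text)  # no space before closing punctuation
    text = re.sub(r"([(\[])\s+", r"\1", text)      # no space after opening brackets
    text = re.sub(r"\s+([)\]])", r"\1", text)      # no space before closing brackets
    return text
```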
### Your contribution
None. | 04-15-2023 21:11:07 | 04-15-2023 21:11:07 | @jiangwy99 The BasicTokenizer class will perform simple string processing e.g. splitting on white spaces. However it doesn't encode the tokens to ids, and so doesn't have a corresponding `decode` method.
cc @ArthurZucker <|||||>@amyeroberts BasicTokenizer has implemented a tokenize function, which converts text into a list of tokens. What I want is a de-tokenize, which converts a list of tokenized tokens into the original text, serving as a dual operation of BasicTokenizer.tokenize<|||||>@jiangwy99 The BasicTokenizer is just that, a very simple class used for doing basic preprocessing of strings: puncutation splitting, making lower case etc. Adding this functionality is beyond the scope of the current class.
If it's something you're still interested in, you're welcome to make a fork of the repo with an implementation and share it here. The forum is [a good place](https://discuss.huggingface.co/) to discuss questions about implementation details. Note: it won't be possible to always exactly recover the original input if `do_lower_case=True`, and special handling of spacing and capitalization with punctuation would be required. |
transformers | 22,785 | closed | improve(llama): Faster apply_rotary_pos_emb | # What does this PR do?
Faster implementation for `apply_rotary_pos_emb` in `modeling_llama.py`.
Please see issue #22683 for code that verifies the correctness of the change.
NOTE: Not marking as fixing the above issue, as speed is still not as good as before.
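For context, the hot spot is the per-token rotary embedding. A sketch of the general shape of such a speed-up, indexing the cached `cos`/`sin` tables by `position_ids` directly instead of building gather indices, looks roughly like this (not necessarily the exact diff in this PR):
```python
import torch
def rotate_half(x):
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)
def apply_rotary_pos_emb(q, k, cos, sin, position_ids):
    # cos/sin are precomputed caches of shape [1, 1, seq_len, head_dim]
    cos = cos.squeeze(1).squeeze(0)[position_ids].unsqueeze(1)  # [bs, 1, q_len, head_dim]
    sin = sin.squeeze(1).squeeze(0)[position_ids].unsqueeze(1)
    return (q * cos) + (rotate_half(q) * sin), (k * cos) + (rotate_half(k) * sin)
```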
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante
| 04-15-2023 18:25:39 | 04-15-2023 18:25:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Should a similar patch be applied to GPT-NeoX?<|||||>@neggert I believe it can be added to GPT-NeoX too - very happy to review a PR if you'd like to add! |
transformers | 22,784 | closed | Example does not work at all | ### System Info
latest version
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I tried to run this as in the instructions:
https://huggingface.co/sileod/deberta-v3-base-tasksource-nli
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("sileod/deberta-v3-base-tasksource-nli")
model = AutoModelForSequenceClassification.from_pretrained("sileod/deberta-v3-base-tasksource-nli")
```
I get error that tokenizer does not exist.
Then I tried the author's code:
```python
from tasknet import Adapter
from transformers import AutoModelForMultipleChoice, AutoTokenizer
model_name = "sileod/deberta-v3-base-tasksource-nli"
tokenizer3 = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model3 = AutoModelForMultipleChoice.from_pretrained(model_name, ignore_mismatched_sizes=True, cache_dir="/root/Desktop/models/", low_cpu_mem_usage=True)
adapter = Adapter.from_pretrained(model_name.replace('nli', 'adapters'))
model_for_rlhf = adapter.adapt_model_to_task(model3, 'glue/cola')  # glue/cola #hh-rlhf
if 1 == 1:
    def cola(sentences1):
        import torch
        import time
        beg = time.time()
        inputs = tokenizer3(sentences1, return_tensors="pt", padding=True, truncation=True, max_length=40).to("cpu")
        with torch.no_grad():
            outputs = model_for_rlhf(**inputs)
        the_cola_scores = []
        print("outputs.logits", outputs.logits)
        for aout in outputs.logits:
            cola_prediction = torch.nn.functional.softmax(aout)[1].item()
            the_cola_scores.append(round(cola_prediction, 2))
        return the_cola_scores
import time
timea = time.time
sentences1 = ["I likes apples", "I love apples."]
cola = cola(sentences1)
print(cola, timea - time.time)
```
I get error:
RuntimeError: shape '[-1, 6]' is invalid for input of size 4
### Expected behavior
it should work, but there are many models here that do not have instruction on running them. | 04-15-2023 16:47:47 | 04-15-2023 16:47:47 | @Oxi84 Thanks for reporting this issue.
Could you share some more information so that we can best help you? Specifically the running environment: copy-paste the info you get from running `transformers-cli env` in your terminal ("latest version" is ill-defined). I'm able to run the snippet:
```py
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("sileod/deberta-v3-base-tasksource-nli")
model = AutoModelForSequenceClassification.from_pretrained("sileod/deberta-v3-base-tasksource-nli")
```
without any issue on the main branch - 4.29.0.dev0. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,783 | closed | 🌐 [i18n-KO] Translated `tasks/summarization.mdx` to Korean | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Translated the `tasks/summarization.mdx` file of the documentation to Korean.
Thank you in advance for your review!
Part of #20179
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
This is a work in progress.
Could you review this PR when I finish this work?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 04-15-2023 10:51:51 | 04-15-2023 10:51:51 | Team Pseudo Lab,
I'm happy to inform you I finished the translation!
Could you review this PR?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Could you review this PR? π
@sgugger, @ArthurZucker, @eunseojo |
transformers | 22,782 | closed | A minor change in the `decoder_config` of T5Model | In the line mentioned below,
https://github.com/huggingface/transformers/blob/fb3aa06cb673d0e2774a2924747d3290135c09cc/src/transformers/models/t5/modeling_t5.py#L1339,
I believe there should be an update in the `decoder_config`
The below is the change, I want to suggest
```python
decoder_config = copy.deepcopy(config)
decoder_config.update("is_decoder", True)
```
This is because we are using the decoder configuration, and when loading from the `PreTrained` class, the same is suggested.
Not sure whom to tag, @sgugger @NielsRogge, could you address it?
| 04-15-2023 07:11:14 | 04-15-2023 07:11:14 | My mistake, sorry for the issue |
transformers | 22,781 | closed | Unable to import transformers.models.bert.modeling_tf_bert on macOS? | ### System Info
```
transformers=4.28.0
tensorflow-macos=2.9.0
python=3.10
os=macOS Ventura 13.3.1
BERT_model=uncased_L-12_H-768_A-12
```
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Simply run the following code snippet:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained(BERT_PATH)
model = TFBertModel.from_pretrained(BERT_PATH)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
resp = model(encoded_input)
print(resp)
```
### Expected behavior
Retrieve the output generated by the BERT model. | 04-15-2023 05:39:43 | 04-15-2023 05:39:43 | @talhakabakus, thanks for raising this issue.
So that we can best help you, could you provide some additional information:
* Environment: Copy-paste the output of `transformers-cli env` here
* The error and full traceback encountered<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,780 | closed | msgpack.exceptions.ExtraData: unpack(b) received extra data. | ### System Info
- `transformers` version: 4.9.0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0+cpu (False)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (gpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
- Models: FlaxHybridCLIP
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm trying to load a trained FlaxHybridCLIP model from a folder that contains the following files
config.json
flax_model.msgpack
I attempted to load it using the below:
```
if args.run_from_checkpoint is not None:
    with open(f"{args.run_from_checkpoint}/config.json", "r") as fp:
        config_dict = json.load(fp)
    config_dict["vision_config"]["model_type"] = "clip"
    config = HybridCLIPConfig(**config_dict)
    model = FlaxHybridCLIP.from_pretrained(
        args.run_from_checkpoint,
        seed=training_args.seed,
        dtype=getattr(jnp, model_args.dtype),
        config=config,
        freeze_backbones=args.freeze_backbones
    )
```
But, I encountered the following error:
```
text_config_dict is None. Initializing the CLIPTextConfig with default values.
vision_config_dict is None. initializing the CLIPVisionConfig with default values.
loading weights file freeze/flax_model.msgpack
Traceback (most recent call last):
File "run_hybrid_clip.py", line 832, in <module>
main()
File "run_hybrid_clip.py", line 529, in main
model = FlaxHybridCLIP.from_pretrained(
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/modeling_flax_utils.py", line 350, in from_pretrained
state = from_bytes(cls, state_f.read())
File "/home/ubuntu/.local/lib/python3.8/site-packages/flax/serialization.py", line 359, in from_bytes
state_dict = msgpack_restore(encoded_bytes)
File "/home/ubuntu/.local/lib/python3.8/site-packages/flax/serialization.py", line 342, in msgpack_restore
return msgpack.unpackb(
File "msgpack/_unpacker.pyx", line 201, in msgpack._cmsgpack.unpackb
msgpack.exceptions.ExtraData: unpack(b) received extra data.
```
I used the modified Italian hybrid CLIP scripts [here](https://github.com/clip-italian/clip-italian/tree/master/hybrid_clip)
### Expected behavior
to load successfully and fine-tune with unfrozen backbone
Thanks | 04-14-2023 20:56:46 | 04-14-2023 20:56:46 | Hey @alhuri! Echoing what I suggested in https://github.com/huggingface/transformers/issues/22673#issuecomment-1517714740 - this is probably best asked on the Italian Hybrid CLIP repo (since they use a custom implementation of JAX CLIP there). Unless you have a code snippet that I can use to reproduce this error with just transformers? In which case I'd be able to take a deeper dive!<|||||>Closing since this issue is related to the Italian CLIP repo (not transformers!) |
transformers | 22,779 | closed | Move labels to the same device as logits for Whisper | # What does this PR do?
Fixes issue #22561 by moving labels to the same device as logits for `Whisper` model.
@sgugger Could you please review?
| 04-14-2023 20:45:07 | 04-14-2023 20:45:07 | _The documentation is not available anymore as the PR was closed or merged._ |