repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 24,300 | closed | [`SwitchTransformers`] Fix return values | # What does this PR do?
The previous version of the code would always return `None` values for the router losses, since the `if output_router_probs` check was wrapped inside `if labels is not None`. But router logits can still be computed without labels.
It also returns tensors instead of `None`; this follows our usual API and is less prone to errors. | 06-15-2023 12:07:50 | 06-15-2023 12:07:50 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,299 | closed | Make `can_generate` as class method | # What does this PR do?
Make `can_generate` a class method, so we can check whether a model (class) can generate without loading/creating a model instance.
(The goal of this PR is not to address the issue regarding how to check `is_encoder`, `is_decoder` etc. discussed offline). | 06-15-2023 12:03:34 | 06-15-2023 12:03:34 | _The documentation is not available anymore as the PR was closed or merged._ |
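A minimal sketch of what this enables, assuming the usual `can_generate` semantics after this change (the two classes below are just familiar examples; nothing is downloaded):

```python
from transformers import BertModel, T5ForConditionalGeneration

# With a class method, the capability check needs no instantiated model or weights:
print(T5ForConditionalGeneration.can_generate())  # True: a seq2seq LM head can generate
print(BertModel.can_generate())                   # False: a plain encoder cannot
```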
transformers | 24,298 | closed | deepspeed init during eval fix | # What does this PR do?
1. Fixes #24294: DS ZeRO-2 and ZeRO-1 stages don't modify the model, so the check was exhibiting wrong behaviour. | 06-15-2023 11:41:08 | 06-15-2023 11:41:08 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for working on this. I'm currently running into the same issue with v4.30.2. Any idea when the new version with this fix will be released? |
transformers | 24,297 | closed | Fix 'local_rank' AttiributeError in Trainer class | # What does this PR do?
This PR fixes `AttributeError: 'Trainer' object has no attribute 'local_rank'`.
Please see the discussion at https://github.com/huggingface/transformers/pull/23681 for details.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-15-2023 11:07:34 | 06-15-2023 11:07:34 | This works for me with python 3.10.12 (Google Colab runtime).<|||||>cc @muellerzr <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Friendly ping @muellerzr |
transformers | 24,296 | closed | [EnCodec] Changes for 32kHz ckpt | # What does this PR do?
Updates the EnCodec config and modelling code to allow two options for the residual connection in the Resnet block:
1. Pass the residual through a Conv1d
2. Apply the residual directly (identity)
=> this change is required to use the latest 32kHz EnCodec model in the MusicGen model (#24109). It is tested with a fast test and confirmed to match the original implementation.
| 06-15-2023 09:56:41 | 06-15-2023 09:56:41 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,295 | open | Add training support for EnCodec | ### Feature request
Would be cool to add training support for the EnCodec model.
Not entirely sure if we can easily make it compatible with Trainer, so this can be a good second issue I think.
### Motivation
…
### Your contribution
… | 06-15-2023 09:50:41 | 06-15-2023 09:50:41 | Hi @ArthurZucker , I want to try this, please assign it to me. Thanks.<|||||>Sure! Feel free to open a PR and ping me<|||||>@Swastyy @ArthurZucker Let me know if you are looking for any support. I would also like to help with this if possible. Thanks!<|||||>Seems like he did not link a PR, feel free to synch and ping me for any help! Even a draft is good! <|||||>Hi @ArthurZucker can you let me know of the overall changes that have to be made. I see the EnCodec model already implemented in transformers, so to integrate it with Trainer what are the additional requirements?
<|||||>The idea is mostly to integrate the loss computation for the VQVAE! Trainer might not work as the model does not use attention, but the target should be to have the same loss as the original model ! <|||||>Thanks Arthur.
I read through the paper (https://arxiv.org/pdf/2210.13438.pdf) and the existing code, and here is my impression on the work breakdown. Does any of this make sense or am I going in a totally wrong direction?
The loss function detailed in the paper (equation 4) is a combination of (1) the reconstruction loss (over frequency and time domains), (2) the discriminative loss (requires the discriminator), and (3) the VQ commitment loss (the quantizer loss).
(1) The reconstruction loss is computed using the original audio as the label, and we basically need to apply certain time and frequency transformations to the input/output and compute the L1/L2 distances between them.
(2) The discriminative loss requires a discriminator. As far as I can tell, this hasn't been ported/implemented yet and we'll need to do it if we wanted to compute the loss as stated in the paper (msstftd.py from facebookresearch). We'll need to hook up the discriminator in the training code somewhere (is there any pretrained discriminator here?). Also, it's unclear to me whether we can train the discriminator and the model/generator at the same time (I'm assuming not, and we'll need to train one at a time).
(3) The VQ commitment loss is from the quantizer. It looks like it's summing up the losses across all the residual steps. Are we supposed to train the quantizer at the same time as the encoder/decoders? Or should we train them at different times?
In addition to the general loss function, the paper introduced a balancer (balancer.py) that weighs the reconstruction, discriminative, and commitment losses differently. We would also need to import the balancer code if we want this special balancer.<|||||>Makes sense to me! I think you can focus simply on returning the loss for the modules. The order of training is not that important (when implementing the module wise loss) since you don't need to train (but compare output losses) until you have eveything!
For the discriminator, you can leave it in the training file! It should be pretty small and that's usually how we do things 🤗 !
The order of training, on what is frozen when should be in the paper/original codebase, have not looked it up! <|||||>I'll attempt to code up (3) VQ commitment loss first then. I'll reach out if I get stuck or run into any issues. Thanks!<|||||>I added an initial draft here: https://github.com/huggingface/transformers/commit/4f697be0b62c4f3b0401ccbd00d1d46aac81906d
Can you take a look and let me know what you think? Thanks<|||||>FYI I will be traveling in July, so won't be as available that month. <|||||>Sure, would you mind opening a proper PR? Would be easier to test locally and visualize and follow changes! |
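For reference, a minimal, illustrative sketch of the VQ commitment loss discussed above, written as a generic VQ-VAE-style term; the function and argument names are hypothetical and not the actual EnCodec API:

```python
import torch
import torch.nn.functional as F

def vq_commitment_loss(encoder_outputs, quantized_outputs, commitment_weight=0.25):
    # Sum the commitment term over the residual quantization steps, pulling the
    # encoder outputs toward the (detached) codebook vectors.
    loss = torch.zeros((), dtype=encoder_outputs[0].dtype)
    for z_e, z_q in zip(encoder_outputs, quantized_outputs):
        loss = loss + F.mse_loss(z_e, z_q.detach())
    return commitment_weight * loss
```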
transformers | 24,294 | closed | Error during evaluation using deepspeed zero stage 2 | ### System Info
transformers v4.30.0
python 3.8
Training using `deepspeed zero stage 2` hits an error in the evaluation/prediction loop. Both prediction and evaluation initiate [deepspeed with inference=True](https://github.com/huggingface/transformers/blob/6793f0cfe0006d7cedfb9b6081f55d9d38eae18a/src/transformers/trainer.py#L3045) and hence now can't run inference for anything other than stage 3 (inference is not supported for ZeRO 1/2).
So my question is how to run deepspeed zero 2? My code is [here](https://github.com/explodinggradients/Funtuner/blob/main/funtuner/trainer.py)
Error stack
```
Traceback (most recent call last):
File "funtuner/trainer.py", line 98, in train
trainer.train()
File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 2011, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 2312, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 3043, in evaluate
output = eval_loop(
File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 3769, in prediction_loop
_, _ = deepspeed_init(self, num_training_steps=0, inference=True)
File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/deepspeed.py", line 351, in deepspeed_init
raise ValueError("ZeRO inference only makes sense with ZeRO Stage 3 - please adjust your config")
ValueError: ZeRO inference only makes sense with ZeRO Stage 3 - please adjust your config
```
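For context on the question above, a minimal sketch of how a ZeRO stage-2 config is typically wired into the `Trainer` (file name and values are illustrative, not taken from the linked repo):

```python
import json
from transformers import TrainingArguments

ds_config = {
    "zero_optimization": {"stage": 2, "offload_optimizer": {"device": "cpu"}},
    "fp16": {"enabled": "auto"},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}
with open("ds_config_zero2.json", "w") as f:
    json.dump(ds_config, f)

args = TrainingArguments(
    output_dir="out",
    deepspeed="ds_config_zero2.json",  # picked up by the Trainer's DeepSpeed integration
    do_train=True,
    do_eval=True,
)
```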
### Who can help?
@pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
My code is [here](https://github.com/explodinggradients/Funtuner/blob/main/funtuner/trainer.py)
Run `python3 funtuner/trainer.py`
### Expected behavior
Run evaluation loop without any error using deepspeed stage 1 and 2. | 06-15-2023 09:27:52 | 06-15-2023 09:27:52 | Hello, could you try the latest release and let us know if that resolves the issues?<|||||>Getting `ModuleNotFoundError: No module named 'funtuner'` when trying to run `python3 funtuner/trainer.py`<|||||>Hi @pacman100 , can you add the PYTHONPATH and try again?
` export PYTHONPATH="${PYTHONPATH}:/your-path/Funtuner" `
Also checkout the `dev-train` branch. The issue remains the same with the latest version. I tried that. <|||||>Also, on how many GPUs are you running this?
<|||||>V 100 16GB - 1. <|||||>with one GPU, there won't be any sharing of the optim states and gradients, therefore it will be same as DDP. So a bit confused there
<|||||>Also, getting various issues when running with 2 GPUs:
main-branch
```
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
```
dev-train branch
```
Traceback (most recent call last):
File "/home/sourab/Funtuner/funtuner/trainer.py", line 28, in train
os.mkdir(cfg.log_dir)
FileNotFoundError: [Errno 2] No such file or directory: '/scratch/c.scmse/Funtuner-logs'
```
<|||||>The main branch is not updated, please stick to dev-train for now. For fixing this error, please change the `log_dir` to your folder [here](https://github.com/explodinggradients/Funtuner/blob/c4e66209d5ee276a7eb8caf582435f1eaafbf18f/funtuner/config/config.yaml#L4) also you might want to set `log_wandb=False`
I have run this branch on single and multi GPU settings. Although now I use only single GPU for redpajama-3B model. <|||||>> with one GPU, there won't be any sharing of the optim states and gradients, therefore it will be same as DDP. So a bit confused there
I think in single GPU + DeepSpeed ZeRO-2 I can benefit from ZeRO offloading and smart GPU memory management, allowing me to fit larger models/batch sizes. <|||||>The above PR should resolve the DS issue.<|||||>I'll try it out once merged. |
transformers | 24,293 | closed | [fix] bug in BatchEncoding.__getitem__ | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-15-2023 09:14:03 | 06-15-2023 09:14:03 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,292 | closed | [Docs] Improve docs for MMS loading of other languages | # What does this PR do?
Clarifies #24223 and improves docs of MMS.
| 06-15-2023 08:47:42 | 06-15-2023 08:47:42 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,291 | closed | Cannot serialize Whisper decoder layer in a keras model | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-4.19.0-24-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@Rocketknight1
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Changes introduced in #23760 and released in transformers [v4.30.0](https://github.com/huggingface/transformers/releases/tag/v4.30.0) are breaking the ability to serialize a keras model that contains a Whisper decoder layer.
Here is a minimal reproducible example:
```python
from transformers import TFWhisperModel
import tensorflow as tf
whisper = TFWhisperModel.from_pretrained("openai/whisper-tiny")
inp = tf.keras.Input((80, 3000))
stack = whisper.get_encoder()(inp)
decoder_input_ids = tf.ones((tf.shape(inp)[0], 1), dtype=tf.int32)* whisper.config.decoder_start_token_id
stack = whisper.get_decoder()(input_ids=decoder_input_ids, encoder_hidden_states=stack.last_hidden_state)
model = tf.keras.Model(inp, stack)
model.summary()
model.save("whisper-tiny-custom")
```
With `transformers>=4.30.0`, this minimal reproducible example will raise the error:
```
OperatorNotAllowedInGraphError: Exception encountered when calling layer 'decoder' (type TFWhisperDecoder).
Using a symbolic `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
Call arguments received by layer 'decoder' (type TFWhisperDecoder):
• self=tf.Tensor(shape=(1, 1), dtype=int32)
• input_ids=None
• attention_mask=None
• position_ids=None
• encoder_hidden_states=tf.Tensor(shape=(None, 1500, 384), dtype=float32)
• head_mask=None
• cross_attn_head_mask=None
• past_key_values=None
• inputs_embeds=None
• use_cache=None
• output_attentions=None
• output_hidden_states=None
• return_dict=None
• training=True
```
### Expected behavior
Keras model serialization should succeed. | 06-15-2023 08:32:03 | 06-15-2023 08:32:03 | Hi @perretv yes, this looks like a regression. Investigating now, hopefully we can make a quick patch!<|||||>Hi @perretv, the fix has now been merged. You can `pip install git+https://github.com/huggingface/transformers.git` to install from `main` and use it immediately. It'll be included in the next 4.31 release of `transformers`, after which you can go back to normal pip installs. |
transformers | 24,290 | closed | [Agents] RuntimeError: Invalid device string: 'hakurei/waifu-diffusion' | ### System Info
Hey!
I have encountered an issue in the `agents` extra where calling `load_tool()` with `model_repo_id=<sd checkpoint model id>` causes a `RuntimeError` to occur. It would appear that the repo id is being used as a device_id when using the `text-to-image` tool:
```
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
Traceback (most recent call last):
File "/home/simtoon/transformers/main.py", line 18, in <module>
img = imggen("cute anime cat")
File "/home/simtoon/.cache/huggingface/modules/transformers_modules/huggingface-tools/text-to-image/8a3d5357ffa541880148f2425c83ba89f7d56172/text_to_image.py", line 45, in __call__
self.setup()
File "/home/simtoon/.cache/huggingface/modules/transformers_modules/huggingface-tools/text-to-image/8a3d5357ffa541880148f2425c83ba89f7d56172/text_to_image.py", line 36, in setup
self.pipeline.to(self.device)
File "/home/simtoon/transformers/venv/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 682, in to
module.to(torch_device, torch_dtype)
File "/home/simtoon/transformers/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1126, in to
device, dtype, non_blocking, convert_to_format = torch._C._nn._parse_to(*args, **kwargs)
RuntimeError: Invalid device string: 'hakurei/waifu-diffusion'
```
```
- `transformers` version: 4.30.2
- Platform: Linux-5.4.0-139-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: N/A
```
### Who can help?
cc @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
- install `transformers`
- install `transformers[agents]`
- call `load_tool` with `model_repo_id` set to a valid sd checkpoint repo id: `imggen = load_tool(task_or_repo_id="huggingface-tools/text-to-image", model_repo_id="hakurei/waifu-diffusion")`
...
`img = imggen("cute anime cat")`
- observe the exception
### Expected behavior
Pull model from hf hub or load from local and use in the txt2img task | 06-15-2023 03:13:20 | 06-15-2023 03:13:20 | Device_string is not looking good.
@LysandreJik might know better than me where this is coming from.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@LysandreJik afaik this still hasn't been fixed<|||||>Hey @simonSlamka sorry for the late response. I think what you're trying to do is out of scope of what we're trying to do, we haven't designed the remote tools to work by specifying specific checkpoints in this way.
For this tool in particular (but this will need to be adapted to remote tools as these can be community contributed), here's how you would go about it:
```py
from transformers import load_tool
imggen = load_tool(task_or_repo_id="huggingface-tools/text-to-image")
imggen.default_checkpoint = "hakurei/waifu-diffusion"
img = imggen("cute anime cat")
```
I think the best way to leverage a given checkpoint here would be to clone the existing remote tool, replace the checkpoint, and update the generation settings so that they work best with the checkpoint you have in mind. So you would have a remote tool of your own, for example `simonSlamka/anime-text-to-image` that could be used to replace the existing image generation tool in the toolbox (or provided as an additional tool with anime as its focus).
Hope that helps!<|||||>Hi,
Thanks a lot for your assistance and guidance. I will do what you suggested.
Have a nice day! |
transformers | 24,289 | open | XLMProphetNet returning different results when using padding | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (False)
### Who can help?
@ArthurZucker @patrickvonplaten
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import XLMProphetNetTokenizer, XLMProphetNetForConditionalGeneration
import torch
tokenizer = XLMProphetNetTokenizer.from_pretrained("microsoft/xprophetnet-large-wiki100-cased-xglue-ntg")
model = XLMProphetNetForConditionalGeneration.from_pretrained("microsoft/xprophetnet-large-wiki100-cased-xglue-ntg").eval()
enc_input = tokenizer("test", return_tensors="pt")
input_ids = enc_input.input_ids
attention_mask = enc_input.attention_mask
dec_input_ids = torch.tensor([[model.config.decoder_start_token_id]], dtype=torch.int64)
dec_attention_mask = torch.tensor([[1]], dtype=torch.int64)
dec_input_ids_pad = torch.tensor([[model.config.decoder_start_token_id, model.config.pad_token_id]], dtype=torch.int64)
dec_attention_mask_pad = torch.tensor([[1, 0]], dtype=torch.int64)
out1 = model(
input_ids=input_ids, attention_mask=attention_mask,
decoder_input_ids=dec_input_ids, decoder_attention_mask=dec_attention_mask
)
out2 = model(
input_ids=input_ids, attention_mask=attention_mask,
decoder_input_ids=dec_input_ids_pad, decoder_attention_mask=dec_attention_mask_pad
)
torch.isclose(out1.logits, out2.logits[:, 0], atol=1e-1).all() # false
```
### Expected behavior
XLMProphetNet is not returning the same output when the decoder input ids are padded. While the logits are quite similar (high cosine similarity), they are not the same which results in different losses and, in some cases, different predictions.
The expected behavior is that the padded and unpadded version produce the same output. | 06-15-2023 00:52:03 | 06-15-2023 00:52:03 | Hey! Thanks for reporting! I think this is probably inherent to the model itself, but will see if theres a bug! Our integration tests don't cover this case, and seems like we don't have our common test for this model! |
transformers | 24,288 | closed | Beam search type | # What does this PR do?
This PR fixes issue #22856, which reports a type mismatch between the returned value and the type hint of the `BeamSearchScorer.process` function. Previously the type hint was set to a Tuple, which was inconsistent with the returned Dict value. I've changed them to be consistent and ran the following test cases.
1. Performed print statements of the process annotations.
- Before the change, the annotated return type was `typing.Tuple[torch.Tensor]`
- After the change, the annotated return type is `typing.Dict[str, torch.Tensor]`
2. Ran PyTest for the test_beam_search.py with all 6 test cases passing.
# Motivation and Context
To ensure high quality code within the HuggingFace repo.
## Who can review?
@gante
| 06-14-2023 22:11:55 | 06-14-2023 22:11:55 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Absolutely :) |
transformers | 24,287 | closed | cannot import name 'TextIteratorStreamer' from 'transformers' | ### System Info
Hi
My code raises an error when I want to import `TextIteratorStreamer`:
`from transformers import StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer`
I have transformers installed (`!pip install --upgrade transformers`) in Databricks.
If it helps, this code runs successfully:
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))"
Appreciate your input and help.
### Who can help?
@ArthurZucker and @younesbelkada
### Expected behavior
Just run without error | 06-14-2023 20:16:01 | 06-14-2023 20:16:01 | |
transformers | 24,286 | closed | Update tokenizer_summary.mdx (grammar) | # What does this PR do?
Update docs with minor grammar fix
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 06-14-2023 19:53:34 | 06-14-2023 19:53:34 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,285 | closed | Missing T5X module | ### System Info
I am using the T5X to Pytorch conversion script located in the transformers library to convert my pre-trained T5X model into a Pytorch model, however, upon running the script I receive the error below.
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py", line 36, in <module>
from t5x import checkpoints
ModuleNotFoundError: No module named 't5x'
```
I have installed the necessary libraries by executing these statements.
```
!pip install transformers
!pip install git+https://github.com/google-research/t5x
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The official guidelines for the script is located in transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py
To execute the script, the command below is run.
```
python3 [path_to_file]/convert_t5x_checkpoint_to_pytorch.py --t5x_checkpoint_path=$HOME/t5_1_1_small --config_file=config.json --pytorch_dump_path=$HOME/t5_1_1_small_pt
```
Where config.json is a config for t5-small (https://huggingface.co/t5-small/blob/main/config.json)
### Expected behavior
The script should convert the checkpoint to a Pytorch model. | 06-14-2023 19:45:47 | 06-14-2023 19:45:47 | Hi @akku779, thanks for raising this issue.
It seems that this is an issue with the installing of the t5x library, rather than one relating to transformers. Running the installation steps I was able to import `t5x` in a python session.
Given the `!` at the start of the pip commands, were these steps being run in a notebook or ipython environment? In which case, it's necessary to restart the environment in order for the updates to take effect. <|||||>@amyeroberts Thanks for getting back to me. I tried re-running my Colab notebook and I am not receiving the same error. Now, there seems to be some sort of dependency mismatch.
```
Traceback (most recent call last):
File "/content/convert_t5x_checkpoint_to_pytorch.py", line 36, in <module>
from t5x import checkpoints
File "/usr/local/lib/python3.10/dist-packages/t5x/__init__.py", line 17, in <module>
import t5x.adafactor
File "/usr/local/lib/python3.10/dist-packages/t5x/adafactor.py", line 64, in <module>
from t5x import utils
File "/usr/local/lib/python3.10/dist-packages/t5x/utils.py", line 43, in <module>
import orbax.checkpoint
File "/usr/local/lib/python3.10/dist-packages/orbax/checkpoint/__init__.py", line 20, in <module>
from orbax.checkpoint import checkpoint_utils
File "/usr/local/lib/python3.10/dist-packages/orbax/checkpoint/checkpoint_utils.py", line 25, in <module>
from orbax.checkpoint import type_handlers
File "/usr/local/lib/python3.10/dist-packages/orbax/checkpoint/type_handlers.py", line 25, in <module>
from jax.experimental.gda_serialization import serialization
ModuleNotFoundError: No module named 'jax.experimental.gda_serialization'
```<|||||>@akku779 From the traceback, this error is coming from the t5x module, and so isn't a transformers issue. It looks like there's a mismatch in your environment between the jax version installed and what the t5x library expects. <|||||>Fixed error by upgrading orbax |
transformers | 24,284 | closed | Split common test from core tests | # What does this PR do?
This PR aims at cleaning the tests files like `test_modeling_common.py` which contain two distinct things: the common tester which is reused by all model tests, but also some core tests of `modeling_utils.py`. This PR split this file (and all similar) into 2: the common tester mixin stays in `test_modeling_common.py` and all other tests go to `test_modeling_utils.py`. | 06-14-2023 19:08:51 | 06-14-2023 19:08:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh It's just the addition of new tests for modeling_utils or tokenization_utils which is really painful since those files are ~4k lines. |
transformers | 24,283 | closed | fp16_full_eval argument/flag in training arguments does not increase runtime or decrease memory footprint | ### System Info
I am running a simple benchmarking test to see the speedup and accuracy changes that we see when we switch to `float16` (full float16, not mixed precision). For this purpose, I am using the example [semantic segmentation example script](https://github.com/huggingface/transformers/blob/main/examples/pytorch/semantic-segmentation/run_semantic_segmentation.py)
This is how I am running code
for float32
```bash
python run_semantic_segmentation.py --model_name_or_path nvidia/mit-b0
--dataset_name segments/sidewalk-semantic --output_dir ./segformer_outputs/
--remove_unused_columns False --do_train --evaluation_strategy steps
--push_to_hub --push_to_hub_model_id segformer-finetuned-sidewalk-10k-steps
--max_steps 10000 --learning_rate 0.00006 --lr_scheduler_type polynomial
--per_device_train_batch_size 8 --per_device_eval_batch_size 8 --logging_strategy steps
--logging_steps 100 --evaluation_strategy epoch --save_strategy epoch --seed 1337
```
for full float16
```bash
python run_semantic_segmentation.py --model_name_or_path nvidia/mit-b0
--dataset_name segments/sidewalk-semantic --output_dir ./segformer_outputs/
--remove_unused_columns False --do_train --evaluation_strategy steps
--push_to_hub --push_to_hub_model_id segformer-finetuned-sidewalk-10k-steps
--max_steps 10000 --learning_rate 0.00006 --lr_scheduler_type polynomial
--per_device_train_batch_size 8 --per_device_eval_batch_size 8 --logging_strategy steps
--logging_steps 100 --evaluation_strategy epoch --save_strategy epoch --seed 1337
--fp16_full_eval
```
For both I am getting, `~6.9 GB` gpu memory and `~6it/s`
I don't think that should be the case. For `fp16_full_eval`, there should be some speedup.
System info:
```
transformers version: 4.31.0.dev0
python: 3.8.16
GPU: RTX4090
```
### Who can help?
@sgugger @amyeroberts
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
for float16 full run
```bash
python run_semantic_segmentation.py --model_name_or_path nvidia/mit-b0
--dataset_name segments/sidewalk-semantic --output_dir ./segformer_outputs/
--remove_unused_columns False --do_train --evaluation_strategy steps
--push_to_hub --push_to_hub_model_id segformer-finetuned-sidewalk-10k-steps
--max_steps 10000 --learning_rate 0.00006 --lr_scheduler_type polynomial
--per_device_train_batch_size 8 --per_device_eval_batch_size 8 --logging_strategy steps
--logging_steps 100 --evaluation_strategy epoch --save_strategy epoch --seed 1337
--fp16_full_eval
```
### Expected behavior
I would have expected some speed up in terms of more iterations/s for `fp16_full_eval` | 06-14-2023 17:05:01 | 06-14-2023 17:05:01 | From the command you are pasting, you are not doing any evaluation (`--do_eval` is not set to True). `--fp16_full_eval` is ignored during training, so your model will still take the same place and the same speed during training. It's just for evaluation.<|||||>@sgugger Thanks for replying. Is there any way to run training in full `float16` rather than mixed-precision `float16`?
Something similar to lightning `Fabric` where you have two separate options of running training, for fp16 use `precision="16"` and for mixed-fp16 use `precision="mixed-16"`<|||||>No there is none in the `Trainer`, since training in float16 does not converge in most cases.<|||||>Thanks |
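A minimal sketch of the configuration the answer above implies (illustrative values; `fp16_full_eval` only changes the evaluation pass, so evaluation has to be enabled for it to matter):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="segformer_outputs",  # illustrative
    do_train=True,
    do_eval=True,                    # without this, --fp16_full_eval is never exercised
    evaluation_strategy="epoch",
    fp16=True,                       # mixed-precision training
    fp16_full_eval=True,             # run the evaluation loop fully in float16
)
```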
transformers | 24,282 | closed | Big TF test cleanup | Now we've done a big overhaul of the TF model internals, a lot of tests can be fixed. Several tests were disabled for being buggy or too slow - these are almost all performant now, so I re-enabled them. Runtime for the re-enabled tests was 15-20 seconds on my local machine.
Also, we had a number of TF test failures in the daily CI. I think this PR should fix all of them, except for two cases:
Firstly, some models have issues with `resize_token_embeddings`. These failures are caused by the transition to `TFSharedEmbedding` that @gante is currently working on, and I didn't want to interfere! The usual cause is that `resize_token_embeddings` replaces the new-style `TFSharedEmbedding` with an old `tf.Variable`.
Secondly, there are a couple of failures in generate tests. I'm also leaving this to @gante because he knows much more about that code than me :sweat_smile: | 06-14-2023 16:58:19 | 06-14-2023 16:58:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>(@Rocketknight1 ping me if the gen tests are not sorted after the latest push)<|||||>I think everything has been addressed now, but I'm not going to merge this one today because there's another PR affecting our tests (#24301) and ideally I'd like to be able to separately view their impact on the CI!<|||||>> I think everything has been addressed now, but I'm not going to merge this one today
Nice 👍 .
I never merge PRs on Friday evening or early afternoon. I don't want to get a ☎️ ⚡ !<|||||>Wait, you merged ...!? (but you said you are not going to merge 🤔 )
transformers | 24,281 | closed | Add MMS CTC Fine-Tuning | # What does this PR do?
This PR adds language adapter fine-tuning for MMS. Still playing around with good hyper-parameters but script is functional.
Getting some very nice results now for:
```bash
export CUDA_VISIBLE_DEVICES="0"
LEARNING_RATE="1e-3"
python run_speech_recognition_ctc.py \
--dataset_name="common_voice" \
--model_name_or_path="facebook/mms-1b-all" \
--dataset_config_name="tr" \
--output_dir="./wav2vec2-common_voice-tr-mms-demo" \
--overwrite_output_dir \
--num_train_epochs="15" \
--per_device_train_batch_size="32" \
--learning_rate="${LEARNING_RATE}" \
--warmup_steps="400" \
--evaluation_strategy="steps" \
--text_column_name="sentence" \
--length_column_name="input_length" \
--save_steps="400" \
--eval_steps="200" \
--layerdrop="0.0" \
--save_total_limit="3" \
--adapter_attn_dim="16" \
--adapter_language="tur" \
--gradient_checkpointing \
--chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \
--fp16 \
--group_by_length \
--do_train --do_eval
```
WER drops to 25% just after 200 steps.
See: https://wandb.ai/patrickvonplaten/huggingface/runs/6f5cx5gg?workspace=user-patrickvonplaten
@sgugger @amyeroberts @sanchit-gandhi it'd be super nice to get a quick review here whether the code changes are generally fine with you. I'll only have to fill out the TODOs in the README with a nice example code and some description.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-14-2023 15:55:47 | 06-14-2023 15:55:47 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> I think this should go in its own example instead of adding some more code to the (already complex) ctc example. It's preferable to have multiple examples focused on one thing than one big multi-purpose example.
Ok for me<|||||>Added a test. Moved the code into a new example file. Added an extensive README. WER for a quick 10min run can be as low as 23% WER! <|||||>Demo training run: https://huggingface.co/patrickvonplaten/wav2vec2-common_voice-tr-mms-demo<|||||>In which release will this be available in?<|||||>You can find the examples scripts here: https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#connectionist-temporal-classification-with-adapters
They assume that you are running from the latest dev version:
https://github.com/huggingface/transformers/blob/f10452271802573fe6e19442631113c4c23a2c70/examples/pytorch/speech-recognition/run_speech_recognition_ctc_adapter.py#L55-L56
Which you can do by following the instructions for installing from source or editable install here: https://huggingface.co/docs/transformers/installation#install-from-source
Although for MMS ASR fine-tuning, you can safely run the script using the latest PyPi release version (4.31.0). |
transformers | 24,280 | closed | Loading fp16 model as fp32 when using .from_retrained() | ### System Info
When loading GPT-J with GPTJForCausalLM.from_pretrained(), the 16-bit model, which should be approximately 12GB, instead has a model size of ~23GB, i.e. the full 32-bit weights.
the code:
```
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
tokenizer = AutoTokenizer.from_pretrained("ThisIsMyUsername69/gpt-j-6B-16bit")
config = AutoConfig.from_pretrained("ThisIsMyUsername69/gpt-j-6B-16bit", torch_dtype=torch.float16)
model = GPTJForCausalLM.from_pretrained("ThisIsMyUsername69/gpt-j-6B-16bit", config=config)
```
I've tried multiple ways of loading in 16 bit (from_config, with or without AutoConfig); regardless, it always seems to use 23GB of VRAM, except with EleutherAI/gpt-j-6B using revision "float16".
The model has memory footprint of 23194MiB. ="ThisIsMyUsername69/gpt-j-6B-16bit" and "nlpcloud/instruct-gpt-j-fp16" both give the larger model size.
I have tried giving the parameter ' revision="float16" ' and it gives the same response.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. run the code above
2. observe model size
### Expected behavior
The model size should be approximately 11GB however it is giving the full model weight (32 bit float) size. | 06-14-2023 14:30:59 | 06-14-2023 14:30:59 | cc @younesbelkada <|||||>Hi @EdanZizo
to load the model with the desired dtype directly from the config I believe you should use `torch_dtype="auto"` in `GPTJForCausalLM.from_pretrained("ThisIsMyUsername69/gpt-j-6B-16bit", config=config)`. But note that the canonical way to load any model in half precision is:
```python
GPTJForCausalLM.from_pretrained("ThisIsMyUsername69/gpt-j-6B-16bit", torch_dtype=torch.float16)
```<|||||>It is working now, thanks for your help. |
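A small companion sketch of the `torch_dtype="auto"` option mentioned above (repo id reused from this thread; the resulting dtype depends on what the checkpoint's config records):

```python
from transformers import AutoModelForCausalLM

# "auto" defers to the torch_dtype stored in the checkpoint's config.json,
# while an explicit torch.float16 forces half precision regardless of the config.
model = AutoModelForCausalLM.from_pretrained("nlpcloud/instruct-gpt-j-fp16", torch_dtype="auto")
print(model.dtype)
```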
transformers | 24,279 | closed | Clean up old Accelerate checks | # What does this PR do?
Since we now enforce at init that the user has the version of Accelerate pinned at minimum, we can remove a lot of boilerplate code checking if things are available or not. | 06-14-2023 14:22:31 | 06-14-2023 14:22:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,278 | closed | Allowing one to pass run_config to hyperparameter tuning (to allow storing checkpoints on s3) | ### Feature request
[in ray 2.5.1 it becomes possible to set the intermediate storage of checkpoints to s3](https://docs.ray.io/en/latest/tune/tutorials/tune-storage.html#configuring-tune-with-cloud-storage-aws-s3-google-cloud-storage).
```
from ray.air.config import RunConfig
from ray.air.integrations.mlflow import MLflowLoggerCallback
run_config = RunConfig(
storage_path="s3://.....",
callbacks=[MLflowLoggerCallback],
)
```
However in the huggingface integration it is not possible to pass this kwarg to the tuner, like so:
```
best_run = trainer.hyperparameter_search(
direction="maximize",
backend="ray",
run_config=run_config,
)
```
### Motivation
to tune a transformer without clogging local storage
### Your contribution
i could submit a proposal | 06-14-2023 14:08:01 | 06-14-2023 14:08:01 | Hi @hugocool,
Thanks for raising this issue. The integrations are maintained by their authors and not us. You can definitely open a PR, just make sure to ping them to verify the change!
<|||||>`transformers.integrations.run_hp_search_ray` calls `ray.tune.run` which doesn't accept `run_config` as that is part of the newer `Tuner` API. According to https://discuss.ray.io/t/tune-run-vs-tuner-fit/7041/3 `Tuner` is supposed to replace `run` in the long term, although for now it's still beta. `transformers` should probably migrate to `Tuner` at some point, I don't know if/when that'll be a good idea.
In the meantime, `ray.tune.run` directly accepts `callbacks` and `storage_path` arguments, so I think passing them to `hyperparameter_search` without wrapping in a `RunConfig` should just work?<|||||>That’s correct, somewhat hidden in ray.tune one finds that **kwargs are passed, unfortunately it is not documented, nor what settings to use in order to prevent local accumulation of checkpoints to the point your disk fills (which is governed by the `check_point_freq`).
So maybe an update to the documentation is in order?
I could provide an example of how to run HF tuning on AWS batch for example?
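Based on the comment above that extra kwargs are forwarded to `ray.tune.run`, a sketch of what that could look like (untested; `storage_path` support depends on the installed Ray version, and the bucket name is illustrative):

```python
from ray.air.integrations.mlflow import MLflowLoggerCallback

best_run = trainer.hyperparameter_search(
    direction="maximize",
    backend="ray",
    storage_path="s3://my-bucket/tune-results",
    callbacks=[MLflowLoggerCallback()],
)
```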
<|||||>> somewhat hidden in ray.tune one finds that **kwargs are passed
Do you mean hidden in `transformers.hyperparameter_search`/`run_hp_search_ray`?
> I could provide an example of how to run HF tuning on AWS batch for example?
I don't know who you're offering this to. I'm not using this myself, I'm looking for ways to contribute to this repo in general.<|||||>> Do you mean hidden in transformers.hyperparameter_search/run_hp_search_ray?
Yes, there and within `ray/tune/tune.py`; `run`
> > I could provide an example of how to run HF tuning on AWS batch for example?
> > I don't know who you're offering this to. I'm not using this myself, I'm looking for ways to contribute to this repo in general.
I’m sorry for being a little vague, basically the documentation is lacking, I had to dig through GitHub issues, forum posts and source code to figure out how to do this.
The API that hugging face exposes is deceptively simple, it seems on the surface like it will just work while in reality this is not the case.
This is not helped by the Enum-based backend choice, where you provide backend="ray" and type checking then doesn't help you.
So, more documentation could definitely help people I think.
Maybe I could help provide more documentation? Based on the examples I worked out?
Idk, id hope that would help others!
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,277 | closed | Update check of core deps | # What does this PR do?
This PR updates the check of core dependencies to:
- include all of them (huggingface_hub and safetensors were not tested)
- add Accelerate since we have a lot of issues with versions mismatch, with the work done on the Trainer
cc @ydshieh one of the lines removed here concerns the Python 3.6 but you should include the equivalent for your PR to drop Python 3.7 | 06-14-2023 13:33:39 | 06-14-2023 13:33:39 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24277). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,276 | open | [TokenizerSlow] `replace_additional_special_tokens` is not doing much | Just flagging this as the `add_special_tokens` method got pretty complicated, adding a kwargs, `replace_additional_special_tokens`, that supposedly can prevent replacing the `self._additional_special_tokens` attribute.
For any tokenizer, this will remove it from the list, but will not update the internal `trie` and thus has no effect at all:
```python
>>> from transformers import XLMRobertaTokenizer
>>> tokenizer_a = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')
>>> tokenizer_a.add_special_tokens({"additional_special_tokens":["<//s>"]})
>>> tokenizer_a.additional_special_tokens
['<//s>']
>>> print(tokenizer_a.tokenize("This is a <//s>"))
['▁This', '▁is', '▁a', '<//s>']
>>> tokenizer_a.add_special_tokens({"additional_special_tokens":["<///s>"]}, replace_additional_special_tokens= True)
>>> print(tokenizer_a.tokenize("This is a <//s>"))
['▁This', '▁is', '▁a', '<//s>']
```
This will be addressed in #23909 | 06-14-2023 13:16:39 | 06-14-2023 13:16:39 | cc @ydshieh since you added the feature<|||||>~~I don't *fully* understand what the code snippet above try to demonstrate.~~
But the origin of `self._additional_special_tokens` is from this issue #20418, where `added_tokens_encoder` will include all the added tokens, but `additional_special_tokens` is being replaced, which is really confusing behavior.
If you look the description in #20418, your code snippet does its job (although yes confusing).
The `replace_additional_special_tokens` with its default value `True` is just to make the behavior not **too** surprising, but keep the backward compatibility.
<|||||>> It was confusingly for me that the added tokens encoder is not updated.
yeah I know, but that's what it has been for years. (and I agree that the name of this introduced argument itself might be confusing too.)
> That’s because maybe we should have a separate function just to say that’s we don’t want this token to be special anymore
If you have good idea to address the issue #20418 while reducing the (naming) confusion added in #20424, go ahead :-)
(sorry, I accidentally modified your message 😭 ) |
transformers | 24,275 | closed | Can we convert dynamic DNN model to TorchScript? | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.10
- Python version: 3.8.13
- Huggingface_hub version: 0.13.4
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
@sgugger
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi all.
I'm trying to convert SwitchTransformer model to TorchScript.
(SwitchTransformer model is MoE DNN based on Google T5 model.)
When converting both T5 and SwitchTransformer, there's no error for T5, but I get the following error for SwitchTransformer.
```
/root/HuggingFace/.HF/lib/python3.8/site-packages/transformers/modeling_utils.py:776: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if causal_mask.shape[1] < attention_mask.shape[1]:
Traceback (most recent call last):
File "example.py", line 423, in <module>
traced_model = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids))
File "/root/HuggingFace/.HF/lib/python3.8/site-packages/torch/jit/_trace.py", line 794, in trace
return trace_module(
File "/root/HuggingFace/.HF/lib/python3.8/site-packages/torch/jit/_trace.py", line 1056, in trace_module
module._c._create_method_from_trace(
RuntimeError: Only tensors, lists, tuples of tensors, or dictionary of tensors can be output from traced functions
```
I think it is because of the dynamic characteristics of SwitchTransformer.
This is the code for T5.
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch
tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small', torchscript = True)
input_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt').input_ids
attention_mask = input_ids.ne(model.config.pad_token_id).long()
decoder_input_ids = tokenizer('<pad> <extra_id_0> cute dog <extra_id_1> the <extra_id_2>', return_tensors='pt').input_ids
traced_model = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids))
torch.jit.save(traced_model, "traced_t5.pt")
```
And this is the code for SwitchTransformer.
```python
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
from transformers import AutoTokenizer, SwitchTransformersConfig
import torch
# Tokenizer
tokenizer = AutoTokenizer.from_pretrained(
"google/switch-base-8", resume_download=True)
model = SwitchTransformersForConditionalGeneration.from_pretrained(
"google/switch-base-8",
resume_download=True, torch_dtype=torch.bfloat16,
torchscript=True,
)
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
output_text = "<pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
decoder_input_ids = tokenizer(output_text, return_tensors="pt", padding=True).input_ids
attention_mask = input_ids.ne(model.config.pad_token_id).long()
# model.eval()
traced_model = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids))
```
### Expected behavior
TorchScript version of SwitchTransformer. | 06-14-2023 13:08:15 | 06-14-2023 13:08:15 | Hey! Thanks for reporting, I think it might also be because our Switch returns None values sometimes, checking
! <|||||>Thanks a lot for providing the full reproduction script 🤗
Actually, I don't think that MoE models can be torch-scripted, as the path taken by the inputs will be different for each token (because of the routing mechanism).
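To make that concrete, here is a minimal toy sketch of top-1 routing (this is **not** the actual SwitchTransformers implementation): the experts a token visits depend on the token's values, so a trace would freeze one particular routing decision and not generalize to other inputs.
```python
import torch
import torch.nn as nn

class ToyTop1MoE(nn.Module):
    def __init__(self, hidden_size=8, num_experts=4):
        super().__init__()
        self.router = nn.Linear(hidden_size, num_experts)
        self.experts = nn.ModuleList(nn.Linear(hidden_size, hidden_size) for _ in range(num_experts))

    def forward(self, hidden_states):
        # top-1 routing: the chosen expert index is data-dependent
        expert_index = self.router(hidden_states).argmax(dim=-1)
        output = torch.zeros_like(hidden_states)
        for i, expert in enumerate(self.experts):
            token_mask = expert_index == i  # differs for every input -> data-dependent control flow
            output[token_mask] = expert(hidden_states[token_mask])
        return output
```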
However, there was a problem of returning `None` values, and not returning the router probs if labels were `None`. Fixing in #24300<|||||>Thanks, so, do you mean that the MoE model cannot be torchscripted as it has a dynamic workflow depending on its input, not because returning `None` value?
And for the `None` value issue, which one of these is the newest correct version? They are slightly different.
https://github.com/huggingface/transformers/pull/24300/files/3b899c180d4a38a06d34dcb1687626594f0497a0
https://github.com/huggingface/transformers/commit/ba3fb4b8d72b9202423cda01896349a883480d2e#diff-897fe3777ef1c9d71d6268fac217b0e163f2e20a3a5e4fabfe5a3675bc9202c7<|||||>The return value was indeed an issue, which prevent starting the tracing. But now that `None` are not returned anymore, the model still cannot be traced because of the dynamic workflow yes.
The correct commit is the one that was merged to main! <|||||>Thanks a lot!
But then can I just use scripting for torchscript?
As far as I know, there are two optimization schemes for TorchScript: tracing and scripting.
So can I just adopt only scripting selectively?<|||||>Yep! I think scripting is what you should be using for dynamic workflows! <|||||>Well, I found something interesting.
As shown below, scripting for the T5 model also does not work.
But as shown above, tracing worked.
How does this happen?
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch
tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small', torchscript = True)
input_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt').input_ids
attention_mask = input_ids.ne(model.config.pad_token_id).long()
decoder_input_ids = tokenizer('<pad> <extra_id_0> cute dog <extra_id_1> the <extra_id_2>', return_tensors='pt').input_ids
# traced_model = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids))
scripted_model = torch.jit.script(model)
# torch.jit.save(traced_model, "traced_t5.pt")
``` |
transformers | 24,274 | closed | Fix resuming PeftModel checkpoints in Trainer | # What does this PR do?
Fix an error that occurred when the Trainer tries to resume a PeftModel from a training checkpoint. That was caused because `PeftModel.save_pretrained` saves only adapter-related data, while `_load_from_checkpoint` expects a saved torch model. This PR fixes the issue and allows the adapter checkpoint to be loaded.
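A rough sketch of the idea (illustrative only; the real change lives inside `Trainer._load_from_checkpoint` and exact names may differ):
```python
import os
from peft import PeftModel

ADAPTER_WEIGHTS_NAME = "adapter_model.bin"  # what PeftModel.save_pretrained writes

def _load_from_checkpoint_sketch(model, resume_from_checkpoint):
    adapter_path = os.path.join(resume_from_checkpoint, ADAPTER_WEIGHTS_NAME)
    if isinstance(model, PeftModel) and os.path.exists(adapter_path):
        # only the adapter was saved, so reload just the adapter weights and keep them trainable
        model.load_adapter(resume_from_checkpoint, model.active_adapter, is_trainable=True)
    else:
        # fall back to the usual full-model loading path (pytorch_model.bin)
        ...
```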
Resolves: #24252
Fixes #24252
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. (#24252)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@younesbelkada | 06-14-2023 13:07:31 | 06-14-2023 13:07:31 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Although this continues training but it doesnt retain the old stuff for me. Can someone look into this?
<|||||>This doesn't seem like older resume from checkpoint that has it for pytorch models. Inside trainer.train we need to pass resume from checkpoint parameters as our last checkpoint path. while passing the path, it shows as can't find a valid checkpoint. Could someone please post some code snippet on how to use resume from checkpoint for PEFT models?<|||||>@pacman100 Requesting to review and merge the changes, thanks!<|||||>Hi @techthiyanes
> This doesn't seem like older resume from checkpoint that has it for pytorch models. Inside trainer.train we need to pass resume from checkpoint parameters as our last checkpoint path. while passing the path, it shows as can't find a valid checkpoint. Could someone please post some code snippet on how to use resume from checkpoint for PEFT models?
As shared in the snippet above, to make `resume_from_checkpoint` work as expected, it assumes that you have previously trained your model using trainer that saves artifacts under `{output_dir}/checkpoint-{i}`, I have "faked" that in the example by manually saving a model in a folder called `{output_dir}/checkpoint-1`. Therefore you need to make sure the model weights lives under that folder.<|||||>> This doesn't seem like older resume from checkpoint that has it for pytorch models. Inside trainer.train we need to pass resume from checkpoint parameters as our last checkpoint path. while passing the path, it shows as can't find a valid checkpoint. Could someone please post some code snippet on how to use resume from checkpoint for PEFT models?
Hi @younesbelkada can you look at my issue with the code and please address it?
<|||||>Yes @adityaaryan77 , sure, please have a look at my comment on the PEFT issue and discuss there<|||||>Hi @younesbelkada
> ```python
> trainer.train(resume_from_checkpoint=True)
> ```
>
>
>
>
>
>
>
>
>
>
>
Thank you for your response.
Please look at the code snippet below:
```python
# -*- coding: utf-8 -*-
"""Untitled345.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1SgzMXUUDK1wDH0M0yQPfWmeNAKyy7EFs
"""
! pip install datasets transformers peft evaluate
!git clone https://github.com/llohann-speranca/transformers.git -b fix-resume-checkpoint-for-peftmodel
!cp -r /content/transformers /usr/local/lib/python3.10/dist-packages/transformers

import transformers
import numpy as np

GLUE_TASKS = ["cola", "mnli", "mnli-mm", "mrpc", "qnli", "qqp", "rte", "sst2", "stsb", "wnli"]
task = "cola"
model_checkpoint = "bert-large-uncased"
batch_size = 16

from datasets import load_dataset, load_metric

actual_task = "mnli" if task == "mnli-mm" else task
dataset = load_dataset("glue", actual_task)
metric = load_metric('glue', actual_task)

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)

task_to_keys = {
    "cola": ("sentence", None),
    "mnli": ("premise", "hypothesis"),
    "mnli-mm": ("premise", "hypothesis"),
    "mrpc": ("sentence1", "sentence2"),
    "qnli": ("question", "sentence"),
    "qqp": ("question1", "question2"),
    "rte": ("sentence1", "sentence2"),
    "sst2": ("sentence", None),
    "stsb": ("sentence1", "sentence2"),
    "wnli": ("sentence1", "sentence2"),
}
sentence1_key, sentence2_key = task_to_keys[task]

if sentence2_key is None:
    print(f"Sentence: {dataset['train'][0][sentence1_key]}")
else:
    print(f"Sentence 1: {dataset['train'][0][sentence1_key]}")
    print(f"Sentence 2: {dataset['train'][0][sentence2_key]}")

def preprocess_function(examples):
    if sentence2_key is None:
        return tokenizer(examples[sentence1_key], truncation=True)
    return tokenizer(examples[sentence1_key], examples[sentence2_key], truncation=True)

encoded_dataset = dataset.map(preprocess_function, batched=True)

from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer
from peft import (
    get_peft_config,
    get_peft_model,
    get_peft_model_state_dict,
    set_peft_model_state_dict,
    LoraConfig,
    PeftType,
    PrefixTuningConfig,
    PromptEncoderConfig,
)

peft_type = PeftType.LORA
device = "cuda"
peft_config = LoraConfig(task_type="SEQ_CLS", inference_mode=False, r=8, lora_alpha=16, lora_dropout=0.1)
lr = 3e-4

num_labels = 3 if task.startswith("mnli") else 1 if task=="stsb" else 2
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
model

metric_name = "pearson" if task == "stsb" else "matthews_correlation" if task == "cola" else "accuracy"
model_name = model_checkpoint.split("/")[-1]

args = TrainingArguments(
    f"{model_name}-finetuned1-{task}",
    evaluation_strategy = "epoch",
    save_strategy = "epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=2,
    weight_decay=0.01,
    # load_best_model_at_end=True,
    metric_for_best_model=metric_name,
    # push_to_hub=True,
)

from transformers import Seq2SeqTrainer, TrainerCallback, TrainingArguments, TrainerState, TrainerControl
from transformers.trainer_utils import PREFIX_CHECKPOINT_DIR
import os

class SavePeftModelCallback(TrainerCallback):
    def on_save(
        self,
        args: TrainingArguments,
        state: TrainerState,
        control: TrainerControl,
        **kwargs,
    ):
        checkpoint_folder = os.path.join(args.output_dir, f"{PREFIX_CHECKPOINT_DIR}-{state.global_step}")
        peft_model_path = os.path.join(checkpoint_folder, "adapter_model")
        kwargs["model"].save_pretrained(peft_model_path)
        pytorch_model_path = os.path.join(checkpoint_folder, "pytorch_model.bin")
        if os.path.exists(pytorch_model_path):
            os.remove(pytorch_model_path)
        return control

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    if task != "stsb":
        predictions = np.argmax(predictions, axis=1)
    else:
        predictions = predictions[:, 0]
    return metric.compute(predictions=predictions, references=labels)

validation_key = "validation_mismatched" if task == "mnli-mm" else "validation_matched" if task == "mnli" else "validation"
trainer = Trainer(
    model,
    args,
    train_dataset=encoded_dataset["train"],
    eval_dataset=encoded_dataset[validation_key],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
    callbacks=[SavePeftModelCallback],
)
trainer.train()

trainer.save_model("/content/bert-large-uncased-finetuned1-cola/checkpoint-1")
trainer.train(resume_from_checkpoint='/content/bert-large-uncased-finetuned1-cola/checkpoint-1070/adapter_model')
```
Inside the resume-from-checkpoint call, I have tried the below options:
1) resume_from_checkpoint = True
2) resume_from_checkpoint = (Last checkpoint path)
3) resume_from_checkpoint = (trainer.saved model path)
Everywhere I'm getting the same message of Can't find a valid checkpoint at <Model saved path>.
At the same time, I'm able to continue my resume from checkpoint in native pytorch code.
<|||||>> Hi @llohann-speranca Again thanks for your great work on this, I think this seems a rather important fix that might unlock a lot of users, if that's ok for you, I can quickly take over the PR and address the last comment so that we can merge the PR. What do you think ?
Hi @younesbelkada. Sure! I have been very busy and have still to learn how to deal with PRs. Sorry about that.<|||||>@llohann-speranca thanks!
@techthiyanes it seems you are using the API the wrong way. `resume_from_checkpoint` will try to retrieve the latest checkpoint from the output directory of the trainer. Therefore make sure you have correct `checkpoints-{i}` folders inside `f"{model_name}-finetuned1-{task}"` in your case and use `resume_from_checkpoint=True`<|||||>> @llohann-speranca thanks! @techthiyanes it seems you are using the API the wrong way. `resume_from_checkpoint` will try to retrieve the latest checkpoint from the output directory of the trainer. Therefore make sure you have correct `checkpoints-{i}` folders inside `f"{model_name}-finetuned1-{task}"` in your case and use `resume_from_checkpoint=True`
Thanks a lot on your response.
By default while passing resume from checkpoint then API automatically consumes the recent checkpoint. But this is something not working as expected for PEFT models than torch models. As mentioned, I have pointed out the correct checkpoint and the same folder resides inside alone.
If you don't mind, could you please try executing the any of huggingface example code inserting PEFT with the trainer & resume from checkpoint? Then you might be able to replicate. <|||||>@techthiyanes I think it works as expected with this PR, as explained in https://github.com/huggingface/transformers/pull/24274#pullrequestreview-1479614483 I have tried the attached snippet that was not working before the PR as mentioned and this PR properly fixes it by loading the checkpoint. If you want you can try to replicate using a smaller example (for example imdb as attached) and let me know if you still face an issue by opening a new ticket<|||||>> @techthiyanes I think it works as expected with this PR, as explained in [#24274 (review)](https://github.com/huggingface/transformers/pull/24274#pullrequestreview-1479614483) I have tried the attached snippet that was not working before the PR as mentioned and this PR properly fixes it by loading the checkpoint. If you want you can try to replicate using a smaller example (for example imdb as attached) and let me know if you still face an issue by opening a new ticket
Sure..Thanks a lot.. Let me try above snippet for classification models then let you know.<|||||>> @techthiyanes I think it works as expected with this PR, as explained in [#24274 (review)](https://github.com/huggingface/transformers/pull/24274#pullrequestreview-1479614483) I have tried the attached snippet that was not working before the PR as mentioned and this PR properly fixes it by loading the checkpoint. If you want you can try to replicate using a smaller example (for example imdb as attached) and let me know if you still face an issue by opening a new ticket
Still able to replicate the issue. Raised a separate issue on the same.
https://github.com/huggingface/transformers/issues/24354<|||||>Hi, nice job! Does this new feature available if I `pip install` the latest peft/transformers packages? Or should I install from source?<|||||>Thank you so much @llohann-speranca and @younesbelkada for adding this 🤗! @beyondguo, please install from source as this isn't yet part of the release.<|||||>Thanks for iterating! Will we perform inference in the same manner? Specifically, `peft` requires us to load `PeftConfig` via `adapter_config.json`. I saw that this is not saved with `trainer.save_model()`. Will we need to add `model.save_pretrained()` to use for inference? |
transformers | 24,273 | closed | Update to transformers==4.30 | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-14-2023 12:15:00 | 06-14-2023 12:15:00 | Hi @dbogunowicz, thanks for opening a PR.
Could you add a description detailing what issue the PR address or feature it adds? |
transformers | 24,272 | open | Finetuning Whisper with prompts | ### Feature request
Training code implementation for finetuning Whisper using prompts.
Hi All,
I'm trying to finetune Whisper by resuming its pre-training task and adding initial prompts as part of the model's forward pass. I saw this [amazing tutorial](https://huggingface.co/blog/fine-tune-whisper); however, it does not contain a section about using prompts as part of the fine-tuning dataset.
### Motivation
We have observed that Whisper does not act as expected when transcribing with prompts: sometimes the output is blank text, and on other occasions the output contains repetitions. We want to solve such behaviors by fine-tuning Whisper with prompts.
### Your contribution
Open for ideas. | 06-14-2023 10:36:04 | 06-14-2023 10:36:04 | Hi, thanks for raising an issue!
Questions about custom training and model performance are best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.
If you believe the behaviour is due to a bug in the model, then please share all the necessary information so that we can reproduce the issue on our side: running environment; minimal code snippet. And full details about the observed behaviour e.g. example outputs and the expected behaviour.<|||||>Hi @amyeroberts,
Thank you for your fast response. I have already opened a thread in [the forum](https://discuss.huggingface.co/t/finetuning-whisper-with-prompts/43053). I agree that this is not a direct bug, but also the current behavior of Whisper does not make any sense (blank transcribes + repetitions).
How should I proceed from here?
Thanks!<|||||>@AvivSham You should wait to see if anyone replies to your post in the forum. I'd also suggest checking out the discord, as it's active and there's lots of people sharing ideas and helping one another with projects. <|||||>How can I enter the discord server? Can you please share URL / QRcode?
I tried the [following link](https://discord.com/invite/hugging-face-879548962464493619) but it seems to be invalid.
BTW this issue can be marked as a feature request since (as for now) I did not see a relevant code for fine-tuning Whisper with prompts. <|||||>Hi @AvivSham - the discord link you shared ([here](https://discord.com/invite/hugging-face-879548962464493619)), is the same one I would use, and works for me. What do you mean by it being 'invalid'?
<|||||>@amyeroberts Please see the attached image.
<img width="514" alt="image" src="https://github.com/huggingface/transformers/assets/43371254/730ae753-a4c3-4efe-a1d2-7b0010855ce0">
<|||||>@AvivSham Oh no :/ I've tried making a new account with the previous link and it worked, so I'm not sure what's going on unfortunately. I'll see on our end if there's any known issues / resolutions.
In the meantime, do you already have a discord account or are you able to make one independent of this server invite?
<|||||>@amyeroberts
Hi Amye,
I re-opened this issue since I did not get any support over discord and HF forum. I think that this issue is in high priority for DL practitioners! can you please help with it?<|||||>Hi @AvivSham,
So that we can figure out whether this is an issue on our side, could you confirm that you have an active discord account or are able to create one (independent of the HF invite link)? <|||||>Hi @amyeroberts
I feel like you are totally ignoring my questions :/
See my latest message, please.<|||||>Hi @AvivSham,
I'm certainly not ignoring the questions. Please understand that we're all very busy and trying to address as many issues as possible. As the previous thread was discussing technical difficulties in joining the discord server, I'd understood that this was the ongoing issue; my apologies for misunderstanding.
With regards to training whisper with prompts, then same case applies as in my first comment. Unless there's a specific behaviour which you believe to be a bug of the model, this is a question for the forums / discord and not github issues. Not having responses isn't justification for posting in github issues as it just isn't a scalable solution. <|||||>Sorry for reviving this thread. I was going to create my own issue (but saw this one already existed). I do actually think this is a legitimate feature request based off of discussions in a pull request that is related to this issue. The original post is however not worded in the best manner to explain what is requested and to demonstrate the general benefit.
The relevant PR (https://github.com/huggingface/transformers/pull/22496) added prompt support for Whisper inference. In the PR a user asked whether similar support could be added for finetuning. @hollance and @sanchit-gandhi replied with ideas of how prompting support during training could be implemented and a suggestion to start a new issue (https://github.com/huggingface/transformers/pull/22496#issuecomment-1557501336, https://github.com/huggingface/transformers/pull/22496#issuecomment-1556934882) for the feature.
My alternative wording of this feature request:
## Feature request
Huggingface recently added support for prompting Whisper with `model.generate()` (see https://github.com/huggingface/transformers/issues/22395, https://github.com/huggingface/transformers/pull/22496). In the PR, there were discussions (https://github.com/huggingface/transformers/pull/22496#issuecomment-1557501336) of adding similar support for including parts of the previous (text) context when training and finetuning the model. It was suggested a new issue be started for the feature request, though no one ended up creating the issue.
The Whisper paper seems to suggest the general pretraining process was:
* Cut audio file in to 30s chunks.
* Pair the audio segments with the subset of transcripts that fall within that time.
* If the "final transcript segment" is only partially included within the 30s audio chunk, the model is trained to only predict the start time token for the final segment. (I'm not sure if this implies that the final part of the transcript is passed as the previous context in the decoder for the next training example. I find the wording in the paper vague here.)
Relevant parts of the paper:
> Since our decoder is an audio-conditional language model, we also train it to condition on the history of text of the transcript in the hope that it will learn to use longer-range text context to resolve ambiguous audio. Specifically, with some probability we add the transcript text preceding the current audio segment to the decoder’s context. [...] When a final transcript segment is only partially included in the current 30 second audio chunk, we predict only its start time token for the segment when in timestamp mode, to indicate that the subsequent decoding should be performed on an audio window aligned with that time, otherwise we truncate the audio to not include the segment.
Support for prompting in training/finetuning has also been requested and discussed on the HF forums:
https://discuss.huggingface.co/t/adding-prompt-context-to-whisper-with-huggingface-transformers/31070
https://discuss.huggingface.co/t/finetuning-whisper-with-prompts/43053
I believe being able to include previous context in finetuning would be a useful feature. It would also enable users to finetune the model in a manner that is consistent with how it was pretrained (i.e. how the final segment is handled when it is only partially included in the audio). This is something that may have an effect on the robustness of finetuned models when it comes to long form transcription and timestamps.
The reason OpenAI preprocessed data in this manner during finetuning is likely because it would best mimic the kind of data it would see during inference (i.e. audio being chunked where it regularly cuts off in the middle of sentences and/or words). <|||||>@Lauler OK, I see. Thanks for taking the time to write up such a clear explanation and to link to all the relevant issues, PR and discussions.
As this is a feature request I'll re-open and tag it as such :)
cc @sanchit-gandhi <|||||>Hey @AvivSham and @Lauler, really cool to see such excitement around developing Whisper fine-tuning further! Thanks both for the motivations for the feature request.
In terms of what we have to do to make the fine-tuning script work with prompted fine-tuning, it's super simple. All we have to do is update the `prepare_dataset` function to encode the prompts, the target text, and then combine them to get the labels:
```python
def prepare_dataset(batch):
# load and resample audio data from 48 to 16kHz
audio = batch["audio"]
# compute log-Mel input features from input audio array
batch["input_features"] = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
# encode prompts to prompt ids - we assume that the dataset has a column `"prompt"` that contains the prompt for each example
prompt_ids = tokenizer.get_prompt_ids(batch["prompt"])
# encode target text to token ids
token_ids = tokenizer(batch["sentence"]).input_ids
# combine them to get our labels
batch["labels"] = prompt_ids + token_ids
return batch
```
You can try this with a toy example:
```python
from transformers import WhisperProcessor
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
prompt_ids = processor.get_prompt_ids("Nokia")
token_ids = processor.tokenizer(" No kea phones are great").input_ids
labels = prompt_ids + token_ids
# let's check how the labels are decoded
print(processor.decode(labels))
```
**Print Output:**
```
'<|startofprev|> Nokia<|startoftranscript|><|notimestamps|> No kea phones are great<|endoftext|>'
```
-> we see the prompt `Nokia` nestled between the prompt token id and the BOS token id, and the target text nestled between the BOS and EOS token ids, which is the expected behaviour.
Now the tricky bit about getting this working more generally is how we get the `prompt` column in our dataset - we can't assume that every dataset is going to have examples with a trio of (audio, target text, prompt), most ASR datasets only have (audio, target text).
Maybe we could start with the LibriSpeech ASR dataset: since the dataset is taken from recorded samples of audio book narration, each sentence can be prompted with the previous one? i.e. if you have a dataset:
```
(audio_1, text_1)
(audio_2, text_2)
...
(audio_n, text_n)
```
You could augment it as:
```
(audio_2, text_2, prompt=text_1)
(audio_3, text_3, prompt=text_2)
...
(audio_n, text_n, prompt= text_n-1)
```
Since we know the dataset samples are recorded sequentially? Here we just need to check that `text_i` follows on from `text_i-1` by making sure it comes from the same speaker
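A rough sketch of how that augmentation could be built with 🤗 Datasets (untested; column names follow the LibriSpeech ASR dataset on the Hub):
```python
from datasets import load_dataset

librispeech = load_dataset("librispeech_asr", "clean", split="train.100")

texts = librispeech["text"]
speakers = librispeech["speaker_id"]

# prompt each sample with the previous sample's text, but only if it comes from the same speaker
prompts = [""] + [
    texts[i - 1] if speakers[i] == speakers[i - 1] else "" for i in range(1, len(texts))
]
librispeech = librispeech.add_column("prompt", prompts)
```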
I think this would be a good starting point for adapting the fine-tuning script, but I don't think there's a way of generalising it to work with all datasets since we don't always have the prompts available?<|||||>For datasets where audio snippets are sequential (audiobooks) that makes sense!
A complementary solution that could be more general in nature is to perhaps wait for the PR that adds support for encoding timestamp tokens _as is_ (https://github.com/huggingface/transformers/pull/24476).
A general preprocessing step involving encoding a separate "timestamp_encoded"-column could then perhaps work for both datasets with sequential audio snippets (LibriSpeech audiobooks), and those who already have longer audio samples with more granular timestamp information.
Then in the case of LibriSpeech (and any dataset with sequential audio snippets) the following preprocessing guide would apply:
1. Extract the length of each audio sample and create a `duration` column.
2. Encode the decoder input as `"<|startofprev|>" + text_n-1 + "<|startoftranscript|><|timestamps|> <|0.00|>" + text_n + "<|duration_n|><|endoftext|>"` as a separate column.
If it would be possible to train with timestamps, then a conceptually similar approach would apply to those who already have datasets with more granular timestamps. Their preprocessing would consist of creating a similar suitable column where the prompt and timestamps are already encoded properly.
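Concretely, the encoding in step 2 could look something like this for a single example (a sketch only: the column names are placeholders, the timestamp tokens are written out literally, and it assumes a tokenizer that can encode them, cf. the linked PR):
```python
def build_decoder_text(example):
    # hypothetical column names: "text_prev" holds the preceding transcript, "text" the current one
    duration = len(example["audio"]["array"]) / example["audio"]["sampling_rate"]
    duration = round(duration / 0.02) * 0.02  # Whisper timestamps live on a 0.02 s grid
    return (
        f"<|startofprev|>{example['text_prev']}<|startoftranscript|>"
        f"<|0.00|>{example['text']}<|{duration:.2f}|><|endoftext|>"
    )
```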
I'm aware that existing audio datasets on the Hub are currently mostly composed of single sentences. However, I think this is increasingly going to change with time. The question for these new datasets then becomes:
* How does a user best add timestamp information to their dataset that has longer audio snippets with granular timestamps?
Right now it is not obvious how such information should best be included in a HF dataset. As an example, our organization published an audio dataset of [parliamentary recordings](https://huggingface.co/datasets/KBLab/rixvox). In its original form we have sentence aligned these transcripts. However in the published version that is on the Hub, we concatenated sequential sentences and coresponding audio snippets until they fill up as much as possible of a 30s bucket.
We have been discussing the most flexible way of adding the more granular sentence-level timestamps to the dataset, with our top two choices being:
* Add a nested list column with tuples of `[(start_sentence1, end_sentence1), (..., ...)]` for each audio sample.
* Add an already pre-encoded version of the text for the specific model. The problem here was that i) it's a model specific solution, and ii) timestamp tokens were not included as part of the vocab in HF Whisper.
I think the first option is probably the best, since it's model agnostic, and it will allow us and any user to remix and re-encode the dataset in whatever way they need.
A separate question:
Would the prompt ids automatically be masked in the loss calculation in the current Whisper implementation?
* Edit:
On second thought I think having a separate `prompt` column would work in the more general use case as well. However, I think the point in my post about allowing (optional) pre-encoding of the `text` column with timestamps (and if necessary other special tokens) is what would make the solution more general in nature. <|||||>Hi @sanchit-gandhi,
Thank you for your reply!
Following your reply:
> https://github.com/huggingface/transformers/issues/24272#issuecomment-1633834410
Do we manually need to mask the `prompt_ids` since we do not want to include those when calculating the loss? Is it dealt with internally (by looking inside the code I did not find such masking)?
What is the right approach here?
Thanks in advance.
<|||||>>
Hi Aviv, in the paper it says:
During training it should “mask out the training loss over the previous context text and train the
model to predict all other tokens”.
I'm not sure how to implement it with HF Trainer. But it is an important feature (I posted on it in the forum half a year ago) and I hope you can test some ideas and see what works.<|||||>@samuelazran Thank you for your comment. I'm looking for an official guide here since it is a bit tricky to integrate the implementation with HF.
@sanchit-gandhi @Lauler Maybe you can help us with it? how should we approach this? Do we manually need to mask the prompt_ids since we do not want to include those when calculating the loss? Is it dealt with internally (by looking inside the code I did not find such masking)?
What is the right approach here?<|||||>If you're taking multiple audio samples < 30s and combining them to give your prompt and target text, you probably don't need the timing information within each sample. You can get the length of each sample by measuring the length of the audio array, and dividing it by the sampling rate (assuming there's little to no silence):
```python
audio_length_s = len(audio["array"]) / audio["sampling_rate"]
```
Timing information would be useful if you had the opposite situation, where you had very long samples that you wanted to split up into smaller ones. Here, you would split on appropriately chosen timestamps.
> * How does a user best add timestamp information to their dataset that has longer audio snippets with granular timestamps?
I'm not sure I fully understand this question - you want to take long audio samples and **add** timestamp information to them? Or you want to format audio samples that have existing timestamp information?
Also agree that the first option you've proposed is best! I don't think we can make a very general recommendation here as to the format your data should be in, since this is quite a niche application of fine-tuning and one that is conditional on the form of your original data. But the design you've proposed sounds sensible for your use case!
> Would the prompt ids automatically be masked in the loss calculation in the current Whisper implementation?
No they wouldn't - we would need to update this. I know that @peregilk and co from NbAiLab have done something similar in Flax: https://github.com/NbAiLab/nb-whisper/blob/352bf2d0efb073405c90fb4ef048a5d52b6128b6/run_nb_flax_speech_recognition_seq2seq_streaming_dev.py#L579-L582
We would need to do the same for the PyTorch code. Would also be interested in hearing from @peregilk how you constructed your prompts + text pairs! Are we on the right lines by pairing consecutive samples of our dataset together?<|||||>Sure @sanchit-gandhi. Our dataset consists of multiple different sources, subtitles, parliament transcripts, audio books and created datasets. In some cases we do have the text directly preceding the current sample. In our dataset, we simply add this as a separate "pretext"-column. In a lot of scenarios this information is not available, and here we simply leave that field empty.
We have not added timestamps to the pretext (yet). I see your point (with reference to the article) @Lauler regarding not predicting the end timestamp. We have not tried that, but one of our datasets are cut on pauses, and here we have a lot of incomplete sentences (ie not ending with punctation and not starting with capital letter). This seems to be well handled by the model.
We ended up with a dataset-format with multiple columns for each audio-clips. One sample could for instance have both text, timestamp_text, pretext, english_translation, nynorsk_transcription etc. For other samples, very few of these are filled out. This means that we for one audio-clip can generate 1-5 training samples. We have modified the data loader to be able to handle this so that we can generate the actual prompt on the fly. I can share this with you @Lauler if you are interested.
@AvivSham Personally I found the masking a bit tricky. This snippet helped me a lot in understanding what was going on. Maybe you can reuse it: https://github.com/NbAiLab/nb-whisper/blob/352bf2d0efb073405c90fb4ef048a5d52b6128b6/run_nb_flax_speech_recognition_seq2seq_streaming_dev.py#L1692-L1697
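For a PyTorch version, the same idea boils down to something like this rough, untested sketch: set every label position that belongs to the prompt to -100 so it is ignored by the loss.
```python
import torch

def mask_prompt_in_labels(labels, decoder_start_token_id):
    # `labels` built as prompt_ids + token_ids, e.g. <|startofprev|> ... <|startoftranscript|> ...
    labels = torch.tensor(labels)
    bos_index = (labels == decoder_start_token_id).nonzero()[0].item()
    labels[:bos_index] = -100  # -100 is ignored by PyTorch's cross-entropy loss
    return labels
```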
|
transformers | 24,271 | closed | Fix URL in comment for contrastive loss function | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
In the comment for contrastive loss in `src/transformers/models/clip/modeling_clip.py`, the source URL was not working correctly, so I fixed it to the correct address.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-14-2023 09:50:23 | 06-14-2023 09:50:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,270 | closed | `Pix2StructImageProcessor` requires `torch>=1.11.0` | # What does this PR do?
So let's be nice to past CI ❤️ !
It's the argument `antialias` in `interpolate` only supported in `torch>=1.11.0`:
```
torch.nn.functional.interpolate(..., antialias=True)
``` | 06-14-2023 09:00:51 | 06-14-2023 09:00:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,269 | closed | Flax LMHeadModel for common models like Bert and Albert | ### Feature request
A LM Head Model for common models such as Bert and Albert is available in both PyTorch and TensorFlow, but it appears to be missing in Flax.
### Motivation
We are developing a library in JAX and Flax for uncertainty quantification, and we rely on Hugging Face transformers written in Flax.
### Your contribution
I would love to contribute. However, unfortunately I might not have much time to submit a PR myself. | 06-14-2023 08:52:28 | 06-14-2023 08:52:28 | Alright, I found out that LMHeadModel is an old naming choice that was kept not to break compatibility. In Flax, I should be able to do what I want with `FlaxBertForCausalLM`. |
transformers | 24,268 | closed | pytest jax,jaxlib,flax versions incompatibility | ### System Info
When running pytest (```pytest --collect-only -q``` is enough), it fails due to
```
AttributeError: module 'jax.tree_util' has no attribute 'register_pytree_with_keys_class'
```
From setup.py deps:
```
"jax>=0.2.8,!=0.3.2,<=0.3.6",
"jaxlib>=0.1.65,<=0.3.6",**
```
## Fix:
pip install jax==0.3.25 jaxlib==0.3.25 flax==0.6.2
### Who can help?
@sanchit-gandhi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
pytest --collect-only ./tests/models/<some_model>
### Expected behavior
"tests collected" without any error | 06-14-2023 08:18:54 | 06-14-2023 08:18:54 | Could you try:
```
pip install -e ".[flax]"
```
To get the correct version of all three packages?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>More detailed instructions for installing Flax & Transformers can be found here: https://github.com/huggingface/transformers/tree/main/examples/research_projects/jax-projects#how-to-install-relevant-libraries
Leaving this one closed for now. Feel free to open if the above guide doesn't solve your issue and I can take another look |
transformers | 24,267 | closed | Skip some `TQAPipelineTests` tests in past CI | # What does this PR do?
A continuation of #24251. `TapasModel` is used in pipeline tests (`TQAPipelineTests`) and we need torch >= 1.12 there too.
(didn't check all failures in one go before opening PRs 😅 ) | 06-14-2023 08:04:45 | 06-14-2023 08:04:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,266 | closed | Fix bug in slow tokenizer conversion, make it a lot faster | # What does this PR do?
The slow tokenizer conversion currently has a bug where merges with a score of 0 do not get used due to an erroneous check. The check simply tested truthiness, but was actually looking for a `None`. During fixing, I noticed that the code was also slow, so I made it a lot faster.
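The gist of the erroneous check, in illustrative form (these are not the exact variable names from `convert_slow_tokenizer.py`):
```python
merge_score = merges.get(pair)  # a perfectly legitimate merge can have a score of exactly 0

# before (buggy): a 0 score is falsy, so the merge was dropped as if it did not exist
keep = bool(merge_score)

# after (fixed): only a genuinely missing merge (None) is dropped
keep = merge_score is not None
```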
Fixes #24233
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
| 06-14-2023 06:24:54 | 06-14-2023 06:24:54 | Speed info: the new implementation takes 70 ms, the old one took 123 seconds for the `openlm-research/open_llama_7b` tokenizer mentioned in the issue.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> Really nice fix and improvement - thanks for working on this ❤️
>
> Logic all looks good to me. There's a test that's failing, but it's decorated with `@is_flaky` so shouldn't be preventing CI being green here. @ydshieh any insights into what might be happening?
@amyeroberts
`is_flacky()` won't keep the test green 100%. It just runs the test a few times (default `5`) 😿 . The failing is still expected but less frequently.<|||||>Sorry for the weird error. I forgot to re-run tests after the second commit |
transformers | 24,265 | closed | `BloomForSequenceClassification` output is sensitive to `padding_side` and `max_length` | ### System Info
- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.15.0-18-shopee-generic-x86_64-with-glibc2.31
- Python version: 3.10.8
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
text models: @ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I found that `BloomForSequenceClassification` (and possibly other causal models) produces outputs that change with the padded `max_length` when the tokenizer uses `padding_side = "left"`.
It might be caused by this line: https://github.com/huggingface/transformers/blob/v4.30.1/src/transformers/models/bloom/modeling_bloom.py#L1080 which seems to assume right padding.
If this diagnostic is correct, imho it's quite unintuitive and error-prone, as: 1) bloom's default `padding_side` is `left`, and 2) many tutorials (e.g. [peft P-tuning for sequence classification](https://huggingface.co/docs/peft/main/en/task_guides/ptuning-seq-classification#preprocess-dataset)) recommend setting `padding_side = "left"` for causal models.
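A toy illustration of why that formula only works with right padding (the pad token id is made up just for the example):
```python
import torch

pad_token_id = 3  # toy value for the illustration
right_padded = torch.tensor([[5, 6, 7, pad_token_id, pad_token_id]])
left_padded = torch.tensor([[pad_token_id, pad_token_id, 5, 6, 7]])

print(torch.ne(right_padded, pad_token_id).sum(-1) - 1)  # tensor([2]) -> index of the last real token, correct
print(torch.ne(left_padded, pad_token_id).sum(-1) - 1)   # tensor([2]) -> but the last real token sits at index 4
```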
Could you provide some guidance? What's the correct way to use causal models for sequence classification?
Sample to reproduce:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, set_seed
set_seed(123)
text = "Paris, France's capital, is a major European city and a global center for art, fashion, gastronomy and culture."
def f(text, tokenizer, model):
emb = tokenizer(text, return_tensors='pt')
logits = model(**emb).logits.detach().numpy()
print(f'no padding: {logits=}')
for max_length in [50, 100, 200]:
emb = tokenizer(text, padding='max_length', max_length=max_length, return_tensors='pt')
logits = model(**emb).logits.detach().numpy()
print(f'pad to {max_length=}: {logits=}')
# non-deterministic
def clm_left():
pretrain = 'bigscience/bloomz-560m'
tokenizer = AutoTokenizer.from_pretrained(pretrain)
model = AutoModelForSequenceClassification.from_pretrained(pretrain)
f(text, tokenizer, model)
# >>> no padding: logits=array([[15.1557665, 31.423962 ]], dtype=float32)
# >>> pad to max_length=50: logits=array([[ 8.255632, 23.838833]], dtype=float32)
# >>> pad to max_length=100: logits=array([[ 1.263773, 12.405185]], dtype=float32)
# >>> pad to max_length=200: logits=array([[0.79204845, 8.847221 ]], dtype=float32)
# ok
def clm_right():
pretrain = 'bigscience/bloomz-560m'
tokenizer = AutoTokenizer.from_pretrained(pretrain)
tokenizer.padding_side = 'right'
model = AutoModelForSequenceClassification.from_pretrained(pretrain)
f(text, tokenizer, model)
# >>> no padding: logits=array([[15.1557665, 31.423962 ]], dtype=float32)
# >>> pad to max_length=50: logits=array([[15.1557665, 31.423962 ]], dtype=float32)
# >>> pad to max_length=100: logits=array([[15.155769, 31.42395 ]], dtype=float32)
# >>> pad to max_length=200: logits=array([[15.155751, 31.423967]], dtype=float32)
if __name__ == '__main__':
clm_left()
```
### Expected behavior
Model should produce the same outputs regardless of padding length | 06-14-2023 04:22:16 | 06-14-2023 04:22:16 | (bump)<|||||>Hey! Thanks for opening this issue!
Seems to rather be related to [this](https://github.com/huggingface/transformers/blob/v4.30.1/src/transformers/models/bloom/modeling_bloom.py#L1072) line, where we define the sequence length tensor.
Most of our models that compute partial pooled logits use this. Can you try something like
```python
if input_ids is not None:
sequence_lengths = (torch.eq(input_ids, self.config.pad_token_id).long().argmax(-1) - 1).to(logits.device)
```
I'll open a PR to fix it! <|||||>Thanks @ArthurZucker, the fix works great.
Seems the PR misses a few models: biogpt, bloom, falcon, mpt.<|||||>There was a follow up PR: #25085, might have forgotten other models! |
transformers | 24,264 | open | MeZo Forward Pass Implementation | ### Feature request
https://github.com/princeton-nlp/MeZO/blob/main/large_models/trainer.py
### Motivation
Faster training
### Your contribution
Just a user atm. | 06-14-2023 01:31:20 | 06-14-2023 01:31:20 | cc @sgugger and @pacman100 who know more about `Trainer` and integrations <|||||>Should this be integrated with PEFT instead? https://github.com/huggingface/peft<|||||>Anw the motivation is not faster training; in fact it ought to be slower as far as I understand. Rather, it is lower memory requirement.<|||||>You are correct. I misread/transcribed that. I read x12 memory saved as
x12 more context available which leads to faster inference.
On Sun, Jun 18, 2023 at 4:44 AM jon-chuang ***@***.***> wrote:
> Anw the motivation is not faster training; in fact it ought to be slower
> as far as I understand. Rather, it is lower memory requirement.
>
> —
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/24264#issuecomment-1596114214>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/ABHKKOSM7S3TL3DWEYH67UTXL3S3JANCNFSM6AAAAAAZFUOVYQ>
> .
> You are receiving this because you authored the thread.Message ID:
> ***@***.***>
>
<|||||>Created an issue in peft. Wasn't aware hf managed both. |
transformers | 24,263 | closed | Is time to update the transformers dependence in README. | In README docs, it say `This repository is tested on Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ and TensorFlow 2.3+.` | 06-14-2023 01:27:51 | 06-14-2023 01:27:51 | @luoling1993 Indeed! Would you like to open a PR to update this, so that you get the contribution? <|||||>take<|||||>Hi @amyeroberts ,
As I can see this has not been fixed yet, I would love to raise a pull request to resolve this. May I know the latest versions it has been tested on? My apologies as I am new to this.<|||||>@sqali - great! Happy to hear you want to make this contribution.
The supported versions can be found in [setup.py](https://github.com/huggingface/transformers/blob/main/setup.py).
<|||||>Hi @amyeroberts,
I have made the following changes as per the setup.py file. However, I couldn't find PyTorch specifically mentioned, so I used Torch instead. Please review and kindly correct me if there are any mistakes. I have provided two formats below, and I would greatly appreciate your confirmation before I proceed with raising the pull request.
1.) "This repository has been tested with Python 3.7.0+, Flax >= 0.4.1 & <= 0.6.9, Torch >= 1.9 & != 1.12.0, and TensorFlow >= 2.4 & < 2.13."
2.) This repository is tested on the following versions:
- Python: 3.7.0+
- Flax: >= 0.4.1 & <= 0.6.9
- Torch: >= 1.9 & != 1.12.0
- TensorFlow: >= 2.4 & < 2.13
I kindly request your guidance and feedback regarding these changes. If you have any further suggestions or modifications, please let me know.
Thank you for your assistance.<|||||>@sqali Thanks for pulling this info. For the update, it's best to follow the current pattern in the docs:
`This repository is tested on Python 3.7+, Flax 0.4.1+, PyTorch 1.9+ and TensorFlow 2.4+.`
Let's open a PR, and we can discuss any additional changes before merging there. <|||||>Hi @amyeroberts ,
Thanks for the confirmation. Was a little skepitcal about the format. I will raise the PR now.
Thanks for the assistance.<|||||>Hi @amyeroberts ,
I have raised the pull request and it has been approved by Sylvain Gugger. It has passed all the required checks and is ready to be merged. I would like to thank you for giving me the opportunity to raise the PR and make this contribution.
Thanks<|||||>Thanks for fixing and congrats on getting your first contribution merged in! |
transformers | 24,262 | open | Fixing RotaryEmbedding.forward to return float16 values in float16 precision mode. | # What does this PR do?
RotaryEmbedding.forward() returns values with float32 precision even in float16 precision mode.
This affects the subsequent calculation and incurs extra GPU memory usage.
This PR fixes that problem.
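A sketch of the change (illustrative; see the diff for the exact code): cast the cached tensors back to the query's dtype before returning them from `RotaryEmbedding.forward()`.
```python
# inside GPTNeoX's RotaryEmbedding.forward(x, seq_len)
return (
    self.cos_cached[:seq_len, ...].to(dtype=x.dtype, device=x.device),
    self.sin_cached[:seq_len, ...].to(dtype=x.dtype, device=x.device),
)
```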
Fixes #24261
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 06-14-2023 01:00:28 | 06-14-2023 01:00:28 | I will investigate whether or not this is the source of instabilities in Llama2! If so, will adresse it |
transformers | 24,261 | open | GPTNeoXAttention takes extra GPU memory footprint in torch.float16 precision mode. | ### System Info
- `transformers` version: 4.30.1
- Platform: Linux-5.15.0-1034-gcp-x86_64-with-glibc2.2.5
- Python version: 3.8.10
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): 2.11.1 (True)
- Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
@ArthurZucker and @younesbelkada
Hi.
I'm using a model, `GPTNeoXForCausalLM` (defined in `src/transformers/models/gpt_neox/modeling_gpt_neox.py`) with torch.float16 precision by calling `.from_pretrained(torch_dtype=torch.float16)`.
In this mode, the model is expected to calculate in float16 precision to save GPU memory usage.
However, [some of variables](https://github.com/huggingface/transformers/blob/b89fcccd44508110fd11579a554c1876bc10c0ad/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L265) in this model remain float32 and don't turn to float16, and they affects the subsequent calculation.
Eventually, the attention weights, which can be the dominant memory consumer, are calculated in float32.
As a result, GPU memory is not saved as expected.
The following is the problem detail:
1. setup model [`GPTNeoXForCausalLM`](https://github.com/huggingface/transformers/blob/b89fcccd44508110fd11579a554c1876bc10c0ad/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L598) with `torch_dtype=torch.float16`
2. [`self.cos_cached`](https://github.com/huggingface/transformers/blob/b89fcccd44508110fd11579a554c1876bc10c0ad/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L264) and [`self.sin_cached`](https://github.com/huggingface/transformers/blob/b89fcccd44508110fd11579a554c1876bc10c0ad/src/transformers/models/gpt_neox/modeling_gpt_neox.py#LL265C14-L265C14) in `RotaryEmbedding` class held by `GPTNeoXAttention` are calcurated as float32 in __init__().
3. `GPTNeoXAttention.forward()` calls `RotaryEmbedding.forward()`.
4. `RotaryEmbedding.forward()` prepares the return values in float32.
5. `GPTNeoXAttention.forward()` receives the return values in float32.
6. Hereafter, all variables including `attn_weights` are calculated in float32.
7. `attn_weights = attn_weights.to(value.dtype)` is called and `attn_weights` is returned to float16.
Because of step 7, the model forward() returns the float16, but it consumes float32 GPU footprint internally.
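A quick way to observe this without editing the library is to hook the rotary-embedding module and print the dtypes it returns (a small sketch; the checkpoint and class names are assumptions — any GPT-NeoX model should do, and it assumes a CUDA GPU as in the report above):
```python
import torch
from transformers import GPTNeoXForCausalLM

model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m", torch_dtype=torch.float16
).to("cuda").eval()

def report_dtypes(module, inputs, outputs):
    outs = outputs if isinstance(outputs, tuple) else (outputs,)
    print(module.__class__.__name__, [o.dtype for o in outs if torch.is_tensor(o)])

for module in model.modules():
    if "RotaryEmbedding" in module.__class__.__name__:
        module.register_forward_hook(report_dtypes)

with torch.no_grad():
    model(torch.tensor([[10, 20, 30]], device="cuda"))
```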
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Checkout to [ko_gptneox_fp16_debug branch in https://github.com/kikutakou/transformers](https://github.com/kikutakou/transformers/tree/ko_gptneox_fp16_debug) (this branch only has additional debug print code on origin/main)
2. setup model by GPTNeoXForCausalLM.from_pretrained with torch_dtype=torch.float16
3. model.forward()
Here is a sample code.
```
import torch
from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast
torch.manual_seed(0)
MODEL_NAME = 'cyberagent/gpt-neox-1b-japanese'
# load text
input_text = 'this is test'
# tokenize text
tokenizer = GPTNeoXTokenizerFast.from_pretrained(MODEL_NAME, use_auth_token=True)
t = tokenizer(input_text, return_tensors='pt', truncation=True, padding='longest', add_special_tokens=False)
input_ids = t['input_ids'].cuda()
attention_mask = t['attention_mask'].cuda()
input_len = len(input_ids[0])
model = GPTNeoXForCausalLM.from_pretrained(MODEL_NAME, low_cpu_mem_usage=True,
use_auth_token=True, torch_dtype=torch.float16)
model.eval()
model.cuda()
# generate
generation_len = (input_len + 50)
batch_params = dict(input_ids=input_ids,
attention_mask=attention_mask,
repetition_penalty=None, num_return_sequences=3, num_beams=1, do_sample=True,
temperature=None, top_p=0.95, pad_token_id=1, max_length=generation_len)
output_ids = model.generate(**batch_params).cpu()[0]
# decode
output_ids = output_ids[input_len:]
decoded = tokenizer.decode(output_ids, skip_special_tokens=False)
print(decoded)
```
### Expected behavior
It prints all dtypes if you execute on ko_gptneox_fp16_debug branch.
All dtypes are expected to be float16, but actually float 32.
| 06-14-2023 00:44:15 | 06-14-2023 00:44:15 | PR is prepared as #24262<|||||>cc @younesbelkada @ArthurZucker <|||||>@younesbelkada @ArthurZucker
Hi. This is just a friendly reminder. <|||||>Hi @kikutakou
For fp16 models it is important to calculate the attention scores in full precision, mainly for numerical stability reasons. Check out for instance https://github.com/huggingface/transformers/issues/17433, or the thread in https://github.com/huggingface/transformers/pull/17437 (which includes authors of the OPT models), to start with. So the computation inside the attention module that produces `attn_weights` should always stay in full precision.
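For readers following along, the pattern being described is roughly this (a simplified sketch, not the actual modeling code):
```python
import torch

def attention(query, key, value, norm_factor):
    """Upcast-softmax-downcast: scores in fp32, output back in the value dtype."""
    attn_scores = torch.matmul(query.float(), key.float().transpose(-1, -2)) / norm_factor
    attn_weights = torch.nn.functional.softmax(attn_scores, dim=-1)
    attn_weights = attn_weights.to(value.dtype)  # cast back to fp16 before attn @ value
    return torch.matmul(attn_weights, value)
```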
Regarding the positional embeddings, looking at the official implementation, it seems that indeed the positional embeddings are returned in half-precision: https://github.com/EleutherAI/gpt-neox/blob/main/megatron/model/positional_embeddings.py#L48 . Maybe @StellaAthena can help us confirm if the rotary embeddings should return fp16 values in half-precision modes<|||||>For rope, there was an attempt to fix this here: #23837, as it seems that in the original code they are re-computed each forward, with the correct dtype. It's very detailed! <|||||>> Hi @kikutakou
> For fp16 models it is important to calculate the attention scores in full precision, mainly for numerical stability reasons. Check out for instance: #17433 or the thread in (that includes authors from OPT models) #17437 to start with. So the computation inside attention module to calculate `attn_weights` should always stay in full precision.
> Regarding the positional embeddings, looking at the official implementation, it seems that indeed the positional embeddings are returned in half-precision: https://github.com/EleutherAI/gpt-neox/blob/main/megatron/model/positional_embeddings.py#L48 . Maybe @StellaAthena can help us confirm if the rotary embeddings should return fp16 values in half-precision modes
I have no reason to think that you can’t compute rotary embed signs in half-precision. |
transformers | 24,260 | closed | Configuration | [8329381051](http://@googlemaps.com)
> #@https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/utils/hub.py#L734
/
```python`
#23655 [WlP] add transfer script | 06-13-2023 18:48:55 | 06-13-2023 18:48:55 | Duplicate of # |
transformers | 24,259 | closed | [Mask2Former] Remove SwinConfig | # What does this PR do?
This PR removes what was probably a leftover from the Mask2Former PR, the model works without requiring those lines of code.
Fixes #24244 | 06-13-2023 18:36:11 | 06-13-2023 18:36:11 | _The documentation is not available anymore as the PR was closed or merged._<|||||>gently ping @amyeroberts to see if we could merge 🤗 |
transformers | 24,258 | closed | Does Fine tuning of already fine tuned model forgets the previous features like weights and bias? | Is it possible for fine tuning the already fine tuned model without losing previous features??
Suppose i fine-tune a model on "squad" dataset then I want to Incremental fine-tune the same model on some other dataset having same/different data formate and hyperparameters, does this means now model is fine-tuned on 2 datasets??? or it forgets the "squad" dataset when I fine tune on the second dataset.
Please acknowledge me over this
@vanpelt
@pvl
@arfon | 06-13-2023 17:35:10 | 06-13-2023 17:35:10 | Hi @akesh1235, thanks for raising an issue!
This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.<|||||>Thankyou @amyeroberts mam ,
I have posted on forum [Link for My topic on forum ](https://discuss.huggingface.co/t/fine-tuning-the-existing-fine-tuned-model/43113?u=akesh1235)
Please acknowledge me over this
@vanpelt
@pvl
@arfon<|||||>@akesh1235 Great, thank you.
In future, please only @ relevant people in issues (transformers topics and the HF maintainers to ask are listed in our issues template). The people you tagged are incredibly busy people and if everyone did this it would be impossible for anyone to meaningfully address issues, PRs and notifications on github. <|||||>Okay mam I apologize,
I'll take care of this next time
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,257 | closed | Add padding changes the output of BertModel | ### System Info
- `transformers` version: 4.30.1
- Platform: Linux-4.4.0-131-generic-x86_64-with-glibc2.10
- Python version: 3.8.16
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import matplotlib.pyplot as plt
import numpy as np
import torch
from transformers import BertModel, __version__
print(f"torch version: {torch.__version__}")
print(f"transformers version: {__version__}")
np.random.seed(42)
torch.manual_seed(42)
torch.set_default_dtype(torch.float32)
# Load model
model = BertModel.from_pretrained('bert-base-uncased')
model = model.eval()
# pad and mask
x = [10]
m = [1]
outs = []
for pads in range(511):
xt = torch.LongTensor([x])
mt = torch.FloatTensor([m])
out = model(xt, attention_mask=mt).last_hidden_state[0, 0, 0].item()
outs.append(out)
x = x + [0]
m = m + [0]
# plot outs
outs = np.array(outs)
outs = outs - outs[0]
plt.figure(figsize=(4, 4), dpi=80)
plt.plot(outs)
plt.show()
```
### Expected behavior
```
torch version: 2.0.1+cu117
transformers version: 4.30.1
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertModel: ['cls.predictions.transform.LayerNorm.weight', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```

(x: pad length; y: error)
And `torch.set_default_dtype(torch.float64)` reduces this error to ~1e-15. | 06-13-2023 17:28:51 | 06-13-2023 17:28:51 | Hi @AaronNing, thanks for opening this issue!
The (very small!) differences are arising because of the forced typing with `torch.set_default_dtype(torch.float32)`. Floating point arithmetic is inherently imprecise. By forcing tensors to be `float32`, where previously they weren't, you're introducing imprecision into architecture and ultimately the outputs. As you've observed, the difference is v. small if this is changed to `float64`. This is because the model can use more memory to more accurately represent numbers and perform calculations.
Below are the results if this type casting doesn't occur. As you can see, there is now no difference when introducing padding:

For reference, when we're porting models into our library, we consider absolute differences on the order of ~1e-6 to be acceptable. <|||||>@amyeroberts Thanks for your reply!
However, I removed the `torch.set_default_dtype(torch.float32)` line (with others unchanged) from the code above but still got the same figure. Did you modify other parts of the code? Thanks.
Also, according to [PyTorch's official document](https://pytorch.org/docs/stable/generated/torch.set_default_dtype.html),
> When PyTorch is initialized its default floating point dtype is torch.float32
So I suspect that this operation is not what caused the inaccuracy?<|||||>@AaronNing - you're right, my bad, I misinterpreted what `torch.set_default_dtype` was doing. Removing that was the only change I made to the code, however, like you, adding it back didn't affect behaviour and I observed the same results (no change with padding).
In this case, I suspect the differences might be due to hardware, which can affect float computations, I'm running on CPU with Mac M1. Other than that, I don't have a good guess. <|||||>> @AaronNing - you're right, my bad, I misinterpreted what `torch.set_default_dtype` was doing. Removing that was the only change I made to the code, however, like you, adding it back didn't affect behaviour and I observed the same results (no change with padding).
>
> In this case, I suspect the differences might be due to hardware, which can affect float computations, I'm running on CPU with Mac M1. Other than that, I don't have a good guess.
I see, thanks. BTW do you have any idea why (in your plot) the output is different when input length = 1?<|||||>> BTW do you have any idea why (in your plot) the output is different when input length = 1?
I think this is just because the diff is calculated as:
```python
outs = outs - outs[0]
```
And the first element will have ~0 difference with itself. <|||||>@amyeroberts @AaronNing
Hi, I also ran into the same issue. I think it may be due to the layernorm operation, since different sequence lengths lead to different normalization results.
To verify this, you can print the results [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py#L239).
And I tested it is hardware independent and not related to python accuracy error. I believe it could be a common issue for all transformer-based models.<|||||>@StevenTang1998 Could you share which different hardware you ran this on? Were the differences you saw from running the same script that @AaronNing provided? If not, could you share yours? <|||||>Yes, I have the similar results using the @AaronNing 's code.

I think it may be due to the layernorm operation, since different sequence lengths lead to different normalization results.
To verify this, you can print the results [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py#L239).
This is my system info:
- `transformers` version: 4.30.2
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 1.13.1 (True)<|||||>Hi @StevenTang1998, with respect to the hardware, what I meant was which hardware have you run this on to confirm that it's invariant?
If I run on my CPU I get the same as before - ~1e-7 difference across all sequence lengths
If I run on GPU, I see mostly ~ 1e-6 difference

For my own experiments, the difference in observations seems to be arising from the hardware. <|||||>I conducted the experiment on RTX 3090 GPUs.
<|||||>@StevenTang1998 In order to confirm it's invariant one must run on at least two different pieces of hardware - ideally CPU and GPU - and obtain the same results.<|||||>@amyeroberts
I reran the experiments and want to reiterate that I don't believe this is a coincidence, since @AaronNing and I both got varying results.
**I printed the results before and after the layernorm. The results before the layernorm are exactly the same regardless of the pad length, while the results after the layernorm vary.**
Since the layernorm normalizes all the word embeddings in one sequence, I think adding pad tokens affects the normalization results.
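For reference, a minimal sketch of how such before/after values can be printed without editing the library, using a forward hook on the embeddings LayerNorm (the module path is based on the BERT implementation):
```python
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased").eval()

def show(module, inputs, output):
    print("before LN:", inputs[0][0, 0, :3], "after LN:", output[0, 0, :3])

model.embeddings.LayerNorm.register_forward_hook(show)

for pads in range(3):
    ids = torch.tensor([[10] + [0] * pads])
    mask = torch.tensor([[1] + [0] * pads])
    with torch.no_grad():
        model(ids, attention_mask=mask)
```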
- CPU

- GPU (3090 and A100 have the same results)

<|||||>@StevenTang1998 Thanks. It sounds reasonable that LN caused the difference.
> I have printed the results before and after the layernorm. The results before the layernorm are exactly the same regardless the pad length, while the results after the layernorm get variant.
It would be very nice if you could provide one or two figures to demonstrate that, though I already take your point.
> I believe it could be a common issue for all transformer-based models.
I agree. Maybe when the model is well-trained, this difference will be reasonably small (as shown in our experiments), so this is not regarded as a big issue. |
transformers | 24,256 | closed | Skip `GPT-J` fx tests for torch < 1.12 | # What does this PR do?
After #22069, the fx tests for gpt-j with torch < 1.12 gives an error
```bash
(line 941) AssertionError: Couldn't trace module: 'len' is not supported in symbolic tracing by default. If you want this call to be recorded, please call torch.fx.wrap('len') at module scope
```
Guess it's best to skip in this case, as it works with recent torch versions.
| 06-13-2023 17:27:20 | 06-13-2023 17:27:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,255 | closed | Fix how we detect the TF package | Our framework detection code calls `_is_package_available()` for TensorFlow, but this code fails when only the `tensorflow-cpu` package is present. The failure occurs because `importlib_metadata.version("tensorflow")` throws an error in the version detection branch of `_is_package_available` unless the core `tensorflow` package is installed.
I solved this by just calling `importlib.util.find_spec("tensorflow")` instead of `_is_package_available()`. However, we could also resolve this issue by rewriting `_is_package_available()` so that it only takes the version check branch when `return_version` is `True`. The `importlib_metadata.version()` call is only used to get the package version, but it causes the entire `_is_package_available()` call to fail if it can't find a version, even if the `importlib.util.find_spec()` call was a success.
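To illustrate the difference (a small sketch — with only `tensorflow-cpu` installed, the import package `tensorflow` exists, but no distribution named `tensorflow` does):
```python
import importlib.util
import importlib.metadata

# True even when only tensorflow-cpu is installed, because the import package exists
print(importlib.util.find_spec("tensorflow") is not None)

try:
    importlib.metadata.version("tensorflow")  # raises when only tensorflow-cpu is installed
except importlib.metadata.PackageNotFoundError:
    print(importlib.metadata.version("tensorflow-cpu"))
```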
ccing @sgugger because there's a `TODO` above that function [requesting his attention](https://github.com/huggingface/transformers/blob/main/src/transformers/utils/import_utils.py#L40), so I'd like his input on the right approach here!
Fixes #24253 | 06-13-2023 17:15:20 | 06-13-2023 17:15:20 | cc @apbard as well since I think the issue was introduced in #23163, but thankfully it's a relatively easy fix!<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,254 | closed | ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds Transformers Translation Tutorial Repro | ### System Info
### Context
Hello There!
First and foremost, congrats on the Transformers Translation [tutorial](https://huggingface.co/docs/transformers/tasks/translation). 👍
It serves as a spark for building English-to-many translation language models!
I'm following it along with TF, mostly reproducing it in a Jupyter notebook with TF for Mac with GPU enabled.
Using the following dependency versions.
```
tensorflow-macos==2.9.0
tensorflow-metal==0.5.0
transformers ==4.29.2
```
_* NOTE : tensorflow-macos dependencies are [fixed ](https://developer.apple.com/forums/thread/721619) for ensuring GPU training_
### Who can help?
@ArthurZucker @younesbelkada
@gante maybe?
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
### Issue Description
Im finding the following error when **fitting the model** for finetunning a model coming from [TFAutoModelForSeq2SeqLM](https://huggingface.co/docs/transformers/v4.30.0/en/model_doc/auto#transformers.TFAutoModelForSeq2SeqLM) autoclass
```
with tf.device('/device:GPU:0'):
model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=1, callbacks= callbacks )
```
It is returning
```
ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds
Call arguments received by layer "decoder" (type TFT5MainLayer):
• self=None
• input_ids=None
• attention_mask=None
• encoder_hidden_states=tf.Tensor(shape=(32, 96, 512), dtype=float32)
• encoder_attention_mask=tf.Tensor(shape=(32, 96), dtype=int32)
• inputs_embeds=None
• head_mask=None
• encoder_head_mask=None
• past_key_values=None
• use_cache=True
• output_attentions=False
• output_hidden_states=False
• return_dict=True
• training=False
Call arguments received by layer "tft5_for_conditional_generation" (type TFT5ForConditionalGeneration):
• self={'input_ids': 'tf.Tensor(shape=(32, 96), dtype=int64)', 'attention_mask': 'tf.Tensor(shape=(32, 96), dtype=int64)'}
• input_ids=None
• attention_mask=None
• decoder_input_ids=None
• decoder_attention_mask=None
• head_mask=None
• decoder_head_mask=None
• encoder_outputs=None
• past_key_values=None
• inputs_embeds=None
• decoder_inputs_embeds=None
• labels=None
• use_cache=None
• output_attentions=None
• output_hidden_states=None
• return_dict=None
• training=False
```
### Backtrace
Tried:
* Remove callbacks : The model is trained, but of course not loaded into the Hub, nor the metrics computed
* Followed #16234 , this[ comment ](https://github.com/huggingface/transformers/issues/16234#issuecomment-1071114294) and **ensured that Im using AutoTokenizer.** This glimpsed that this could be related to TFAutoModelForSeq2SeqLM .
```
model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)
```
Seems to be working correctly. Therefore I assume that the **pre-trained model is loaded**
* Also followed #21116 and added `save_strategy=no` argument in [PushToCallBack ](https://github.com/huggingface/transformers/issues/21116#issuecomment-1382869967) , but the error persisted
### Expected behavior
Model trained should be uploaded to the Hub.
The folder appears empty , there is an error
### Hypothesis
At this point, what Im guessing is that once I load the model I shall redefine the verbose error trace?
Any help please of how to do this ? :) or how can I fix it ? Do I have to define a specific Trainer ? Any idea of where I can find this in docs?
| 06-13-2023 16:25:36 | 06-13-2023 16:25:36 | Hey @SoyGema 👋
From your exception, I believe the issue is at the data preparation stage -- it is pretty much complaining that your dataset has no labels. Have you followed the data preprocessing steps described [here](https://huggingface.co/docs/transformers/tasks/translation#preprocess)?<|||||>Hello there @gante ! Thanks for your quick response and help !
I really appreciate it . 🥇
I´ve uploaded the notebook [here](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks/blob/main/notebooks/TransformerTranslationPOCV2.ipynb) . As far as I can understand (let me know if Im missing something here ), Im using the preprocessing function.
In fact, the _tokenized_books (cell 16) returns something in the form of
```
DatasetDict({
train: Dataset({
features: ['id', 'translation', 'input_ids', 'attention_mask', 'labels'],
num_rows: 1123
})
test: Dataset({
features: ['id', 'translation', 'input_ids', 'attention_mask', 'labels'],
num_rows: 281
})
})
```
And _data_collator_ (cell 19) returns something like
```
DataCollatorForSeq2Seq(tokenizer=T5Tokenizer(name_or_path='t5-small', vocab_size=32100, model_max_length=512, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'eos_token': '</s>', 'unk_token': '<unk>', 'pad_token': '<pad>', 'additional_special_tokens': ['<extra_id_0>', .....
```
Am I missing something from the video that should be in code ?
for quick testing purposes, Im with **pt_to_en** dataset, that seems to have same characteristics. I've checked that tokenized_books function returns the same data structure type in **pt_to_en** that in **fr_to_en** dataset
My apologies in advance for the extremely notebook verbose code regarding GPU low level operation use. I am trying to optimize for that therefore all trace.
Thanks so so much for your time on this
Happy if you can point me on the right direction! 👍
<|||||>Hey @SoyGema 👋
Your `KerasMetricCallback` was missing `predict_with_generate=True` -- metrics that rely on text generation must pass this flag, as generating text is different from a model forward pass. It should become `metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_test_set, predict_with_generate=True)`
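Putting it together, the callback setup would look roughly like this (a sketch; `tf_train_set`, `tf_test_set` and `compute_metrics` are assumed to be built as in the tutorial):
```python
from transformers.keras_callbacks import KerasMetricCallback

metric_callback = KerasMetricCallback(
    metric_fn=compute_metrics,
    eval_dataset=tf_test_set,
    predict_with_generate=True,  # the metric relies on model.generate()
)

model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=1, callbacks=[metric_callback])
```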
For future reference in case you encounter further bugs, have a look at our complete translation example: https://github.com/huggingface/transformers/blob/main/examples/tensorflow/translation/run_translation.py<|||||>Hello there @gante 👋
Thanks for the reference. I'm definetly having this as a north script and also using it !
Been thinking about how to structure this exploration and also _indexing the roadblocks/bugs/solutions so other users can benefit from it_ .
I'm closing this issue (as it is solved but other arised )and probably open another ones in [my own repo](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks) as it goes so issues are unitary-structured . Hope this makes sense. Hope I can take it from there and not disturb you!
Thanks again!<|||||>Just for Reproducibility. If someone wants to go through the script example. Documentation about flag configuration and more can be found [here](https://huggingface.co/docs/transformers/run_scripts) |
transformers | 24,253 | closed | transformers does not detect Tensorflow when installing tensorflow-cpu package | ### System Info
This issue was introduced in transformers 4.30.x.
Output of pip list (partial):
```
tensorflow-cpu 2.12.0
tensorflow-estimator 2.12.0
tensorflow-intel 2.12.0
tensorflow-io-gcs-filesystem 0.31.0
transformers 4.30.1
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Manifests itself like this:
```
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
ImportError while loading
<REDACTED>
from transformers import file_utils, modeling_tf_outputs, modeling_tf_utils
.venv\lib\site-packages\transformers\modeling_tf_utils.py:42: in <module>
from .generation import GenerationConfig, TFGenerationMixin
E ImportError: cannot import name 'TFGenerationMixin' from 'transformers.generation' (<REDACTED>)
```
### Expected behavior
No errors, Tensorflow detected. | 06-13-2023 16:15:18 | 06-13-2023 16:15:18 | @apbard I see you touched the Tensorflow detection logic here: https://github.com/huggingface/transformers/commit/83eda6435e7c842e55b42a529e9bf367bf2a126b. <|||||>cc @ydshieh @Rocketknight1 <|||||>Just to confirm, if I set env var `FORCE_TF_AVAILABLE` all works as expected. (I'd like to avoid that of course)<|||||>Remark: On our CircleCI, the `tensorflow` is installed (forced by `tensorflow_text`), so there is `tensorflow` and `tensorflow_cpu` in the CI environment.<|||||>Confirmed this issue on my end. The problem is the name discrepancy between the packages and our `_is_package_available` code. Will open a PR to fix it!<|||||>PR is open at #24255 <|||||>@faph PR is merged. Since this will probably affect quite a few people, I'll leave a note here to other users who find this issue:
You should be able to resolve this by installing from `main` with `pip install --upgrade git+https://github.com/huggingface/transformers.git`. After our next version release, you can go back to just `pip install --upgrade transformers`. As this is a relatively serious issue we may do a hotfix release, but this is still under discussion.<|||||>Appreciate the efforts @Rocketknight1 !<|||||>Thanks for the quick bug report too @faph - our test suite wasn't checking installations with only `tensorflow-cpu`, so we never would have picked up this issue in time if you hadn't reported it! |
transformers | 24,252 | closed | Peft Model not resuming from Checkpoint | ### System Info
Running from huggingface/transformers-pytorch-gpu docker image.
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.4.0-139-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@younesbelkada
Since PR #24073 the Trainer does not resume from checkpoint.
The issue happens because `PeftModel.save_pretrained` saves only the adapter files, while the Trainer's `_load_from_checkpoint` method expects a full PyTorch checkpoint.
I worked around it by subclassing the Trainer class. I am willing to submit a PR merging `_load_from_peft_checkpoint` into the Hugging Face Trainer.
```python
class PeftTrainer(Trainer):
def _load_from_peft_checkpoint(self, resume_from_checkpoint, model):
adapter_weights_file = os.path.join(resume_from_checkpoint, ADAPTER_WEIGHTS_NAME)
adapter_safe_weights_file = os.path.join(resume_from_checkpoint, ADAPTER_SAFE_WEIGHTS_NAME)
if not any(
os.path.isfile(f) for f in [adapter_weights_file, adapter_safe_weights_file]
):
raise ValueError(f"Can't find a valid checkpoint at {resume_from_checkpoint}")
logger.info(f"Loading model from {resume_from_checkpoint}.")
# Load adapters following PR # 24096
if is_peft_available() and isinstance(model, PeftModel):
# If train a model using PEFT & LoRA, assume that adapter have been saved properly.
if hasattr(model, "active_adapter") and hasattr(model, "load_adapter"):
if os.path.exists(resume_from_checkpoint) or os.path.exists(resume_from_checkpoint):
model.load_adapter(resume_from_checkpoint, model.active_adapter)
# Load_adapter has no return value present, modify it when appropriate.
from torch.nn.modules.module import _IncompatibleKeys
load_result = _IncompatibleKeys([], [])
else:
logger.warning(
"The intermediate checkpoints of PEFT may not be saved correctly, "
f"using `TrainerCallback` to save {ADAPTER_WEIGHTS_NAME} in corresponding folders, "
"here are some examples https://github.com/huggingface/peft/issues/96"
)
else:
logger.warning("Could not load adapter model, make sure to have `peft>=0.3.0` installed")
def _load_from_checkpoint(self, resume_from_checkpoint, model=None):
if model is None:
model = self.model_wrapped if is_sagemaker_mp_enabled() else self.model
if is_peft_available() and isinstance(model, PeftModel):
# Try to load adapters before trying to load a torch model
try:
return self._load_from_peft_checkpoint(resume_from_checkpoint, model=model)
except:
return super()._load_from_checkpoint(resume_from_checkpoint, model=model)
# If it is not a PeftModel, use the original _load_from_checkpoint
else:
return super()._load_from_checkpoint(resume_from_checkpoint, model=model)
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import (
AutoModelForSequenceClassification,
AutoTokenizer,
TrainingArguments,
Trainer,
DataCollatorWithPadding,
)
from peft import get_peft_config, get_peft_model, PromptTuningInit, PromptTuningConfig, TaskType, PeftType, PromptEncoderConfig
import torch
import os
import evaluate
from datasets import Dataset
# P-tuning hyper-parameters
model_id = "microsoft/deberta-v3-base"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load tokenized datesets
train_ds = test_ds = Dataset.from_dict({'input_ids':[[1, 2430, 429, 92207, 303, 331, 1789, 3495, 2344, 1300, 355, 268, 1131, 270, 310, 354, 3732, 388,
2],[1, 1865, 843, 20060, 265, 483, 2196, 281, 411, 2784, 2]],
'labels':[0,0]})
PEFT_CONFIG ={"peft_type":"P_TUNING",
"num_virtual_tokens": 30,
"encoder_reparameterization_type": "MLP",
"encoder_hidden_size": 128,
"num_attention_heads": 17,
}
peft_config = PromptEncoderConfig(
task_type="SEQ_CLS",
**PEFT_CONFIG
)
model = get_peft_model(model, peft_config)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, padding=True, max_length=482,)
training_args = TrainingArguments(
output_dir='p',
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
num_train_epochs=1,
load_best_model_at_end=False,
save_strategy='epoch'
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_ds,
eval_dataset=test_ds,
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=evaluate.load('accuracy')
)
trainer.train()
training_args = TrainingArguments(
output_dir='p',
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
num_train_epochs=2,
load_best_model_at_end=False,
save_strategy='epoch'
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_ds,
eval_dataset=test_ds,
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=evaluate.load('accuracy')
)
trainer.train(resume_from_checkpoint=True)
```
Raises
```
ValueError Traceback (most recent call last)
Cell In[26], line 92
70 training_args = TrainingArguments(
71 output_dir='p',
72 per_device_train_batch_size=1,
(...)
77
78 )
82 trainer = Trainer(
83 model=model,
84 args=training_args,
(...)
89 compute_metrics=evaluate.load('accuracy')
90 )
---> 92 trainer.train(resume_from_checkpoint=True)
File /transformers/src/transformers/trainer.py:1634, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1631 raise ValueError(f"No valid checkpoint found in output directory ({args.output_dir})")
1633 if resume_from_checkpoint is not None and not is_sagemaker_mp_enabled() and not self.is_deepspeed_enabled:
-> 1634 self._load_from_checkpoint(resume_from_checkpoint)
1636 # If model was re-initialized, put it on the right device and update self.model_wrapped
1637 if model_reloaded:
File /transformers/src/transformers/trainer.py:2119, in Trainer._load_from_checkpoint(self, resume_from_checkpoint, model)
2114 safe_weights_index_file = os.path.join(resume_from_checkpoint, SAFE_WEIGHTS_INDEX_NAME)
2116 if not any(
2117 os.path.isfile(f) for f in [weights_file, safe_weights_file, weights_index_file, safe_weights_index_file]
2118 ):
-> 2119 raise ValueError(f"Can't find a valid checkpoint at {resume_from_checkpoint}")
2121 logger.info(f"Loading model from {resume_from_checkpoint}.")
2123 if os.path.isfile(config_file):
ValueError: Can't find a valid checkpoint at p/checkpoint-2
```
### Expected behavior
The train should be resumed from Epoch 1 and proceeded up to Epoch 2.
| 06-13-2023 15:44:43 | 06-13-2023 15:44:43 | Hi @llohann-speranca
Thanks for digging, flagging the issue and proposing a fix!
Indeed didn't properly tried it with `resume_from_checkpoint`. Yes please could you open a PR and tag me there?
Thanks and looking forward to the PR!<|||||>Hi Younes. Thank you for the reply. I will do it later today, after working
hours.
>
|
transformers | 24,251 | closed | Add `torch >=1.12` requirement for `Tapas` | # What does this PR do?
Tapas files were changed in #20149 to use torch's `scatter`. The torch tensor method `scatter_reduce` accepts the `src` argument only for torch >= 1.12.
This PR adds some warnings/requirements to the Tapas modeling/test files to avoid test failures in the past CI with torch <= 1.11.
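For context, a minimal illustration of the call that needs torch >= 1.12 (a sketch, not the Tapas code itself):
```python
import torch

out = torch.zeros(3)
index = torch.tensor([0, 1, 0])
src = torch.tensor([1.0, 2.0, 3.0])
# scatter_reduce with an explicit `src` tensor requires torch >= 1.12
print(out.scatter_reduce(0, index, src, reduce="sum"))  # tensor([4., 2., 0.])
```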
(one previous similar PR is #19851) | 06-13-2023 15:31:24 | 06-13-2023 15:31:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,250 | closed | docs wrt using accelerate launcher with trainer | # What does this PR do?
1. Major issue currently is the confusion about how to use accelerate launcher with Trainer. This PR addresses it. | 06-13-2023 15:18:46 | 06-13-2023 15:18:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,249 | closed | update FSDP save and load logic | # What does this PR do?
1. Should be merged after PR https://github.com/huggingface/accelerate/pull/1576
2. Updates the saving and loading utils for FSDP to be in sync with the latest PyTorch release. | 06-13-2023 14:40:42 | 06-13-2023 14:40:42 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,248 | open | auto_find_batch_size=True and eval_steps=ratio unexpected behavior | ### System Info
- `transformers` version: 4.30.1
- Platform: Linux-5.7.19-050719-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I don't have a full example that I can share, but I think this is a simple enough problem that one may not be needed.
I am using `TrainingArguments(auto_find_batch_size=True, eval_steps=0.1, per_device_train_batch_size=1024)`. With a batch size of 1024, I have 657 steps. The eval ratio appears to be computed from this, with evaluation happening every 66 steps.
However, the automatic batch size adjusts to 16, and a corresponding 83787 steps. But the evaluation is still performed every 66 steps.
### Expected behavior
I expected the eval steps to be recomputed when the batch size updated. In the example above, I expected evaluation to occur every ~8000 steps. | 06-13-2023 14:06:42 | 06-13-2023 14:06:42 | cc @muellerzr <|||||>Any chance you could provide a minimal reproducer I can test with?
Otherwise please try installing via `pip install git+https://github.com/huggingface/transformers@muellerzr-ratio` to see if that fixes it? 🙏 <|||||>Let me try your patch first.<|||||>With the patch, still evaling every 66 steps. Let me try to make a reproducer. It probably won't be minimal though...<|||||>[notebook.zip](https://github.com/huggingface/transformers/files/11736222/notebook.zip)
<|||||>Looks like `max_steps` is not being updated<|||||>Very strange. Here is some debug output:
```
Currently training with a batch size of: 8
The following columns in the training set don't have a corresponding argument in `RobertaForSequenceClassification.forward` and have been ignored: Addr, Binary, Name, text. If Addr, Binary, Name, text are not expected by `RobertaForSequenceClassification.forward`, you can safely ignore this message.
***** Running training *****
Num examples = 223,431
Num Epochs = 3
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 83,787
Number of trainable parameters = 83,452,418
```
Total optimization steps is printing `max_steps`... :confused: <|||||>I see the problem I think:
``` python
if args.eval_steps and args.eval_steps < 1:
args.eval_steps = math.ceil(max_steps * args.eval_steps)
```
Since this actually modifies `args.eval_steps`, the ratio will be lost the first time we run this code. E.g., this will set `args.eval_steps` to 66 and lose 0.1.<|||||>Okay, I think it should be fixed now. Can you try again via the same branch?<|||||>Still eval'ing at 66 :-(<|||||>I did upload the notebook as a .zip above, but I'm trying to put it on colab to make things easier.<|||||>I can't run it on colab because I'm out of free GPU usage, but I did upload it, and I think it should work if you have GPU access there:
https://colab.research.google.com/drive/1A-MzFHIbWtrtO4tjf2GROAdfAueEHidw?usp=sharing<|||||>re; Total optimization steps is printing max_steps... 😕, yes we don't perform gradient accumulation with this, so if you happen to get small enough that max steps < steps w/ reduction multiplier, that does make sense.
Looking into this still. Thanks for the reproducer<|||||>Thanks again, I'll need to run this in the AM to verify but I believe I've fixed this now by storing the steps away in a data struct before we loop over again: https://github.com/huggingface/transformers/compare/muellerzr-ratio?expand=1
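The idea being described — keeping the user-supplied ratio around and re-deriving the step count from the current `max_steps` instead of overwriting `args.eval_steps` — looks roughly like this (a sketch, not the actual branch):
```python
import math

def resolve_eval_steps(eval_steps, max_steps):
    """Resolve a ratio (< 1) against the current max_steps without mutating args."""
    if eval_steps and eval_steps < 1:
        return math.ceil(max_steps * eval_steps)
    return eval_steps

# e.g. resolve_eval_steps(0.1, 657) -> 66, resolve_eval_steps(0.1, 83787) -> 8379
```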
Once verified I'll place a PR in<|||||>I'm sorry to report that I still think it is broken!<|||||>Might not be a simple solution then! 😉 I'll be off on holiday rest of this week, and I'll look at this again come next Tuesday. <|||||>Enjoy your holiday. If I have some spare time I'll see if I can figure out what is going wrong yet...<|||||>Ping to keep fresh
On Thu, Jul 13, 2023, 10:02 AM github-actions[bot] -
***@***.*** <github.edmcman.99c9f1b9d0.notifications#
***@***.***> wrote:
> This issue has been automatically marked as stale because it has not had
> recent activity. If you think this still needs to be addressed please
> comment on this thread.
>
> Please note that issues that do not follow the contributing guidelines
> <https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md>
> are likely to be ignored.
>
> —
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/24248#issuecomment-1634405576>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AAHYKZPYRUBJ3KSDAYDN3BDXQAEYXANCNFSM6AAAAAAZE5ZW3U>
> .
> You are receiving this because you authored the thread.Message ID:
> ***@***.***>
>
|
transformers | 24,247 | closed | Fix gradient checkpointing + fp16 autocast for most models | # What does this PR do?
This PR fixes a bug users can encounter when using gradient checkpointing under fp16 autocast context manager. Currently if a user trains a model using autocast and GC the last layer's weights will never get updated.
<details><summary>Handy reproducible snippet</summary>
```python
import torch
from transformers import AutoModelForCausalLM
model_id = "facebook/opt-350m"
model = AutoModelForCausalLM.from_pretrained(model_id).to(0)
model.gradient_checkpointing_enable()
model.train()
assert model.training and model.is_gradient_checkpointing
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
with torch.cuda.amp.autocast(True, dtype=torch.float16):
dummy_input = torch.LongTensor([[0, 1, 0, 1]]).to(0)
model.train()
logits = model(dummy_input).logits
loss = logits.sum()
loss.backward()
optimizer.step()
for n, param in model.named_parameters():
if param.grad is None:
print(n)
```
</details>
As discussed internally, the fix seems to be to force-set `use_reentrant=False` when calling gradient checkpointing. Setting that flag to False lifts the requirement that the checkpointed input tensors have `requires_grad=True`, which applies when `use_reentrant=True` - according to the PyTorch team, `use_reentrant=True` led to some silent bugs and they plan to remove that flag in upcoming releases and default to `False`.
This might be problematic for users that train adapters (using PEFT for example), where they may see some training performance downside. I propose a PoC to fix this for the most common architectures until PyTorch removes that option in future releases.
For more context, users that train models using PEFT end up using autocast inside the trainer as they use 4bit / 8bit base models
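Concretely, the behaviour difference boils down to the `use_reentrant` flag of `torch.utils.checkpoint.checkpoint`; a tiny self-contained sketch of why `False` helps when the checkpointed inputs don't require grad:
```python
import torch
from torch.utils.checkpoint import checkpoint

layer = torch.nn.Linear(8, 8)
x = torch.randn(2, 8)  # note: x does not require grad, like embedding outputs under autocast

# use_reentrant=False lifts the requirement that checkpointed inputs have requires_grad=True
out = checkpoint(layer, x, use_reentrant=False)
out.sum().backward()
print(layer.weight.grad is not None)  # True; with the reentrant variant this grad stays None
```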
Related: #23990 | 06-13-2023 13:47:45 | 06-13-2023 13:47:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,246 | closed | Training not converging with `transformers==4.26.1` | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.15.1
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi
### Information
- [X] The official example scripts (hardly 3-4 line of code is changed)
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [] My own task or dataset (give details below)
### Reproduction
```
CUDA_VISIBLE_DEVICES=1 python run_speech_recognition_ctc.py \
--train_dataset_name="./data_sub/rs_en/train" \
--model_name_or_path="facebook/hubert-base-ls960" \
--train_dataset_name ="./data_sub/rs_en/test" \
--output_dir="./ft-full-run-test-hubert" \
--overwrite_output_dir \
--num_train_epochs="5" \
--per_device_train_batch_size="4" \
--gradient_accumulation_steps="2" \
--learning_rate="3e-4" \
--warmup_steps="300" \
--evaluation_strategy="steps" \
--text_column_name="transcription" \
--length_column_name="input_length" \
--save_steps="400" \
--eval_steps="25" \
--layerdrop="0.0" \
--save_total_limit="3" \
--freeze_feature_encoder \
--chars_to_ignore , ? . ! \
--group_by_length \
--do_train --do_eval
```
In `run_speech_recognition_ctc.py` I made these minor changes in `DataCollatorCTCWithPadding` as it won't run otherwise. With `transformers==4.30.1` it runs perfectly but I currently have 4.26.1 and the eval loss is not going down even after 3 epochs. My dataset has just 1500 samples of librispeech.
```
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
# split inputs and labels since they have to be of different lenghts and need
# different padding methods
input_features = [{"input_ids": np.array(feature["input_values"])} for feature in features]
label_features = [{"input_ids": feature["labels"]} for feature in features]
output = {}
batch = self.processor.pad(
input_features,
padding=self.padding,
pad_to_multiple_of=self.pad_to_multiple_of,
return_tensors="pt",
)
labels_batch = self.processor.pad(
label_features,
padding=self.padding,
pad_to_multiple_of=self.pad_to_multiple_of_labels,
return_tensors="pt",
)
# replace padding with -100 to ignore loss correctly
labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
output["labels"] = labels
output["input_values"] = batch["input_ids"]
# output["attention_mask"] = batch["attention_mask"]
if "attention_mask" in batch:
output["attention_mask"] = batch["attention_mask"].to(torch.long)
return output
```
### Expected behavior
Eval loss should not plateau at 2.91. It doesn't reduce any further. Also, what's ideal number of `warmup_steps` that you'd recommend? | 06-13-2023 13:14:37 | 06-13-2023 13:14:37 | Hi, @bhavitvyamalik thanks for raising an issue!
Questions about customising scripts for your own requirements e.g. optimal warmup steps, are best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.
If the training works for the most recent version of transformers, then there's really nothing for us to do. <|||||>I understand and appreciate your response. I apologize for any confusion caused by raising the issue here. Since I am utilizing `adapter-transformers` for my project, which is built upon `transformers v4.26.1`, I am primarily relying on the functionality of `transformers` itself, specifically for fine-tuning Hubert. My intention was to inquire whether there might be a bug within the Trainer that could potentially explain why my training is not converging as expected or the changes I made to the code (described above) as it was giving problems with v4.26.1, hence why I brought up the issue in this context. Thank you!
<|||||>@bhavitvyamalik It's possible there was a bug. If there was, it's now been resolved and so it's not something we would spend time digging into. If you wanted to dig into this yourself, you could always use `git bisect` to find which commit introduced a change of behaviour.
We're not responsible for maintenance of third-party libraries built upon this one. I would suggest opening an issue in the `adapter-transformers` library, possibly asking for the pinned transformers version to be increased. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,245 | closed | Qlora on open llama 13b fails | ### System Info
Installed by ```!pip install -q -U git+https://github.com/huggingface/transformers.git```
On databricks
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
import transformers
trainer = transformers.Trainer(
model=peft_model,
train_dataset=data["train"],
args=transformers.TrainingArguments(
save_steps=250,
per_device_train_batch_size=2,
gradient_accumulation_steps=8,
num_train_epochs=5,
# max_steps=5,
learning_rate=2e-4,
fp16=True,
logging_steps=1,
output_dir=models[model_name]['folder_name'],
optim="paged_adamw_8bit"
),
data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False # silence the warnings. Please re-enable for inference!
trainer.train()
```
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
File <command-412498178049036>:21
3 trainer = transformers.Trainer(
4 model=peft_model,
5 train_dataset=data["train"],
(...)
18 data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
19 )
20 model.config.use_cache = False # silence the warnings. Please re-enable for inference!
---> 21 trainer.train()
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-35a3008b-a999-41db-a8be-1e0597d78a6b/lib/python3.10/site-packages/transformers/trainer.py:1537, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1532 self.model_wrapped = self.model
1534 inner_training_loop = find_executable_batch_size(
1535 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1536 )
-> 1537 return inner_training_loop(
1538 args=args,
1539 resume_from_checkpoint=resume_from_checkpoint,
1540 trial=trial,
1541 ignore_keys_for_eval=ignore_keys_for_eval,
1542 )
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-35a3008b-a999-41db-a8be-1e0597d78a6b/lib/python3.10/site-packages/transformers/trainer.py:1860, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1855 nn.utils.clip_grad_norm_(
1856 amp.master_params(self.optimizer),
1857 args.max_grad_norm,
1858 )
1859 else:
-> 1860 self.accelerator.clip_grad_norm_(
1861 model.parameters(),
1862 args.max_grad_norm,
1863 )
1865 # Optimizer step
1866 optimizer_was_run = True
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-35a3008b-a999-41db-a8be-1e0597d78a6b/lib/python3.10/site-packages/accelerate/accelerator.py:1908, in Accelerator.clip_grad_norm_(self, parameters, max_norm, norm_type)
1904 elif self.distributed_type == DistributedType.DEEPSPEED:
1905 # `accelerator.backward(loss)` is doing that automatically. Therefore, its implementation is not needed
1906 # We cannot return the gradient norm because DeepSpeed does it.
1907 return None
-> 1908 self.unscale_gradients()
1909 return torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=norm_type)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-35a3008b-a999-41db-a8be-1e0597d78a6b/lib/python3.10/site-packages/accelerate/accelerator.py:1871, in Accelerator.unscale_gradients(self, optimizer)
1869 while isinstance(opt, AcceleratedOptimizer):
1870 opt = opt.optimizer
-> 1871 self.scaler.unscale_(opt)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-35a3008b-a999-41db-a8be-1e0597d78a6b/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py:275, in GradScaler.unscale_(self, optimizer)
272 optimizer_state = self._per_optimizer_states[id(optimizer)]
274 if optimizer_state["stage"] is OptState.UNSCALED:
--> 275 raise RuntimeError("unscale_() has already been called on this optimizer since the last update().")
276 elif optimizer_state["stage"] is OptState.STEPPED:
277 raise RuntimeError("unscale_() is being called after step().")
RuntimeError: unscale_() has already been called on this optimizer since the last update().
```
Interestingly failed at exactly 1 Epoch
### Expected behavior
Run normally? | 06-13-2023 12:34:50 | 06-13-2023 12:34:50 | Hi @nivibilla,
Please make sure to search the issues first, as it's possible they have previously been reported and resolved e.g.:
#24050
#23935
Could you try installing accelerate, peft and transformers from source, and rerunning your script
```
pip install git+https://github.com/huggingface/peft.git git+https://github.com/huggingface/transformers.git git+https://github.com/huggingface/accelerate.git
```<|||||>sorry mb, I am already installing from source so Im not sure what went wrong. In any case, will test again and let you know<|||||>I did as you asked @amyeroberts , installed from source. But I still get the same error. <|||||>```
!pip install -q torch==2.0.1 torchvision torchaudio
!pip install -q -U bitsandbytes
!pip install -q -U git+https://github.com/huggingface/transformers.git
!pip install -q -U git+https://github.com/huggingface/peft.git
!pip install -q -U git+https://github.com/huggingface/accelerate.git
!pip install -q -U git+https://github.com/huggingface/datasets.git
!pip install -q -U einops
!pip install -q -U sentencepiece
```<|||||>Was fixed when I used this particular branch
```
!pip install git+https://github.com/huggingface/transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9
```
Will this branch be merged?<|||||>Note I am using 4bit quantisation in training, which may be the cause of the issue as mentioned in #23935<|||||>Another issue I have encountered with the branch I tested is that it doesn't save a adapter_config.json for the checkpoints.<|||||>Update:
fixed the adapter_config saving issue by
```
from transformers import TrainerCallback
class PeftSavingCallback(TrainerCallback):
def on_save(self, args, state, control, **kwargs):
checkpoint_path = os.path.join(args.output_dir, f"checkpoint-{state.global_step}")
kwargs["model"].save_pretrained(checkpoint_path)
if "pytorch_model.bin" in os.listdir(checkpoint_path):
os.remove(os.path.join(checkpoint_path, "pytorch_model.bin"))
```
However the issue still remains when using the normal installation instead of the particular commit mentioned<|||||>> Was fixed when I used this particular branch
That's great to hear! Peculiar that it didn't work from source though 🤔
> Will this branch be merged?
[This commit has already been merged](https://github.com/huggingface/transformers/commit/de9255de27abfcae4a1f816b904915f0b1e23cd9), I believe, and is part of the latest release. Could you confirm the version of transformers that was installed when the problem was happening initially?
> Another issue I have encountered with the branch I tested is that it doesn't save a adapter_config.json for the checkpoints.
Hmmmm.... I have no idea about this cc @pacman100 who knows a lot more about Peft and Trainer :) <|||||>> Could you confirm the version?
I did transformers.__version__ and got ```4.31.0.dev0```<|||||>I had this same issue today, always stopped around 1 epoch with the same error. I was trying to fine-tune llama-13b as well, on my own dataset, which I know is correctly formatted.<|||||>Using git source pip install too. Trying `!pip install git+https://github.com/huggingface/transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9` is currently working on the second epoch. Thank you @nivibilla!<|||||>cc @younesbelkada As you've been working on the related issue <|||||>@richardr1126 are your checkpoints saving properly? I had to write a custom call back as the adapter_config wasn't being written <|||||>> @richardr1126 are your checkpoints saving properly? I had to write a custom call back as the adapter_config wasn't being written
Yeah, I used your PeftSavingCallback below and added it to the callbacks param in the Trainer. It created the adapter_config and adapter_model and saved them into the `checkpoint-XXX` folder after every save step, which I set to 100. I am using Colab so I downloaded the adapter_model and config to my local computer, then uploaded it to Hugging Face as a LoRA adapter using the Upload files button on the model repo.
```
from trl import SFTTrainer
from transformers import TrainerCallback
import os
class PeftSavingCallback(TrainerCallback):
def on_save(self, args, state, control, **kwargs):
checkpoint_path = os.path.join(args.output_dir, f"checkpoint-{state.global_step}")
kwargs["model"].save_pretrained(checkpoint_path)
if "pytorch_model.bin" in os.listdir(checkpoint_path):
os.remove(os.path.join(checkpoint_path, "pytorch_model.bin"))
trainer = SFTTrainer(
model=model,
train_dataset=sql,
peft_config=peft_config,
dataset_text_field="text",
max_seq_length=176,
tokenizer=tokenizer,
args=training_arguments,
callbacks=[PeftSavingCallback]
)
```<|||||>> Interestingly failed at exactly 1 Epoch
Hello @nivibilla, PR #24415 should fix this. Can you confirm the same?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I think this works. Haven't tested though. Will close for now |
transformers | 24,244 | closed | Make classifier backbone dynamic maskformer, mask2former | ### Feature request
Make the classifier backbone dynamic for MaskFormer and Mask2Former. Currently only the Swin Transformer backbone is supported.
link example: https://github.com/huggingface/transformers/blob/main/src/transformers/models/mask2former/modeling_mask2former.py#L1393
### Motivation
If this feature request is implemented, it will make it easy to benchmark new classifier backbones. | 06-13-2023 12:14:03 | 06-13-2023 12:14:03 | Thank you for solving this issue, but there is still a problem.
The supported-backbones check prevents using any backbone other than Swin.
Please check this line; I had to write a child class and modify it to work around the problem:
https://github.com/huggingface/transformers/blob/main/src/transformers/models/mask2former/configuration_mask2former.py#L124
<|||||>cc @amyeroberts <|||||>Hi @tanzzilaalam, you're completely right. I've opened a PR - #24532 - which should resolve this and allow you to pass in any `backbone_config`. |
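For readers landing on this thread later, a minimal sketch of what the linked PR enables. This is an illustration only: `ResNetConfig` is just one example of a non-Swin backbone config, and argument names may differ slightly from the released API.

```python
from transformers import Mask2FormerConfig, ResNetConfig

# Build a Mask2Former config around a non-Swin backbone config.
backbone_config = ResNetConfig(out_features=["stage1", "stage2", "stage3", "stage4"])
config = Mask2FormerConfig(backbone_config=backbone_config)
print(type(config.backbone_config).__name__)
```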
transformers | 24,243 | closed | facebook\opt layer norm | ### System Info
transformers version 4.28.1.
I notice that in the facebook/opt-X models the LayerNorm weight is equal to 1 in all layers, which means these parameters were never changed from their initial value.
I checked the 125m, 1.3b, 2.7b, 6.7b, and 13b sizes.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
from transformers import OPTModel
import torch

model = OPTModel.from_pretrained("facebook/opt-13b")
for m in model.modules():
    if isinstance(m, torch.nn.LayerNorm):
        (m.weight == 1).all()
```
### Expected behavior
I get: (Expected to get different values)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True)
tensor(True) | 06-13-2023 12:06:44 | 06-13-2023 12:06:44 | Hi @CompressTeam, thanks for raising this issue!
I believe this behaviour is likely coming from the fact that the layer norm layers are instantiated with `elementwise_affine=True` e.g. [here](https://github.com/huggingface/transformers/blob/fdd78d91532dffc4b2493d3b9bd9e19aaf78fe6b/src/transformers/models/opt/modeling_opt.py#L292) (as default [config value is `True`](https://github.com/huggingface/transformers/blob/fdd78d91532dffc4b2493d3b9bd9e19aaf78fe6b/src/transformers/models/opt/configuration_opt.py#L120)). This instantiates the layer with [all weight values as 1, and biases as 0](https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html).
Playing quickly with the snippet provided, I can see that the biases are all different values, so it would seem that either only the biases were updated when training the model, there's been an error in weight conversion, or there's an issue with weight saving.
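As a concrete illustration of the point above, here is a minimal sketch. It uses the smaller facebook/opt-125m checkpoint purely to keep the download manageable; the observation is the same one reported for the larger sizes.

```python
from transformers import OPTModel
import torch

model = OPTModel.from_pretrained("facebook/opt-125m")
for name, module in model.named_modules():
    if isinstance(module, torch.nn.LayerNorm):
        # weights stay at their init value of 1, while the biases carry learned values
        print(name, (module.weight == 1).all().item(), module.bias.abs().mean().item())
```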
I'll hand over to @younesbelkada, who added the model and is most familiar with layer-norm-related logic like `config._remove_final_layer_norm`.
<|||||>Hi @CompressTeam
I think that this is expected, see this interesting thread from the authors: https://github.com/huggingface/transformers/issues/17653 and in particular these 2 messages: https://github.com/huggingface/transformers/issues/17653#issuecomment-1163065167 / https://github.com/huggingface/transformers/issues/17653#issuecomment-1163293340 from what I have understood the models somehow learned to get a layer norm of 1 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,242 | closed | Error in finetuning starcoder with 8 GPU 24GB Memory | ### System Info
I am trying to fine-tune StarCoder with 8 P40 GPUs (24GB memory each). I am using
"https://github.com/Xirider/finetune-gpt2xl" and also referring to "https://github.com/bigcode-project/starcoder",
and I am facing the error below with both.
The error says:
raise ValueError( ValueError: You can't train a model that has been loaded in 8-bit precision on multiple devices in any distributed mode. In order to use 8-bit models that have been loaded across multiple GPUs the solution is to use Naive Pipeline Parallelism. Therefore you should not specify that you are under any distributed regime in your accelerate config.
@younesbelkada , please help on the same
Thanks!!!
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
/group-volume/orc_srib/mukesh.sm/STARCODER/finetune-gpt2xl/deep/lib/python3.8/site-packages/accelerate/accelerator.py"
python -m torch.distributed.launch \
--nproc_per_node 8 finetune.py \
--model_path="bigcode/starcoder"\
--dataset_name="HuggingFaceH4/CodeAlpaca_20K"\
--split="train"\
--size_valid_set 10000\
--streaming \
--seq_length 2048\
--max_steps 1000\
--batch_size 4\
--input_column_name="prompt"\
--output_column_name="completion"\
--gradient_accumulation_steps 16\
--learning_rate 1e-4\
--lr_scheduler_type="cosine"\
--num_warmup_steps 100\
--weight_decay 0.05\
--output_dir="./checkpoints" \
### Expected behavior
I am not able to understand why it cannot be fine-tuned in either 8-bit or 4-bit, because full precision gives an out-of-memory error if I try it.
Please help with the right steps to finetune the starcoder . | 06-13-2023 12:02:45 | 06-13-2023 12:02:45 | Hi @22Mukesh22
Thanks for the issue. Per my understanding, you want to use NPP (Naive Pipeline Parallelism).
For reference check: https://github.com/huggingface/accelerate/issues/1515#issuecomment-1584515731 and the entire thread
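For context, a rough sketch of how the model is usually loaded for NPP, i.e. a single process with the weights sharded across the visible GPUs via `device_map`. The flags here are illustrative, not a verified recipe for 8x P40:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigcode/starcoder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" shards the 8-bit weights across all visible GPUs in one process,
# which is what naive pipeline parallelism expects.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,
    device_map="auto",
)
```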
In this case you should run your script in non-distributed mode. Please run your script with just `python finetune.py xxxx` and let us know how it goes<|||||>But with a single GPU it will take a very long time to finish training. I want to make sure it uses all my GPUs, but that's not happening.
I will update you on the single-GPU run.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,241 | closed | Safely import pytest in testing_utils.py | # What does this PR do?
After merging in #23271, hitting `TAB` for autocompleting: `from transformers.` in an ipython session results in a runtime error.
It appears that this is because `_pytest` and `pytest` are imported in `testing_utils.py`. Although there are no direct imports of `transformers.testing_utils`, it seems this module is imported when using autocomplete.
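A rough sketch of the kind of guard this PR introduces (illustrative only; the merged change may differ in detail):

```python
# In transformers/testing_utils.py: only pull in pytest internals when pytest is installed,
# so that merely introspecting the module (e.g. during autocomplete) no longer hard-fails.
try:
    from _pytest.doctest import Module  # requires pytest in the environment
except ImportError:
    Module = None  # utilities that actually need pytest will fail later with a clearer message
```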
Fixes #24227
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 06-13-2023 12:01:28 | 06-13-2023 12:01:28 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,240 | closed | Tensorflow and Torch yield significantly different results for same model | ### System Info
- `transformers` version: 4.20.1
- Platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.5
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0 (True)
- Tensorflow version (GPU?): 2.7.4 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help?
I am trying to convert a PyTorch transformer model to TensorFlow. When comparing the model outputs, I observe significant differences in all output values.
To reproduce, I compared the model output for the same model (e.g. `distilbert-base-uncased`) once loaded with Tensorflow (2.7.4) and once with torch (1.13.0), see script below. Results are very different.
- Is this an expected outcome?
- Are there any strategies to mitigate these differences?
@sgugger @Rocketknight1
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Execute the following Python code.
```py
import numpy as np
import tensorflow as tf
from transformers import TFAutoModel, AutoTokenizer, AutoModel
import torch
model_path = "distilbert-base-uncased"
tf_model = TFAutoModel.from_pretrained(model_path) # The same problems occur with from_pt=True
pt_model = AutoModel.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
payload = ["This is a great sentence embedding"]
encoded_input_tf = tokenizer(payload, return_tensors='tf')
encoded_input_pt= tokenizer(payload, return_tensors='pt')
tf_output = tf_model(**encoded_input_tf)
with torch.no_grad():
pt_output = pt_model(**encoded_input_pt)
np.testing.assert_allclose(pt_output.last_hidden_state, tf_output.last_hidden_state)
```
yields
```
AssertionError:
Not equal to tolerance rtol=1e-07, atol=0
Mismatched elements: 7680 / 7680 (100%)
Max absolute difference: 0.00231361
Max relative difference: 4.5563045
x: array([[[-0.259016, -0.081947, -0.052371, ..., -0.021481, 0.184779,
0.368155],
[-0.412638, -0.111859, -0.17412 , ..., -0.204094, 0.356639,...
y: array([[[-0.25897 , -0.081805, -0.052341, ..., -0.021413, 0.185112,
0.367904],
[-0.412646, -0.111567, -0.174027, ..., -0.204349, 0.356727,...
```
### Expected behavior
Both models outputs should be similar. The `np.testing.assert_allclose()` should not raise an `AssertionError`. | 06-13-2023 11:41:29 | 06-13-2023 11:41:29 | Hi @DavidHuebner, this is a known issue, and it's unfortunately unavoidable! Floating point calculations are inherently imprecise, which means that changing the specific kernels you use and the order of operations will slightly alter the result. Because TF uses different kernels to Torch, and because TF compiles models by default (which means that some operations may be fused or reordered), there will always be a numerical difference in their outputs. In general, we find that the final model outputs (e.g. token logits for `distilbert`) remain relatively similar, even though hidden states can vary by ~1e-3 or even ~1e-2 in some cases.
If you want to reduce the error, one useful tip is that TensorFlow enables TensorFloat-32 computation by default (which increases speed but reduces precision on newer GPUs), but Torch does not. Adding the line `tf.config.experimental.enable_tensor_float_32_execution(False)` to your TF code may make the results more similar to the Torch outputs, but even with that I suspect the error will be in the range 1e-4 to 1e-5, and reducing the error to ~1e-7 for any real model is probably impossible!<|||||>Alright. Thanks for the response and explanation. |
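To make the TF32 tip in the reply above concrete, here is a small sketch. It assumes an Ampere-or-newer GPU, where TensorFlow enables TensorFloat-32 by default:

```python
import tensorflow as tf
import torch

# Disable TensorFloat-32 matmuls so both frameworks use full float32 kernels.
tf.config.experimental.enable_tensor_float_32_execution(False)
torch.backends.cuda.matmul.allow_tf32 = False  # already the PyTorch default, set here for symmetry
```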
transformers | 24,239 | closed | deprecate `use_mps_device` | # What does this PR do?
1. Deprecate `use_mps_device`. The `mps` device will be used by default if available, similar to the way the `cuda` device is used.
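For reference, a sketch of the intended fallback order. This is an illustration of the behaviour, not the exact Trainer code:

```python
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")  # picked up automatically, no flag needed anymore
else:
    device = torch.device("cpu")
print(device)
```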
Therefore, no action from user is required. | 06-13-2023 11:39:25 | 06-13-2023 11:39:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,238 | closed | Generate: GenerationConfig can overwrite attributes at from_pretrained time | # What does this PR do?
Fixes #24104
As with other configuration files, we should allow overwriting attributes at `from_pretrained` time. The latest change to the `GenerationConfig` loading code disallowed it -- this PR fixes it and adds a test. | 06-13-2023 11:16:39 | 06-13-2023 11:16:39 | _The documentation is not available anymore as the PR was closed or merged._ |
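For reference, a short usage sketch of the behaviour this PR restores (the attribute values are arbitrary examples, not recommendations):

```python
from transformers import GenerationConfig

# kwargs passed at load time should override what is stored on the Hub / on disk
generation_config = GenerationConfig.from_pretrained("gpt2", do_sample=True, top_k=10)
assert generation_config.top_k == 10
```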
transformers | 24,237 | closed | [Time Series] use mean scaler when scaling is a boolean True | # What does this PR do?
Use the mean scaler when `scaling` is `True` or `"mean"` | 06-13-2023 11:16:19 | 06-13-2023 11:16:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>thank you! |
transformers | 24,236 | closed | Use `accelerate` with transformers==4.26.1 | ### Feature request
How can we use `accelerate` features with an older version of `transformers`? I'm asking this because I have to use `adapter-transformers`, and that is based on `transformers v4.26.1`.
### Motivation
I used the latest `transformers` version and found it really helpful for parallelising training.
### Your contribution
It's more of an optimization thing for an older version. I'm not sure how helpful it will be for other people or whether any PR will be required for this.
Tagging @muellerzr here as he has worked on this previously. Thank you! | 06-13-2023 10:42:42 | 06-13-2023 10:42:42 | You'd need to recreate everything that has been performed throughout this integration, which is quite a lot. Distributed training still works in the Trainer natively, we just replaced its guts with Accelerate. The only real difference is that the DataLoader setup is a bit different in how it handles the batches. However, in terms of usage nothing truly has changed. So they should update their version to v4.30.1 if you'd like this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,235 | open | Add SPTSv2 | ### Model description
SPTSv2 is the latest SOTA text spotting model from Bytedance. Given that we already support DETR, should be a breeze to support this model as well.
SPTSv2 is an improvement over the first version: https://github.com/shannanyinxiang/SPTS.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://github.com/bytedance/SPTSv2 | 06-13-2023 09:42:08 | 06-13-2023 09:42:08 | Hi @NielsRogge can I please contribute this model? |
transformers | 24,234 | closed | Adapt Wav2Vec2 conversion for MMS lang identification | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds conversion code for MMS - Language Identification models: https://huggingface.co/models?other=mms&sort=downloads&search=lid
Source: https://github.com/facebookresearch/fairseq/tree/main/examples/mms#tts
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-13-2023 09:27:18 | 06-13-2023 09:27:18 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24234). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,233 | closed | Auto-Converted Fast Tokenizer Producing Incorrect Results | ### System Info
- `transformers` version: 4.30.1
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (cpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The auto-converted fast tokenizer for the LLaMA model sometimes does not produce the same tokenization results as the original sentence piece tokenizer. This is affecting the OpenLLaMA models. Here's the code to reproduce it:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('openlm-research/open_llama_7b', use_fast=False)
fast_tokenizer = AutoTokenizer.from_pretrained('openlm-research/open_llama_7b')
text = 'thermal'
print(tokenizer.encode(text))
print(fast_tokenizer.encode(text))
```
The code produces the following output:
```
[1, 14412]
[1, 31822, 496, 12719]
```
### Expected behavior
The auto-converted fast tokenizer should produce the exact same tokens as the original sentencepiece tokenizer. | 06-13-2023 09:20:50 | 06-13-2023 09:20:50 | Hey! Thanks for reporting. I am investigating this !<|||||>Hi, I have a fix. It also makes the conversion process a lot faster (it is super slow on my machine right now for some reason). Is it ok if I make a PR?
@young-geng do you have other examples of words that go wrong? I think I've fixed it, but more evidence would also be nice 😸 <|||||>@stephantul I can dig into it more to find some more examples. Could you tell me why this happens?<|||||>I'm still a bit confused as to the exact cause of the issue. I think it has to do with the way the merges are ordered. I'm now running the slow conversion process, which takes a long time, but the new fast conversion process at least fixes the "thermal" example you had above.
After that, I can compare and give you a proper analysis, should be done later today.<|||||>The issue was that your tokenizer has a merge which has a score of 0, which is `_t`. This merge wasn't properly recorded, since the conversion code checked for Falsiness of the merge score, and not whether it existed.
i.e., it checked `if vocab_score:`, but it should have been checking `if vocab_score is None:`. Because of this, it removed the `_t` as a possible merge, which afflicts `_thermal` and other words starting with lowercase letter `t`.
<|||||>Great work @stephantul ! Will review your PR to merge it asap! |
transformers | 24,232 | closed | Improving error message when using `use_safetensors=True`. | # What does this PR do?
Fixes[ #273](https://github.com/huggingface/safetensors/issues/273)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
--> | 06-13-2023 09:11:14 | 06-13-2023 09:11:14 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,231 | closed | Fix `check_config_attributes`: check all configuration classes | # What does this PR do?
As @NielsRogge points out to me, the `check_config_attributes` doesn't check all configuration classes (to make sure all `__init__` arguments are really used).
This is because `CONFIG_MAPPING` doesn't contain all configuration classes, in particular, the vision/text config classes for models like Blip2 or CLIP.
This PR fixes this issue, and remove some unused arguments/attributes in some configuration classes. | 06-13-2023 09:04:36 | 06-13-2023 09:04:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,230 | closed | Fix doc deployment | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-13-2023 08:59:37 | 06-13-2023 08:59:37 | Hi @CarlotaCiruelos, thanks for opening a PR!
Could you add some additional information in the PR description describing what issue this is resolving?
All the CircleCI tests need to be passing in order for any PR to be ready to merge. For more information on how to make a PR ready, please read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,228 | closed | QA doc: import torch before it is used | # What does this PR do?
Import `torch` before it is used in the QA doc. Otherwise, it raises an `ImportError`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu
| 06-13-2023 06:55:25 | 06-13-2023 06:55:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts @stevhliu mind checking again? thanks!<|||||>@ByronHsu Thanks again for fixing! All looks good 👍 |
transformers | 24,227 | closed | Importing transformers in ipython throws error due to `_pytest` | ### System Info
- `transformers` version: 4.30.1
- Platform: macOS-12.5-arm64-arm-64bit
- Python version: 3.10.8
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
typing `from transformers.` and then pressing `TAB` (i.e., autocompletion) shows the following stack trace in `ipython`. Afterwards, everything works as expected, although trying to use anything from `transformers.testing_utils` will throw the same error. This happens because `ipython` is doing introspection of the testing module, which then attempts to import the `_pytest` module, which doesn't exist.
```
Traceback (most recent call last):
File "my_env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1084, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/opt/homebrew/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "my_env/lib/python3.10/site-packages/transformers/testing_utils.py", line 39, in <module>
from _pytest.doctest import (
ModuleNotFoundError: No module named '_pytest'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "my_env/lib/python3.10/site-packages/IPython/core/completer.py", line 3171, in _complete
result = matcher(context)
File "my_env/lib/python3.10/site-packages/IPython/core/completer.py", line 2707, in custom_completer_matcher
matches = self.dispatch_custom_completer(context.token) or []
File "my_env/lib/python3.10/site-packages/IPython/core/completer.py", line 2747, in dispatch_custom_completer
res = c(event)
File "my_env/lib/python3.10/site-packages/IPython/core/completerlib.py", line 272, in module_completer
return module_completion(event.line)
File "my_env/lib/python3.10/site-packages/IPython/core/completerlib.py", line 249, in module_completion
completion_list = try_import('.'.join(mod[:-1]), True)
File "my_env/lib/python3.10/site-packages/IPython/core/completerlib.py", line 183, in try_import
completions.extend( [attr for attr in dir(m) if
File "my_env/lib/python3.10/site-packages/IPython/core/completerlib.py", line 184, in <listcomp>
is_importable(m, attr, only_modules)])
File "my_env/lib/python3.10/site-packages/IPython/core/completerlib.py", line 153, in is_importable
return inspect.ismodule(getattr(module, attr))
File "my_env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1072, in __getattr__
value = self._get_module(name)
File "my_env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1086, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.testing_utils because of the following error (look up to see its traceback):
No module named '_pytest'
```
### Expected behavior
A possible workaround would be to use a try-except statement around this block, which then prints a useful error message. | 06-13-2023 05:33:34 | 06-13-2023 05:33:34 | I have exactly same issue with this, how to resolve the issue? <|||||>I have the same issue with the 4.30.0 version. Try to use the 4.29.0 version.<|||||>We can workaround it with `pip install pytest`<|||||>@gkgkska @xxupiano @xin3he Thanks for reporting this. A fix has now been merged into `main`. <|||||>Running into a similar error when running `4.31.0.dev0` transformer language modeling example.
```(base) ubuntu@104-171-202-20:~/llm-training/transformers/examples/pytorch/language-modeling$ python run_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --do_train --do_eval --output_dir /tmp/test-clm
Traceback (most recent call last):
File "/home/ubuntu/llm-training/transformers/examples/pytorch/language-modeling/run_clm.py", line 51, in <module>
from transformers.testing_utils import CaptureLogger
File "/home/ubuntu/anaconda3/lib/python3.9/site-packages/transformers/testing_utils.py", line 109, in <module>
from _pytest.doctest import (
ImportError: cannot import name 'Module' from '_pytest.doctest' (/home/ubuntu/anaconda3/lib/python3.9/site-packages/_pytest/doctest.py)```<|||||>Hi @praateekmahajan, could you provide some more information about the issue? Specifically a reproducible set of steps or code and information about the running environment (run `transformers-cli env` in your terminal). <|||||>> Running into a similar error when running `4.31.0.dev0` transformer language modeling example.
>
> ```
> Traceback (most recent call last):
> File "/home/ubuntu/llm-training/transformers/examples/pytorch/language-modeling/run_clm.py", line 51, in <module>
> from transformers.testing_utils import CaptureLogger
> File "/home/ubuntu/anaconda3/lib/python3.9/site-packages/transformers/testing_utils.py", line 109, in <module>
> from _pytest.doctest import (
> ImportError: cannot import name 'Module' from '_pytest.doctest' (/home/ubuntu/anaconda3/lib/python3.9/site-packages/_pytest/doctest.py)```
> ```
I am also seeing this error. I just hacked out the use of the CaptureLogger in the example run_clm.
Since this bug is closed, should we open a different one with this issue?<|||||>@asampat3090 @sei-amellinger
@amyeroberts 's fix #24241 is merged after the tag `v4.31.0.dev0`. So the first thing to check is to see what the commit you have used to install `transformers`. In any case, you can fetch the latest commit and install it again. It should work fine I think. <|||||>pip3 uninstall pytest
pip3 install pytest
maybe pytest version is old<|||||>> maybe pytest version is old
Yeah, it was an old version of pytest.
Thanks!<|||||>Hi team, when will there be a new release that contains the fix from https://github.com/huggingface/transformers/pull/24241?
cc @sgugger<|||||>The next release will be early next week, probably on Tuesday.<|||||>Thank you! |
transformers | 24,226 | closed | remove unused is_decoder parameter in DetrAttention | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #24161
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-13-2023 05:22:55 | 06-13-2023 05:22:55 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,225 | closed | Transformers can't detect tensorflow installed in some different path of the environment, even though the path is added to PYTHONPATH | Transformers is not able to detect tensorflow installed in different directory.
Steps to reproduce, execute the commands in order:
`docker pull unifyai/ivy:latest`
`docker run --rm -it unifyai/ivy:latest`
`pip install transformers`
`python`
`from transformers import TFDeiTModel`
`model = TFDeiTModel.from_pretrained("facebook/deit-base-distilled-patch16-224")`
| 06-13-2023 05:15:49 | 06-13-2023 05:15:49 | Hi @RickSanchezStoic, thanks for raising this issue!
It seems this is a bug relating to how we detect TF being in the environment, @Rocketknight1 is opening a PR to resolve. Related issue here: #24253<|||||>Hi @RickSanchezStoic, can you confirm what you mean by "installed in a different directory" here? Do you mean that it's not installed as a library, but instead just present as a local directory in the directory you're running your code in?<|||||>@RickSanchezStoic we just merged PR #24255 based on this issue and #24253. However, it might not actually resolve your problem - can you test if the new version fixes your issue? You can install the latest version from main with `pip install --upgrade git+https://github.com/huggingface/transformers.git`<|||||>> Hi @RickSanchezStoic, can you confirm what you mean by "installed in a different directory" here? Do you mean that it's not installed as a library, but instead just present as a local directory in the directory you're running your code in?
It means when the package is installed elsewhere when you use the `--target` flag with pip to specify a different location.<|||||>> @RickSanchezStoic we just merged PR #24255 based on this issue and #24253. However, it might not actually resolve your problem - can you test if the new version fixes your issue? You can install the latest version from main with `pip install --upgrade git+https://github.com/huggingface/transformers.git`
Sure! will check this out and report here. Thanks!<|||||>> @RickSanchezStoic we just merged PR #24255 based on this issue and #24253. However, it might not actually resolve your problem - can you test if the new version fixes your issue? You can install the latest version from main with `pip install --upgrade git+https://github.com/huggingface/transformers.git`
This worked. This is what we needed. Thanks a lot! |
transformers | 24,224 | closed | Fix LLaMa beam search when using parallelize | # What does this PR do?
This PR fixes a crash when running beam search on LLaMa on multiple GPUs. Similar issue is also observed and fixed on T5 #11717
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-13-2023 04:09:46 | 06-13-2023 04:09:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@FeiWang96 The quality checks are currently failing. To resolve these, run `make style` at the top level of the repo and commit any changes made. <|||||>Hi @amyeroberts , thank you for approving. I've fixed the code format issue. However, the ci failed on other tests due to some network issues. I don't have the permission to rerun.<|||||>@FeiWang96 Re-ran and all passing now. Thanks again! |
transformers | 24,223 | closed | MMS: target_lang=fra in pipeline() leads to "Size mismatch for lm_head.weight/bias when loading state_dict for Wav2Vec2ForCTC" | ### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@patrickvonplaten @sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Colab link: https://colab.research.google.com/drive/1YmABsYKCk39Z7GF390G316ZsEWH7GkqT?usp=sharing
### Expected behavior
```pipe = pipeline(model="facebook/mms-1b-l1107", model_kwargs={"target_lang":"fra"})```
I expected this to create the pipeline with the `fra` adapter loaded, as seems to be intended [here](https://github.com/huggingface/transformers/commit/5dfd407b37ac683dc91637e9913b0ae9205d2acd#diff-fde96a141d70737bff942cb61341f3b4b87729c9a066ecee4bfc86dfe590a8e6R1864).
It fails with a size mismatch issue. Ignoring it seems to load the english adapter instead, as the result is poor and doesn’t match the demo on the official space (https://huggingface.co/spaces/facebook/MMS). | 06-13-2023 00:37:23 | 06-13-2023 00:37:23 | Hey @erickedji,
We were indeed missing docs here, I've added them in #24292 .
However, the way you used the pipeline is 100% correct! For some reason **"facebook/mms-1b-l1107"** doesn't perform very well. However, **"facebook/mms-1b-all"** works well.
```py
from transformers import pipeline
model_id = "facebook/mms-1b-all"
pipe = pipeline(model=model_id, model_kwargs={"target_lang":"fra", "ignore_mismatched_sizes":True})
print(pipe("http://french.voiceoversamples.com/jeanNL.mp3"))
```
gives
```
{'text': "la première fois que vous allez ouvrir une interaction client vous serait dirigée vers la page d'identification il s'agit du mode par défaut utilisé pour toutes les interactions clients veuillez vérifier le numéro de sécurité sociale de l'appelant avant de poursuivre une fois après avoir confirmé cliqué sur le bouton suivant comme ceci très bien passons maintenant à l'étape dex"}
```
and **"facebook/mms-1b-fl102"** also seems to have problems.
```py
from transformers import pipeline
model_id = "facebook/mms-1b-fl102"
pipe = pipeline(model=model_id, model_kwargs={"target_lang":"fra", "ignore_mismatched_sizes":True})
print(pipe("http://french.voiceoversamples.com/jeanNL.mp3"))
```
gives
```
{'text': "la première fois que vous alez ouvrir une interaction client vous seraen dirigée vers la page d'identification il s’agit du mode par des fauts utilisé pour toutes les interactions clients veuillez vérifier le numéro de sécurité sociale de l'appelan avant de poursuivre une fois après avoir confirmé clicque sur le bouton suivant comme ceci très bien passons maintenant à l’étape d"}
```
cc @vineelpratap it's a bit surprising that the fl102 model perfoms worse than the `"all"` model here no? Maybe I've made an error with the weight conversion? Could you maybe check what the original checkpoint & code gives for pure CTC for `"http://french.voiceoversamples.com/jeanNL.mp3"` ?
@Vaibhavs10 we also should run some evals on the whole FLEURS dataset to be sure.<|||||>Hi, for the above sample - I get this result with FL102 models and using greedy decoding. I converted `.mp3` to `.wav` using this command `ffmpeg -y -i audio.mp3 -ar 16000 -ac 1 audio.wav`
```
la première fois que vous allez ouvrir une interaction client vous seraet dirigée vers la page d'identification il s’agit du mode par des fauts utilisé pour toutes les interactions clients. veuillez vérifier le numéro de sécurité sociale de l'appelan avant de poursuivre. une fois après avoir confirmé clique sur le bouton suivant comme ceci très bien passans maintenant à l’étape 2
```
Is it possible that we used incorrect dictionary ?
> it's a bit surprising that the fl102 model perfoms worse than the "all" model here no?
Note that the MMS-FL102 model is trained only on FLEURS data, which consists of about 10 hours of data per language, while the MMS-ALL model is trained on a combination of MLS, Common Voice, FLEURS, etc. So, it is expected that the performance of the MMS-ALL model is better than MMS-FL102.
MMS-FL102, MMS-FL1107 were open sourced so that one can reproduce some of the results in the paper. If you care about the best performing ASR model, using MMS-ALL model would be the best choice. Running LM decoding will further boost performance as we discuss in the MMS paper, and we are working on open sourcing the LMs soon.
<|||||>Thanks you both for the clarifications!<|||||>@patrickvonplaten - do you know why there is a discrepancy in the output of FL102 models from `fairseq` and `transformer` models for the above audio sample in French. It would be good to figure out the underlying issue.
<|||||>I don't know currently, I'll try to look into it over the weekend<|||||>There was indeed a bug! What is happening here is the following.
1.) By specifying `target_lang` in the constructor method of the pipeline, it is passed to the constructor method of `from_pretrained` of the model, which means inside the `pipeline(...)` function this is called:
```py
model_id = "facebook/mms-1b-fl102"
model = Wav2Vec2ForCTC.from_pretrained(model_id, target_lang="fra")
```
2.) Now by passing `target_lang="fra"` however we load the french adapter weights here: https://github.com/huggingface/transformers/blob/ee88ae59940fd4b2c8fc119373143d7a1175c651/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1880
in the init method of the model.
3.) **However** the init method is run before `from_pretrained(...)` loads the state dict into the model. This means the correctly loaded French adapter layers are later overwritten again by the original English adapter layers (the default in the state dict).
This was sadly not noticed in the tests because English adapter weights work surprisingly well for French :sweat_smile:
=> A quick'n'dirty fix for users to get the exact same results as posted by @vineelpratap [here](https://github.com/huggingface/transformers/issues/24223#issuecomment-1593812212), is running the following code:
```py
from transformers import pipeline
model_id = "facebook/mms-1b-all"
pipe = pipeline(model=model_id, model_kwargs={"target_lang":"fra", "ignore_mismatched_sizes":True})
pipe.model.load_adapter("fra")  # THIS CORRECTS THE INCORRECTLY OVERWRITTEN WEIGHTS!
print(pipe("http://french.voiceoversamples.com/jeanNL.mp3"))
```
gives:
```
la première fois que vous allez ouvrir une interaction client vous seraet dirigée vers la page d'identification il s’agit du mode par des fauts utilisé pour toutes les interactions clients. veuillez vérifier le numéro de sécurité sociale de l'appelan avant de poursuivre. une fois après avoir confirmé clique sur le bouton suivant comme ceci très bien passans maintenant à l’étape 2
```
**Also, this is not a problem for the original demo because the original demo only makes use of `load_adapter` after having called `from_pretrained`, which solves this problem.**
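(For completeness, a rough sketch of that safe ordering — load the checkpoint first and switch the adapter afterwards; the calls mirror the MMS documentation, but treat them as illustrative:)
```py
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "facebook/mms-1b-all"

processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)  # state dict (default "eng" adapter) is loaded here

# only now switch the language, so nothing can overwrite the French adapter
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")
```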
<|||||>For now this is a hack we have to do, but this PR: https://github.com/huggingface/transformers/pull/24335 should solve it nicely.<|||||>I want to use an LM in the decoder but I can't seem to get it right. Do you know how to use the language model https://huggingface.co/facebook/mms-cclms?
I get ClientError: Not Found for url: https://huggingface.co/facebook/mms-cclms/resolve/main/config.json.
I checked the link and there are no instructions or examples to follow.<|||||>Hey @shdh1995 - would you mind opening a new issue to track this with a reproducible code snippet? Note that this file might assist you in setting up in the meantime: https://huggingface.co/spaces/mms-meta/MMS/blob/main/asr.py<|||||>Running the [code snippet in the docs](https://huggingface.co/docs/transformers/model_doc/mms#loading) results in this exact issue. As shown in the below colab logs, I am installing from source. Has this problem been resolved and am I missing something? 👀

```py
from transformers import pipeline
model_id = "facebook/mms-1b-all"
target_lang = "fra"
pipe = pipeline(model=model_id, model_kwargs={"target_lang": "fra", "ignore_mismatched_sizes": True})
```
The following code also results in the same error:
```py
from transformers import Wav2Vec2ForCTC, AutoProcessor
model_id = "facebook/mms-1b-all"
target_lang = "fra"
processor = AutoProcessor.from_pretrained(model_id, target_lang=target_lang)
model = Wav2Vec2ForCTC.from_pretrained(model_id, target_lang=target_lang, ignore_mismatched_sizes=True)
```

<|||||>Thanks for the ping here @Vaibhavs10 and sorry about this issue going a bit unnoticed. Could you try again with current "main" after https://github.com/huggingface/transformers/pull/25267 @erickedji ?<|||||>@patrickvonplaten @xenova The only difference seems to be the `pip install`, and I don't see why it leads to a different behavior.
I just tried again by running the first and last cells here : https://colab.research.google.com/drive/1YmABsYKCk39Z7GF390G316ZsEWH7GkqT?usp=sharing
It worked. The above notebook basically does `!pip install git+https://github.com/huggingface/transformers datasets[torch]`, then:
```python
from transformers import pipeline
model_id = "facebook/mms-1b-all"
pipe = pipeline(model=model_id, model_kwargs={"target_lang":"fra", "ignore_mismatched_sizes":True})
output = pipe("http://french.voiceoversamples.com/jeanNL.mp3")
output
```
I'm not familiar enough with `pip` to comment.
@xenova Can you try with the same pip call as my notebook?<|||||>That's right 👍 you are installing from source (which includes [the latest fix](https://github.com/huggingface/transformers/pull/25267)).<|||||>Oh, nevermind ;) |
transformers | 24,222 | closed | Bump transformers from 4.26.1 to 4.30.0 in /examples/tensorflow/language-modeling-tpu | Bumps [transformers](https://github.com/huggingface/transformers) from 4.26.1 to 4.30.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p>
<blockquote>
<h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2>
<h2>100k</h2>
<p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p>
<p>We accept PRs to add projects to the list!</p>
<ul>
<li>Top 100 by <a href="https://github.com/LysandreJik"><code>@LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li>
<li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li>
<li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li>
</ul>
<h2>4-bit quantization and QLoRA</h2>
<p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p>
<ul>
<li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li>
</ul>
<h2>Agents</h2>
<p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p>
<ul>
<li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li>
<li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li>
<li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li>
</ul>
<ul>
<li>Add local agent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li>
<li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li>
<li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li>
</ul>
<h2>Safetensors</h2>
<p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p>
<p>It has now become a core dependency of <code>transformers</code>.</p>
<ul>
<li>Making <code>safetensors</code> a core dependency. by <a href="https://github.com/Narsil"><code>@Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li>
</ul>
<h2>New models</h2>
<h3>Swiftformer</h3>
<p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p>
<ul>
<li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li>
</ul>
<h3>Autoformer</h3>
<p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p>
<ul>
<li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li>
<li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li>
<li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v4.26.1...v4.30.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 06-12-2023 21:46:48 | 06-12-2023 21:46:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.
If you change your mind, just re-open this PR and I'll resolve any conflicts on it.<|||||>@dependabot ignore this major version<|||||>OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢 |
transformers | 24,221 | closed | Bump transformers from 4.26.0 to 4.30.0 in /examples/research_projects/vqgan-clip | Bumps [transformers](https://github.com/huggingface/transformers) from 4.26.0 to 4.30.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p>
<blockquote>
<h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2>
<h2>100k</h2>
<p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p>
<p>We accept PRs to add projects to the list!</p>
<ul>
<li>Top 100 by <a href="https://github.com/LysandreJik"><code>@LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li>
<li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li>
<li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li>
</ul>
<h2>4-bit quantization and QLoRA</h2>
<p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p>
<ul>
<li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li>
</ul>
<h2>Agents</h2>
<p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p>
<ul>
<li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li>
<li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li>
<li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li>
</ul>
<ul>
<li>Add local agent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li>
<li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li>
<li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li>
</ul>
<h2>Safetensors</h2>
<p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p>
<p>It has now become a core dependency of <code>transformers</code>.</p>
<ul>
<li>Making <code>safetensors</code> a core dependency. by <a href="https://github.com/Narsil"><code>@Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li>
</ul>
<h2>New models</h2>
<h3>Swiftformer</h3>
<p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p>
<ul>
<li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li>
</ul>
<h3>Autoformer</h3>
<p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p>
<ul>
<li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li>
<li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li>
<li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v4.26.0...v4.30.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 06-12-2023 21:45:39 | 06-12-2023 21:45:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@dependabot ignore this major version<|||||>OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢 |
transformers | 24,220 | closed | Bump transformers from 4.21.1 to 4.30.0 in /examples/research_projects/codeparrot/examples | Bumps [transformers](https://github.com/huggingface/transformers) from 4.21.1 to 4.30.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p>
<blockquote>
<h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2>
<h2>100k</h2>
<p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p>
<p>We accept PRs to add projects to the list!</p>
<ul>
<li>Top 100 by <a href="https://github.com/LysandreJik"><code>@LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li>
<li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li>
<li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li>
</ul>
<h2>4-bit quantization and QLoRA</h2>
<p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p>
<ul>
<li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li>
</ul>
<h2>Agents</h2>
<p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p>
<ul>
<li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li>
<li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li>
<li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li>
</ul>
<ul>
<li>Add local agent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li>
<li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li>
<li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li>
</ul>
<h2>Safetensors</h2>
<p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p>
<p>It has now become a core dependency of <code>transformers</code>.</p>
<ul>
<li>Making <code>safetensors</code> a core dependency. by <a href="https://github.com/Narsil"><code>@Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li>
</ul>
<h2>New models</h2>
<h3>Swiftformer</h3>
<p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p>
<ul>
<li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li>
</ul>
<h3>Autoformer</h3>
<p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p>
<ul>
<li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li>
<li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li>
<li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v4.21.1...v4.30.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 06-12-2023 21:40:07 | 06-12-2023 21:40:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@dependabot ignore this major version<|||||>OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢 |
transformers | 24,219 | closed | Bump transformers from 4.19.0 to 4.30.0 in /examples/research_projects/codeparrot | Bumps [transformers](https://github.com/huggingface/transformers) from 4.19.0 to 4.30.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p>
<blockquote>
<h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2>
<h2>100k</h2>
<p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p>
<p>We accept PRs to add projects to the list!</p>
<ul>
<li>Top 100 by <a href="https://github.com/LysandreJik"><code>@LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li>
<li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li>
<li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li>
</ul>
<h2>4-bit quantization and QLoRA</h2>
<p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p>
<ul>
<li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li>
</ul>
<h2>Agents</h2>
<p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p>
<ul>
<li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li>
<li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li>
<li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li>
</ul>
<ul>
<li>Add local agent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li>
<li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li>
<li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li>
</ul>
<h2>Safetensors</h2>
<p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p>
<p>It has now become a core dependency of <code>transformers</code>.</p>
<ul>
<li>Making <code>safetensors</code> a core dependency. by <a href="https://github.com/Narsil"><code>@Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li>
</ul>
<h2>New models</h2>
<h3>Swiftformer</h3>
<p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p>
<ul>
<li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li>
</ul>
<h3>Autoformer</h3>
<p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p>
<ul>
<li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li>
<li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li>
<li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v4.19.0...v4.30.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 06-12-2023 21:37:17 | 06-12-2023 21:37:17 | @dependabot ignore this major version<|||||>OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢 |
transformers | 24,218 | closed | Bump transformers from 3.5.1 to 4.30.0 in /examples/research_projects/deebert | Bumps [transformers](https://github.com/huggingface/transformers) from 3.5.1 to 4.30.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p>
<blockquote>
<h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2>
<h2>100k</h2>
<p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p>
<p>We accept PRs to add projects to the list!</p>
<ul>
<li>Top 100 by <a href="https://github.com/LysandreJik"><code>@LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li>
<li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li>
<li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li>
</ul>
<h2>4-bit quantization and QLoRA</h2>
<p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p>
<ul>
<li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li>
</ul>
<h2>Agents</h2>
<p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p>
<ul>
<li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li>
<li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li>
<li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li>
</ul>
<ul>
<li>Add local agent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li>
<li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li>
<li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li>
</ul>
<h2>Safetensors</h2>
<p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p>
<p>It has now become a core dependency of <code>transformers</code>.</p>
<ul>
<li>Making <code>safetensors</code> a core dependency. by <a href="https://github.com/Narsil"><code>@Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li>
</ul>
<h2>New models</h2>
<h3>Swiftformer</h3>
<p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p>
<ul>
<li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li>
</ul>
<h3>Autoformer</h3>
<p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p>
<ul>
<li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li>
<li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li>
<li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v3.5.1...v4.30.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 06-12-2023 21:30:48 | 06-12-2023 21:30:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@dependabot ignore this major version<|||||>OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢 |
transformers | 24,217 | closed | Bump transformers from 3.5.1 to 4.30.0 in /examples/research_projects/bertology | Bumps [transformers](https://github.com/huggingface/transformers) from 3.5.1 to 4.30.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p>
<blockquote>
<h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2>
<h2>100k</h2>
<p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p>
<p>We accept PRs to add projects to the list!</p>
<ul>
<li>Top 100 by <a href="https://github.com/LysandreJik"><code>@LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li>
<li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li>
<li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li>
</ul>
<h2>4-bit quantization and QLoRA</h2>
<p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p>
<ul>
<li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li>
</ul>
<h2>Agents</h2>
<p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p>
<ul>
<li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li>
<li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li>
<li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li>
</ul>
<ul>
<li>Add local agent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li>
<li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li>
<li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li>
</ul>
<h2>Safetensors</h2>
<p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p>
<p>It has now become a core dependency of <code>transformers</code>.</p>
<ul>
<li>Making <code>safetensors</code> a core dependency. by <a href="https://github.com/Narsil"><code>@Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li>
</ul>
<h2>New models</h2>
<h3>Swiftformer</h3>
<p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p>
<ul>
<li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li>
</ul>
<h3>Autoformer</h3>
<p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p>
<ul>
<li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li>
<li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li>
<li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v3.5.1...v4.30.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 06-12-2023 21:30:44 | 06-12-2023 21:30:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@dependabot ignore this major version<|||||>OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢 |
transformers | 24,216 | closed | Bump transformers from 3.5.1 to 4.30.0 in /examples/research_projects/pplm | Bumps [transformers](https://github.com/huggingface/transformers) from 3.5.1 to 4.30.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p>
<blockquote>
<h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2>
<h2>100k</h2>
<p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p>
<p>We accept PRs to add projects to the list!</p>
<ul>
<li>Top 100 by <a href="https://github.com/LysandreJik"><code>@LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li>
<li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li>
<li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li>
</ul>
<h2>4-bit quantization and QLoRA</h2>
<p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p>
<ul>
<li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li>
</ul>
<h2>Agents</h2>
<p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p>
<ul>
<li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li>
<li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li>
<li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li>
</ul>
<ul>
<li>Add local agent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li>
<li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li>
<li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li>
</ul>
<h2>Safetensors</h2>
<p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p>
<p>It has now become a core dependency of <code>transformers</code>.</p>
<ul>
<li>Making <code>safetensors</code> a core dependency. by <a href="https://github.com/Narsil"><code>@Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li>
</ul>
<h2>New models</h2>
<h3>Swiftformer</h3>
<p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p>
<ul>
<li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li>
</ul>
<h3>Autoformer</h3>
<p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p>
<ul>
<li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li>
<li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li>
<li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v3.5.1...v4.30.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 06-12-2023 21:30:14 | 06-12-2023 21:30:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@dependabot ignore this major version<|||||>OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢 |
transformers | 24,215 | closed | Bump transformers from 3.5.1 to 4.30.0 in /examples/research_projects/adversarial | Bumps [transformers](https://github.com/huggingface/transformers) from 3.5.1 to 4.30.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p>
<blockquote>
<h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2>
<h2>100k</h2>
<p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p>
<p>We accept PRs to add projects to the list!</p>
<ul>
<li>Top 100 by <a href="https://github.com/LysandreJik"><code>@LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li>
<li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li>
<li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li>
</ul>
<h2>4-bit quantization and QLoRA</h2>
<p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p>
<ul>
<li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li>
</ul>
<h2>Agents</h2>
<p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p>
<ul>
<li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li>
<li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li>
<li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li>
</ul>
<ul>
<li>Add local agent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li>
<li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li>
<li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li>
</ul>
<h2>Safetensors</h2>
<p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p>
<p>It has now become a core dependency of <code>transformers</code>.</p>
<ul>
<li>Making <code>safetensors</code> a core dependency. by <a href="https://github.com/Narsil"><code>@Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li>
</ul>
<h2>New models</h2>
<h3>Swiftformer</h3>
<p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p>
<ul>
<li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li>
</ul>
<h3>Autoformer</h3>
<p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p>
<ul>
<li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li>
<li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li>
<li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v3.5.1...v4.30.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 06-12-2023 21:30:14 | 06-12-2023 21:30:14 | @dependabot ignore this major version<|||||>OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢 |
transformers | 24,214 | closed | Bump transformers from 3.5.1 to 4.30.0 in /examples/research_projects/bert-loses-patience | Bumps [transformers](https://github.com/huggingface/transformers) from 3.5.1 to 4.30.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p>
<blockquote>
<h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2>
<h2>100k</h2>
<p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p>
<p>We accept PRs to add projects to the list!</p>
<ul>
<li>Top 100 by <a href="https://github.com/LysandreJik"><code>@LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li>
<li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li>
<li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li>
</ul>
<h2>4-bit quantization and QLoRA</h2>
<p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p>
<ul>
<li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li>
</ul>
<h2>Agents</h2>
<p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p>
<ul>
<li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li>
<li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li>
<li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li>
</ul>
<ul>
<li>Add local agent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li>
<li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li>
<li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li>
</ul>
<h2>Safetensors</h2>
<p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p>
<p>It has now become a core dependency of <code>transformers</code>.</p>
<ul>
<li>Making <code>safetensors</code> a core dependency. by <a href="https://github.com/Narsil"><code>@Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li>
</ul>
<h2>New models</h2>
<h3>Swiftformer</h3>
<p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p>
<ul>
<li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li>
</ul>
<h3>Autoformer</h3>
<p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p>
<ul>
<li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li>
<li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li>
<li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v3.5.1...v4.30.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 06-12-2023 21:30:14 | 06-12-2023 21:30:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@dependabot ignore this major version<|||||>OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢 |
transformers | 24,213 | closed | Bump transformers from 3.5.1 to 4.30.0 in /examples/research_projects/bertabs | Bumps [transformers](https://github.com/huggingface/transformers) from 3.5.1 to 4.30.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p>
<blockquote>
<h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2>
<h2>100k</h2>
<p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p>
<p>We accept PRs to add projects to the list!</p>
<ul>
<li>Top 100 by <a href="https://github.com/LysandreJik"><code>@LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li>
<li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li>
<li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li>
</ul>
<h2>4-bit quantization and QLoRA</h2>
<p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p>
<ul>
<li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li>
</ul>
<h2>Agents</h2>
<p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p>
<ul>
<li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li>
<li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li>
<li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li>
</ul>
<ul>
<li>Add local agent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li>
<li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li>
<li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li>
</ul>
<h2>Safetensors</h2>
<p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p>
<p>It has now become a core dependency of <code>transformers</code>.</p>
<ul>
<li>Making <code>safetensors</code> a core dependency. by <a href="https://github.com/Narsil"><code>@Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li>
</ul>
<h2>New models</h2>
<h3>Swiftformer</h3>
<p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p>
<ul>
<li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li>
</ul>
<h3>Autoformer</h3>
<p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p>
<ul>
<li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li>
<li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li>
<li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v3.5.1...v4.30.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 06-12-2023 21:30:13 | 06-12-2023 21:30:13 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24213). All of your documentation changes will be reflected on that endpoint.<|||||>@dependabot ignore this major version<|||||>OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢 |
transformers | 24,212 | open | QLoRA Training does not give expected results | ### System Info
- `transformers` version: 4.30.0
- Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I tried fine-tuning the [InstructCodeT5+](https://huggingface.co/Salesforce/instructcodet5p-16b) model using QLoRA, and the loss is stuck at a particular value. I followed the steps given in this example [notebook](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing#scrollTo=FuXIFTFapAMI) for QLoRA. Code for the experiment:
```
import pandas as pd
import os
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, BitsAndBytesConfig
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training, TaskType, prepare_model_for_kbit_training
from transformers import DataCollatorForSeq2Seq
import evaluate
import nltk
import numpy as np
from nltk.tokenize import sent_tokenize
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments
from datasets import Dataset, DatasetDict
import argparse
import pickle
import json
parser = argparse.ArgumentParser(description='Options')
parser.add_argument('--dataset_dir', default='data', type=str, help="folder in which the dataset is stored")
parser.add_argument('--output_dir', default="lora-instructcodet5p", type=str, help="output directory for the model")
parser.add_argument('--results_dir', default="results", type=str, help="where the results should be stored")
args = parser.parse_args()
nltk.download("punkt")
tokenized_dataset = DatasetDict.load_from_disk(args.dataset_dir)
# Metric
metric = evaluate.load("rouge")
pad_tok = 50256
token_id="Salesforce/instructcodet5p-16b"
tokenizer = AutoTokenizer.from_pretrained(token_id)
# helper function to postprocess text
def postprocess_text(preds, labels):
preds = [pred.strip() for pred in preds]
labels = [label.strip() for label in labels]
# rougeLSum expects newline after each sentence
preds = ["\n".join(sent_tokenize(pred)) for pred in preds]
labels = ["\n".join(sent_tokenize(label)) for label in labels]
return preds, labels
def compute_metrics(eval_preds):
preds, labels = eval_preds
if isinstance(preds, tuple):
preds = preds[0]
for idx in range(len(preds)):
for idx2 in range(len(preds[idx])):
if preds[idx][idx2]==-100:
preds[idx][idx2] = 50256
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
# Replace -100 in the labels as we can't decode them.
labels = np.where(labels != pad_tok, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
# Some simple post-processing
decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
result = {k: round(v * 100, 4) for k, v in result.items()}
prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
result["gen_len"] = np.mean(prediction_lens)
return result
def get_dict(predicts):
d = {}
for num in range(len(tokenized_dataset['test'])):
pred = tokenizer.decode([n for n in predicts[0][num] if n!=50256 and n!=-100])[1:]
d[num+1] = {'Question':tokenizer.decode([n for n in tokenized_dataset['test'][num]['input_ids'] if n!=50256]),
'Ground truth solution':tokenizer.decode([n for n in tokenized_dataset['test'][num]['labels'] if n!=50256]),
'Prediction': pred if pred else None}
return d
def find_all_linear_names(model):
cls = torch.nn.Linear
lora_module_names = set()
for name, module in model.named_modules():
if isinstance(module, cls):
names = name.split('.')
lora_module_names.add(names[0] if len(names) == 1 else names[-1])
if 'lm_head' in lora_module_names:
lora_module_names.remove('lm_head')
return list(lora_module_names)
def main():
device = 'cuda'
# huggingface hub model id
model_id="instructcodet5p-16b"
if not os.path.exists(model_id):
model_id=token_id
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
# load model from the hub
model = AutoModelForSeq2SeqLM.from_pretrained(model_id,
# torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True, decoder_start_token_id=1, pad_token_id=pad_tok, device_map="auto", quantization_config=bnb_config)
modules = find_all_linear_names(model)
# Define LoRA Config
lora_config = LoraConfig(
r=16,
lora_alpha=32,
target_modules=modules,
lora_dropout=0.05,
bias="none",
task_type=TaskType.SEQ_2_SEQ_LM
)
model = prepare_model_for_kbit_training(model, False)
# add LoRA adaptor
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# we want to ignore tokenizer pad token in the loss
label_pad_token_id = pad_tok
# Data collator
data_collator = DataCollatorForSeq2Seq(
tokenizer,
model=model,
label_pad_token_id=label_pad_token_id,
pad_to_multiple_of=8
)
output_dir=args.output_dir
training_args = Seq2SeqTrainingArguments(
output_dir=output_dir,
per_device_train_batch_size=1,
# per_device_eval_batch_size=1,
predict_with_generate=True,
weight_decay=0.05,
# warmup_steps=200,
fp16=False, # Overflows with fp16
learning_rate=1e-4,
num_train_epochs=5,
logging_dir=f"{output_dir}/logs",
logging_strategy="epoch",
report_to="tensorboard",
push_to_hub=False,
# generation_max_length=200,
optim="paged_adamw_8bit",
lr_scheduler_type = 'constant'
)
# Create Trainer instance
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=tokenized_dataset["train"],
# eval_dataset=tokenized_dataset["validation"],
# compute_metrics=compute_metrics,
)
# train model
train_result = trainer.train()
if __name__ == '__main__':
main()
```
### Expected behavior
Output using QLoRA; the generations are empty during the evaluation stage:
```
{'loss': 6.9007, 'learning_rate': 0.0001, 'epoch': 1.0}
{'loss': 6.9007, 'learning_rate': 0.0001, 'epoch': 2.0}
{'loss': 6.9007, 'learning_rate': 0.0001, 'epoch': 3.0}
{'loss': 6.9007, 'learning_rate': 0.0001, 'epoch': 4.0}
{'loss': 6.9007, 'learning_rate': 0.0001, 'epoch': 5.0}
```
The same setup works when I use LoRA instead: the loss decreases and the generations are much better:
```
{'loss': 0.8144, 'learning_rate': 0.0001, 'epoch': 1.0}
{'loss': 0.0745, 'learning_rate': 0.0001, 'epoch': 2.0}
{'loss': 0.0391, 'learning_rate': 0.0001, 'epoch': 3.0}
{'loss': 0.0189, 'learning_rate': 0.0001, 'epoch': 4.0}
{'loss': 0.007, 'learning_rate': 0.0001, 'epoch': 5.0}
```
I understand that QLoRA can cause a small performance drop, but the results I get with it are close to nothing after fine-tuning. Any suggestions or help with this would be greatly appreciated! | 06-12-2023 21:18:16 | 06-12-2023 21:18:16 | cc @pacman100 <|||||>@karths8 did you manage to resolve the above issue?<|||||>No! Have not been able to solve it yet!<|||||>@karths8 @amdnsr
I think the problem is rooted in the `find_all_linear_names` function, which is incompatible with the `QLoRA` setting. The correct implementation of this function should look like the following:
```
import bitsandbytes as bnb  # provides the k-bit Linear classes used below
import torch
def find_all_linear_names(args, model):
cls = bnb.nn.Linear4bit if args.bits == 4 else (bnb.nn.Linear8bitLt if args.bits == 8 else torch.nn.Linear)
lora_module_names = set()
for name, module in model.named_modules():
if isinstance(module, cls):
names = name.split('.')
lora_module_names.add(names[0] if len(names) == 1 else names[-1])
if 'lm_head' in lora_module_names: # needed for 16-bit
lora_module_names.remove('lm_head')
return list(lora_module_names)
```
Under the `QLoRA` setting, the `torch.nn.Linear` modules are replaced by `bnb.nn.Linear4bit` (or `bnb.nn.Linear8bitLt` for 8-bit) to implement the quantization, so the reason your training loss never changes is that the original helper finds no target modules in your model at all.
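A minimal sanity check on top of the original script, as a sketch (it assumes `model` was already loaded with the 4-bit `BitsAndBytesConfig` from the issue and that `args` carries a `bits=4` attribute, which is not in the original argparse setup; both names are illustrative):
```
import bitsandbytes as bnb
import torch
# After loading with load_in_4bit=True, the linear layers are bnb.nn.Linear4bit,
# not torch.nn.Linear, which is why the original helper comes back (almost) empty.
found = {type(m).__name__ for m in model.modules() if isinstance(m, (torch.nn.Linear, bnb.nn.Linear4bit))}
print(found)  # expect 'Linear4bit' to show up here
modules = find_all_linear_names(args, model)  # args.bits == 4 in this sketch
print(modules)
assert len(modules) > 0, "empty target_modules -> no LoRA adapters injected -> flat loss"
```
With a non-empty `target_modules` list the adapters actually get attached to the quantized linears, and the loss should start decreasing like it does in the plain LoRA run.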
transformers | 24,211 | closed | Tied params cleanup | # What does this PR do?
This PR is the first in a series that aims to clean up the content of the `_keys_to_ignore_xxx` variables and the logic of the warnings sent to the user as a result of them. In particular, unless the model implements a hack like RoBERTa, a trained version whose weights are not actually shared might not save the decoder...
One use of these variables is to detect "normal" shared weights and have a way to remove all but one of those shared weights for safetensors serialization, but the variable used (`_keys_to_ignore_unexpected`) is very noisy (it contains names of weights that are not shared). This PR introduces a variable `_tied_weights_keys` which will be used for that purpose (and in other places of the cleanup later on). It contains the names of the weights that are tied to other weights and that are safe to dismiss when saving (we won't dismiss them on the `torch.save` side, since `torch.save` won't duplicate memory, but we will for `safetensors`). This PR also introduces a test that checks that the content of this variable is correct.
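To make this concrete, here is a minimal sketch of the idea with a toy module (the class, the key name and the save helper are purely illustrative, not one of the models or code paths touched by this PR):
```
from torch import nn
class ToyLMHead(nn.Module):
    # weights that are tied to another parameter and therefore safe to drop
    # from the serialized file (they can be re-tied at load time)
    _tied_weights_keys = ["lm_head.weight"]
    def __init__(self, vocab_size=100, hidden=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lm_head = nn.Linear(hidden, vocab_size, bias=False)
        self.lm_head.weight = self.embed.weight  # weight tying
def state_dict_for_safetensors(model):
    # safetensors rejects duplicated storage, so strip the tied keys before saving
    state_dict = model.state_dict()
    for key in getattr(model, "_tied_weights_keys", None) or []:
        state_dict.pop(key, None)
    return state_dict
model = ToyLMHead()
print(sorted(state_dict_for_safetensors(model)))  # only 'embed.weight' remains
```
On the `torch.save` side the duplicated entry is harmless (the two keys point to the same storage), which is why the tied keys only need to be dropped for the `safetensors` serialization.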
Due to the very large number of models having potential shared weights, this PR limits itself to introducing this variable, properly filling it for all existing models, and adding the test. The rest will follow in subsequent PRs.
cc @Narsil for info | 06-12-2023 20:31:52 | 06-12-2023 20:31:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,210 | closed | Nah | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-12-2023 19:26:01 | 06-12-2023 19:26:01 | |
transformers | 24,209 | closed | fix(trainer): save the model config, tokenizer, and arguments when FSDP | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #24208
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@sgugger | 06-12-2023 19:21:55 | 06-12-2023 19:21:55 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24209). All of your documentation changes will be reflected on that endpoint.<|||||>cc @pacman100 |
transformers | 24,208 | closed | The `Trainer` only saves the model parameters when `is_fsdp_enabled` is True | ### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The bug occurred when I was using `transformers.Trainer` to train a `LlamaForSequenceClassification` model with the FSDP arguments `--fsdp "full_shard auto_wrap" --fsdp_transformer_layer_cls_to_wrap "LlamaDecoderLayer"`.
Specifically, when I used the `Trainer.save_model()` function to save the training results to `output_dir`, it only stored the model weights, without the corresponding model config, tokenizer, and training arguments. This issue only occurred when I trained the model using FSDP, but when not using FSDP, all of these components were saved correctly.
I located the corresponding code section in `Trainer.save_model()`
```python
elif (
ShardedDDPOption.ZERO_DP_2 in self.args.sharded_ddp
or ShardedDDPOption.ZERO_DP_3 in self.args.sharded_ddp
or self.fsdp is not None
or self.is_fsdp_enabled
):
if self.is_fsdp_enabled:
os.makedirs(output_dir, exist_ok=True)
self.accelerator.state.fsdp_plugin.save_model(self.accelerator, self.model, output_dir)
```
and the section in `FullyShardedDataParallelPlugin.save_model()` of `accelerate-0.20.3`:
```python
def save_model(self, accelerator, model, output_dir, model_index=0):
from torch.distributed.fsdp.fully_sharded_data_parallel import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.fully_sharded_data_parallel import StateDictType
if is_torch_version("<=", "1.13.5"):
with FSDP.state_dict_type(model, self.state_dict_type, self.state_dict_config):
state_dict = model.state_dict()
else:
FSDP.set_state_dict_type(model, self.state_dict_type, self.state_dict_config)
state_dict = model.state_dict()
if self.state_dict_type == StateDictType.FULL_STATE_DICT:
weights_name = f"{MODEL_NAME}.bin" if model_index == 0 else f"{MODEL_NAME}_{model_index}.bin"
output_model_file = os.path.join(output_dir, weights_name)
if accelerator.process_index == 0:
print(f"Saving model to {output_model_file}")
torch.save(state_dict, output_model_file)
print(f"Model saved to {output_model_file}")
else:
weights_name = (
f"{MODEL_NAME}_rank{accelerator.process_index}.bin"
if model_index == 0
else f"{MODEL_NAME}_{model_index}_rank{accelerator.process_index}.bin"
)
output_model_file = os.path.join(output_dir, weights_name)
print(f"Saving model to {output_model_file}")
torch.save(state_dict, output_model_file)
print(f"Model saved to {output_model_file}")
```
`FullyShardedDataParallelPlugin.save_model()` only saves the weights of the model, so we need to manually save other components.
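As a stop-gap, the missing pieces can be saved by hand after `save_model()`. This is only a sketch (not the fix proposed in #24209) and it assumes an existing `trainer`, `tokenizer`, and `output_dir` from the training script:
```python
import os

import torch
from transformers.modeling_utils import unwrap_model

trainer.save_model(output_dir)  # writes the FSDP weights as shown above
if trainer.args.should_save:  # only on the main process
    unwrap_model(trainer.model).config.save_pretrained(output_dir)  # config.json
    tokenizer.save_pretrained(output_dir)  # tokenizer files
    torch.save(trainer.args, os.path.join(output_dir, "training_args.bin"))
```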
### Expected behavior
Save the corresponding model config, tokenizer, and training arguments together with the trained model when using FSDP. | 06-12-2023 19:16:56 | 06-12-2023 19:16:56 | cc @pacman100 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,207 | closed | Add the number of `model` test failures to slack CI report | # What does this PR do?
We decided to add a deepspeed (nightly version) CI job to past CI in #22393. Also, `accelerate` is installed from its `main` branch.
This means the deepspeed CI job has quite a lot of failures most of the time. Sometimes we need to wait for the DS team to ship a fix, and sometimes a fix is required from HF.
**Let's add information about the `number of model test failures` (i.e. not counting the deepspeed CI job's failures) to the Slack CI report**, so we have a number indicating the progress on past CI (and it gives a bit more sense of fulfillment 🙏). | 06-12-2023 19:03:02 | 06-12-2023 19:03:02 | The screenshot below shows the issue that motivates this PR
<img width="506" alt="Screenshot 2023-06-12 205537" src="https://github.com/huggingface/transformers/assets/2521628/f6b322e2-a51e-48e9-a438-d1d431470637">
<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,205 | closed | Fix Debertav2 embed_proj | # What does this PR do?
Fixes an issue where loading a model with different hidden_size and embedding_size (such as `almanach/camemberta-base-generator`) for masked language modeling doesn't work, due to a size mismatch in the output projection, in both TF2 and PyTorch.
To replicate, simply try loading the MLM checkpoint with `DebertaV2ForMaskedLM`.
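For example, something along these lines (checkpoint name taken from the description above; `from_tf=True` because the published weights are TF):
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "almanach/camemberta-base-generator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Before this fix, the call below failed with a matmul shape mismatch because
# hidden_size and embedding_size differ for this generator checkpoint.
model = AutoModelForMaskedLM.from_pretrained(model_name, from_tf=True)
```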
This colab shows that it now works https://colab.research.google.com/drive/1piUbXkmxNIhGdCWiN5Rx-s2-27FNwij7
Error from before:
<details>
<summary>Error</summary>
```python
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-1-049fda84e52f> in <cell line: 5>()
3 model_name = "almanach/camemberta-base-generator"
4 config = AutoConfig.from_pretrained(model_name)
----> 5 model = AutoModelForMaskedLM.from_pretrained(model_name,from_tf=True)
6 # model = AutoModelForMaskedLM.from_pretrained(model_name,config=config,from_tf=True)
7 tokenizer = AutoTokenizer.from_pretrained(model_name)
7 frames
/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
445 elif type(config) in cls._model_mapping.keys():
446 model_class = _get_model_class(config, cls._model_mapping)
--> 447 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
448 raise ValueError(
449 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1508 from .modeling_tf_pytorch_utils import load_tf2_checkpoint_in_pytorch_model
1509
-> 1510 model = load_tf2_checkpoint_in_pytorch_model(model, resolved_archive_file, allow_missing_keys=True)
1511 except ImportError:
1512 logger.error(
/usr/local/lib/python3.10/dist-packages/transformers/modeling_tf_pytorch_utils.py in load_tf2_checkpoint_in_pytorch_model(pt_model, tf_checkpoint_path, tf_inputs, allow_missing_keys)
316
317 if tf_inputs is not None:
--> 318 tf_model(tf_inputs, training=False) # Make sure model is built
319
320 load_tf_weights(tf_model, tf_checkpoint_path)
/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
/usr/local/lib/python3.10/dist-packages/transformers/modeling_tf_utils.py in run_call_with_unpacked_inputs(self, *args, **kwargs)
375 main_input = fn_args_and_kwargs.pop(main_input_name)
376 unpacked_inputs = input_processing(func, self.config, main_input, **fn_args_and_kwargs)
--> 377 return func(self, **unpacked_inputs)
378
379 # Keras enforces the first layer argument to be passed, and checks it through `inspect.getfullargspec()`. This
/usr/local/lib/python3.10/dist-packages/transformers/models/deberta_v2/modeling_tf_deberta_v2.py in call(self, input_ids, attention_mask, token_type_ids, position_ids, inputs_embeds, output_attentions, output_hidden_states, return_dict, labels, training, **kwargs)
1281 )
1282 sequence_output = outputs[0]
-> 1283 prediction_scores = self.mlm(sequence_output=sequence_output, training=training)
1284 loss = None if labels is None else self.hf_compute_loss(labels=labels, logits=prediction_scores)
1285
/usr/local/lib/python3.10/dist-packages/transformers/models/deberta_v2/modeling_tf_deberta_v2.py in call(self, sequence_output)
987
988 def call(self, sequence_output: tf.Tensor) -> tf.Tensor:
--> 989 prediction_scores = self.predictions(hidden_states=sequence_output)
990
991 return prediction_scores
/usr/local/lib/python3.10/dist-packages/transformers/models/deberta_v2/modeling_tf_deberta_v2.py in call(self, hidden_states)
973 seq_length = shape_list(hidden_states)[1]
974 hidden_states = tf.reshape(tensor=hidden_states, shape=[-1, self.hidden_size])
--> 975 hidden_states = tf.matmul(a=hidden_states, b=self.input_embeddings.weight, transpose_b=True)
976 hidden_states = tf.reshape(tensor=hidden_states, shape=[-1, seq_length, self.vocab_size])
977 hidden_states = tf.nn.bias_add(value=hidden_states, bias=self.bias)
InvalidArgumentError: Exception encountered when calling layer 'predictions' (type TFDebertaV2LMPredictionHead).
{{function_node __wrapped__MatMul_device_/job:localhost/replica:0/task:0/device:CPU:0}} Matrix size-incompatible: In[0]: [15,256], In[1]: [32008,768] [Op:MatMul]
Call arguments received by layer 'predictions' (type TFDebertaV2LMPredictionHead):
• hidden_states=tf.Tensor(shape=(3, 5, 256), dtype=float32)
```
</details>
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (cpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@kamalkraj @Rocketknight1 @ArthurZucker @younesbelkada @gante
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-12-2023 16:14:55 | 06-12-2023 16:14:55 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @WissamAntoun 👋
If you check the config documentation for deberta, you see that there is no `embedding_size` attribute. There seems to be something wrong with the serialization of `almanach/camemberta-base-generator`<|||||>@gante Hey!
The thing is, `camemberta-base-generator` needs to have an embedding_size different from the hidden size, since it was trained ELECTRA-style and the generator needs to be smaller without messing with the embedding_size.
Also, the code had support for the different sizes and already had the projection layer. But since it wasn't used by anyone, the bug in the MLM task wasn't affecting anyone.
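For reference, a rough, self-contained PyTorch sketch of that projection idea (the class and argument names are illustrative, not the actual `modeling_deberta_v2` code):
```python
import torch
import torch.nn as nn


class LMPredictionHeadSketch(nn.Module):
    """Illustrative only: score tokens against a (vocab_size, embedding_size)
    embedding matrix when the encoder hidden_size differs from embedding_size."""

    def __init__(self, hidden_size, embedding_size, vocab_size, embedding_weight):
        super().__init__()
        # Project hidden states back to the embedding dimension first.
        self.dense = nn.Linear(hidden_size, embedding_size)
        self.embedding_weight = embedding_weight  # tied to the word embeddings
        self.bias = nn.Parameter(torch.zeros(vocab_size))

    def forward(self, hidden_states):
        hidden_states = self.dense(hidden_states)           # (..., embedding_size)
        logits = hidden_states @ self.embedding_weight.t()  # (..., vocab_size)
        return logits + self.bias
```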
Actually it's now consistent with the code in the official deberta repo https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/deberta/mlm.py#LL20C9-L20C84<|||||>@WissamAntoun makes sense -- since it was in the original implementation, I'll accept this PR 🤗 |
transformers | 24,204 | closed | Skip RWKV test in past CI | # What does this PR do?
In the past CI with torch 1.13.1 (and older), RWKV tests fail.
- The first failure is
```bash
test_model_parallelism
(line 114) RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
```
- this failure then cascades into more subsequent tests
- skipping this particular test will greatly reduce the number of failures.
- The docker env. of that past CI uses a base docker image shipped with `cuda 11.6`
- **In a docker image with cuda `11.8` but installing `torch==1.13+cu116`, no failure.**
Let's not spend too much time identifying what exactly causes the failure here, and just skip RWKV tests on past CI with torch < 2.0.
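A rough sketch of how such a gate could look in the test file (this uses plain `unittest`/`packaging` helpers rather than whatever decorator the PR actually adds):
```python
import unittest

import torch
from packaging import version

# Sketch: gate the RWKV tests on the torch version used by the past-CI jobs.
is_torch_less_than_2 = version.parse(torch.__version__) < version.parse("2.0")


@unittest.skipIf(is_torch_less_than_2, "RWKV hits a cuBLAS failure on torch < 2.0 in past CI")
class RwkvModelTest(unittest.TestCase):
    def test_model_parallelism(self):
        ...
```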
The goal is just to have a clean past CI report. | 06-12-2023 15:46:16 | 06-12-2023 15:46:16 | |
transformers | 24,203 | closed | Remove unnecessary aten::to overhead in llama | As per title, in `LlamaRotaryEmbedding` the [`cos_cached` and `sin_cached` buffers](https://github.com/huggingface/transformers/blob/08ae37c820395e91fc3aa8b801696de5002481d2/src/transformers/models/llama/modeling_llama.py#L94-L104) are not initialized in the right dtype, because `inv_freq` is always initialized in fp32 regardless of the default PyTorch dtype in use.
The result is that `cos_cached` and `sin_cached` do not obey `torch_dtype=torch.float16` or `torch_dtype=torch.float32`, resulting in unnecessary overheads when running in fp16:

I leave the `to()` in the `forward`, but it may be removed, WDYT? It is also fine as is, as it is now a no-op:

| 06-12-2023 15:43:59 | 06-12-2023 15:43:59 | Thanks @sgugger good catch, edited as suggested. |
transformers | 24,202 | closed | Remove unnecessary aten::to overhead in llama | As per title, in `LlamaRotaryEmbedding` the [`cos_cached` and `sin_cached` buffers](https://github.com/huggingface/transformers/blob/08ae37c820395e91fc3aa8b801696de5002481d2/src/transformers/models/llama/modeling_llama.py#L94-L104) are not initialized in the right dtype, because `inv_freq` is always initialized in fp32 regardless of the default PyTorch dtype in use.
The result is that `cos_cached` and `sin_cached` do not obey `torch_dtype=torch.float16` or `torch_dtype=torch.float32`, resulting in unnecessary overheads:

I leave the `to()` in the `forward`, but it may be removed, WDYT? It is also fine as is, as it is now a no-op:

| 06-12-2023 15:42:26 | 06-12-2023 15:42:26 | woops wrong branch<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24202). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,201 | closed | Finish dataloader integration | # What does this PR do?
Follow up to https://github.com/huggingface/transformers/pull/24028, which removes the TPU-specific dataloader bits.
The `MpDeviceLoader` already does what `Trainer` was doing before; it's just wrapped:
```python
class MpDeviceLoader(object):
"""Wraps an existing PyTorch DataLoader with background data upload.
This class should only be using with multi-processing data parallelism.
Args:
loader (:class:`torch.utils.data.DataLoader`): The PyTorch DataLoader to be
wrapped.
device (`torch.device`...): The device where the data has to be sent.
kwargs: Named arguments for the `ParallelLoader` constructor.
"""
def __init__(self, loader, device, **kwargs):
self._loader = loader
self._device = device
self._parallel_loader_kwargs = kwargs
def __iter__(self):
parallel_loader = ParallelLoader(self._loader, [self._device],
**self._parallel_loader_kwargs)
return parallel_loader.per_device_loader(self._device)
def __len__(self):
return len(self._loader)
```
So the native Accelerate integration will work just fine
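For context, wrapping a regular `DataLoader` on TPU then looks roughly like this (assumes a working `torch_xla` install and an existing `train_dataloader`):
```python
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl

device = xm.xla_device()
# Batches are uploaded to the XLA device in the background, which is exactly
# what Trainer used to do by hand with ParallelLoader.
train_device_loader = pl.MpDeviceLoader(train_dataloader, device)
for batch in train_device_loader:
    ...
```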
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 06-12-2023 15:25:15 | 06-12-2023 15:25:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,200 | closed | Fix `_load_pretrained_model` | # What does this PR do ?
Fixes the following test from my old PR Add check for tied parameters (#24029):
`RUN_SLOW=1 python3 -m pytest -v tests/models/marian/test_modeling_marian.py::TestMarian_FI_EN_V2::test_batch_generation_en_fr` | 06-12-2023 14:33:13 | 06-12-2023 14:33:13 | This line what something I added in my previous PR thinking that we forgot to tie the weights. But I guess it was done intentionally as I see that one test failed and generated garbage value. Here the error that we get from the test:
```
tests/models/marian/test_modeling_marian.py:433: in _assert_generated_batch_equal_expected
self.assertListEqual(self.expected_text, generated_words)
E AssertionError: Lists differ: ['I like to read books', 'I like watching football'] != ['obliterat obliterat obliterat obliterat o[1345 chars]ɰɰɰ']
E
E First differing element 0:
E 'I like to read books'
E 'obliterat obliterat obliterat obliterat o[1214 chars]erat'
E
E Diff is 1556 characters long. Set self.maxDiff to None to see it.
```
> Could you explain a bit more why we need to remove this line? Thanks!
<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>BTW, let's make the PR title a bit more precise, sth like `fix _load_pretrained_model` or anything you think more precise, @SunMarc .
Thanks!<|||||>Yeah, I will explore a little bit more why we have this weird behavior before merging. |
transformers | 24,199 | closed | Update `(TF)SamModelIntegrationTest` | # What does this PR do?
- Add `require_tf`: otherwise, past CI's torch job (with only torch installed) will fail for this test class
- Add `TF` prefix: as usual convention | 06-12-2023 14:11:47 | 06-12-2023 14:11:47 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Good catch, and sorry! I'll tell GPT-4 to watch that one in future, lol |