Dataset columns:
- repo: string (1 distinct value)
- number: int64 (1 - 25.3k)
- state: string (2 distinct values)
- title: string (length 1 - 487)
- body: string (length 0 - 234k)
- created_at: string (length 19)
- closed_at: string (length 19)
- comments: string (length 0 - 293k)
transformers
22,478
closed
Implement safetensors checkpoint loading for Trainer
### Feature request At the moment, Trainer loads models with the `torch.load` method directly onto the CPU (in the `Trainer._load_from_checkpoint` method): ```python ... # We load the model state dict on the CPU to avoid an OOM error. state_dict = torch.load(os.path.join(resume_from_checkpoint, WEIGHTS_NAME), map_location="cpu") # workaround for FSDP bug https://github.com/pytorch/pytorch/issues/82963 # which takes *args instead of **kwargs load_result = model.load_state_dict(state_dict, False) # release memory del state_dict self._issue_warnings_after_load(load_result) ... ``` Loading on CPU with safetensors is a lot faster, so this method should (?) be preferred if the safetensors library is installed. ### Motivation I want to speed up the checkpointing process, since I use it to store the best model checkpoint every time the metric improves (via a callback). ### Your contribution The change should be straightforward, but I am not sure whether safetensors checkpointing should be the default when safetensors is installed, or configured manually like in the `PreTrainedModel.from_pretrained` method.
03-30-2023 18:56:00
03-30-2023 18:56:00
The checkpoints are not saved in that format so there is no `model.safetensors` file to load from. We could add a training argument to use this format instead of the PyTorch format indeed.
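For reference, a minimal sketch of what such a safetensors-aware load could look like; the model and checkpoint directory below are placeholders, and this is not the eventual Trainer implementation:

```python
import os
import torch
from safetensors.torch import load_file
from transformers import AutoModelForSequenceClassification

# placeholders for illustration: any model plus a Trainer checkpoint directory on disk
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
resume_from_checkpoint = "output/checkpoint-500"

safetensors_file = os.path.join(resume_from_checkpoint, "model.safetensors")
pytorch_file = os.path.join(resume_from_checkpoint, "pytorch_model.bin")

if os.path.isfile(safetensors_file):
    # safetensors loads on CPU without pickling, typically faster than torch.load
    state_dict = load_file(safetensors_file, device="cpu")
else:
    state_dict = torch.load(pytorch_file, map_location="cpu")

load_result = model.load_state_dict(state_dict, False)
del state_dict
```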
transformers
22,477
closed
[WIP] Ignore this.
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
03-30-2023 15:42:31
03-30-2023 15:42:31
_The documentation is not available anymore as the PR was closed or merged._<|||||>Confirmed that only DETA is affected.
transformers
22,476
closed
Export pix2struct models to ONNX
How can the pix2struct models be exported to ONNX? I don't think these models can be converted yet with: `!python -m transformers.onnx --model=google/pix2struct-docvqa-base --feature=vision2seq-lm scratch/onnx --atol 1e-3`
03-30-2023 15:38:10
03-30-2023 15:38:10
ONNX conversion is now fully handled by the `optimum` library, so you should open your issue there :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,475
closed
Soft error whisper.
# What does this PR do? Moving the hard error on Whisper timestamps to a soft error. Results are definitely odd and potentially nonsensical, but at least we're not hard erroring out. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? Fixes #22053 (https://github.com/huggingface/transformers/issues/22053) ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
03-30-2023 14:32:00
03-30-2023 14:32:00
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,474
closed
Update `Wav2Vec2ProcessorWithLM` doc example
# What does this PR do? The doctest for this doc example has started to fail. Even if I roll back to commits from a few days ago, it still fails. I think it's a dataset issue, so this PR just updates the expected values.
03-30-2023 13:26:37
03-30-2023 13:26:37
_The documentation is not available anymore as the PR was closed or merged._<|||||>I would rather make sure that the dataset was indeed changed! But otherwise ok for this<|||||>> I would rather make sure that the dataset was indeed changed! But otherwise ok for this It's caused by the new release of `datasets==2.11.0` yesterday.<|||||>> It's caused by the new release of datasets==2.11.0 yesterday. @ydshieh I'm assuming yes, but if we pin to an old version of datasets, do we have the old values? <|||||>> > It's caused by the new release of datasets==2.11.0 yesterday. > > @ydshieh I'm assuming yes, but if we pin to an old version of datasets, do we have the old values? Yes, that's why I see the issue is coming from the datasets version :-)
transformers
22,473
closed
Docs fix: Multinomial sampling decoding needs "num_beams=1", since by default it is usually not 1.
# Fix error in docs: multinomial sampling decoding strategy As indicated in the library source code: https://github.com/huggingface/transformers/blob/228792a9dc0c36f1e82ab441e1b1991d116ee0a0/src/transformers/generation/utils.py#LL1364-L1367 Multinomial sampling needs `num_beams=1`. However, this is not indicated in the docs, which can lead to executing beam-search multinomial sampling instead of the intended multinomial sampling. This deviation from the expected behaviour happens quite often, since a lot of models set the `num_beams` parameter in their `generation_config.json` to something higher than 1. This happens, for example, in the majority of top translation models from the Hub. Also, I have included "ancestral sampling" as another name for multinomial sampling, since it is the most common name in the decoding-algorithms literature. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Original authors of this piece of documentation: @gante, @sgugger, @stevhliu and @MKhalusova
03-30-2023 12:52:56
03-30-2023 12:52:56
_The documentation is not available anymore as the PR was closed or merged._
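To make the docs fix above concrete, a hedged example of the corrected call; the checkpoint is only illustrative, chosen because many translation checkpoints ship a `generation_config.json` with `num_beams > 1`:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "Helsinki-NLP/opus-mt-en-de"  # illustrative model, not taken from the PR
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

inputs = tokenizer("How are you today?", return_tensors="pt")
# multinomial (ancestral) sampling requires do_sample=True *and* num_beams=1;
# otherwise beam-search multinomial sampling runs instead
outputs = model.generate(**inputs, do_sample=True, num_beams=1, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```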
transformers
22,472
closed
Relax `eos_token_id < 0` checks in `generate()` from `ValueError` to warning
# What does this PR do? This PR relaxes a constraint that was introduced in eec46b4 to ensure `eos_token_id > 0` when `min_new_tokens > 0`. In particular, the current code would throw a `ValueError`: ```python from transformers import pipeline pipe = pipeline("text-generation") pipe("Hello, my dog is cute.", min_new_tokens=4, eos_token_id=-1) ``` This isn't ideal as setting `eos_token_id=-1` is a handy trick to enable generating text past the EOS token. The current PR now returns a warning instead. I've also taken the liberty to relax the error on `min_length < 0` but let me know if that's not a good idea. Internal Slack thread where this was discussed: https://huggingface.slack.com/archives/C01N44FJDHT/p1680175455762649 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
03-30-2023 11:48:26
03-30-2023 11:48:26
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the review @sgugger - I've now fixed the logging and reverted the change on `min_length`
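A standalone sketch of the relaxed check described above; the function and message names are illustrative only (the library uses its own logger), and this is not the actual diff in `generate()`:

```python
import warnings

def validate_min_new_tokens(min_new_tokens, eos_token_id):
    # sketch: warn instead of raising when a negative eos_token_id is used on purpose
    if min_new_tokens is not None and eos_token_id is not None and eos_token_id < 0:
        warnings.warn(
            f"`eos_token_id` is negative ({eos_token_id}), so generation will not stop at an EOS token."
        )

validate_min_new_tokens(min_new_tokens=4, eos_token_id=-1)  # warns instead of raising ValueError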
transformers
22,471
closed
Llama: support for `max_position_embeddings`
# What does this PR do? Related to #22433. `LlamaConfig`, as opposed to other models, did not support `max_position_embeddings`. Instead, the position embedding class was hardcoded to a length of `2048`. This PR rectifies that, keeping the `2048` default.
03-30-2023 11:33:19
03-30-2023 11:33:19
_The documentation is not available anymore as the PR was closed or merged._
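A small usage sketch of the new argument; the sizes below are arbitrary toy values, not defaults from the PR:

```python
from transformers import LlamaConfig, LlamaModel

config = LlamaConfig(
    vocab_size=1000,
    hidden_size=256,
    intermediate_size=512,
    num_hidden_layers=2,
    num_attention_heads=4,
    max_position_embeddings=4096,  # previously hardcoded to 2048
)
model = LlamaModel(config)
print(model.config.max_position_embeddings)  # 4096
```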
transformers
22,470
closed
[NLLB-MoE] `model_type` update for auto mapping
# What does this PR do? Changes the `model_type` to allow users using the `AutoModel` . While we are at it, also moved the testing model to `hf-internal-testing` Fixes #22461
03-30-2023 11:28:36
03-30-2023 11:28:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,469
closed
ViTModel extract only last layer's attention weights
### Feature request See: https://huggingface.co/docs/transformers/model_doc/vit#transformers.ViTModel I propose a new keyword argument for `forward()`, `output_attention_layers: Tuple[int, ...] = (,)`, that allows one to specify which layers to extract attention weights for when `output_attentions=True`. When specified, other layers' attention weights are **immediately discarded to save VRAM**. ### Motivation Setting `output_attentions=True` currently returns attention weights for all layers. When using a high resolution, this uses a prohibitive amount of VRAM, resulting in CUDA OOM. For context, I am developing a model that could benefit from utilizing the last layer attention weights of DINO. However, I currently OOM when I try and train at a higher resolution. After experimenting with the various return options for `forward()`, I realised `output_attentions` and `output_hidden_states` use much more VRAM than just returning the last hidden state. As such, I am unable to feasibly train my model without pre-caching DINO's outputs. ### Your contribution I might be able to work on a PR during the weekends.
03-30-2023 10:31:20
03-30-2023 10:31:20
Or you can adapt the modeling code of ViT to suit your needs. That is why each model is fully defined in their modeling file.<|||||>I see, okay then.
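For reference, this is the current way to pick out only the last layer's weights after the forward pass; the checkpoint name is an assumption based on the DINO mention above, and note that this does not avoid the memory cost, which is the point of the request:

```python
import torch
from transformers import ViTModel

model = ViTModel.from_pretrained("facebook/dino-vitb16")  # assumed DINO checkpoint
pixel_values = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    outputs = model(pixel_values, output_attentions=True)

last_layer_attention = outputs.attentions[-1]  # (batch, num_heads, seq_len, seq_len)
# attention maps for *all* layers are still materialised here, which is the VRAM
# cost the proposed `output_attention_layers` argument would avoid
```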
transformers
22,468
closed
Wrong results for inferencing a GLPN model on MPS device
### System Info Hi, there. I've discovered strange behaviour when using the MPS device. When running inference with the same model and the same input on the "mps" device, the result is numerically wrong and meaningless. Thanks in advance for the help! MPS | CPU :-------------------------:|:-------------------------: ![](https://user-images.githubusercontent.com/126671893/228804909-71194936-a266-4511-9ae0-1c667f762faa.png) | ![](https://user-images.githubusercontent.com/126671893/228804919-9916e01f-f4cd-43a2-be74-a7cb8eeb9c6f.png) - `transformers` version: 4.27.3 - Platform: macOS-13.3-arm64-arm-64bit - Python version: 3.9.6 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 2.1.0.dev20230329 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @amyeroberts ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Code to reproduce; just change the `run_on_mps` flag to run it on the CPU: ```python from transformers import GLPNImageProcessor, GLPNForDepthEstimation import torch import numpy as np from PIL import Image import requests import matplotlib.pyplot as plt run_on_mps = True url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = GLPNImageProcessor.from_pretrained("vinvino02/glpn-nyu") model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-nyu") device = "mps" if run_on_mps else "cpu" model.to(device) inputs = feature_extractor(images=image, return_tensors="pt").to(device) with torch.no_grad(): outputs = model(**inputs) predicted_depth = outputs.predicted_depth output = predicted_depth.detach().squeeze().cpu().numpy() formatted = (output * 255 / np.max(output)).astype("uint8") depth = Image.fromarray(np.vstack([np.array(image), np.dstack([formatted]*3)])) plt.imshow(depth) plt.waitforbuttonpress() ``` ### Expected behavior The output should be basically the same up to computational error.
03-30-2023 10:20:15
03-30-2023 10:20:15
Hi @arybnikov, thanks for the detailed reproduction snippet and example pictures, it really helps. This is indeed odd. I'll look into it. <|||||>@amyeroberts thanks for the quick response! I'd like to add that it's getting a little bit better with removed ```with torch.no_grad():``` context manager. At least it's possible to visually see the similarities between the input and the output<|||||>Interesting! 🤔 Thanks for the added info <|||||>@arybnikov This is a very weird issue! After digging into this more, I've managed to track the issue down to the gelu activation in the GLPNMixFFN layer [here](https://github.com/huggingface/transformers/blob/6fc44656b43f1de939a1e62dd59c45d1fec9f1aa/src/transformers/models/glpn/modeling_glpn.py#L283). Setting this to the identity layer results in the CPU and MPS model's outputs to be identical. Directly before the activation, the difference between CPU and MPS values are ~1e-7. After the activation, the largest value difference is ~10 (!). For all of the layers, the activation used is the torch implementation `nn.functional.gelu`, although it's [wrapped here](https://github.com/huggingface/transformers/blob/6fc44656b43f1de939a1e62dd59c45d1fec9f1aa/src/transformers/activations.py#L59). If I observe some summary statistics about the hidden states, when passing in the test cat image, I see that the minimum values for the activations after gelu on MPS are still very low (I wouldn't expect to see values of -8). **MPS: Hidden states summary after first GLPNMixFNN** ``` {'hidden_states after dwconv': {'device': device(type='mps', index=0), 'dtype': torch.float32, 'max': tensor(5.6279, device='mps:0', grad_fn=<MaxBackward1>), 'mean': tensor(-0.3660, device='mps:0', grad_fn=<MeanBackward0>), 'min': tensor(-8.7750, device='mps:0', grad_fn=<MinBackward1>), 'shape': torch.Size([1, 19200, 256]), 'std': tensor(0.8287, device='mps:0', grad_fn=<StdBackward0>)}} {'hidden_states after intermediate act': {'device': device(type='mps', index=0), 'dtype': torch.float32, 'max': tensor(5.6279, device='mps:0', grad_fn=<MaxBackward1>), 'mean': tensor(-0.3660, device='mps:0', grad_fn=<MeanBackward0>), 'min': tensor(-8.7750, device='mps:0', grad_fn=<MinBackward1>), 'shape': torch.Size([1, 19200, 256]), 'std': tensor(0.8287, device='mps:0', grad_fn=<StdBackward0>)}} ``` **CPU: Hidden states summary after first GLPNMixFNN** ``` {'hidden_states after dwconv': {'device': device(type='cpu'), 'dtype': torch.float32, 'max': tensor(5.6279, grad_fn=<MaxBackward1>), 'mean': tensor(-0.3660, grad_fn=<MeanBackward0>), 'min': tensor(-8.7750, grad_fn=<MinBackward1>), 'shape': torch.Size([1, 19200, 256]), 'std': tensor(0.8287, grad_fn=<StdBackward0>)}} {'hidden_states after intermediate act': {'device': device(type='cpu'), 'dtype': torch.float32, 'max': tensor(5.6279, grad_fn=<MaxBackward1>), 'mean': tensor(0.0024, grad_fn=<MeanBackward0>), 'min': tensor(-0.1700, grad_fn=<MinBackward1>), 'shape': torch.Size([1, 19200, 256]), 'std': tensor(0.1962, grad_fn=<StdBackward0>)}} ``` **Replication** If I save out the first hidden state that's passed to this activation, then load it in into a separate python session, I can replicate the difference in CPU vs MPS computation e.g. 
```py import torch import glob import numpy as np from torch import nn def max_diff(a, b): return np.amax(np.abs(a - b)) # Max diff observed between mps_arr and cpu_arr: 4.567206e-06 # just select first element in batch (didn't make a difference when comparing) mps_arr = np.load("path/to/mps_hidden_state.npy")[0] cpu_arr = np.load("path/to/cpu_hidden_state.npy")[0] mps_torch = torch.tensor(mps_arr, device="mps", requires_grad=True) cpu_torch = torch.tensor(cpu_arr, device="cpu", requires_grad=True) cpu_outputs = nn.GELU()(cpu_torch).cpu().detach().numpy() mps_outputs = nn.GELU()(mps_torch).cpu().detach().numpy() diff = max_diff(mps_outputs, cpu_outputs) print(f"Max diff: {diff}") ``` Prints out: 10.22504997253418 This difference is also observed if `cpu_arr = mps_arr = np.load("path/to/cpu_hidden_state.npy")` More bizarrely, if instead iterate over the rows of the hidden states, then the differences between the MPS and CPU activations become small again - ~1e-7. Comparing the concatenated row activations with the activations when the entire tensor is passed in shows large differences for MPS and not for CPU e.g. ```py cpu_per_row_outputs = [] mps_per_row_outputs = [] max_diff_seen = 0 n_rows, n_cols = cpu_torch.shape for row in range(n_rows): cpu_output = nn.GELU()(cpu_torch[row]).detach().cpu().numpy() mps_output = nn.GELU()(mps_torch[row]).detach().cpu().numpy() diff = max_diff(cpu_output, mps_output) max_diff_seen = max(max_diff_seen, diff) cpu_per_row_outputs.append(cpu_output) mps_per_row_outputs.append(mps_output) print(f"Max diff {max_diff_seen}") cpu_concat_arr = np.concatenate([x.reshape(1, -1) for x in cpu_per_row_outputs]) mps_concat_arr = np.concatenate([x.reshape(1, -1) for x in mps_per_row_outputs]) diff_cpu = max_diff(cpu_outputs, cpu_concat_arr) diff_mps = max_diff(mps_outputs, mps_concat_arr) print(f"Diff cpu:{diff_cpu}, mps:{diff_mps}") ``` Outputs: ``` Max diff 3.0994415283203125e-06 Diff cpu:0.0, mps:10.22504997253418 ``` So, it could be there's some numerical issue creeping in when vectorizing or with large arrays. However, if try and replicate with random arrays the differences disappear 🙃 I tried with: * Small `torch.randn` arrays * Large `torch.randn` arrays matching input size `(1, 19200, 256)` * `torch.randn` arrays shifted and rescaled to have mean -0.3660 and std 0.8287, to replicate the input stats * torch tensor with samples generated matching the quantiles of the input (see note below) Ultimately, I've not been able to track down the exact cause. However, it does seem to be arising from `torch` and its MPS backend rather than `transformers`. Sorry I couldn't be of more help - I suggest raising an issue in the pytorch repo. I'll make sure to share here if I discover and other reasons this might be occurring. 
------------------------------------------- Simulating hidden states from quantiles: ```py # Taken from the mps_torch.cpu().quantile(torch.linspace(0, 1, 100) quantiles = np.array([-8.7750e+00, -3.5759e+00, -2.7891e+00, -2.2607e+00, -1.9741e+00, -1.7790e+00, -1.6474e+00, -1.5546e+00, -1.4819e+00, -1.4204e+00, -1.3666e+00, -1.3195e+00, -1.2764e+00, -1.2360e+00, -1.1970e+00, -1.1573e+00, -1.1165e+00, -1.0735e+00, -1.0291e+00, -9.8404e-01, -9.3565e-01, -8.8151e-01, -8.2654e-01, -7.7034e-01, -7.1312e-01, -6.5833e-01, -6.0665e-01, -5.5740e-01, -5.1208e-01, -4.6964e-01, -4.3010e-01, -3.9351e-01, -3.5903e-01, -3.2719e-01, -2.9822e-01, -2.7183e-01, -2.4816e-01, -2.2696e-01, -2.0797e-01, -1.9071e-01, -1.7518e-01, -1.6093e-01, -1.4776e-01, -1.3571e-01, -1.2464e-01, -1.1429e-01, -1.0468e-01, -9.5791e-02, -8.7355e-02, -7.9464e-02, -7.2036e-02, -6.4912e-02, -5.8190e-02, -5.1746e-02, -4.5613e-02, -3.9736e-02, -3.3970e-02, -2.8438e-02, -2.2998e-02, -1.7681e-02, -1.2420e-02, -7.1772e-03, -1.9396e-03, 3.2572e-03, 8.5318e-03, 1.3851e-02, 1.9226e-02, 2.4768e-02, 3.0411e-02, 3.6221e-02, 4.2207e-02, 4.8382e-02, 5.4769e-02, 6.1532e-02, 6.8543e-02, 7.5904e-02, 8.3736e-02, 9.2057e-02, 1.0091e-01, 1.1025e-01, 1.2032e-01, 1.3126e-01, 1.4295e-01, 1.5552e-01, 1.6927e-01, 1.8441e-01, 2.0094e-01, 2.1933e-01, 2.4002e-01, 2.6335e-01, 2.8948e-01, 3.2055e-01, 3.5769e-01, 4.0200e-01, 4.5551e-01, 5.2398e-01, 6.1828e-01, 7.3903e-01, 9.5955e-01, 5.6279e+00]) lower_bounds = torch.tensor(quantiles[:-1]) upper_bounds = torch.tensor(quantiles[1:]) mix = torch.distributions.Categorical(torch.ones(99,)) comp = torch.distributions.Uniform(low=lower_bounds, high=upper_bounds) mixture_model = torch.distributions.MixtureSameFamily(mix, comp) dummy_output = mixture_model.sample((1, 19200, 256)) ``` <|||||>Great investigation, @amyeroberts! Thanks a lot! I'll definitely create an issue in pytorch repo<|||||>@amyeroberts just FIY, thanks the clue from pytorch dev team, the problem can be solved if using contiguous() tensor here: https://github.com/huggingface/transformers/blob/1670be4bdec19d5a8893f943bf78a8d9b3dc8911/src/transformers/models/glpn/modeling_glpn.py#L283 something like this: `hidden_states = self.intermediate_act_fn(hidden_states.contiguous())` Not sure it's worth a PR, since it's not a solution of the root problem, but if so, please let me know, I'll create one<|||||>No we won't change code that works on all other hardware just to accommodate the MPS device. This needs to be fixed in PyTorch.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,467
closed
Running run_translation.py with mt5 model, but loss is always 0.0
### System Info transformers version 4.28.0.dev ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. training scripts: ``` python3 -m torch.distributed.launch --nproc_per_node=8 \ --nnodes=${WORLD_SIZE} --node_rank=${RANK} --master_addr=$MASTER_ADDR \ --master_port=$MASTER_PORT ${code_dir}/run_translation.py \ --model_name_or_path ${work_dir}/../pretrain_models/mt0-base \ --train_file ${data_dir}/ja2zh.json \ --validation_file ${data_dir}/ja2zh-head10.json \ --source_lang ja \ --target_lang zh \ --source_prefix "translate Japanese to Chinese: " \ --warmup_ratio 0.1 \ --save_total_limit 10 \ --save_steps 5000 \ --logging_steps 1 \ --weight_decay 0.001 \ --adam_beta2 0.98 \ --learning_rate 2e-4 \ --num_train_epochs 1 \ --gradient_accumulation_steps 1 \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 8 \ --cache_dir ${data_dir}/cache/ \ --do_train \ --do_eval \ --fp16 \ --output_dir ${ckpt_dir}/hf \ --preprocessing_num_workers 40 \ 2>&1 |tee ${LOG_FILE} ``` mt0-base is cloned from the huggingface. And the loss is always 0.0: ``` [INFO|trainer.py:598] 2023-03-30 09:56:13,151 >> Using cuda_amp half precision backend /home/user/miniconda/lib/python3.8/site-packages/transformers/optimization.py:391: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning warnings.warn( /home/user/miniconda/lib/python3.8/site-packages/transformers/optimization.py:391: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning warnings.warn( [INFO|trainer.py:1743] 2023-03-30 09:56:13,677 >> ***** Running training ***** [INFO|trainer.py:1744] 2023-03-30 09:56:13,677 >> Num examples = 31729970 [INFO|trainer.py:1745] 2023-03-30 09:56:13,677 >> Num Epochs = 1 [INFO|trainer.py:1746] 2023-03-30 09:56:13,677 >> Instantaneous batch size per device = 8 [INFO|trainer.py:1747] 2023-03-30 09:56:13,677 >> Total train batch size (w. parallel, distributed & accumulation) = 32 [INFO|trainer.py:1748] 2023-03-30 09:56:13,677 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1749] 2023-03-30 09:56:13,677 >> Total optimization steps = 991562 [INFO|trainer.py:1750] 2023-03-30 09:56:13,680 >> Number of trainable parameters = 1229581312 [WARNING|logging.py:280] 2023-03-30 09:56:19,819 >> You're using a T5TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding. [WARNING|logging.py:280] 2023-03-30 09:56:20,010 >> You're using a T5TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding. [W reducer.cpp:1303] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. 
If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1303] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) {'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.0} {'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.0} {'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.0} {'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.0} {'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.0} {'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.0} {'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.0} {'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.0} ``` But if I try to train mt5 model from scratch with my mt data, the loss looks good. Did I miss something? Any advice is appreciated! Thx in advance! ### Expected behavior Loss is larger than 0.0 and the model parameter will update.
03-30-2023 10:02:57
03-30-2023 10:02:57
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,466
closed
🚨🚨🚨 Fix ordering of height, width for BLIP image processor
# What does this PR do? The BLIP image processor incorrectly passed in the dimensions to resize in the order `(width, height)`. This is reordered to be correct. In most cases, this won't have an effect as the default height and width are the same. However, this is not backwards compatible for custom configurations with different height, width settings and direct calls to the `resize` method with different height, width values. This also updates the docstring as this was incorrect. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
03-30-2023 10:02:45
03-30-2023 10:02:45
_The documentation is not available anymore as the PR was closed or merged._
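A hedged illustration of the behaviour change for non-square sizes; the size and image values below are arbitrary examples, not taken from the PR:

```python
import numpy as np
from transformers import BlipImageProcessor

# custom, non-square size: after this fix, resize receives (height, width) in that order
processor = BlipImageProcessor(size={"height": 384, "width": 224})
image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)

pixel_values = processor(image, return_tensors="np").pixel_values
print(pixel_values.shape)  # (1, 3, 384, 224), i.e. (batch, channels, height, width)
```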
transformers
22,465
closed
Whisper should suppress task tokens
### System Info - `transformers` version: 4.27.4 - Platform: Linux-5.19.0-38-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 2.0.0+cu117 (True) ### Who can help? @ArthurZucker @sanchit-gandhi ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm not sure if this is the best place to report the issue, but openai/whisper updated the default list of suppressed tokens in [this commit](https://github.com/openai/whisper/commit/eab8d920edf3947294c466f3912c24ed4b191264). In particular, the task tokens should now be suppressed by default: ```python >>> import transformers >>> config = transformers.AutoConfig.from_pretrained("openai/whisper-tiny") >>> tokenizer = transformers.AutoTokenizer.from_pretrained("openai/whisper-tiny") >>> tokenizer.convert_tokens_to_ids("<|transcribe|>") in config.suppress_tokens False >>> tokenizer.convert_tokens_to_ids("<|translate|>") in config.suppress_tokens False ``` ### Expected behavior The default list of suppressed tokens should match the list in the reference implementation. All Whisper configurations should be updated accordingly.
03-30-2023 09:50:15
03-30-2023 09:50:15
Nice! You indeed reported this to the correct place, thanks for this catch. We should update the model! <|||||>Great catch! Would you like to open some PR's on the HF Hub to fix these `suppress_tokens` lists @guillaumekln? E.g. for the `tiny` model we need to update the list of `suppress_tokens` in the generation config: https://huggingface.co/openai/whisper-tiny/blob/a8d76517e6d65d92771752dbbf5e9c0a1a5b3a0d/generation_config.json#L126<|||||>Sure, I just opened pull requests for all Whisper models on the Hub. I updated both `config.json` and `generation_config.json` in separate PRs.<|||||>Thank you for the Hub PRs @guillaumekln, very clean! They all looked correct so have merged them on the Hub side 👍 Closing as complete!
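Until a given copy of the config picks up the Hub fix, a user-side workaround could look roughly like this; it is a sketch, not the Hub change itself:

```python
from transformers import WhisperForConditionalGeneration, WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

task_token_ids = tokenizer.convert_tokens_to_ids(["<|transcribe|>", "<|translate|>"])
# extend the suppression list so the task tokens can never be sampled during decoding
model.generation_config.suppress_tokens = sorted(
    set(model.generation_config.suppress_tokens) | set(task_token_ids)
)
```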
transformers
22,464
closed
Flaky test for NLLB-MoE Model
### System Info Environment: Circle CI test_torch image, defined [here](https://github.com/huggingface/transformers/blob/c15f937581048fa28f77794c4f6a257e14d272cb/.circleci/create_circleci_config.py#L211) ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The test `tests/models/nllb_moe/test_modeling_nllb_moe.py::NllbMoeModelTest::test_decoder_model_past_with_large_inputs` is flaky and occasionally fails. Note: the tolerance on the test is also very low. Error cannot be deterministically reproduced. The test passes when I run it locally. An example of a failed run: https://app.circleci.com/pipelines/github/huggingface/transformers/60931/workflows/4642cd50-8d8c-4a6b-bea9-b077ff7400cf/jobs/747651 This came from this run: https://github.com/huggingface/transformers/runs/12389736536 And this unrelated PR: https://github.com/huggingface/transformers/pull/21855 ### Expected behavior Tests pass on every run.
03-30-2023 09:50:07
03-30-2023 09:50:07
Yep, I have no idea why 😓 <|||||>Looked into this and I think the flakiness is caused by the natural variability in the sparse MoE layers. Specifically, when they calculate which experts to use in the gating logic, they compute slightly different probabilities for two different sets of inputs: one with the prior inputs concatenated with the past key values, and one with just the past key values. The test usually passes because the magnitude of the difference is usually small. Notably, when the vocab size is increased this pass rate goes up (and vice versa), since the increased representational capacity can help the model make more accurate decisions about which experts to use for each input. For example, increasing the vocab size in the config from its current 99 to 999 increases the pass rate from ~80% to ~95%. I think this flakiness is inherent in the sparse layers, but if I understand right the point of the test is to check the decoder uses the past properly, so I put up a PR that edits the test to use dense layers and moves the rtol down to 1e-3 to be in line with the other models' version of this check.
transformers
22,463
closed
Skip flaky NLLB Moe test for now
# What does this PR do? Skips flaky test. See failed CI run here: https://app.circleci.com/pipelines/github/huggingface/transformers/60931/workflows/4642cd50-8d8c-4a6b-bea9-b077ff7400cf/jobs/747651 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
03-30-2023 09:36:18
03-30-2023 09:36:18
cc @sgugger @ArthurZucker <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks Amy!
transformers
22,462
closed
Can not find a published transformers version which fits language-modeling example.
This [language-modeling example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) example helps me a lot. However, the newest version so far is 4.27.4, which does not match the minimal version requirement. Looking forward to the newer version to be published. Thanks!
03-30-2023 03:51:26
03-30-2023 03:51:26
You can take the examples for v4.27 [here](https://github.com/huggingface/transformers/tree/v4.27.0/examples). The examples on the main branch are compatible with the main branch of Transformers, so you need a source install.<|||||>> You can take the examples for v4.27 [here](https://github.com/huggingface/transformers/tree/v4.27.0/examples). The examples on the main branch are compatible with the main branch of Transformers, so you need a source install. Got it. Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,461
closed
[Bug] KeyError: 'nllb-moe' when trying to load `nllb-moe-54b` model
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.4.0-74-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker from https://github.com/huggingface/transformers/pull/22024 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Following example script on https://huggingface.co/facebook/nllb-moe-54b (but pointing to local git copy), 1. `pip install git+https://github.com/huggingface/transformers.git` 2. `python` ```py >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("../hub/nllb-moe-54b") >>> model = AutoModelForSeq2SeqLM.from_pretrained("../hub/nllb-moe-54b") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 441, in from_pretrained config, kwargs = AutoConfig.from_pretrained( File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 920, in from_pretrained config_class = CONFIG_MAPPING[config_dict["model_type"]] File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 626, in __getitem__ raise KeyError(key) KeyError: 'nllb_moe' ``` Note: The system might not have enough RAM, but this errored immediately after reaching it and does not seem like OOM. ### Expected behavior It can load model.
03-30-2023 03:20:29
03-30-2023 03:20:29
That's completely right! The `config.model_type` should be `nllb-moe` instead of `nllb_moe`. Will modify this in the checkpoints and in the code. Thanks for reporting! <|||||>@ArthurZucker , hello! I noticed that and have also attempted that, but I got the same error weirdly. I will try it again later. It is the config.json right?<|||||>Yes the `config.json` was wrong! <|||||>Hello @ArthurZucker , sorry for bothering you again. I have `git pull` the latest Huggingface repo and still got same error. ```py >>> tokenizer = AutoTokenizer.from_pretrained("../hub/nllb-moe-54b", use_auth_token=True, src_lang="eng_Latn") >>> model = AutoModelForSeq2SeqLM.from_pretrained("../hub/nllb-moe-54b") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 441, in from_pretrained config, kwargs = AutoConfig.from_pretrained( File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 920, in from_pretrained config_class = CONFIG_MAPPING[config_dict["model_type"]] File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 626, in __getitem__ raise KeyError(key) KeyError: 'nllb-moe' ``` Do I need to install from your branch https://github.com/huggingface/transformers/pull/22470? Edit: Oh, it was just merged 1 min ago. <|||||>This is normal! You need to update your config.json file<|||||>If you were using a hub model, it would automatically update. The PR fixes the default value but for models that were already downloaded you need to update the config<|||||>> If you were using a hub model, it would automatically update. The PR fixes the default value but for models that were already downloaded you need to update the config Yes, I tried both 1) Updated config.json 2) git pull the downloaded HF repo with the model https://huggingface.co/facebook/nllb-moe-54b/commit/83c96e4658a2e02c182d0ab794229301862791ee (not the transformers). I'm not sure if it cached the config.json somewhere? Edit: Will pip install latest transformer from source.<|||||>Hm, I have pip install from source and also confirmed that `config.json` got updated. ```bash Unpacking objects: 100% (3/3), 342 bytes | 0 bytes/s, done. From https://huggingface.co/facebook/nllb-moe-54b 59fc265..83c96e4 main -> origin/main Updating 59fc265..83c96e4 Fast-forward config.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) ``` <img width="771" alt="image" src="https://user-images.githubusercontent.com/9899957/228855893-9cd170aa-4359-42f1-9f2e-307bac59ca95.png"> <|||||>> Hello @ArthurZucker , sorry for bothering you again. > > I have `git pull` the latest Huggingface repo and still got same error. 
> > ```python > >>> tokenizer = AutoTokenizer.from_pretrained("../hub/nllb-moe-54b", use_auth_token=True, src_lang="eng_Latn") > >>> model = AutoModelForSeq2SeqLM.from_pretrained("../hub/nllb-moe-54b") > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 441, in from_pretrained > config, kwargs = AutoConfig.from_pretrained( > File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 920, in from_pretrained > config_class = CONFIG_MAPPING[config_dict["model_type"]] > File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 626, in __getitem__ > raise KeyError(key) > KeyError: 'nllb-moe' > ``` > > Do I need to install from your branch #22470? > > Edit: Oh, it was just merged 1 min ago. I just saw. The key error is now `nllb-moe`. It is not the same error as the first post which was `nllb_moe`.<|||||>Okay, let me have another look!<|||||>> Okay, let me have another look! Sorry for disturbing. Thank you very much!<|||||>So, running this `model = AutoModelForSeq2SeqLM.from_pretrained("hf-internal-testing/random-nllb-moe-2-experts")` definitely worked for me. ```python In [3]: model = AutoModelForSeq2SeqLM.from_pretrained("hf-internal-testing/random-nllb-moe-2-experts") Downloading (…)lve/main/config.json: 100%|██████████| 1.40k/1.40k [00:00<00:00, 272kB/s] Downloading (…)model.bin.index.json: 100%|██████████| 91.5k/91.5k [00:00<00:00, 992kB/s] Downloading (…)00001-of-00002.bin";: 100%|██████████| 7.75G/7.75G [02:04<00:00, 62.0MB/s] Downloading (…)00002-of-00002.bin";: 100%|██████████| 9.36G/9.36G [02:17<00:00, 68.0MB/s] Downloading shards: 100%|██████████| 2/2 [04:23<00:00, 131.96s/it] Loading checkpoint shards: 100%|██████████| 2/2 [00:11<00:00, 5.82s/it] In [4]: ``` The issue is most probably related to the config/ the cache! But still will look into it. In the meantime use the model directly 😉 <|||||>Hello @ArthurZucker , thank you for info!<|||||>> Hello @ArthurZucker , thank you for info! Is the problem solved? <|||||>> > Hello @ArthurZucker , thank you for info! > > Is the problem solved? Hey! I have not tried this yet. I think it could've been fixed. I probably had some caching issue with packages. 
I have not been recently able to get a machine to run this yet.<|||||>> > > Hello @ArthurZucker , thank you for info! > > > > > > Is the problem solved? > > Hey! I have not tried this yet. I think it could've been fixed. I probably had some caching issue with packages. > > I have not been recently able to get a machine to run this yet. I have the same problem. I think changing the config file to "nllb-moe" is not the solution; I tried many times, I am not using a cached copy, and this is the first time I use it.<|||||>Hey! Really sorry but I can't reproduce this now: https://colab.research.google.com/drive/1uoAKGbkJA4rnZV9Lwg1unOvvEloudcvM?usp=sharing This notebook works as expected out of the box. I am pretty sure it is either: - you are not using the `main` transformers branch - your file is not well defined<|||||>> Hey! Really sorry but I can't reproduce this now: https://colab.research.google.com/drive/1uoAKGbkJA4rnZV9Lwg1unOvvEloudcvM?usp=sharing > > This notebook works as expected out of the box. I am pretty sure it is either: > > * you are not using the `main` transformers branch > * your file is not well defined Thanks, I'm trying. I see that your model is "hf-internal-testing/random-nllb-moe-2-experts". Can you try the "facebook/nllb-moe-54b" model?<|||||>Just did, it works the same<|||||>> OK, thanks, I'm trying<|||||>![image](https://user-images.githubusercontent.com/46487979/233106293-ec1e054e-9ec8-4b7c-bb72-47eaed95889b.png) I have the same problem. I downloaded it separately and tried to make it work directly, but it still didn't work. Any idea when this will be fixed?<|||||>> ![image](https://user-images.githubusercontent.com/46487979/233106293-ec1e054e-9ec8-4b7c-bb72-47eaed95889b.png) I have the same problem. I downloaded it separately and tried to make it work directly, but it still didn't work. Any idea when this will be fixed? me too<|||||>are you sure that you are on the latest release of transformers? `pip install --upgrade transformers`<|||||>> are you sure that you are on the latest release of transformers? `pip install --upgrade transformers` Wow, I had forgotten about this, but after trying it, I ran it and it works fine, thank you very much.
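For anyone stuck on an already-downloaded copy, the manual fix discussed above amounts to editing `model_type` in the local `config.json`; the path below is the local clone used in this thread and is only an example:

```python
import json

config_path = "../hub/nllb-moe-54b/config.json"  # local clone from this issue

with open(config_path) as f:
    config = json.load(f)

config["model_type"] = "nllb-moe"  # the corrected value; it was previously "nllb_moe"

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```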
transformers
22,460
closed
Add the Nucleotide Transformer models
### Model description The Nucleotide Transformer project is composed of four language models trained through Masked Language Modelling with an architecture similar to the ESM-1b architecture. The models have been developed by InstaDeep in collaboration with Nvidia and TUM. They have been trained on the NVIDIA Cambridge-1 cluster. The models include: - one model with 500M parameters trained on the human reference genome - one model with 500M parameters trained on a dataset composed of 3000+ human genomes - one model with 2.5B parameters trained on a dataset composed of 3000+ human genomes - one model with 2.5B parameters trained on a dataset composed of reference genomes from 850+ species ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation The model code and weights can be found here: https://github.com/instadeepai/nucleotide-transformer. The initial implementation is in Jax relying on haiku. The paper can be found here: https://www.biorxiv.org/content/10.1101/2023.01.11.523679v2. I am one of the last authors of the paper. Other authors include @dallatt, @e-trop and @GRichard513.
03-29-2023 21:14:20
03-29-2023 21:14:20
Question for the huggingface team: The Nucleotide Transformer networks use the ESM-1b architecture and ESM is supported in transformers, however, the ESM implementation relies on PyTorch whereas the Nucleotide Tansformer are implemented in Jax. Would you recommend to port them in PyTorch by following the ESM code or to port them in Jax directly? Also, if we were to port in Jax; Do you plan to support [haiku](https://github.com/deepmind/dm-haiku) which has been used for the original implementation? <|||||>Hi @ranzenTom! If your architecture is **exactly** the same as an existing model in `transformers` like ESM-1b or ESM-2, the quickest approach would be to use our existing ESM-1b implementation in PyTorch or TF and crossload your weights into it, then upload that as a model checkpoint. We could then optionally add a JAX/FLAX port of the model architecture in a later PR, but this would get your model into `transformers` with the minimum of fuss. The way models are stored in `transformers` is that the main repository contains the architectural code, and weight checkpoints and model configurations are stored in user repos. As such, if your model deviates from the ESM-1b architecture in any way, we won't be able to use this approach, because the model code in the main repo won't run your weights correctly. If you're confident that the architectures are the same and you want to try porting the weights yourself, I made a quick [demo notebook](https://colab.research.google.com/drive/1I6uo8SPAnikcOiMQY-2eyofZo3DAbdA-?usp=sharing) to show the process. If there are any architectural differences, we should still be able to port Nucleotide Transformer! We'll just have to use a different method, and not just treat it as an ESM-1b checkpoint.<|||||>Also, I should mention - if you have any questions or need any assistance at any point here, please let us know! We're seeing a lot of interest in biological and clinical models, with models like ESM and BioClinicalBERT getting hundreds of thousands of downloads per month, so we're excited for you to be our first DNA models too!<|||||>For anyone following this issue: we have started a slack channel to further iterate on this. :) <|||||>Hi, I'm interested in developing/using nucleotide levels LLMS for variant calling applications. Please let me know how I can be added to the Slack channel on the integration of nucleotide transformer within hugging face transformer.<|||||>They've just been added, and so we're going to close this issue! You can see the model list [here](https://huggingface.co/InstaDeepAI). You can use them using standard classes like `AutoTokenizer`,`AutoModelForMaskedLM` or `AutoModelForSequenceClassification`. We fixed a couple of issues related to this in recent PRs, so please update transformers to the latest version on main with `pip install --upgrade git+https://github.com/huggingface/transformers.git` before trying to use them. PyTorch only for now, but we expect TensorFlow versions to be available in a couple of days!
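A minimal usage sketch with the standard Auto classes mentioned above; the exact checkpoint name is an assumption, so check the InstaDeepAI organisation on the Hub for the published names:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

checkpoint = "InstaDeepAI/nucleotide-transformer-500m-human-ref"  # assumed repo name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

inputs = tokenizer("ATTCCGATTCCGATTCCG", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, sequence_length, vocab_size)
```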
transformers
22,459
closed
Pix2StructForConditionalGeneration not exported
### System Info transformers version = 4.27.4 torch = 2.0.0 Getting an import error ImportError: cannot import name 'Pix2StructForConditionalGeneration' from 'transformers' (D:\code\alpaca\env\lib\site-packages\transformers\__init__.py) And when I check the package files, I don't see pix2struct in transformers/models ![image](https://user-images.githubusercontent.com/76161333/228655699-b9f32fcc-ce53-42a1-a16a-ea9adbe5a3c8.png) This is how it is in git source code : ![image](https://user-images.githubusercontent.com/76161333/228656078-b3b39601-8e9c-4916-b82b-d24c608d6300.png) ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import ( Pix2StructForConditionalGeneration, Pix2StructProcessor, ) ### Expected behavior Imports occur successfully.
03-29-2023 20:12:31
03-29-2023 20:12:31
Yes, Pix2Struct was not in the last release. You need to install `transformers` [from source](https://huggingface.co/docs/transformers/installation#install-from-source) to use it.
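A quick sanity check after a source install (a sketch; the exact dev version string will vary):

```python
# pip install git+https://github.com/huggingface/transformers.git
import transformers

print(transformers.__version__)  # should report a post-4.27 dev version, not 4.27.x

from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor
```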
transformers
22,458
closed
Rescale image back if it was scaled during PIL conversion
# What does this PR do? Resolves an issue that occurs when a float image with values in `[0, 1]` is passed into an image processor: the image is rescaled to `[0, 255]` to convert it to a `PIL.Image.Image` for resizing, but isn't rescaled back afterwards. This results in inconsistent outputs depending on whether the `do_resize` flag is `True` or `False`. Images of this type are typically fed in by pipelines that convert images to torch tensors using `ToTensor`. Fixes #22392 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
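A minimal repro sketch of the inconsistency this PR addresses. The processor class is an assumption — any image processor that resizes via PIL should behave similarly.

```python
import numpy as np
from transformers import ViTImageProcessor  # assumption: any PIL-resizing processor works

processor = ViTImageProcessor()
float_image = np.random.rand(3, 224, 224).astype(np.float32)  # values in [0, 1], like ToTensor output

with_resize = processor(float_image, do_resize=True, return_tensors="np").pixel_values
without_resize = processor(float_image, do_resize=False, return_tensors="np").pixel_values

# Before this fix the two branches could disagree, because the [0, 1] image was scaled
# up to [0, 255] for the PIL conversion but never scaled back down afterwards.
print(with_resize.mean(), without_resize.mean())
```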
03-29-2023 20:03:48
03-29-2023 20:03:48
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,457
closed
Update: ignore padding support for TransfoXL training when n_clusters==0
# What does this PR do? I created a new PR since I couldn't seem to re-open the [original PR](https://github.com/huggingface/transformers/pull/20326). Requests in the original PR are handled. --------------------------------------------- This PR solves [an issue](https://github.com/huggingface/transformers/issues/17446) I raised about TransformerXL. As @sgugger mentioned in [another issue](https://github.com/huggingface/transformers/issues/19914) I raised, he [says](https://github.com/huggingface/transformers/issues/19914#issuecomment-1293656206) > I don't think TransformerXL supports FP16 as this is an old model with very specific code for the softmax layer. This won't be an issue we will fix ourselves given that Transformer-XL is not very used anymore, but if someone wants to make a PR, we'll review! I'm using TransformerXL in a [research project](https://github.com/StefanHeng/Symbolic-Music-Generation) and disabling the adaptive softmax is an option I would like to explore. So here I am. In the `n_clusters==0` branch, the current TransformerXL implementation does not work with padding (-100); it breaks at `.gather(1, labels)`. This PR solves that bug. I tested with my research data and confirmed my implementation is working. It's able to overfit the training data to up to 99% next-token prediction accuracy on multiple hyper-parameter setups, for #samples from 8 to 48, batch size from 48 to 64, epochs from 128 to 512. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) [Issue 19914](https://github.com/huggingface/transformers/issues/19914) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger @patrickvonplaten @thomwolf
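For context, a small standalone sketch of the usual way to make the `gather` call safe under `-100` padding (an illustration of the technique, not the exact code in this PR):

```python
import torch

def masked_token_nll(logits, labels, pad_id=-100):
    """Negative log-likelihood that ignores padded positions instead of crashing in gather."""
    log_probs = torch.log_softmax(logits, dim=-1)        # (num_tokens, vocab)
    mask = labels.ne(pad_id)
    safe_labels = labels.masked_fill(~mask, 0)           # gather needs valid indices, so replace -100
    nll = -log_probs.gather(1, safe_labels.unsqueeze(1)).squeeze(1)
    return (nll * mask).sum() / mask.sum().clamp(min=1)  # average over real tokens only

logits = torch.randn(6, 10)
labels = torch.tensor([1, 4, -100, 7, -100, 2])
print(masked_token_nll(logits, labels))
```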
03-29-2023 18:07:22
03-29-2023 18:07:22
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,456
closed
[PLEASE IGNORE][WIP][POC][DO NOT MERGE] Tokenizer guard broken
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-29-2023 17:46:30
03-29-2023 17:46:30
_The documentation is not available anymore as the PR was closed or merged._<|||||>This PR was used for testing purposes and is superseded by #22285.
transformers
22,455
closed
Pin ruff
# What does this PR do? ruff is fast and is also released at a fast pace 😅 Those releases sometimes break the CI and all existing PRs (which then require a rebase on main), so let's pin it and do an upgrade in 6 months or for 2024.
03-29-2023 17:25:27
03-29-2023 17:25:27
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,454
closed
Update release instructions
# What does this PR do? This PR updates the release instructions with a few additional lines and fixes the line that never worked.
03-29-2023 17:23:02
03-29-2023 17:23:02
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,453
closed
Avoid sgugger becoming THE bot
# What does this PR do? With more and more such <img width="380" alt="Screenshot 2023-03-29 192000" src="https://user-images.githubusercontent.com/2521628/228618109-0f4f33d7-db0b-43a5-b425-7dac596448d3.png"> `sgugger` will become a bot one day soon (or at least Google will think `sgugger` is a bot). Let's save them from becoming a bot. ------ Serious part: This is discussed offline > Would be good to create HF-testing Bot user and use that token as people really think it's me updating those models :sweat_smile: **I already set this secret in the repository.**
03-29-2023 17:22:59
03-29-2023 17:22:59
> 🤖 🤖 🤖 LGTM! 🤖 🤖 🤖 I am not sure if sgugger will say LGTM 😆 but I am going to merge!<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
22,452
closed
Update Neptune docs
# What does this PR do? Updates the Neptune example docs. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-29-2023 17:06:39
03-29-2023 17:06:39
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22452). All of your documentation changes will be reflected on that endpoint.
transformers
22,451
closed
Revert "Fix --bf16 option support for Neuron after PR #22300"
This reverts commit fd81746dbec5f17c8285a0fdc72ca4b4c025cc33. # What does this PR do? This reverts https://github.com/huggingface/transformers/pull/22307, as CPU AMP doesn't cause torch/xla to emit correctly autocasted XLA HLO, while the GPU AMP path does. We are left with "RuntimeError: No CUDA GPUs are available" as noted in the previous PR, which can be worked around with "torch.cuda.is_bf16_supported = lambda: True". ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
03-29-2023 16:10:26
03-29-2023 16:10:26
transformers
22,450
closed
bridgetower model RuntimeError: The size of tensor a (865) must match the size of tensor b (325) at non-singleton dimension 1
I am getting the following error from BridgeTower: [/usr/local/lib/python3.9/dist-packages/transformers/models/bridgetower/modeling_bridgetower.py](https://localhost:8080/#) in forward(self, pixel_values) 296 class_embeds = self.class_embedding.expand(batch_size, 1, -1) 297 embeddings = torch.cat([class_embeds, patch_embeds], dim=1) --> 298 embeddings = embeddings + self.position_embedding(self.position_ids) 299 return embeddings 300 RuntimeError: The size of tensor a (865) must match the size of tensor b (325) at non-singleton dimension 1. This happens when I have set max_len = 128. @abhiwand @tileintel
03-29-2023 16:01:12
03-29-2023 16:01:12
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,449
closed
Generate: basic token streaming
# What does this PR do? Adds token streaming to `.generate()` 🎉 ### Why now? I want to showcase and communicate how much faster assisted generation can be... and for that, I need token streaming :D Non-image/video results have a much lower impact. ### What's being added This PR adds a `streamer` input to generate. If it is non-`None`, generate will call `streamer.put(new_tokens)` as they are being generated. `streamer` can, therefore, be a wide array of things. This PR adds the simplest case: print tokens as they are generated. At first, I thought of adding a simpler `stream=True` option. However, the tokenizer would have to be passed into `.generate()`, which we have been avoiding, and it wouldn't be nearly as flexible. I've made the call to make streaming+`.generate()` flexible, and to keep it simple at a `pipeline` level. ### If this PR gets accepted The plan is to: 1. Communicate this feature on Twitter (w/Colab examples) 2. Add to pipelines, maybe with a simpler `stream=True` flag to start 3. Add Gradio examples (and, if needed, a specific streamer class) 4. Add the beam search case to the streamer classes (beam search is much trickier -- we should only print tokens when all candidate beams agree, which means logic needs to be added) ### How does it look Here's an example. Note that it is running on CPU, so we can actually see the streaming effect (3090 is too fast 😅 ). On GPU it also streams, but much faster 🔥 https://user-images.githubusercontent.com/12240844/228595317-0f234e95-bd39-43a5-83e5-ef620da08eb0.mov
03-29-2023 15:54:18
03-29-2023 15:54:18
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger revised with the simpler implementation (no context manager nor multiprocessing) 🤗 <|||||>Just an FYI: I have been doing this using `transformers.StoppingCriteria` to create a callback: ```python class Stream(transformers.StoppingCriteria): def __init__(self, callback_func=None): self.callback_func = callback_func def __call__(self, input_ids, scores) -> bool: if self.callback_func is not None: self.callback_func(input_ids[0]) return False ``` The callback is then used to create an iterator with the Iteratorize class here: https://github.com/oobabooga/text-generation-webui/blob/main/modules/callbacks.py#L42 Usage becomes: ```python def generate_with_callback(callback=None, **kwargs): kwargs['stopping_criteria'].append(Stream(callback_func=callback)) with torch.no_grad(): shared.model.generate(**kwargs) def generate_with_streaming(**kwargs): return Iteratorize(generate_with_callback, kwargs, callback=None) with generate_with_streaming(**generate_params) as generator: for output in generator: ``` <|||||>@oobabooga 🧠 That's a smart (and [unexpected](https://www.hyrumslaw.com/)!) use of the stopping criteria. I'm going to work on a standardized Gradio solution today, and a Queue+iterator was indeed my plan. If you don't mind, I will take inspiration from your code 💛 A question regarding your implementation -- you use a separate thread in the `Iteratorize`, not a separate process. Any reason for picking a thread over a process? (Without running the code, I'd argue in favor of a separate thread for GIL purposes) <|||||>> If you don't mind, I will take inspiration from your code Feel free to copy anything you want. > Any reason for picking a thread over a process? Honestly, I have no specific reason to give. I just spent several days trying to get the text generation to run in the background independently of where the `for` loop was at in the queue, and this is what ended up working. With this, I get close to as many tokens/s with streaming as without.
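To complement the snippets above, here is a minimal sketch of the new `streamer` hook this PR describes. Only the `streamer.put(new_tokens)` call is documented in the PR body; the `end()` method below is defined defensively and is an assumption about the final interface.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

class PrintStreamer:
    """Prints text as .generate() pushes token ids via streamer.put()."""
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def put(self, token_ids):
        # token_ids is a tensor (the prompt first, then new tokens); decode and print as they arrive
        text = self.tokenizer.decode(token_ids.reshape(-1).tolist(), skip_special_tokens=True)
        print(text, end="", flush=True)

    def end(self):  # assumption: defined in case generate() signals the end of the stream
        print()

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("An increasing sequence: one,", return_tensors="pt")
with torch.no_grad():
    model.generate(**inputs, streamer=PrintStreamer(tokenizer), max_new_tokens=20, do_sample=False)
```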
transformers
22,448
closed
[`Pix2Struct`] Fix slow test
# What does this PR do? Fixes: https://github.com/huggingface/transformers/actions/runs/4538560416/jobs/7997667941 This test is currently failing because I forgot to add `.to(torch_device)` in a slow test. cc @sgugger
03-29-2023 15:28:30
03-29-2023 15:28:30
transformers
22,447
closed
added biogpt token classifier
# What does this PR do? Added Token Classifier for BioGpt Fixes #21786 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker @younesbelkada @NielsRogge @sgugger
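A usage sketch for the new head. The class name follows the usual `<Model>ForTokenClassification` naming and, together with the label count, is an assumption for illustration only.

```python
from transformers import AutoTokenizer, BioGptForTokenClassification  # class name is an assumption

tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForTokenClassification.from_pretrained("microsoft/biogpt", num_labels=5)

inputs = tokenizer("Aspirin inhibits platelet aggregation.", return_tensors="pt")
logits = model(**inputs).logits  # (batch, seq_len, num_labels), one score per token per label
print(logits.shape)
```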
03-29-2023 15:02:25
03-29-2023 15:02:25
_The documentation is not available anymore as the PR was closed or merged._<|||||>@ArthurZucker am attaching the error from circleci in here. Please do a check on it. ```python python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [7 lines of output] fatal: not a git repository (or any of the parent directories): .git Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "/tmp/pip-install-xnk_5lxf/onnx_fafa120a391e44089efd375a22a6dba2/setup.py", line 72, in <module> assert CMAKE, 'Could not find "cmake" executable!' AssertionError: Could not find "cmake" executable! [end of output] ``` <|||||>> @ArthurZucker am attaching the error from circleci in here. Please do a check on it. > > ```python > python setup.py egg_info did not run successfully. > │ exit code: 1 > ╰─> [7 lines of output] > fatal: not a git repository (or any of the parent directories): .git > Traceback (most recent call last): > File "<string>", line 2, in <module> > File "<pip-setuptools-caller>", line 34, in <module> > File "/tmp/pip-install-xnk_5lxf/onnx_fafa120a391e44089efd375a22a6dba2/setup.py", line 72, in <module> > assert CMAKE, 'Could not find "cmake" executable!' > AssertionError: Could not find "cmake" executable! > [end of output] > ``` You need to pull from main for this!
transformers
22,446
closed
TypeError: __init__() got an unexpected keyword argument 'forward_prefetch'
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.13.1 - Safetensors version: not installed - PyTorch version (GPU?): 1.12.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @AlexWertheim ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. run stanford-alpaca's training command: https://github.com/tatsu-lab/stanford_alpaca ``` torchrun --nproc_per_node=4 --master_port=<your_random_port> train.py \ --model_name_or_path <your_path_to_hf_converted_llama_ckpt_and_tokenizer> \ --data_path ./alpaca_data.json \ --bf16 True \ --output_dir <your_output_dir> \ --num_train_epochs 3 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --gradient_accumulation_steps 8 \ --evaluation_strategy "no" \ --save_strategy "steps" \ --save_steps 2000 \ --save_total_limit 1 \ --learning_rate 2e-5 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'LLaMADecoderLayer' \ --tf32 True ``` ### Expected behavior ``` Traceback (most recent call last): File "train.py", line 231, in <module> train() File "train.py", line 225, in train trainer.train() File "/home/projects/transformers/src/transformers/trainer.py", line 1644, in train return inner_training_loop( File "/home/projects/transformers/src/transformers/trainer.py", line 1731, in _inner_training_loop model = self._wrap_model(self.model_wrapped) File "/home/projects/transformers/src/transformers/trainer.py", line 1469, in _wrap_model self.model = model = FSDP( TypeError: __init__() got an unexpected keyword argument 'forward_prefetch' ``` The error is raised at the trainer.py: ``` if type(model) != FSDP: # XXX: Breaking the self.model convention but I see no way around it for now. self.model = model = FSDP( model, sharding_strategy=self.fsdp, cpu_offload=cpu_offload, auto_wrap_policy=auto_wrap_policy, mixed_precision=mixed_precision_policy, device_id=self.args.device, backward_prefetch=self.backward_prefetch, forward_prefetch=self.forword_prefetch, limit_all_gathers=self.limit_all_gathers, ) ``` I think forward_prefetch is not supported in PyTorch1.12. Is there a possible solution to enable me to use FSDP with PyTorch 1.12? If not, I suggest adding some version-checking codes.
03-29-2023 14:55:18
03-29-2023 14:55:18
FSDP support in Transformers requires PyTorch 1.12, so no. You should have hit [this error](https://github.com/huggingface/transformers/blob/55dae94c0ccd088003aa46bcecb2e55321a7f00b/src/transformers/trainer.py#L429) before anything else, not sure why you did not.<|||||>Hi, thanks for your reply. This is not an issue with FSDP support. It's an issue that FSDP does not support the keyword argument "forward_prefetch" in torch 1.12<|||||>Hi, I met the same problem with transformers==4.27.1 and the solution is to downgrade to transformers==4.26.1. This may be a version compatibility issue for Hugging Face Transformers.<|||||>Oh thanks for clarifying. cc @pacman100
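A hedged sketch of the kind of version guard suggested above: only forward the newer FSDP keyword arguments when the installed torch actually accepts them (the assumption being that `forward_prefetch` simply does not exist in older releases such as 1.12).

```python
import inspect
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

fsdp_kwargs = {}  # sharding_strategy, auto_wrap_policy, mixed_precision, ... as before
accepted = inspect.signature(FSDP.__init__).parameters
for name, value in [("forward_prefetch", False), ("limit_all_gathers", False)]:
    if name in accepted:          # skip arguments the installed torch does not know about
        fsdp_kwargs[name] = value

# model = FSDP(model, **fsdp_kwargs)
```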
transformers
22,445
closed
Add PVT(Pyramid Vision Transformer)
# Add PVT(Pyramid Vision Transformer) Partially fixes: [issue](https://github.com/huggingface/transformers/issues/17596) Currently, only the classification model is added; it should later be extended with detection and segmentation models. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @amyeroberts
03-29-2023 14:51:33
03-29-2023 14:51:33
Hi @Xrenya - thanks for opening this PR. It seems there is an issue with your CircleCI permissions as the tests won’t run. Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)? Excited to see this model implemented and to be added to the library! <|||||>@amyeroberts Thank you, I have updated the permissions. Could you please rerun the workflow to verify whether it is working?<|||||>@Xrenya I don't think I can re-run as it's triggered based on the permissions on your end. At least, when I go onto circleci any options to re-run are not available. Could you try pushing an empty commit using: `git commit -m "Trigger CI" --allow-empty` to see if the updates have worked? <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts Indeed, I had to trigger it. I think this part is ready, but it looks like CircleCI is stuck even after reopening the PR.<|||||>@Xrenya - yes, that's funny 🤔 . If I search on circleCI, I can't see any runs associated with this PR. I'll review and then we can address this again if it's still not running after any code updates. <|||||>@amyeroberts I think I have updated everything<|||||>@Xrenya - thanks for the update! Two quick things before I do a full review: * I can see the models still have `PVT` as a prefix. We now use camel-case for our models, and so the prefix should be updated to `Pvt` everywhere e.g. `PvtImageProcessor` * Unfortunately the commits still haven't triggered the test suite. Could you rebase on main and retry refreshing your CircleCI permissions again? If this still doesn't work, I'll do some more digging. <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22445). All of your documentation changes will be reflected on that endpoint.<|||||>@Xrenya - great to see Circle CI is now behaving itself :) Before I review again, could you make sure all of the tests are passing? Let me know if you have any questions about getting them to run and pass. <|||||>@amyeroberts All tests pass except a flax one, but the problem is not with my model: `FAILED tests/models/big_bird/test_modeling_flax_big_bird.py::FlaxBigBirdModelTest::test_checkpoint_sharding_from_hub` Should I fix the issue?<|||||>@Xrenya No, that test must be flaky and isn't your responsibility to fix, we can ignore it for this PR 👍 <|||||>@Xrenya - I can still see some unresolved comments from my last review. Could you either make the suggested change or comment on the suggestion, explaining why you aren't. Once these are all resolved I'll review again. <|||||>@Xrenya There are still several comments which haven't been addressed e.g. [this one](https://github.com/huggingface/transformers/pull/22445/files#r1156236335) or [this one](https://github.com/huggingface/transformers/pull/22445/files/#r1157356696). For suggestions that have been applied, could you mark the comments as resolved? It will make it easier to track the changes in the PR. <|||||>@amyeroberts I put the model checkpoint under the model's organisation and pushed the changes<|||||>This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,444
closed
Revert "Error (also in original) model, scaling only q matrix not qk.T dot product (qk.T/sqrt(dim_per_head))"
Reverts huggingface/transformers#21627 This PR changed the modeling code and as a result broke training, as reported in #22426. Moreover, since the "error" was in the original code from the authors, it's not an error anymore but a feature. This will be included in the v4.27.4 patch.
03-29-2023 14:46:00
03-29-2023 14:46:00
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22444). All of your documentation changes will be reflected on that endpoint.
transformers
22,443
closed
Logger message during training
### System Info - Platform: Linux-4.15.0-204-generic-x86_64-with-glibc2.27 - Python version: 3.9.13 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi, Just to report a problem with the logger. When I use transformers-4.26.1 everything is fine. But after I upgrade to transformers-4.27.3, the logger output in the trainer._inner_training_loop() disappear. In other words, the message below is gone: ***** Running training ***** Num examples = 53920 Num Epochs = 5 Instantaneous batch size per device = 8 Total train batch size (w. parallel, distributed & accumulation) = 16 Gradient Accumulation steps = 1 Total optimization steps = 16850 Number of trainable parameters = 134302848 After I downgrade to 4.26.1, the logger works again. ### Expected behavior -
03-29-2023 13:56:38
03-29-2023 13:56:38
It works, it just now respects your default logging level (which is probably warning). You will need to set it to info (see any example) to get those logs again :-)<|||||>Hi, thanks for the quick reply. I've tried this before: `import logging` `logging.basicConfig(level=logging.INFO)` But it still does not work for the new version <|||||>You need to set the Transformers logging value to this level (unless you do the change of logging before the first import of Transformers), for instance like [this](https://github.com/huggingface/transformers/blob/9b494a1537e3c6a30e5648ab3d4e983380792a91/examples/pytorch/text-classification/run_glue.py#L232)<|||||>I got it. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
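Concretely, a minimal sketch of raising the library's own logger (rather than the root logger) back to INFO so the "***** Running training *****" block is printed again:

```python
import transformers

# logging.basicConfig on the root logger is not enough; the transformers
# logger keeps its own verbosity, which now defaults to WARNING.
transformers.utils.logging.set_verbosity_info()
```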
transformers
22,442
closed
Beam search not always better Greedy search
@patrickvonplaten I was recently tinkering around the generation strategies for a decoder, and came across this [blog post](https://huggingface.co/blog/how-to-generate) on decoding methods. It was quite useful to clarify how generation works for transformers. Thank you for this excellent work. I notice that in the section on Beam Search, it is stated that: > **Beam search will always find an output sequence with higher probability than greedy search**, but is not guaranteed to find the most likely output. It seems to me that the highlighted part is not true, which can be illustrated by the following, extended from the example in your blog post. ![image](https://user-images.githubusercontent.com/3142085/228549175-d37bb8af-ed62-44bf-93f8-38d22bf6aba3.png) Basically, if better beams at a certain time step are followed by tokens for which the preference is not that strong (a flatter distribution), then the greedy result could have a higher probability and still be missed by beam search. I got curious about this claim, because I tried beam search and greedy search in a vision encoder decoder model (Donut), and got lower probability generations quite often with beam search. What do you think? Is there something else that I overlooked in this example? Thank you and have a nice day.
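To make the point concrete, here is a toy example with made-up probabilities where greedy search ends with a strictly higher sequence probability than beam search with `num_beams=2`:

```python
# Step 1: P(A)=0.36 (greedy pick), P(B)=0.34.
# Step 2: the best token after A has P=0.34; after B two tokens have P=0.5 and P=0.4.
# Step 3: the best token after either B-continuation has P=0.2; after A->A1 it has P=0.5.

greedy = 0.36 * 0.34 * 0.5  # greedy follows A -> A1 -> its best continuation

# With 2 beams, after step 2 the two best partial hypotheses are both continuations of B
# (0.34*0.5=0.17 and 0.34*0.4=0.136, both above 0.36*0.34=0.1224), so the greedy prefix
# A -> A1 is pruned even though its path ends up with the higher overall probability.
beam = max(0.34 * 0.5 * 0.2, 0.34 * 0.4 * 0.2)

print(f"greedy:      {greedy:.4f}")  # 0.0612
print(f"best 2-beam: {beam:.4f}")    # 0.0340
assert greedy > beam
```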
03-29-2023 13:15:41
03-29-2023 13:15:41
Actually, copied to the issues section of the blog.
transformers
22,441
closed
Model parallelize for LlaMA
The LLaMA model currently does not support model parallel training. When will this feature be added? 'LlamaForCausalLM' object has no attribute 'parallelize'
03-29-2023 12:39:13
03-29-2023 12:39:13
This API is deprecated on existing models and won't be implemented on new ones. Use `from_pretrained(xxx, device_map="auto")` to parallelize your model.<|||||>Hi sgugger, I wonder if `from_pretrained(xxx, device_map="auto")` can be used to train models?<|||||>Yes it can, as long as it all fits on GPUs (like was the case with `parallelize` before).<|||||>Thanks for your reply! I've tried it and it works well. And I noticed that `device_map="auto"` can use CPU or disk offloading. But when I try to train a large model with offloading, I get the GPU OOM error. I also tried setting a low `max_memory` to set aside a buffer for optimizer states on the GPU, but still get the GPU OOM error. Does `device_map="auto"` enable training a large model with offloading?<|||||>Yes, if you have the hardware for it. To train a model, you need 4x the size in GPU memory (one for the model, one for the gradients and two for the optimizer state if using Adam). If not, then you will need to look at techniques like Zero-3 in DeepSpeed or FSDP, or use `peft` for fine-tuning.<|||||>Thanks. My point is that using `device_map="auto"` can **offload** some modules of a large model to CPU or disk, so do we still need DeepSpeed or FSDP?<|||||>As I said before, CPU and disk offload with `device_map="auto"` do not support training, only inference.<|||||>Will `device_map="auto"` support pipeline parallel training?<|||||>> Hi, will model parallelism and the ZeRO strategy work together when training? Or is there any plan for that? Thanks!<|||||>No, ZeRO-3 from DeepSpeed has its own way of initializing the model which is supported in Transformers already. You shouldn't mix it up with `device_map="auto"`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
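For completeness, a minimal sketch of the replacement for `parallelize()`; the checkpoint path is a placeholder, and training this way only works while everything fits on the GPUs (CPU/disk offload produced by `device_map="auto"` is inference-only, as noted above).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "path/to/converted-llama-checkpoint"  # placeholder; any causal LM repo works
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(
    ckpt,
    device_map="auto",          # shards the layers across the available GPUs
    torch_dtype=torch.float16,
)
```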
transformers
22,440
closed
Hyperparameter search reporting to W&B
# What does this PR do? Fixes #22429 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-29-2023 11:27:47
03-29-2023 11:27:47
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,439
closed
[`bnb`] fix bnb failing test
# What does this PR do? Fixes https://github.com/huggingface/transformers/actions/runs/4538560416/jobs/7997676956 The PR https://github.com/huggingface/transformers/pull/22377 introduced the correct way to compute the device_map for int8 models to avoid issues in some corner cases. The `max_memory` argument was causing some issues and led to failing test due to the presence of a non-empty `special_dtypes`. Script to reproduce: ```python import torch from accelerate import init_empty_weights, infer_auto_device_map from transformers import BloomForCausalLM, AutoConfig config = AutoConfig.from_pretrained("bigscience/bloom-560m") max_memory = {0: 1000000000, 1: 1000000000} with init_empty_weights(): model = BloomForCausalLM(config) torch_dtype = torch.float16 modules_not_to_convert = ['transformer.word_embeddings', 'lm_head'] special_dtypes = { name: torch_dtype for name, _ in model.named_parameters() if any(m in name for m in modules_not_to_convert) } device_map = infer_auto_device_map(model, max_memory=max_memory, no_split_module_classes=['BloomBlock'], dtype=torch.int8, special_dtypes=special_dtypes) print(set(device_map.values())) ``` I assume that the previous test was too much a corner case (with hardcoded `max_memory`), the correct fix should be just to use `balanced` in the `device_map` argument. cc @sgugger
03-29-2023 11:13:18
03-29-2023 11:13:18
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,438
closed
Got an error `/data/llama/7B/ does not appear to have a file named config.json. Checkout 'https://huggingface.co//data/llama/7B//None' for available files.` when start a supervised instructs fine-tuning
I have installed https://github.com/hpcaitech/transformers according to the documentation: https://github.com/hpcaitech/ColossalAI/tree/main/applications/Chat#install-the-transformers - I got an error when I start a supervised instructs fine-tuning ``` (/code/conda-envs) ubuntu@ip-172-31-15-61:/code/valdanito/ColossalAI/applications/Chat/examples$ cat train_sft.sh torchrun --standalone --nproc_per_node=4 train_sft.py \ --pretrain "/data/llama/7B/" \ --model 'llama' \ --strategy colossalai_zero2 \ --log_interval 10 \ --save_path /data/coati/7B \ --dataset /data/coati/data.json \ --batch_size 4 \ --accimulation_steps 8 \ --lr 2e-5 \ --max_datasets_size 512 \ --max_epochs 1 \ (/code/conda-envs) ubuntu@ip-172-31-15-61:/code/valdanito/ColossalAI/applications/Chat/examples$ ./train_sft.sh ``` - Error ``` ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ /code/valdanito/ColossalAI/applications/Chat/examples/train_sft.py:184 in <module> │ │ │ │ 181 │ parser.add_argument('--lr', type=float, default=5e-6) │ │ 182 │ parser.add_argument('--accimulation_steps', type=int, default=8) │ │ 183 │ args = parser.parse_args() │ │ ❱ 184 │ train(args) │ │ 185 │ │ │ │ /code/valdanito/ColossalAI/applications/Chat/examples/train_sft.py:50 in train │ │ │ │ 47 │ │ elif args.model == 'gpt2': │ │ 48 │ │ │ model = GPTLM(pretrained=args.pretrain, lora_rank=args.lora_rank).to(torch.c │ │ 49 │ │ elif args.model == 'llama': │ │ ❱ 50 │ │ │ model = LlamaLM(pretrained=args.pretrain, lora_rank=args.lora_rank, │ │ 51 │ │ │ │ │ │ │ checkpoint=True).to(torch.float16).to(torch.cuda.current_dev │ │ 52 │ │ else: │ │ 53 │ │ │ raise ValueError(f'Unsupported model "{args.model}"') │ │ │ │ /code/conda-envs/lib/python3.10/site-packages/coati/models/llama/llama_lm.py:28 in __init__ │ │ │ │ 25 │ │ │ │ lora_train_bias: str = 'none') -> None: │ │ 26 │ │ │ │ 27 │ │ if pretrained is not None: │ │ ❱ 28 │ │ │ model = LlamaForCausalLM.from_pretrained(pretrained) │ │ 29 │ │ elif config is not None: │ │ 30 │ │ │ model = LlamaForCausalLM(config) │ │ 31 │ │ else: │ │ │ │ /code/conda-envs/lib/python3.10/site-packages/transformers/modeling_utils.py:2175 in │ │ from_pretrained │ │ │ │ 2172 │ │ # Load config if we don't provide a configuration │ │ 2173 │ │ if not isinstance(config, PretrainedConfig): │ │ 2174 │ │ │ config_path = config if config is not None else pretrained_model_name_or_pat │ │ ❱ 2175 │ │ │ config, model_kwargs = cls.config_class.from_pretrained( │ │ 2176 │ │ │ │ config_path, │ │ 2177 │ │ │ │ cache_dir=cache_dir, │ │ 2178 │ │ │ │ return_unused_kwargs=True, │ │ │ │ /code/conda-envs/lib/python3.10/site-packages/transformers/configuration_utils.py:546 in │ │ from_pretrained │ │ │ │ 543 │ │ assert config.output_attentions == True │ │ 544 │ │ assert unused_kwargs == {"foo": False} │ │ 545 │ │ ```""" │ │ ❱ 546 │ │ config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwarg │ │ 547 │ │ if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["m │ │ 548 │ │ │ logger.warning( │ │ 549 │ │ │ │ f"You are using a model of type {config_dict['model_type']} to instantia │ │ │ │ /code/conda-envs/lib/python3.10/site-packages/transformers/configuration_utils.py:573 in │ │ get_config_dict │ │ │ │ 570 │ │ """ │ │ 571 │ │ original_kwargs = copy.deepcopy(kwargs) │ │ 572 │ │ # Get config dict associated with the base config file │ │ ❱ 573 │ │ config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwar │ │ 574 │ │ if "_commit_hash" in config_dict: │ 
│ 575 │ │ │ original_kwargs["_commit_hash"] = config_dict["_commit_hash"] │ │ 576 │ │ │ │ /code/conda-envs/lib/python3.10/site-packages/transformers/configuration_utils.py:628 in │ │ _get_config_dict │ │ │ │ 625 │ │ │ │ │ 626 │ │ │ try: │ │ 627 │ │ │ │ # Load from local folder or from cache or download from model Hub and ca │ │ ❱ 628 │ │ │ │ resolved_config_file = cached_file( │ │ 629 │ │ │ │ │ pretrained_model_name_or_path, │ │ 630 │ │ │ │ │ configuration_file, │ │ 631 │ │ │ │ │ cache_dir=cache_dir, │ │ │ │ /code/conda-envs/lib/python3.10/site-packages/transformers/utils/hub.py:380 in cached_file │ │ │ │ 377 │ │ resolved_file = os.path.join(os.path.join(path_or_repo_id, subfolder), filename) │ │ 378 │ │ if not os.path.isfile(resolved_file): │ │ 379 │ │ │ if _raise_exceptions_for_missing_entries: │ │ ❱ 380 │ │ │ │ raise EnvironmentError( │ │ 381 │ │ │ │ │ f"{path_or_repo_id} does not appear to have a file named {full_filen │ │ 382 │ │ │ │ │ f"'https://huggingface.co/{path_or_repo_id}/{revision}' for availabl │ │ 383 │ │ │ │ ) │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ OSError: /data/llama/7B/ does not appear to have a file named config.json. Checkout 'https://huggingface.co//data/llama/7B//None' for available files. ```
03-29-2023 10:32:28
03-29-2023 10:32:28
Sorry, I sent the wrong repo, please close this issue
transformers
22,437
closed
Making sure we can use safetensors to serialize all the time.
# What does this PR do? Making sure `save_pretrained(..., safe_serialization=True)` works in all cases. It seems `_keys_to_ignore_on_load_missing` was the only one to be set, and so `save_pretrained` does not properly ignore those keys on saving. Status before the fix: ``` -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html =========================== short test summary info ============================ FAILED tests/models/albert/test_modeling_albert.py::AlbertModelTest::test_can_use_safetensors - Exception: Class AlbertForPreTraining cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'albert.embeddings.word_embeddings.weight', 'predictions.decoder.weight'}, {'predictions.decoder.bias', 'predictions.bias'}] FAILED tests/models/bart/test_modeling_bart.py::BartModelTest::test_can_use_safetensors - Exception: Class BartModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'shared.weight', 'decoder.embed_tokens.weight', 'encoder.embed_tokens.weight'}] FAILED tests/models/bert/test_modeling_bert.py::BertModelTest::test_can_use_safetensors - Exception: Class BertLMHeadModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'cls.predictions.decoder.weight', 'bert.embeddings.word_embeddings.weight'}, {'cls.predictions.bias', 'cls.predictions.decoder.bias'}] FAILED tests/models/bart/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_can_use_safetensors - Exception: Class BartForCausalLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_head.weight', 'model.decoder.embed_tokens.weight'}] FAILED tests/models/bert_generation/test_modeling_bert_generation.py::BertGenerationEncoderTest::test_can_use_safetensors - Exception: Class BertGenerationDecoder cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'bert.embeddings.word_embeddings.weight', 'lm_head.decoder.weight'}, {'lm_head.decoder.bias', 'lm_head.bias'}] FAILED tests/models/big_bird/test_modeling_big_bird.py::BigBirdModelTest::test_can_use_safetensors - Exception: Class BigBirdForPreTraining cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'bert.embeddings.word_embeddings.weight', 'cls.predictions.decoder.weight'}, {'cls.predictions.decoder.bias', 'cls.predictions.bias'}] FAILED tests/models/biogpt/test_modeling_biogpt.py::BioGptModelTest::test_can_use_safetensors - Exception: Class BioGptForCausalLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'output_projection.weight', 'biogpt.embed_tokens.weight'}] FAILED tests/models/bigbird_pegasus/test_modeling_bigbird_pegasus.py::BigBirdPegasusModelTest::test_can_use_safetensors - Exception: Class BigBirdPegasusModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'encoder.embed_tokens.weight', 'shared.weight', 
'decoder.embed_tokens.weight'}] FAILED tests/models/blenderbot/test_modeling_blenderbot.py::BlenderbotModelTest::test_can_use_safetensors - Exception: Class BlenderbotModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'encoder.embed_tokens.weight', 'shared.weight', 'decoder.embed_tokens.weight'}] FAILED tests/models/blenderbot_small/test_modeling_blenderbot_small.py::BlenderbotSmallModelTest::test_can_use_safetensors - Exception: Class BlenderbotSmallModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'shared.weight', 'decoder.embed_tokens.weight', 'encoder.embed_tokens.weight'}] FAILED tests/models/blenderbot_small/test_modeling_blenderbot_small.py::BlenderbotSmallStandaloneDecoderModelTest::test_can_use_safetensors - Exception: Class BlenderbotSmallForCausalLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_head.weight', 'model.decoder.embed_tokens.weight'}] FAILED tests/models/blenderbot/test_modeling_blenderbot.py::BlenderbotStandaloneDecoderModelTest::test_can_use_safetensors - Exception: Class BlenderbotForCausalLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'model.decoder.embed_tokens.weight', 'lm_head.weight'}] FAILED tests/models/bigbird_pegasus/test_modeling_bigbird_pegasus.py::BigBirdPegasusStandaloneDecoderModelTest::test_can_use_safetensors - Exception: Class BigBirdPegasusForCausalLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_head.weight', 'model.decoder.embed_tokens.weight'}] FAILED tests/models/blip_2/test_modeling_blip_2.py::Blip2ForConditionalGenerationDecoderOnlyTest::test_can_use_safetensors - Exception: Class Blip2ForConditionalGeneration cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'language_model.lm_head.weight', 'language_model.model.decoder.embed_tokens.weight'}] FAILED tests/models/bloom/test_modeling_bloom.py::BloomModelTest::test_can_use_safetensors - Exception: Class BloomForCausalLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_head.weight', 'transformer.word_embeddings.weight'}] FAILED tests/models/blip_2/test_modeling_blip_2.py::Blip2ModelTest::test_can_use_safetensors - Exception: Class Blip2ForConditionalGeneration cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'language_model.shared.weight', 'language_model.decoder.embed_tokens.weight', 'language_model.lm_head.weight', 'language_model.encoder.embed_tokens.weight'}] FAILED tests/models/blip/test_modeling_blip.py::BlipTextImageModelTest::test_can_use_safetensors - Exception: Class BlipForConditionalGeneration cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'text_decoder.cls.predictions.bias', 
'text_decoder.cls.predictions.decoder.bias'}] FAILED tests/models/convbert/test_modeling_convbert.py::ConvBertModelTest::test_can_use_safetensors - Exception: Class ConvBertForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'generator_lm_head.weight', 'convbert.embeddings.word_embeddings.weight'}] FAILED tests/models/cpm/test_tokenization_cpm.py::XLNetModelTest::test_can_use_safetensors - Exception: Class XLNetLMHeadModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_loss.weight', 'transformer.word_embedding.weight'}] FAILED tests/models/ctrl/test_modeling_ctrl.py::CTRLModelTest::test_can_use_safetensors - Exception: Class CTRLLMHeadModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'transformer.w.weight', 'lm_head.weight'}] FAILED tests/models/deberta/test_modeling_deberta.py::DebertaModelTest::test_can_use_safetensors - Exception: Class DebertaForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'deberta.embeddings.word_embeddings.weight', 'cls.predictions.decoder.weight'}, {'cls.predictions.bias', 'cls.predictions.decoder.bias'}] FAILED tests/models/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_can_use_safetensors - Exception: Class DebertaV2ForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'deberta.embeddings.word_embeddings.weight', 'cls.predictions.decoder.weight'}, {'cls.predictions.bias', 'cls.predictions.decoder.bias'}] FAILED tests/models/deformable_detr/test_modeling_deformable_detr.py::DeformableDetrModelTest::test_can_use_safetensors - Exception: Class DeformableDetrForObjectDetection cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'class_embed.0.weight', 'class_embed.1.weight'}, {'class_embed.1.bias', 'class_embed.0.bias'}, {'bbox_embed.0.layers.0.weight', 'bbox_embed.1.layers.0.weight'}, {'bbox_embed.1.layers.0.bias', 'bbox_embed.0.layers.0.bias'}, {'bbox_embed.0.layers.1.weight', 'bbox_embed.1.layers.1.weight'}, {'bbox_embed.1.layers.1.bias', 'bbox_embed.0.layers.1.bias'}, {'bbox_embed.1.layers.2.weight', 'bbox_embed.0.layers.2.weight'}, {'bbox_embed.0.layers.2.bias', 'bbox_embed.1.layers.2.bias'}] FAILED tests/models/deta/test_modeling_deta.py::DetaModelTest::test_can_use_safetensors - Exception: Class DetaForObjectDetection cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'bbox_embed.0.layers.0.weight', 'model.decoder.bbox_embed.0.layers.0.weight'}, {'model.decoder.bbox_embed.0.layers.0.bias', 'bbox_embed.0.layers.0.bias'}, {'bbox_embed.0.layers.1.weight', 'model.decoder.bbox_embed.0.layers.1.weight'}, {'bbox_embed.0.layers.1.bias', 'model.decoder.bbox_embed.0.layers.1.bias'}, {'model.decoder.bbox_embed.0.layers.2.weight', 'bbox_embed.0.layers.2.weight'}, {'bbox_embed.0.layers.2.bias', 'model.decoder.bbox_embed.0.layers.2.bias'}, {'bbox_embed.1.layers.0.weight', 
'model.decoder.bbox_embed.1.layers.0.weight'}, {'model.decoder.bbox_embed.1.layers.0.bias', 'bbox_embed.1.layers.0.bias'}, {'model.decoder.bbox_embed.1.layers.1.weight', 'bbox_embed.1.layers.1.weight'}, {'bbox_embed.1.layers.1.bias', 'model.decoder.bbox_embed.1.layers.1.bias'}, {'bbox_embed.1.layers.2.weight', 'model.decoder.bbox_embed.1.layers.2.weight'}, {'model.decoder.bbox_embed.1.layers.2.bias', 'bbox_embed.1.layers.2.bias'}] FAILED tests/models/distilbert/test_modeling_distilbert.py::DistilBertModelTest::test_can_use_safetensors - Exception: Class DistilBertForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'distilbert.embeddings.word_embeddings.weight', 'vocab_projector.weight'}] FAILED tests/models/electra/test_modeling_electra.py::ElectraModelTest::test_can_use_safetensors - Exception: Class ElectraForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'generator_lm_head.weight', 'electra.embeddings.word_embeddings.weight'}] FAILED tests/models/ernie/test_modeling_ernie.py::ErnieModelTest::test_can_use_safetensors - Exception: Class ErnieForCausalLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'ernie.embeddings.word_embeddings.weight', 'cls.predictions.decoder.weight'}, {'cls.predictions.bias', 'cls.predictions.decoder.bias'}] FAILED tests/models/esm/test_modeling_esm.py::EsmModelTest::test_can_use_safetensors - Exception: Class EsmForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'esm.embeddings.word_embeddings.weight', 'lm_head.decoder.weight'}] FAILED tests/models/flaubert/test_modeling_flaubert.py::FlaubertModelTest::test_can_use_safetensors - Exception: Class FlaubertWithLMHeadModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'pred_layer.proj.weight', 'transformer.embeddings.weight'}] FAILED tests/models/fnet/test_modeling_fnet.py::FNetModelTest::test_can_use_safetensors - Exception: Class FNetForPreTraining cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'fnet.embeddings.word_embeddings.weight', 'cls.predictions.decoder.weight'}, {'cls.predictions.decoder.bias', 'cls.predictions.bias'}] FAILED tests/models/fsmt/test_modeling_fsmt.py::FSMTModelTest::test_can_use_safetensors - Exception: Class FSMTModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'decoder.embed_tokens.weight', 'decoder.output_projection.weight'}] FAILED tests/models/funnel/test_modeling_funnel.py::FunnelModelTest::test_can_use_safetensors - Exception: Class FunnelForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'funnel.embeddings.word_embeddings.weight', 'lm_head.weight'}] FAILED tests/models/gpt2/test_modeling_gpt2.py::GPT2ModelTest::test_can_use_safetensors - Exception: Class GPT2LMHeadModel cannot 
be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'transformer.wte.weight', 'lm_head.weight'}] FAILED tests/models/flava/test_modeling_flava.py::FlavaForPreTrainingTest::test_can_use_safetensors - Exception: Class FlavaForPreTraining cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'mim_head.bias', 'mim_head.decoder.bias'}, {'mlm_head.decoder.bias', 'mlm_head.bias'}, {'mmm_image_head.decoder.bias', 'mmm_image_head.bias'}, {'mmm_text_head.decoder.bias', 'mmm_text_head.bias'}] FAILED tests/models/gpt_neox_japanese/test_modeling_gpt_neox_japanese.py::GPTNeoXModelJapaneseTest::test_can_use_safetensors - Exception: Class GPTNeoXJapaneseForCausalLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'embed_out.weight', 'gpt_neox_japanese.embed_in.weight'}] FAILED tests/models/gptsan_japanese/test_modeling_gptsan_japanese.py::GPTSanJapaneseForConditionalGenerationTest::test_can_use_safetensors - Exception: Class GPTSanJapaneseForConditionalGeneration cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_head.weight', 'model.embed_tokens.weight'}] FAILED tests/models/ibert/test_modeling_ibert.py::IBertModelTest::test_can_use_safetensors - Exception: Class IBertForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_head.decoder.weight', 'ibert.embeddings.word_embeddings.weight'}, {'lm_head.bias', 'lm_head.decoder.bias'}] FAILED tests/models/layoutlm/test_modeling_layoutlm.py::LayoutLMModelTest::test_can_use_safetensors - Exception: Class LayoutLMForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'cls.predictions.decoder.weight', 'layoutlm.embeddings.word_embeddings.weight'}, {'cls.predictions.decoder.bias', 'cls.predictions.bias'}] FAILED tests/models/led/test_modeling_led.py::LEDModelTest::test_can_use_safetensors - Exception: Class LEDModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'encoder.embed_tokens.weight', 'shared.weight', 'decoder.embed_tokens.weight'}] FAILED tests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_can_use_safetensors - Exception: Class LongformerForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'longformer.embeddings.word_embeddings.weight', 'lm_head.decoder.weight'}, {'lm_head.decoder.bias', 'lm_head.bias'}] FAILED tests/models/longt5/test_modeling_longt5.py::LongT5ModelTest::test_can_use_safetensors - Exception: Class LongT5Model cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'encoder.embed_tokens.weight', 'shared.weight', 'decoder.embed_tokens.weight'}] FAILED tests/models/lxmert/test_modeling_lxmert.py::LxmertModelTest::test_can_use_safetensors - 
Exception: Class LxmertForPreTraining cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lxmert.embeddings.word_embeddings.weight', 'cls.predictions.decoder.weight'}] FAILED tests/models/longt5/test_modeling_longt5.py::LongT5TGlobalModelTest::test_can_use_safetensors - Exception: Class LongT5Model cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'encoder.embed_tokens.weight', 'shared.weight', 'decoder.embed_tokens.weight'}] FAILED tests/models/m2m_100/test_modeling_m2m_100.py::M2M100ModelTest::test_can_use_safetensors - Exception: Class M2M100Model cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'shared.weight', 'decoder.embed_tokens.weight', 'encoder.embed_tokens.weight'}] FAILED tests/models/marian/test_modeling_marian.py::MarianModelTest::test_can_use_safetensors - Exception: Class MarianModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'shared.weight', 'decoder.embed_tokens.weight', 'encoder.embed_tokens.weight'}] FAILED tests/models/longt5/test_modeling_longt5.py::LongT5EncoderOnlyModelTest::test_can_use_safetensors - Exception: Class LongT5EncoderModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'encoder.embed_tokens.weight', 'shared.weight'}] FAILED tests/models/longt5/test_modeling_longt5.py::LongT5EncoderOnlyTGlobalModelTest::test_can_use_safetensors - Exception: Class LongT5EncoderModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'encoder.embed_tokens.weight', 'shared.weight'}] FAILED tests/models/marian/test_modeling_marian.py::MarianStandaloneDecoderModelTest::test_can_use_safetensors - Exception: Class MarianForCausalLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_head.weight', 'model.decoder.embed_tokens.weight'}] FAILED tests/models/mbart/test_modeling_mbart.py::MBartModelTest::test_can_use_safetensors - Exception: Class MBartModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'shared.weight', 'decoder.embed_tokens.weight', 'encoder.embed_tokens.weight'}] FAILED tests/models/mbart/test_modeling_mbart.py::MBartStandaloneDecoderModelTest::test_can_use_safetensors - Exception: Class MBartForCausalLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_head.weight', 'model.decoder.embed_tokens.weight'}] FAILED tests/models/megatron_bert/test_modeling_megatron_bert.py::MegatronBertModelTest::test_can_use_safetensors - Exception: Class MegatronBertForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'bert.embeddings.word_embeddings.weight', 'cls.predictions.decoder.weight'}, 
{'cls.predictions.bias', 'cls.predictions.decoder.bias'}] FAILED tests/models/mobilebert/test_modeling_mobilebert.py::MobileBertModelTest::test_can_use_safetensors - Exception: Class MobileBertForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'cls.predictions.decoder.weight', 'mobilebert.embeddings.word_embeddings.weight'}, {'cls.predictions.bias', 'cls.predictions.decoder.bias'}] FAILED tests/models/mpnet/test_modeling_mpnet.py::MPNetModelTest::test_can_use_safetensors - Exception: Class MPNetForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'mpnet.embeddings.word_embeddings.weight', 'lm_head.decoder.weight'}, {'lm_head.decoder.bias', 'lm_head.bias'}] FAILED tests/models/mvp/test_modeling_mvp.py::MvpModelTest::test_can_use_safetensors - Exception: Class MvpModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'shared.weight', 'decoder.embed_tokens.weight', 'encoder.embed_tokens.weight'}] FAILED tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_can_use_safetensors - Exception: Class NezhaForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'nezha.embeddings.word_embeddings.weight', 'cls.predictions.decoder.weight'}, {'cls.predictions.bias', 'cls.predictions.decoder.bias'}] FAILED tests/models/mvp/test_modeling_mvp.py::MvpStandaloneDecoderModelTest::test_can_use_safetensors - Exception: Class MvpForCausalLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_head.weight', 'model.decoder.embed_tokens.weight'}] FAILED tests/models/nllb_moe/test_modeling_nllb_moe.py::NllbMoeModelTest::test_can_use_safetensors - Exception: Class NllbMoeModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'shared.weight', 'decoder.embed_tokens.weight', 'encoder.embed_tokens.weight'}] FAILED tests/models/nystromformer/test_modeling_nystromformer.py::NystromformerModelTest::test_can_use_safetensors - Exception: Class NystromformerForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'cls.predictions.decoder.weight', 'nystromformer.embeddings.word_embeddings.weight'}, {'cls.predictions.bias', 'cls.predictions.decoder.bias'}] FAILED tests/models/openai/test_modeling_openai.py::OpenAIGPTModelTest::test_can_use_safetensors - Exception: Class OpenAIGPTLMHeadModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_head.weight', 'transformer.tokens_embed.weight'}] FAILED tests/models/opt/test_modeling_opt.py::OPTModelTest::test_can_use_safetensors - Exception: Class OPTForCausalLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_head.weight', 'model.decoder.embed_tokens.weight'}] FAILED 
tests/models/pegasus/test_modeling_pegasus.py::PegasusModelTest::test_can_use_safetensors - Exception: Class PegasusModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'encoder.embed_tokens.weight', 'shared.weight', 'decoder.embed_tokens.weight'}] FAILED tests/models/pegasus/test_modeling_pegasus.py::PegasusStandaloneDecoderModelTest::test_can_use_safetensors - Exception: Class PegasusForCausalLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_head.weight', 'model.decoder.embed_tokens.weight'}] FAILED tests/models/pegasus_x/test_modeling_pegasus_x.py::PegasusXModelTest::test_can_use_safetensors - Exception: Class PegasusXModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'encoder.embed_tokens.weight', 'shared.weight', 'decoder.embed_tokens.weight'}] FAILED tests/models/pix2struct/test_modeling_pix2struct.py::Pix2StructTextImageModelTest::test_can_use_safetensors - Exception: Class Pix2StructForConditionalGeneration cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'decoder.lm_head.weight', 'decoder.embed_tokens.weight'}] FAILED tests/models/plbart/test_modeling_plbart.py::PLBartModelTest::test_can_use_safetensors - Exception: Class PLBartModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'encoder.embed_tokens.weight', 'shared.weight', 'decoder.embed_tokens.weight'}] FAILED tests/models/plbart/test_modeling_plbart.py::PLBartStandaloneDecoderModelTest::test_can_use_safetensors - Exception: Class PLBartForCausalLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_head.weight', 'model.decoder.embed_tokens.weight'}] FAILED tests/models/prophetnet/test_modeling_prophetnet.py::ProphetNetModelTest::test_can_use_safetensors - Exception: Class ProphetNetModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'word_embeddings.weight', 'encoder.word_embeddings.weight', 'decoder.word_embeddings.weight'}] FAILED tests/models/realm/test_modeling_realm.py::RealmModelTest::test_can_use_safetensors - Exception: Class RealmEmbedder cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'cls.predictions.decoder.bias', 'cls.predictions.bias'}] FAILED tests/models/reformer/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_can_use_safetensors - Exception: Class ReformerModelWithLMHead cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_head.decoder.bias', 'lm_head.bias'}] FAILED tests/models/prophetnet/test_modeling_prophetnet.py::ProphetNetStandaloneDecoderModelTest::test_can_use_safetensors - Exception: Class ProphetNetForCausalLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory 
on disk and potential differences when loading them again: [{'prophetnet.decoder.word_embeddings.weight', 'lm_head.weight'}] FAILED tests/models/reformer/test_modeling_reformer.py::ReformerLSHAttnModelTest::test_can_use_safetensors - Exception: Class ReformerModelWithLMHead cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_head.decoder.bias', 'lm_head.bias'}] FAILED tests/models/roc_bert/test_modeling_roc_bert.py::RoCBertModelTest::test_can_use_safetensors - Exception: Class RoCBertForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'roc_bert.embeddings.word_embeddings.weight', 'cls.predictions.decoder.weight'}, {'cls.predictions.bias', 'cls.predictions.decoder.bias'}] FAILED tests/models/roformer/test_modeling_roformer.py::RoFormerModelTest::test_can_use_safetensors - Exception: Class RoFormerForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'cls.predictions.decoder.weight', 'roformer.embeddings.word_embeddings.weight'}, {'cls.predictions.decoder.bias', 'cls.predictions.bias'}] FAILED tests/models/speech_to_text/test_modeling_speech_to_text.py::Speech2TextModelTest::test_can_use_safetensors - Exception: Class Speech2TextForConditionalGeneration cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_head.weight', 'model.decoder.embed_tokens.weight'}] FAILED tests/models/speech_to_text_2/test_modeling_speech_to_text_2.py::Speech2Text2StandaloneDecoderModelTest::test_can_use_safetensors - Exception: Class Speech2Text2ForCausalLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_head.weight', 'model.decoder.embed_tokens.weight'}] FAILED tests/models/squeezebert/test_modeling_squeezebert.py::SqueezeBertModelTest::test_can_use_safetensors - Exception: Class SqueezeBertForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'cls.predictions.decoder.weight', 'transformer.embeddings.word_embeddings.weight'}, {'cls.predictions.decoder.bias', 'cls.predictions.bias'}] FAILED tests/models/speecht5/test_modeling_speecht5.py::SpeechT5ForSpeechToTextTest::test_can_use_safetensors - Exception: Class SpeechT5ForSpeechToText cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'speecht5.decoder.prenet.embed_tokens.weight', 'text_decoder_postnet.lm_head.weight'}] FAILED tests/models/switch_transformers/test_modeling_switch_transformers.py::SwitchTransformersModelTest::test_can_use_safetensors - Exception: Class SwitchTransformersModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'shared.weight', 'decoder.embed_tokens.weight', 'encoder.embed_tokens.weight'}] FAILED tests/models/t5/test_modeling_t5.py::T5ModelTest::test_can_use_safetensors - Exception: Class T5Model cannot be saved using safetensors: Some tensors share memory, 
this will lead to duplicate memory on disk and potential differences when loading them again: [{'encoder.embed_tokens.weight', 'shared.weight', 'decoder.embed_tokens.weight'}] FAILED tests/models/t5/test_modeling_t5.py::T5EncoderOnlyModelTest::test_can_use_safetensors - Exception: Class T5EncoderModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'encoder.embed_tokens.weight', 'shared.weight'}] FAILED tests/models/switch_transformers/test_modeling_switch_transformers.py::SwitchTransformersEncoderOnlyModelTest::test_can_use_safetensors - Exception: Class SwitchTransformersEncoderModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'shared.weight', 'encoder.embed_tokens.weight'}] FAILED tests/models/tapas/test_modeling_tapas.py::TapasModelTest::test_can_use_safetensors - Exception: Class TapasForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'cls.predictions.decoder.weight', 'tapas.embeddings.word_embeddings.weight'}, {'cls.predictions.decoder.bias', 'cls.predictions.bias'}] FAILED tests/models/transfo_xl/test_modeling_transfo_xl.py::TransfoXLModelTest::test_can_use_safetensors - Exception: Class TransfoXLLMHeadModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'transformer.word_emb.emb_layers.0.weight', 'crit.out_layers.0.weight'}, {'crit.out_layers.1.weight', 'transformer.word_emb.emb_layers.1.weight'}, {'crit.out_layers.2.weight', 'transformer.word_emb.emb_layers.2.weight'}, {'crit.out_layers.3.weight', 'transformer.word_emb.emb_layers.3.weight'}, {'crit.out_projs.1', 'transformer.word_emb.emb_projs.1'}, {'crit.out_projs.2', 'transformer.word_emb.emb_projs.2'}, {'transformer.word_emb.emb_projs.3', 'crit.out_projs.3'}] FAILED tests/models/trocr/test_modeling_trocr.py::TrOCRStandaloneDecoderModelTest::test_can_use_safetensors - Exception: Class TrOCRForCausalLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'output_projection.weight', 'model.decoder.embed_tokens.weight'}] FAILED tests/models/vilt/test_modeling_vilt.py::ViltModelTest::test_can_use_safetensors - Exception: Class ViltForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'mlm_score.bias', 'mlm_score.decoder.bias'}] FAILED tests/models/visual_bert/test_modeling_visual_bert.py::VisualBertModelTest::test_can_use_safetensors - Exception: Class VisualBertForRegionToPhraseAlignment cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'cls.predictions.bias', 'cls.predictions.decoder.bias'}] FAILED tests/models/xlm/test_modeling_xlm.py::XLMModelTest::test_can_use_safetensors - Exception: Class XLMWithLMHeadModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'pred_layer.proj.weight', 'transformer.embeddings.weight'}] FAILED 
tests/models/xglm/test_modeling_xglm.py::XGLMModelTest::test_can_use_safetensors - Exception: Class XGLMForCausalLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_head.weight', 'model.embed_tokens.weight'}] FAILED tests/models/xlnet/test_modeling_xlnet.py::XLNetModelTest::test_can_use_safetensors - Exception: Class XLNetLMHeadModel cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_loss.weight', 'transformer.word_embedding.weight'}] FAILED tests/models/yoso/test_modeling_yoso.py::YosoModelTest::test_can_use_safetensors - Exception: Class YosoForMaskedLM cannot be saved using safetensors: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'cls.predictions.decoder.weight', 'yoso.embeddings.word_embeddings.weight'}, {'cls.predictions.decoder.bias', 'cls.predictions.bias'}] == 90 failed, 10714 passed, 8750 skipped, 2172 warnings in 1202.20s (0:20:02) == Exited with code exit status 1 ```
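For context on the failure wall above, the error comes from `safetensors` itself: its PyTorch save path refuses a state dict in which two entries alias the same storage, which is exactly what tied embeddings produce. A minimal sketch reproducing the message (assuming a recent `safetensors` version; the tensor names here are made up for illustration):

```python
import torch
from safetensors.torch import save_file

weight = torch.zeros(4, 4)
# Two state-dict entries pointing at the same storage, as with tied input/output embeddings.
state_dict = {"shared.weight": weight, "lm_head.weight": weight}

try:
    save_file(state_dict, "tied.safetensors")
except Exception as err:
    # Expected: "Some tensors share memory, this will lead to duplicate memory on disk ..."
    print(err)
```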
03-29-2023 09:38:17
03-29-2023 09:38:17
_The documentation is not available anymore as the PR was closed or merged._<|||||>This all happens because of the design decision in `safetensors` to error out in case of tied weights. Nothing wrong will happen if users actually save those models and reload them, as the weights are re-tied by Transformers. I think instead of forcing a change in Transformers, `safetensors` should just adapt to its users and only issue a warning when asked to save a state dict with tied weights, or at the very least have an option to ignore tied weights.<|||||>> I'm not so sure. The tests are currently failing hard (incorrect reloaded tensors) because of incorrect configuration within some models: - Llama - ImageGPT - Blip2 - Pix2struct https://app.circleci.com/pipelines/github/huggingface/transformers/60854/workflows/4049cf2e-afe5-40cb-a218-889434cb0b80/jobs/746519 While it is not **currently** an issue because the saved torch files are creating the aliasing, and so it is actually unpacked during loading, I think all 4 (only checked Llama for now) have an incorrect `_keys_to_ignore_on_load`, or an incorrect `tie_word_embeddings`. If we make the hard error a simple warning, that would just lead to wrong models reloaded from safetensors. (The weight will get ignored with no warning to the user, and yet the weights won't be tied, so the output head will be random.) For Llama: - https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/configuration_llama.py#L94 Tied word embeddings is set to False. That **should** mean, if I understand correctly, that the embedding and the lm_head are actually disjoint tensors. However, this is set: https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L633 meaning that a file lacking this aliasing would load without warning and be actually incorrect. This is true in the pure torch world and has nothing to do with safetensors. It happens to be a minor issue because we currently save the alias. Normally, we're saved because the conversion script will disallow this conversion (since the reloaded model is incorrect). I am checking this at this instant. We're also saved because `save_pretrained(.., safe_serialization=True)` will simply fail right now. So as far as I know, we're not creating bogus files at the moment and only users manually discarding the alias will see the issue, which seems highly unlikely. For these 4 models, provided they are the same issue, either we need to fix the configuration and retie the weights (which would make the current proposed fix just work) or actually remove the `ignore_on_load` and make sure the tensors are actually disjoint. <|||||>Yet we went from 90 failures to just 4 models. I'm not saying Transformers is perfect and does not need any fix at all. Even with safetensors enabling the save of state dictionaries having tied weights, we should make sure we only save one of those weights to have the most efficient safetensors checkpoints. I'm just highlighting that an API that is too rigid will never be broadly used, so I really think safetensors should add support for bypassing this hard error.<|||||>(Also can confirm that the embeddings and LM head are different tensors for Llama-7b at least, so the `_keys_to_ignore_on_load_missing` is just wrongly set.)<|||||>Confirmed on ImageGPT it's the same. ~It's funny for Llama though, because the model tester does share the weights though...~ No it doesn't, my bad.
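For reference, a quick way to run the kind of check mentioned above — whether a model's input embeddings and LM head actually share storage — is to compare data pointers. This is only a sketch run on `gpt2`, a small public checkpoint whose embeddings are tied (the thread reports the opposite result for Llama-7b):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # small public stand-in, not Llama
input_embeddings = model.get_input_embeddings().weight
output_embeddings = model.get_output_embeddings().weight

# Tied weights alias the same storage, so their data pointers match.
print(input_embeddings.data_ptr() == output_embeddings.data_ptr())  # True for gpt2
```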
> I'm just highlighting that an API that is too rigid will never be broadly used, so I really think safetensors should add support for bypassing this hard error. I respectfully disagree. You're not wrong, but I really think it's not the case here (simply allowing it is just allowing ourselves to shoot ourselves in the foot). While it's definitely inconvenient, saving a file that will not get reloaded properly is >>> worse than preventing the save in the first place. And the biggest problem is that it might take a while before the issue in the file is found.<|||||>In any case, a fix will need to be different from what is suggested in the PR: the `_keys_to_ignore_on_save` cannot always be set to ignore the decoder at the class level, because the option to break that connection exists in the config. So we may have GPT2 models, for instance, with an `lm_head` weight distinct from the embeddings. The example actually exists with the T5 architecture: the canonical T5 checkpoints have the decoder tied to the embeddings but not T0pp (see [here](https://huggingface.co/bigscience/T0pp/blob/main/config.json)). So `_keys_to_ignore_on_save` can only contain the names of layers that are always tied (so in the case of T5 it could contain `"encoder.embed_tokens"`, which is always tied to the shared layer, for instance) or always generated at init. Likewise, you can't write code like in this PR that always deletes keys based on `_keys_to_ignore_on_load_xxx`, for the same reasons. What could be done instead during saving with safetensors is: - use Accelerate `find_tied_parameters` (Accelerate will soon be a dependency on the torch side anyway, this might be the turning point to actually do it) to identify the tied parameters - delete all tied parameters but the first in each group found - then save the rest. We might need a new class attribute in XxxPreTrainedModel for the edge case where the *main* tied parameter is not the first one as returned by `find_tied_parameters`, but I'm not even sure it's needed, as the tie_weights is done before the actual `load_state_dict`, so loading tensor data into any of the tied weights should automatically populate the data in all of them. But the code needs to be dynamic (depending on the actual model seen), not static (in the sense that it uses the class variables).<|||||>> delete all tied parameters but the first in each group found But we need to know which ones are actually used to recreate the others then. There's a main weight, and the others are deduced from it. At least to properly not get a warning. > Likewise you can't write code like in this PR that always deletes keys based on _keys_to_ignore_on_load_xxx for the same reasons. Couldn't it remove keys that are both in the `_ignore_on_load` AND shared pointers then? That allows untying weights AND knowing the name of the main weight (the only one which is not in those keys). In general I'm confused about having weights untied at runtime, since if you untie them, save your model, and erase the tied weights, then you would reload without a warning and get an erroneous model. A very long shot for sure, but it's the reason why I think `ignore_on_load` and `ignore_on_save` play relatively the same role.
> find_tied_parameters Nice, but I don't think a full function from a dependency is necessary for that:
```python
from collections import defaultdict

# Checking that the tensor sharing is correct
# (`model_tied` is any loaded model whose weights have been tied)
ptrs = defaultdict(list)
for k, v in model_tied.state_dict().items():
    ptrs[v.data_ptr()].append(k)
shared_ptrs = {k: v for k, v in ptrs.items() if len(v) > 1}
```
is enough. > Not even sure it's needed as the tie_weights is done before the actual load_state_dict so loading tensor data in any of the tied weights should automatically populate the data in all of them. I confirmed. It just erroneously raises a warning, but the underlying model is fine. However, I still think having a consistent name for the main weight would be better in general. <|||||>> Couldn't it remove keys that are both in the _ignore_on_load AND shared pointers then? That allows untying weights AND knowing the name of the main weight (the only one which is not in those keys) You can take those names as suggestions but you will still need to leave only one weight per group of tied parameters or risk getting an error from safetensors. While you are fine with the `safetensors` save function not working for some models, I am not fine with the same behavior in Transformers. > Nice, but I don't think a full function from a dependency is necessary Like I said, Accelerate is becoming a torch dependency anyway (since the Trainer will be rewritten to use it), so I don't see how it's wrong to use it. Your snippet of code will not present the groups of shared parameters (T5 has 4 of them tied together) as nicely, and you'd need to add tests for it (whereas Accelerate already heavily tests its utils). > In general I'm confused about having weights untied at runtime, since if you untie them, save your model, and erase the tied weights, then you would reload without a warning and get an erroneous model. I have no idea what this means. Are you referring to the situation where a user breaks the tie weights connection somehow without changing the model config and then saves the weights and reloads the model with `from_pretrained`? That would also fail in torch and I don't think I have ever seen a user complain about it.<|||||>@sgugger All tests are now passing with relatively minor code changes. I think we could push out some fixes to their respective PRs (blip2, pix2struct, llama) since what this uncovered seems to really be affecting current models. For DETA, since it's marked exotic, I think the proposed fix could work. And just to note, my insistence on disallowing aliasing doesn't come from nowhere. Having *any* aliasing like torch does instantly renders lazy loading buggy. If you do:
```
tensor1 = safe_open(filename).get_tensor("lm_head.weight")
tensor2 = safe_open(filename).get_tensor("wte.weight")
```
then necessarily the tensors aren't shared, while they are if you did `weights = load_file(filename)` (if we respected the aliasing, which we probably should, since otherwise fine-tuning is screwed). So enabling aliasing forces safetensors to give up lazy loading. The bar to do that is pretty high in my mind since lazy loading is a very nice feature we get out of it. Note: silently dropping tensors on save in safetensors will necessarily lead to bugs in transformers too; that's why I'm not considering it as an option.
(Since the reloaded file will be wrong)<|||||>I'm not sure why you are ignoring the comments I made with respect to this PR and safetensors as it is now and go back to defend your choice of API for safetensors (which I still think is wrong but I'm done debating this). So once again: - the changes in `modeling_utils` should only leave one of every group of tied weights, so that the save with `safetensors` does not fail. `_keys_to_ignore_on_load_missing` can inform which weight to drop, but if that variable is incomplete (like in DETA, or any other model that does not normally have tied weights but where a user chose to apply `tie_weights` for their purposes), we should still drop something. - the proposed DETA change cannot be accepted as it will yield to silent bugs for users not having tied weights and an incomplete `state_dict`.<|||||>> but if that variable is incomplete (like in DETA, or any other model that does not normally have tied weights but where a user chose to apply tie_weights for their purposes), we should still drop something. I've done that. Adding the necessary other piece which is dropping missing keys on shared tensors regardless of the `_keys_to_ignore` for shared tensors. That way we don't trigger the warning when loading from safetensors even without the key being present (which it would otherwise). Doing both allows to remove the needs of the deta key modification. (Still needs to fix the deepcopy, again nothing to do with safetensors, but the parameters are cloned and not shared and so the tensors are not properly filled for layers > 1 without the fix)<|||||>@sgugger If you want to do a final check (maybe we want a global `warn_once` too.)<|||||>Hey @Narsil The doctest for `DetaForObjectDetection` fails after this PR. You can run the code snippet below. Could you take a look 🙏 Thanks. ###previous results ```python {'scores': tensor([0.6831, 0.6826, 0.5684, 0.5464], grad_fn=<IndexBackward0>), 'labels': tensor([17, 17, 75, 75]), 'boxes': tensor([[345.8479, 23.6753, 639.8561, 372.8265], [ 8.7996, 52.4945, 316.9348, 473.4509], [ 40.0171, 73.7522, 175.9579, 117.3332], [333.6797, 77.1251, 370.1172, 187.5138]], grad_fn=<IndexBackward0>)} ``` ###now ```python {'scores': tensor([], grad_fn=<IndexBackward0>), 'labels': tensor([], dtype=torch.int64), 'boxes': tensor([], size=(0, 4), grad_fn=<IndexBackward0>)} ``` ``` from transformers import AutoImageProcessor, DetaForObjectDetection from PIL import Image import requests import torch url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) image_processor = AutoImageProcessor.from_pretrained("jozhang97/deta-swin-large") model = DetaForObjectDetection.from_pretrained("jozhang97/deta-swin-large") inputs = image_processor(images=image, return_tensors="pt") outputs = model(**inputs) # convert outputs (bounding boxes and class logits) to COCO API target_sizes = torch.tensor([image.size[::-1]]) results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[ 0 ] print(results) ```<|||||>Another one affected ```bash tests/models/vit/test_modeling_vit.py::ViTModelIntegrationTest::test_inference_fp16 (line 136) ValueError: weight is on the meta device, we need a value to put in on 1. 
``` ### Full trace ```bash self = <tests.models.vit.test_modeling_vit.ViTModelIntegrationTest testMethod=test_inference_fp16> @slow @require_accelerate @require_torch_gpu def test_inference_fp16(self): r""" A small test to make sure that inference work in half precision without any problem. """ > model = ViTModel.from_pretrained("facebook/dino-vits8", torch_dtype=torch.float16, device_map="auto") tests/models/vit/test_modeling_vit.py:324: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ src/transformers/modeling_utils.py:2760: in from_pretrained dispatch_model(model, device_map=device_map, offload_dir=offload_folder, offload_index=offload_index) /usr/local/lib/python3.8/dist-packages/accelerate/big_modeling.py:370: in dispatch_model attach_align_device_hook_on_blocks( /usr/local/lib/python3.8/dist-packages/accelerate/hooks.py:478: in attach_align_device_hook_on_blocks add_hook_to_module(module, hook) /usr/local/lib/python3.8/dist-packages/accelerate/hooks.py:155: in add_hook_to_module module = hook.init_hook(module) /usr/local/lib/python3.8/dist-packages/accelerate/hooks.py:251: in init_hook set_module_tensor_to_device(module, name, self.execution_device) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ module = Linear(in_features=384, out_features=384, bias=True), tensor_name = 'weight', device = 0, value = None, dtype = None def set_module_tensor_to_device( module: nn.Module, tensor_name: str, device: Union[int, str, torch.device], value: Optional[torch.Tensor] = None, dtype: Optional[Union[str, torch.dtype]] = None, ): """ A helper function to set a given tensor (parameter of buffer) of a module on a specific device (note that doing `param.to(device)` creates a new tensor not linked to the parameter, which is why we need this function). Args: module (`torch.nn.Module`): The module in which the tensor we want to move lives. param_name (`str`): The full name of the parameter/buffer. device (`int`, `str` or `torch.device`): The device on which to set the tensor. value (`torch.Tensor`, *optional*): The value of the tensor (useful when going from the meta device to any other device). dtype (`torch.dtype`, *optional*): If passed along the value of the parameter will be cast to this `dtype`. Otherwise, `value` will be cast to the dtype of the existing parameter in the model. """ # Recurse if needed if "." 
in tensor_name: splits = tensor_name.split(".") for split in splits[:-1]: new_module = getattr(module, split) if new_module is None: raise ValueError(f"{module} has no attribute {split}.") module = new_module tensor_name = splits[-1] if tensor_name not in module._parameters and tensor_name not in module._buffers: raise ValueError(f"{module} does not have a parameter or a buffer named {tensor_name}.") is_buffer = tensor_name in module._buffers old_value = getattr(module, tensor_name) if old_value.device == torch.device("meta") and device not in ["meta", torch.device("meta")] and value is None: > raise ValueError(f"{tensor_name} is on the meta device, we need a `value` to put in on {device}.") E ValueError: weight is on the meta device, we need a `value` to put in on 0. ```<|||||>https://github.com/huggingface/transformers/pull/22656#pullrequestreview-1376414092
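To make the fix debated in this thread more concrete, here is a rough sketch of the save-side idea (group parameters by the storage they point to, keep one name per tied group, then hand the rest to safetensors). This is an illustration only, not the code that actually landed in the PR, and which name survives per group is exactly the open question discussed above; `gpt2` is used as a small public checkpoint with tied embeddings:

```python
from collections import defaultdict

from safetensors.torch import save_file
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
state_dict = model.state_dict()

# Group parameter names by the storage they alias.
ptrs = defaultdict(list)
for name, tensor in state_dict.items():
    ptrs[tensor.data_ptr()].append(name)

# Keep only the first name of each tied group, drop the rest.
to_drop = {name for names in ptrs.values() if len(names) > 1 for name in names[1:]}
filtered = {name: tensor.contiguous() for name, tensor in state_dict.items() if name not in to_drop}

save_file(filtered, "model.safetensors")
```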
transformers
22,436
closed
`stopping_criteria` not working with llama
### System Info

I am generating text from a llama-13b model, but it continues generating even though it met the stopping criteria. The stopping criteria works fine with other models such as GPT-J 6B.

I loaded llama-13b with `model = AutoModelForCausalLM.from_pretrained(model_name, device_map='auto', load_in_8bit=True)` and my stopping criteria list looks like below:

```python
stopping_criteria_list = transformers.StoppingCriteriaList([
    _SentinelTokenStoppingCriteria(
        sentinel_token_ids=tokenizer(
            "\n",
            add_special_tokens=False,
            return_tensors="pt",
        ).input_ids.to("cuda"),
        starting_idx=tokenized_items.input_ids.shape[-1])
])
```

Thank you.

### Who can help?

_No response_

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

1. Load llama: `model = AutoModelForCausalLM.from_pretrained(model_name, device_map='auto', load_in_8bit=True)`
2. Make the stopping criteria:

```python
stopping_criteria_list = transformers.StoppingCriteriaList([
    _SentinelTokenStoppingCriteria(
        sentinel_token_ids=tokenizer(
            "\n",
            add_special_tokens=False,
            return_tensors="pt",
        ).input_ids.to("cuda"),
        starting_idx=tokenized_items.input_ids.shape[-1])
])

...

class _SentinelTokenStoppingCriteria(transformers.StoppingCriteria):

    def __init__(self, sentinel_token_ids: torch.LongTensor, starting_idx: int):
        transformers.StoppingCriteria.__init__(self)
        self.sentinel_token_ids = sentinel_token_ids
        self.starting_idx = starting_idx

    def __call__(self, input_ids: torch.LongTensor, _scores: torch.FloatTensor) -> bool:
        for sample in input_ids:
            trimmed_sample = sample[self.starting_idx:]
            # Can't unfold, output is still too tiny. Skip.
            if trimmed_sample.shape[-1] < self.sentinel_token_ids.shape[-1]:
                continue
            for window in trimmed_sample.unfold(0, self.sentinel_token_ids.shape[-1], 1):
                if torch.all(torch.eq(self.sentinel_token_ids, window)):
                    return True
        return False
```

3. Generate:

```python
model_output = model.generate(stopping_criteria=stopping_criteria_list, **tokenized_items, **generation_settings, pad_token_id=tokenizer.eos_token_id)
```

### Expected behavior

Stop generating when it generates `\n`.
03-29-2023 08:21:44
03-29-2023 08:21:44
cc @gante Note that this might require #22402 as the Llama tokenizer has a few bugs we are fixing.<|||||>@mk-cupist 👋 Let's see if the PR above fixes it. If it doesn't... we need to find a way to reproduce the issue with publicly available weights, otherwise it will be hell for me to figure out what's going on 😅 <|||||>@gante I tried the pr with `decapoda-research/llama-13b-hf` and changed tokenizer_config to LlamaTokenizer but it still does not work.<|||||>That repo is based on an intermediate state of the PR done to Transformers. It cannot even work with the main branch.<|||||>Is there any llama 13b that you know would worth try?<|||||>I tried `swype/deepshard-13B-raw` which uses `4.28.0.dev0` but doesn't work neither.<|||||>@mk-cupist let's wait for the resolution of #22402 :) Your issue depends on the use of the tokenizer, so it may be related<|||||>> @mk-cupist let's wait for the resolution of #22402 :) Your issue depends on the use of the tokenizer, so it may be related Thank you!<|||||>Thanks for this, it is also needed to get LLaMa performing correctly with Langchain chains.<|||||>I re-converted with conversion code from the pr, but still have the same issue.<|||||>I can reproduce the issue. Here is some additional code for testing: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('models/llama-7b/') >>> tokenizer.encode('\nYou:', add_special_tokens=False) [29871, 13, 3492, 29901] >>> tokenizer.decode([29871, 13, 3492, 29901]) ' \nYou:' >>> tokenizer.decode([13, 3492, 29901]) ' \nYou:' ``` There is always an extra space (29871) everywhere. Also, ```python >>> tokenizer.encode(' ', add_special_tokens=False) [259] >>> tokenizer.decode([259]) ' ' # two spaces >>> tokenizer.decode([29871]) ' ' # one space ``` If you encode a space, it becomes id 259 instead of 29871. And if you decode [259], the result is two spaces. Very confusing behavior overall.<|||||>@oobabooga Those issues will be fixed by #22402 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi i had experience the same problem and i have install transformers using git with the main branch the model seem to ignore the stop parms completely.<|||||>@poohzaza166 would you be able to share a stand-alone reproduction script? :)<|||||>https://github.com/poohzaza166/daiagnosing-llama<|||||>@poohzaza166 that is not a short reproducible script :) I can only give a hand if you help me pin down the issue with a short reproducer<|||||>```from transformers import LlamaForCausalLM from transformers import LlamaTokenizer import transformers import torch import random seeds = 56416 preprompt = '''Utachi's Persona: Meet utachi, a half-British, half-Japanese playful kind anime girl who loves sukiyaki and reading novels. On the side, she does a bit of programming and is a curious person who often does some odd things. Scenario: i am hanging out on discord and someone message me Utachi: Hi there! My name is utachi and I'm so excited to meet you all! I love exploring new things and trying out new hobbies. Do you have any recommendations for what I should try next? pooh: Hi nice to meet you! i am pooh nice to meet you. Are you interesting in watching anime? 
i am watching this new show call Bochi the rock it basiclly k-on but for people with social anxiety. Utachi: i see.''' model = 'Neko-Institute-of-Science/pygmalion-7b' tokenizer = LlamaTokenizer.from_pretrained(model) model = LlamaForCausalLM.from_pretrained(model, low_cpu_mem_usage=True, load_in_8bit=True, device_map='auto',early_stopping=True,) class _SentinelTokenStoppingCriteria(transformers.StoppingCriteria): def __init__(self, sentinel_token_ids: torch.LongTensor, starting_idx: int): transformers.StoppingCriteria.__init__(self) self.sentinel_token_ids = sentinel_token_ids self.starting_idx = starting_idx def __call__(self, input_ids: torch.LongTensor, _scores: torch.FloatTensor) -> bool: for sample in input_ids: trimmed_sample = sample[self.starting_idx:] # Can't unfold, output is still too tiny. Skip. if trimmed_sample.shape[-1] < self.sentinel_token_ids.shape[-1]: continue for window in trimmed_sample.unfold( 0, self.sentinel_token_ids.shape[-1], 1): if torch.all(torch.eq(self.sentinel_token_ids, window)): return True return False tokenized = tokenizer(preprompt, return_tensors="pt").to('cuda') stopping_criteria_list = transformers.StoppingCriteriaList([ _SentinelTokenStoppingCriteria( sentinel_token_ids=tokenizer( "pooh:", add_special_tokens=False, return_tensors="pt", ).input_ids.to("cuda"), starting_idx=tokenized.input_ids.shape[-1]) ]) random.seed(seeds) torch.manual_seed(seeds) if torch.cuda.is_available(): torch.cuda.manual_seed_all(seeds) token = model.generate(**tokenized, stopping_criteria=stopping_criteria_list, do_sample=True, max_new_tokens = 250, temperature=0.7, top_p=0.9, top_k = 0,typical_p = 1.0, repetition_penalty = 1.05, early_stopping=True) output = tokenizer.decode(token[0], skip_special_tokens=True) print(output)``` `Utachi's Persona: Meet utachi, a half-British, half-Japanese playful kind anime girl who loves sukiyaki and reading novels. On the side, she does a bit of programming and is a curious person who often does some odd things. Scenario: i am hanging out on discord and someone message me Utachi: Hi there! My name is utachi and I'm so excited to meet you all! I love exploring new things and trying out new hobbies. Do you have any recommendations for what I should try next? pooh: Hi nice to meet you! i am pooh nice to meet you. Are you interesting in watching anime? i am watching this new show call Bochi the rock it basiclly k-on but for people with social anxiety. Utachi: i see. i don't know much about anime but i've heard good things about it. i like to watch shows with interesting characters and plots. what kind of anime do you recommend? pooh: well if you like comedy i would recommend girlish number. it is a very cute show and has a lot of comedic scenes. also if you like romance i would recommend Kimi no Na wa or Your Name. if you like action i would recommend Attack on Titan or Death Note. Utachi: thank you so much! 
i will check those out and let you know my thoughts!` #if the code was working it should have not gen the pooh: token <|||||>> @poohzaza166 that is not a short reproducible script :) I can only give a hand if you help me pin down the issue with a short reproducer sorry about that i am using mini conda virtual env ``` model pip list Package Version Editable project location ------------------------ ------------ -------------------------------------------- absl-py 1.4.0 accelerate 0.18.0 aiofiles 23.1.0 aiohttp 3.8.4 aiosignal 1.3.1 altair 4.2.2 anyio 3.6.2 asttokens 2.2.1 async-timeout 4.0.2 attrs 23.1.0 backcall 0.2.0 bitsandbytes 0.38.1 Bottleneck 1.3.5 Brotli 1.0.9 brotlipy 0.7.0 cachetools 5.3.0 cairocffi 1.4.0 CairoSVG 2.5.2 cchardet 2.1.7 certifi 2022.12.7 cffi 1.15.0 chardet 5.1.0 charset-normalizer 3.1.0 chess 1.9.4 chess-gym 0.0.5 click 8.1.3 cloudpickle 2.2.1 cmake 3.26.3 colorama 0.4.6 comm 0.1.2 contourpy 1.0.7 cryptography 3.4.8 cssselect2 0.7.0 cycler 0.11.0 dataclasses-json 0.5.7 datasets 2.11.0 debugpy 1.6.6 decorator 5.1.1 defusedxml 0.7.1 dill 0.3.6 entrypoints 0.4 et-xmlfile 1.1.0 executing 1.2.0 fastapi 0.95.1 ffmpy 0.3.0 filelock 3.12.0 filetype 1.2.0 flexgen 0.1.7 fonttools 4.39.3 frozenlist 1.3.3 fsspec 2023.4.0 google-auth 2.16.1 google-auth-oauthlib 0.4.6 gptcache 0.1.21 gradio 3.25.0 gradio_client 0.1.4 greenlet 2.0.2 grpcio 1.51.3 gym 0.26.2 gym-notices 0.0.8 h11 0.14.0 httpcore 0.17.0 httpx 0.24.0 huggingface-hub 0.14.1 idna 3.4 importlib-metadata 6.0.0 inputs 0.5 ipykernel 6.21.2 ipython 8.11.0 jedi 0.18.2 Jinja2 3.1.2 joblib 1.1.1 jsonschema 4.17.3 jupyter_client 8.0.3 jupyter_core 5.2.0 kiwisolver 1.4.4 langchain 0.0.155 linkify-it-py 2.0.0 lit 16.0.2 llama-cpp-python 0.1.36 Markdown 3.4.1 markdown-it-py 2.2.0 MarkupSafe 2.1.2 marshmallow 3.19.0 marshmallow-enum 1.5.1 matplotlib 3.7.1 matplotlib-inline 0.1.6 mdit-py-plugins 0.3.3 mdurl 0.1.2 mkl-fft 1.3.1 mkl-random 1.2.2 mkl-service 2.4.0 mpmath 1.3.0 multidict 6.0.4 multiprocess 0.70.14 mutagen 1.46.0 mypy-extensions 1.0.0 nest-asyncio 1.5.6 networkx 3.1 numexpr 2.8.4 numpy 1.24.3 nvidia-cublas-cu11 11.10.3.66 nvidia-cuda-cupti-cu11 11.7.101 nvidia-cuda-nvrtc-cu11 11.7.99 nvidia-cuda-runtime-cu11 11.7.99 nvidia-cudnn-cu11 8.5.0.96 nvidia-cufft-cu11 10.9.0.58 nvidia-curand-cu11 10.2.10.91 nvidia-cusolver-cu11 11.4.0.1 nvidia-cusparse-cu11 11.7.4.91 nvidia-nccl-cu11 2.14.3 nvidia-nvtx-cu11 11.7.91 oauthlib 3.2.2 openai 0.23.1 openapi-schema-pydantic 1.2.4 openpyxl 3.0.9 orjson 3.8.11 packaging 23.1 pandas 2.0.1 pandas-stubs 2.0.0.230412 parso 0.8.3 peft 0.3.0.dev0 pexpect 4.8.0 pickleshare 0.7.5 Pillow 9.5.0 pip 23.0.1 platformdirs 3.1.0 prompt-toolkit 3.0.38 protobuf 4.22.0 psutil 5.9.5 ptyprocess 0.7.0 PuLP 2.7.0 pure-eval 0.2.2 pyarrow 11.0.0 pyasn1 0.4.8 pyasn1-modules 0.2.8 pycparser 2.21 pydantic 1.10.7 pydub 0.25.1 Pygments 2.14.0 pynput 1.7.6 pyOpenSSL 20.0.1 pyparsing 3.0.9 pyrsistent 0.19.3 PySide2 5.15.2.1 PySide6-Essentials 6.4.1 PySocks 1.7.1 python-chess 1.999 python-dateutil 2.8.2 python-multipart 0.0.6 python-xlib 0.31 pytz 2023.3 PyYAML 6.0 pyzmq 25.0.0 regex 2023.3.23 requests 2.29.0 requests-oauthlib 1.3.1 responses 0.18.0 rsa 4.9 rwkv 0.7.3 sacremoses 0.0.43 safetensors 0.3.0 semantic-version 2.10.0 sentencepiece 0.1.98 setuptools 66.0.0 shiboken2 5.15.2.1 shiboken6 6.4.1 six 1.16.0 sniffio 1.3.0 SQLAlchemy 2.0.11 stack-data 0.6.2 starlette 0.26.1 streamdeck 0.9.3 streamdeck-ui 2.0.6 stringcase 1.2.0 sympy 1.11.1 tenacity 8.2.2 tensorboard 2.12.0 tensorboard-data-server 0.7.0 
tensorboard-plugin-wit 1.8.1 tinycss2 1.2.1 tokenizers 0.13.3 toolz 0.12.0 torch 2.0.0 tornado 6.2 tqdm 4.65.0 traitlets 5.9.0 transformers 4.29.0.dev0 /mnt/sharessd/code/python/QABOT/transformers triton 2.0.0 types-pytz 2023.3.0.0 typing_extensions 4.5.0 typing-inspect 0.8.0 tzdata 2023.3 uc-micro-py 1.0.1 urllib3 1.26.15 uvicorn 0.22.0 wcwidth 0.2.6 webencodings 0.5.1 websockets 10.4 Werkzeug 2.2.3 wheel 0.38.4 xxhash 3.2.0 yarl 1.9.2 yt-dlp 2023.2.17 zipp 3.11.0```<|||||>Hey @poohzaza166 👋 I had a look at your snippet, and the problem does not step from the stopping criteria nor the llama model itself, but rather from how the tokenizer works. It also doesn't seem to be a bug. My recommendation would be to design the stopping criteria from the token ids, and not from raw text :) See this example: <details> <summary>Click me</summary> ```python from transformers import LlamaTokenizer import transformers import torch tokenizer = LlamaTokenizer.from_pretrained('huggyllama/llama-7b') class _SentinelTokenStoppingCriteria(transformers.StoppingCriteria): def __init__(self, sentinel_token_ids: torch.LongTensor, starting_idx: int): transformers.StoppingCriteria.__init__(self) self.sentinel_token_ids = sentinel_token_ids self.starting_idx = starting_idx def __call__(self, input_ids: torch.LongTensor, _scores: torch.FloatTensor) -> bool: for sample in input_ids: trimmed_sample = sample[self.starting_idx:] # Can't unfold, output is still too tiny. Skip. if trimmed_sample.shape[-1] < self.sentinel_token_ids.shape[-1]: continue for window in trimmed_sample.unfold(0, self.sentinel_token_ids.shape[-1], 1): if torch.all(torch.eq(self.sentinel_token_ids, window)): return True return False sentinel_token_ids = tokenizer("pooh:", add_special_tokens=False, return_tensors="pt").input_ids.to("cuda") print(sentinel_token_ids) stopping_criteria_list = transformers.StoppingCriteriaList([ _SentinelTokenStoppingCriteria(sentinel_token_ids=sentinel_token_ids, starting_idx=0) ]) test_input_1 = """This is a test.\npooh: potato.""" test_input_ids = tokenizer(test_input_1, add_special_tokens=False, return_tensors="pt").input_ids.to("cuda") print(stopping_criteria_list(test_input_ids, None)) test_input_2 = """This is a test. pooh: potato.""" test_input_ids = tokenizer(test_input_2, add_special_tokens=False, return_tensors="pt").input_ids.to("cuda") print(stopping_criteria_list(test_input_ids, None)) ``` </details> <|||||>@gante Hi thanks for the help. Though now i have a problem where if i fix the stop condition to token id sometime there multiple token that produce the same plaintext stop word. is there a way to get around this? my orginal idea for this is to just stream the genration and append the word to a string and use regex to halt the loop with it detect the stop token in plain text. though this seem janky is there a "proper way to do this"<|||||>@poohzaza166 we do not have a solution for that problem, but as always you can design a custom stopping criteria -- nothing prevents you to expand the code you shared to check against multiple stop sequences :D (and yes, it is better to do it at a token level, otherwise you need to pass the tokens back to the CPU and decode them, which will slow generation down significantly)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
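Following up on the token-level advice given above (check the stop condition against token ids, and allow several stop sequences at once), here is a minimal sketch of such a criteria. The class name and the example stop strings are placeholders, and the tokenizer/prompt wiring mirrors the snippets earlier in this thread rather than any official API.

```python
import torch
import transformers


class MultiSequenceStoppingCriteria(transformers.StoppingCriteria):
    """Stops generation when any of the given stop-token sequences appears after `starting_idx`."""

    def __init__(self, stop_sequences, starting_idx):
        super().__init__()
        self.stop_sequences = stop_sequences  # list of 1-D LongTensors of token ids
        self.starting_idx = starting_idx

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        for sample in input_ids:
            generated = sample[self.starting_idx:]
            for stop in self.stop_sequences:
                if generated.shape[-1] < stop.shape[-1]:
                    continue  # not enough new tokens yet to contain this stop sequence
                for window in generated.unfold(0, stop.shape[-1], 1):
                    if torch.all(torch.eq(stop.to(window.device), window)):
                        return True
        return False


# Example wiring (tokenizer and tokenized prompt are assumed to exist, as in the snippets above):
# stop_sequences = [
#     tokenizer(s, add_special_tokens=False, return_tensors="pt").input_ids[0]
#     for s in ("pooh:", "\npooh:")
# ]
# stopping_criteria = transformers.StoppingCriteriaList(
#     [MultiSequenceStoppingCriteria(stop_sequences, starting_idx=tokenized.input_ids.shape[-1])]
# )
```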
transformers
22,435
closed
Question about masking tokens in DataCollatorForPermutationLanguageModeling
According to the XLNet paper, the permutation LM does not need to corrupt the input during data processing, which is different from the masked LM in BERT. However, the code below in `DataCollatorForPermutationLanguageModeling` for XLNet replaces specific tokens in the input data with the given mask_token. I'm confused by this operation; could you please briefly explain it? Thank you very much in advance. https://github.com/huggingface/transformers/blob/b29fd6971d9cd6ba2a824628effe243f543b8f61/src/transformers/data/data_collator.py#L1295
03-29-2023 07:03:11
03-29-2023 07:03:11
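For anyone landing on this question, a small sketch that makes the behaviour visible by inspecting the collator's output directly. The output keys and the even-sequence-length requirement reflect the linked implementation as I read it and may differ across versions.

```python
from transformers import DataCollatorForPermutationLanguageModeling, XLNetTokenizerFast

tokenizer = XLNetTokenizerFast.from_pretrained("xlnet-base-cased")
collator = DataCollatorForPermutationLanguageModeling(
    tokenizer=tokenizer, plm_probability=1 / 6, max_span_length=5
)

# Pad to a fixed (even) length: the collator rejects odd sequence lengths.
encoding = tokenizer(
    "A short example sentence.", padding="max_length", max_length=16, truncation=True, return_tensors="pt"
)
batch = collator([{"input_ids": encoding["input_ids"][0]}])

print(batch.keys())  # input_ids, perm_mask, target_mapping, labels
print((batch["input_ids"] == tokenizer.mask_token_id).sum())  # how many positions were replaced by <mask>
```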
transformers
22,434
closed
Ray[Tune] ValueError: checkpoint not in list (still persists with latest version of transformers)
### System Info - `transformers` version: 4.27.3 - Platform: Linux-5.10.112-108.499.amzn2.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.11 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.10.1+cu102 (True) - Tensorflow version (GPU?): 2.9.1 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction **Snippets of Configuration File:** pbt_scheduler: time_attr: "training_iteration" metric : "eval_f1" mode : "max" synch : true hyperparam_mutations : {"weight_decay" : [0.0, 0.1], "learning_rate": [0.0000001, 0.0001]} perturbation_interval: 4 param_search: hp_spance : {"checkpoint_interval": 4, "per_device_train_batch_size": 8, "per_device_eval_batch_size": 8, "num_train_epochs": [512], "max_steps": -1} backend : "ray" n_trials : 8 max_concurrent_trials: 8 resources_per_trial: {"cpu": 12, "gpu": 1} scheduler: "pbt" keep_checkpoints_num: 1 checkpoint_score_attr: "eval_f1" checkpoint_at_end: true max_failures: 5 resume: false stop: {"eval_f1": 0.85, "training_iteration": 512} local_dir: "./_logs/tune_RAY" name: "tune_transformer_pbt" log_to_file: false` **Snippets of Code that Uses the Configuration File's Information:** def get_ray_pbt_scheduler(config): scheduler = PopulationBasedTraining( time_attr=config.pbt_scheduler.time_attr, metric=config.pbt_scheduler.metric, mode=config.pbt_scheduler.mode, synch=config.pbt_scheduler.synch, perturbation_interval=config.pbt_scheduler.perturbation_interval, hyperparam_mutations={ "weight_decay": tune.uniform(*config.pbt_scheduler.\ hyperparam_mutations["weight_decay"]), "learning_rate": tune.uniform(*config.pbt_scheduler.\ hyperparam_mutations["learning_rate"]), }, ) return scheduler def get_ray_hp_space(config): hp_space = dict() hp_space["per_device_train_batch_size"] = int( config.param_search.hp_spance.per_device_train_batch_size) hp_space["per_device_eval_batch_size"] = int( config.param_search.hp_spance.per_device_eval_batch_size) hp_space["num_train_epochs"] = tune.choice( list(config.param_search.hp_spance.num_train_epochs)) hp_space["max_steps"] = int( config.param_search.hp_spance.max_steps) return hp_space def param_search(self): if self.config.param_search.scheduler == "pbt": scheduler = get_ray_pbt_scheduler(self.config) else: raise NotImplementedError reporter = get_ray_cli_reporter() ray_hp_space = get_ray_hp_space(self.config) best_trial = self.trainer.hyperparameter_search( scheduler=scheduler, hp_space=lambda _: ray_hp_space, progress_reporter=reporter, backend=self.config.param_search.backend, n_trials=self.config.param_search.n_trials, resources_per_trial=self.config.param_search.resources_per_trial, keep_checkpoints_num=self.config.param_search.keep_checkpoints_num, checkpoint_score_attr=self.config.param_search.checkpoint_score_attr, stop=dict(self.config.param_search.stop), local_dir=self.config.param_search.local_dir, name=self.config.param_search.name, log_to_file=self.config.param_search.log_to_file, ) save_best_trial(best_trial, self.config) ### Expected behavior Get Same error described here: https://github.com/huggingface/transformers/issues/10247 This one: best_model_index = 
checkpoints_sorted.index(str(Path(self.state.best_model_checkpoint))) ValueError: 'results/run-34e77498/checkpoint-10' is not in list - I am using the latest version of transformers. I thought this had already been fixed, hasn't it? - Is there any dependency (and specific version) I need to install to avoid this error?
03-29-2023 00:53:27
03-29-2023 00:53:27
This is the exact error I get: Failure # 1 (occurred at 2023-03-29_00-47-59) ray::ImplicitFunc.train() (pid=30599, ip=100.89.2.207, repr=_objective) File "/opt/omniai/work/instance1/jupyter/envs/qci/lib/python3.8/site-packages/ray/tune/trainable/trainable.py", line 368, in train raise skipped from exception_cause(skipped) File "/opt/omniai/work/instance1/jupyter/envs/qci/lib/python3.8/site-packages/ray/tune/trainable/function_trainable.py", line 337, in entrypoint return self._trainable_func( File "/opt/omniai/work/instance1/jupyter/envs/qci/lib/python3.8/site-packages/ray/tune/trainable/function_trainable.py", line 654, in _trainable_func output = fn() File "/opt/omniai/work/instance1/jupyter/envs/qci/lib/python3.8/site-packages/transformers/integrations.py", line 336, in dynamic_modules_import_trainable return trainable(*args, **kwargs) File "/opt/omniai/work/instance1/jupyter/envs/qci/lib/python3.8/site-packages/ray/tune/trainable/util.py", line 398, in inner return trainable(config, **fn_kwargs) File "/opt/omniai/work/instance1/jupyter/envs/qci/lib/python3.8/site-packages/transformers/integrations.py", line 237, in _objective local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial) File "/opt/omniai/work/instance1/jupyter/envs/qci/lib/python3.8/site-packages/transformers/trainer.py", line 1633, in train return inner_training_loop( File "/opt/omniai/work/instance1/jupyter/envs/qci/lib/python3.8/site-packages/transformers/trainer.py", line 1994, in _inner_training_loop self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/opt/omniai/work/instance1/jupyter/envs/qci/lib/python3.8/site-packages/transformers/trainer.py", line 2240, in _maybe_log_save_evaluate self._save_checkpoint(model, trial, metrics=metrics) File "/opt/omniai/work/instance1/jupyter/envs/qci/lib/python3.8/site-packages/transformers/trainer.py", line 2388, in _save_checkpoint self._rotate_checkpoints(use_mtime=True, output_dir=run_dir) File "/opt/omniai/work/instance1/jupyter/envs/qci/lib/python3.8/site-packages/transformers/trainer.py", line 2875, in _rotate_checkpoints checkpoints_sorted = self._sorted_checkpoints(use_mtime=use_mtime, output_dir=output_dir) File "/opt/omniai/work/instance1/jupyter/envs/qci/lib/python3.8/site-packages/transformers/trainer.py", line 2865, in _sorted_checkpoints best_model_index = checkpoints_sorted.index(str(Path(self.state.best_model_checkpoint))) ValueError: '/opt/omniai/work/instance1/jupyter/repos/qci_rates_archive/_logs/tune_RAY/tune_transformer_pbt/run-895a0_00002/checkpoint-4550' is not in list<|||||>And Get stuck here even when I use 4 trials with 4 GPUs and 48 CPUs. 
== Status == Current time: 2023-03-29 00:55:48 (running for 00:27:58.76) Memory usage on this node: 40.6/186.6 GiB PopulationBasedTraining: 2 checkpoints, 2 perturbs Resources requested: 0/48 CPUs, 0/4 GPUs, 0.0/114.68 GiB heap, 0.0/53.14 GiB objects Result logdir: /opt/omniai/work/instance1/jupyter/repos/qci_rates_archive/_logs/tune_RAY/tune_transformer_pbt Number of trials: 4/4 (1 ERROR, 3 PAUSED) +------------------------+----------+--------------------+------------+-------------+----------------+--------------+-----------+-------------+---------+----------------------+ | Trial name | status | loc | w_decay | lr | train_bs/gpu | num_epochs | eval_f1 | eval_loss | epoch | training_iteration | |------------------------+----------+--------------------+------------+-------------+----------------+--------------+-----------+-------------+---------+----------------------| | _objective_895a0_00000 | PAUSED | 100.89.2.207:11371 | 0.0124815 | 1.88206e-05 | 8 | 512 | 0.762118 | 1.76025 | 12 | 12 | | _objective_895a0_00002 | PAUSED | 100.89.2.207:48461 | 0.0156019 | 1.56839e-05 | 8 | 512 | 0.762118 | 1.76025 | 12 | 12 | | _objective_895a0_00003 | PAUSED | 100.89.2.207:48150 | 0.00580836 | 8.6631e-05 | 8 | 512 | 0.222621 | 2.26957 | 12 | 12 | | _objective_895a0_00001 | ERROR | 100.89.2.207:30599 | 0.0124815 | 1.88206e-05 | 8 | 512 | 0.754937 | 1.56503 | 9 | 9 | +------------------------+----------+--------------------+------------+-------------+----------------+--------------+-----------+-------------+---------+----------------------+ Number of errored trials: 1 +------------------------+--------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Trial name | # failures | error file | |------------------------+--------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I'm having the same issue with 4.26.0, any solution?<|||||>Same issue with transformers==4.30.2 and ray[tune]==2.51.0<|||||>This has been closed by github-actions, but the problem has not been solved ofc ...<|||||>@TimbusCalin @SergioG-M So that we can help you, could you share a _minimal_ reproducible code snippet and information about the running environment (run `transformers-cli env` in the terminal and copy-paste the output)?<|||||>@amyeroberts Sure, thank you for the prompt response. So whenever I run a PopulationBasedTraining() with perturbation_interval > 1, I get such an error, exactly like the one mentioned above. `ValueError: '/opt/omniai/work/instance1/jupyter/repos/qci_rates_archive/_logs/tune_RAY/tune_transformer_pbt/run-895a0_00002/checkpoint-4550' is not in list`. Of course the `checkopoint-abcd `not in list depends on the experiment that I am doing, but what I found is that it's always happening when using PopulationBasedTraining + perturbation_interval > 1. 
This is the code I have (almost copy-paste from https://docs.ray.io/en/latest/tune/examples/pbt_transformers.html#tune-huggingface-example: ``` """ This example is uses the official huggingface transformers `hyperparameter_search` API. """ import os import ray from ray import tune from ray.tune import CLIReporter from ray.tune.examples.pbt_transformers.utils import ( download_data, ) from utils import compute_metrics from ray.tune.schedulers import PopulationBasedTraining from transformers import ( glue_tasks_num_labels, AutoConfig, AutoModelForSequenceClassification, AutoTokenizer, Trainer, GlueDataset, GlueDataTrainingArguments, TrainingArguments, ) def tune_transformer(num_samples=8, gpus_per_trial=0, smoke_test=False): data_dir_name = "./data" if not smoke_test else "./test_data" data_dir = os.path.abspath(os.path.join(os.getcwd(), data_dir_name)) if not os.path.exists(data_dir): os.mkdir(data_dir, 0o755) # Change these as needed. model_name = ( "distilbert-base-uncased" if not smoke_test else "sshleifer/tiny-distilroberta-base" ) task_name = "rte" task_data_dir = os.path.join(data_dir, task_name.upper()) num_labels = glue_tasks_num_labels[task_name] config = AutoConfig.from_pretrained( model_name, num_labels=num_labels, finetuning_task=task_name ) # Download and cache tokenizer, model, and features print("Downloading and caching Tokenizer") tokenizer = AutoTokenizer.from_pretrained(model_name) # Triggers tokenizer download to cache print("Downloading and caching pre-trained model") AutoModelForSequenceClassification.from_pretrained( model_name, config=config, ) def get_model(): return AutoModelForSequenceClassification.from_pretrained( model_name, config=config, ) # Download data. download_data(task_name, data_dir) data_args = GlueDataTrainingArguments(task_name=task_name, data_dir=task_data_dir) train_dataset = GlueDataset( data_args, tokenizer=tokenizer, mode="train", cache_dir=task_data_dir ) eval_dataset = GlueDataset( data_args, tokenizer=tokenizer, mode="dev", cache_dir=task_data_dir ) training_args = TrainingArguments( output_dir=".", learning_rate=1e-5, # config do_train=True, do_eval=True, no_cuda=gpus_per_trial <= 0, evaluation_strategy="epoch", save_strategy="epoch", load_best_model_at_end=True, num_train_epochs=10, # config max_steps=-1, per_device_train_batch_size=16, # config per_device_eval_batch_size=16, # config warmup_steps=0, weight_decay=0.1, # config logging_dir="./logs", skip_memory_metrics=True, report_to="none", ) trainer = Trainer( model_init=get_model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, compute_metrics=compute_metrics, ) tune_config = { "per_device_train_batch_size": 32, "per_device_eval_batch_size": 32, "num_train_epochs": tune.choice([4,5,6,7]), "max_steps": 1 if smoke_test else -1, # Used for smoke test. 
} scheduler = PopulationBasedTraining( time_attr="training_iteration", metric="eval_f1", mode="max", #if perturbation_interval > 1, such an error as the one below occurs perturbation_interval=2, hyperparam_mutations={ "weight_decay": tune.uniform(0.0, 0.3), "learning_rate": tune.uniform(1e-5, 5e-5), "per_device_train_batch_size": tune.choice([16, 24, 32, 48, 64]), }, quantile_fraction=0.125, resample_probability=0.25, ) reporter = CLIReporter( parameter_columns={ "weight_decay": "w_decay", "learning_rate": "lr", "per_device_train_batch_size": "train_bs/gpu", "num_train_epochs": "num_epochs", }, metric_columns=["eval_acc", "eval_loss", "eval_f1", "epoch", "training_iteration"], max_progress_rows=40, ) best_results = trainer.hyperparameter_search( hp_space=lambda _: tune_config, backend="ray", n_trials=num_samples, resources_per_trial={"cpu": 8, "gpu": gpus_per_trial}, scheduler=scheduler, keep_checkpoints_num=1, direction="maximize", checkpoint_score_attr="training_iteration", stop={"training_iteration": 1} if smoke_test else None, progress_reporter=reporter, local_dir="~/ray_results/", name="tune_transformer_only4ptbint2", log_to_file=True, ) print("Best hparams", best_results.hyperparameters) if __name__ == "__main__": import argparse parser = argparse.ArgumentParser() parser.add_argument( "--smoke-test", default=False, action="store_true", help="Finish quickly for testing", ) args, _ = parser.parse_known_args() ray.init() if args.smoke_test: tune_transformer(num_samples=1, gpus_per_trial=0, smoke_test=True) else: # You can change the number of GPUs here: tune_transformer(num_samples=4, gpus_per_trial=1) ``` For example, this error now: ``` Failure # 1 (occurred at 2023-07-05_12-40-10) ray::ImplicitFunc.train() (pid=74890, ip=192.168.1.139, actor_id=df9f5e052e6dd84c774b695501000000, repr=_objective) File "/home/calin/PycharmProjects/hparams_search/venv/lib/python3.8/site-packages/ray/tune/trainable/trainable.py", line 389, in train raise skipped from exception_cause(skipped) File "/home/calin/PycharmProjects/hparams_search/venv/lib/python3.8/site-packages/ray/tune/trainable/function_trainable.py", line 336, in entrypoint return self._trainable_func( File "/home/calin/PycharmProjects/hparams_search/venv/lib/python3.8/site-packages/ray/tune/trainable/function_trainable.py", line 653, in _trainable_func output = fn() File "/home/calin/PycharmProjects/hparams_search/venv/lib/python3.8/site-packages/transformers/integrations.py", line 357, in dynamic_modules_import_trainable return trainable(*args, **kwargs) File "/home/calin/PycharmProjects/hparams_search/venv/lib/python3.8/site-packages/ray/tune/trainable/util.py", line 324, in inner return trainable(config, **fn_kwargs) File "/home/calin/PycharmProjects/hparams_search/venv/lib/python3.8/site-packages/transformers/integrations.py", line 258, in _objective local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial) File "/home/calin/PycharmProjects/hparams_search/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1645, in train return inner_training_loop( File "/home/calin/PycharmProjects/hparams_search/venv/lib/python3.8/site-packages/transformers/trainer.py", line 2081, in _inner_training_loop checkpoints_sorted = self._sorted_checkpoints(use_mtime=False, output_dir=run_dir) File "/home/calin/PycharmProjects/hparams_search/venv/lib/python3.8/site-packages/transformers/trainer.py", line 2986, in _sorted_checkpoints best_model_index = checkpoints_sorted.index(str(Path(self.state.best_model_checkpoint))) ValueError: 
'run-e6e7a_00003/checkpoint-78' is not in list ```<|||||>Hi @TimbusCalin, thanks for providing more details. > it's always happening when using PopulationBasedTraining + perturbation_interval > 1. In this case, it seems that the issue is coming from the `ray` library and its interactions with `Trainer` and not something we can help with. I suggest raising an issue on ray's github, as they'll be more able to resolve this issue.
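For readers trying to understand the traceback, here is a simplified, hypothetical paraphrase of the step that fails (the paths are made up; the real logic lives in `Trainer._sorted_checkpoints`): the best-model path recorded in the trainer state points at a checkpoint that is no longer among the checkpoints found in the current run directory, for example because PBT restored the trial from another trial's folder or the checkpoint was already rotated away.

```python
from pathlib import Path

run_dir = Path("results/run-34e77498")                        # current trial directory (hypothetical)
checkpoints_sorted = sorted(str(p) for p in run_dir.glob("checkpoint-*"))

best_model_checkpoint = "results/run-34e77498/checkpoint-10"  # stale path kept in the trainer state

try:
    best_model_index = checkpoints_sorted.index(str(Path(best_model_checkpoint)))
except ValueError:
    print(f"{best_model_checkpoint!r} is not in {checkpoints_sorted}")  # the error reported above
```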
transformers
22,433
closed
Llama-13B gives nonsensical output past 1024 tokens
### System Info

Latest transformers main branch, Python 3.10

### Who can help?

@sgu

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

```python
import torch
import os
import transformers
from transformers import LlamaTokenizer, LlamaForCausalLM

model_path = "/home/ubuntu/LLaMa-13B"
tokenizer_path = "/home/ubuntu/LLaMa-13B"

model = LlamaForCausalLM.from_pretrained(model_path).cuda()  # or something like {"": 0}
tokenizer = LlamaTokenizer.from_pretrained(tokenizer_path)

input_prompt = "some text that is over 1024 tokens"
batch = tokenizer(input_prompt, return_tensors="pt", truncation=False)

with torch.no_grad():
    out = model.generate(
        input_ids=batch["input_ids"].cuda(),
        attention_mask=batch["attention_mask"].cuda(),
        max_new_tokens=100,
        do_sample=False,
        top_k=50,
        top_p=1.0,
        temperature=1.0,
        use_cache=True
    )

print(tokenizer.decode(out[0]))
```

### Expected behavior

Text that makes sense. Text makes sense when `truncation=True`. There shouldn't be any arbitrary limitation for sequence lengths greater than 1024 given that Llama was trained on 2048 sequence lengths and has rotary embeddings that should theoretically support any sequence length.
03-28-2023 19:59:56
03-28-2023 19:59:56
cc @ArthurZucker and @gante <|||||>Hey @michaelroyzen 👋 Double-checking -- if you print `model.config.max_sequence_length`, do you get `2048`? If not, overwriting it would be the first thing I'd do. Secondly, there is this [ongoing PR](https://github.com/huggingface/transformers/pull/22402) that may be related. If `model.config.max_sequence_length == 2048` and the PR above doesn't fix it, debugging becomes trickier, as the weights and configuration files are not public. In that case, can you try to reproduce the issue with GPTNeoX (e.g. with [this model](https://huggingface.co/EleutherAI/pythia-12b/tree/main))?<|||||>Hi, @gante -- thanks for getting back to me! Unfortunately, I get `AttributeError: 'LlamaConfig' object has no attribute 'max_sequence_length'`. It doesn't seem like it's been implemented for Llama. I converted the official weights from FB using https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py, which doesn't seem to have created a 'max_sequence_length' property in the config.<|||||>Hey @michaelroyzen! Two notes: 1. The should not be an exception, I'll submit a PR to fix it. However, upon further inspection, that would not be the problem -- the maximum pre-initialized rotary position index is hardcoded to `2048`, so it's okay (it should be `config. max_sequence_length`, but doesn't change the issue here) 2. I've attempted to reproduce, and I noticed you set `do_sample=False`. Without sampling, the results are often underwhelming. I've ran locally with sampling, and everything looks fine. LMK if the example below works well on your checkpoint/input prompt :) ```py import torch from transformers import LlamaTokenizer, LlamaForCausalLM weights_path = "your-path-to-llama" tokenizer = LlamaTokenizer.from_pretrained(weights_path, use_auth_token=True) input_prompt = "The cat is" batch = tokenizer(input_prompt, return_tensors="pt") print(batch["input_ids"].shape) model = LlamaForCausalLM.from_pretrained(weights_path, torch_dtype=torch.float16, use_auth_token=True).cuda() # Manual left-padding with 1024 tokens batch["input_ids"] = torch.cat([torch.ones((1, 1024), dtype=torch.long) * model.config.eos_token_id, batch["input_ids"]], dim=1) batch["attention_mask"] = torch.cat([torch.zeros((1, 1024), dtype=torch.long), batch["attention_mask"]], dim=1) print(batch["input_ids"].shape) # Set seed for reproduction torch.cuda.manual_seed(0) with torch.no_grad(): out = model.generate( input_ids=batch["input_ids"].cuda(), attention_mask=batch["attention_mask"].cuda(), max_new_tokens=100, do_sample=True, top_k=50, top_p=1.0, temperature=1.0, use_cache=True ) print(out[0].shape) print(tokenizer.decode(out[0])) ```<|||||>Thanks @gante. May I ask why it's hardcoded to 2048? This is not mentioned in the paper. And isn't the whole point of rotary embeddings to support infinite sequence length? Would it be possible to get a sequence length of 4096+ to work?<|||||>@michaelroyzen Yes, rotary embeddings are, in practice, relative (and periodic!) position embeddings. See eq 12 in [the original paper](https://arxiv.org/pdf/2104.09864.pdf). As you can see in our code, the hardcoded `2048` (now `config.max_position_embeddings`) is the initialization size -- [they are immediately expanded upon request](https://github.com/huggingface/transformers/blob/da68fd691c3738fde4955ef99cdef9955f8ab07a/src/transformers/models/llama/modeling_llama.py#L112). They will never be the bottleneck 🙌 So... why `2048`? 
Well, we'd have to ask the `Llama` creators, since they have [hardcoded it in their repo](https://github.com/facebookresearch/llama/blob/57b0eb62de0636e75af471e49e2f1862d908d9d8/llama/model.py#L30) 😅 It is not mentioned in the paper, as far as I can see, but I suspect they capped training at this sequence length. If my assumption is correct: while the model works beyond `2048` tokens (try changing `1024` to `2048` in the script I shared above), I would expect the quality to drop as we go beyond `2048` tokens, simply because of train-test skew :)<|||||>just wondering if there's any more issue, if you do_sample=True? I tried this on apple silicon (MPS) and got decent result. The default sample code from huggingface is missing this argument, so I suspect the default is do_sample=False. And I was (falsely) disappointed I started getting either non-sense, or repetitions.<|||||>Thank you @gante! `max_position_embeddings` is indeed the fix here. Closing this issue now.<|||||>> just wondering if there's any more issue, if you do_sample=True? I tried this on apple silicon (MPS) and got decent result. The default sample code from huggingface is missing this argument, so I suspect the default is do_sample=False. And I was (falsely) disappointed I started getting either non-sense, or repetitions. @kechan That's correct, `do_sample=False` is the default, and it decreases the performance of this task in particular (open text generation) :) [This blog post](https://huggingface.co/blog/how-to-generate) talks about it, and explains why.
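A small sketch of the check/override discussed above; `weights_path` is the same placeholder used in the earlier snippet.

```python
from transformers import LlamaConfig

weights_path = "your-path-to-llama"
config = LlamaConfig.from_pretrained(weights_path)

# The rotary cache is initialised from this value and grown on demand, so the practical
# limit is the train-time context length (2048 for the original LLaMA checkpoints),
# not a hard cap in the code.
print(config.max_position_embeddings)

# If a converted checkpoint is missing the value (or has a wrong one), it can be overridden:
config.max_position_embeddings = 2048
```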
transformers
22,432
closed
Official MLflow integration
### Feature request

Hi there maintainers, on behalf of the maintainers group for MLflow, we'd like to extend an offer to build out a full integration with MLflow. We're currently in the process of creating an "official" transformers flavor (a named flavor) that will support serialization and logging of components (models, feature extractors, image processors, etc.) and Pipelines, as well as building support for LLM-based Pipelines for pyfunc inference (other modeling task types will be supported later). As part of this, we'd like to offer to expand the functionality of the current callback implementation within the transformers library so that, when a Trainer is instantiated, we integrate with the serialization implementation in MLflow and handle the run start context slightly differently. Would you all be interested in us taking a stab at this in the following weeks?

### Motivation

People love this package. Rightfully so. It's great. We'd like to make the integration with MLflow seamless.

### Your contribution

We'd like to contribute these changes to the transformers library.
03-28-2023 16:47:28
03-28-2023 16:47:28
This sounds very exciting! Happy to have a look at any PR that makes a stronger integration of MLflow!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
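For context on the existing integration mentioned in the request, a minimal sketch of how the current callback is enabled today (model and datasets are omitted); the proposed deeper integration would build on top of this.

```python
from transformers import Trainer, TrainingArguments

# The MLflow callback is picked up automatically when "mlflow" is listed in report_to
# and the mlflow package is installed.
args = TrainingArguments(
    output_dir="out",
    report_to=["mlflow"],
    logging_steps=10,
)
# trainer = Trainer(model=model, args=args, train_dataset=train_ds)
# trainer.train()
```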
transformers
22,431
closed
int8 with device_map doesn't work well in generation
**Versions** `transformers==4.26.0`, `accelerate==0.17.1`

I was trying to run inference with the `.generate()` method using a 7B OPT model. When using fp16 on an 8-GPU (80G A100) node, I found that fitting the whole model onto a single GPU, with one independent process per GPU, gives the best speed. But when trying to use int8, I can't use `auto` for device_map (it shards the model across GPUs, which I don't want), so I have to design a device_map like this:

`{ "model.decoder.embed_tokens": 1, "lm_head": 1, "model.decoder.embed_positions": 1, "model.decoder.final_layer_norm": 1, "model.decoder.layers.0": 1, "model.decoder.layers.1": 1, "model.decoder.layers.2": 1, "model.decoder.layers.3": 1, "model.decoder.layers.4": 1, "model.decoder.layers.5": 1, ... }`

I also found that generation with int8 is much slower than fp16. How can I correctly use int8 in a multi-process setup (one independent process per GPU) on an 8-card node?
03-28-2023 16:01:07
03-28-2023 16:01:07
cc @younesbelkada Also if you want your whole model on one GPU, you shouldn't use the `device_map` argument but just place your model on that GPU.<|||||>@fnshwi you can run the following: ```python import torch ... device_map = {"":torch.cuda.current_device()} ``` and use this device map instead of `"auto"`. the `"":device` means that you want to fit your entire model on that device Related and similar issue: https://github.com/huggingface/transformers/issues/21736 Also if you run multiple processes, you can use a trick that [we use in `trl`](https://github.com/lvwerra/trl/blob/main/trl/models/modeling_base.py#L246-L258): ```python from accelerate import Accelerator current_device = Accelerator().process_index device_map = {"":current_device} ```<|||||>@younesbelkada Thanks for your help, the device_map works well. But I still have the inference speed issue, it's much slower than fp16, the code is: ``` device_map = {"": torch.cuda.current_device()} model = OPTForCausalLM.from_pretrained(ckpt_dir, load_in_8bit=True, device_map=device_map) input_id = torch.tensor(input_id, device=device) gen = model.generate(input_ids, min_length=1, max_length=512, do_sample=True, temperature=1.0, num_return_sequences=topk) ``` I manually send input_id to device, is that correct?<|||||>Thanks for the feedback, yes you need to manually send the input ids to the correct device How much slower is the int8 model compared to fp16?<|||||>@younesbelkada generating 130 cases in fp16 costs ~20mins, but int8 costs more than 1h <|||||>Hi i'm having the same problem, as i'm having to convert the input to device every time it makes int8 way more slow than regular fp16<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
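Putting the pieces of this thread together, a sketch of the per-process setup (the checkpoint name is a placeholder); note that, as discussed above, 8-bit generation can still be slower than fp16 on its own.

```python
import torch
from accelerate import Accelerator
from transformers import AutoTokenizer, OPTForCausalLM

current_device = Accelerator().process_index   # one process per GPU
device_map = {"": current_device}              # put the whole model on this process's GPU

ckpt_dir = "facebook/opt-6.7b"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(ckpt_dir)
model = OPTForCausalLM.from_pretrained(ckpt_dir, load_in_8bit=True, device_map=device_map)

# Inputs must be moved to the same device before calling generate()
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(current_device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32, do_sample=True)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```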
transformers
22,430
closed
Why ViltProcessor couldn't resize image to 384*384?
### System Info

- `transformers` version: 4.27.1
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

### Who can help?

@amyeroberts

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

```python
import numpy as np
from transformers import ViltProcessor

# create two images
shape_list = [(283, 400), (550, 469)]
fake_imgs = [np.random.randint(0, 255, (3,) + shape) for shape in shape_list]

# preprocess
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm")
inputs = processor(
    fake_imgs, ["hello word"] * len(shape_list), return_tensors="pt"
)

# check shape
print(inputs.pixel_values.shape)  # torch.Size([2, 3, 448, 512])
```

### Expected behavior

It seems that `ViltProcessor` is supposed to resize images to 384 * 384 according to `preprocessor_config` (https://huggingface.co/dandelin/vilt-b32-mlm/blob/main/preprocessor_config.json). However, the script above gets 448 * 512.
03-28-2023 15:59:50
03-28-2023 15:59:50
Hi @Yam0214, This is because of the padding step that occurs when processing the image - cf [this line](https://github.com/huggingface/transformers/blob/b29fd6971d9cd6ba2a824628effe243f543b8f61/src/transformers/models/vilt/image_processing_vilt.py#L475). This pads all the images in the batch to the largest height and width dimensions in the batch. The largest height and width dimensions in the batch are `(448, 512)`. This is because the first image is resized to `(3, 384, 512)` and the second to `(3, 448, 384)`. The reason these images are not resized to `(3, 384, 384)` is because this models preprocessing follows torchvision [resizing logic](https://pytorch.org/vision/main/generated/torchvision.transforms.Resize.html) where: * `size` is 384 * `max_size` is `int(1333 / 800 * 384)`<|||||>@amyeroberts Thank you for getting back to me, it's very helpful.
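To make the arithmetic above concrete, a small sketch that reproduces the reported shapes. It is a paraphrase of the resizing logic as I read it; rounding each side down to a multiple of 32 is an assumption based on the `size_divisor` default of the Vilt image processor.

```python
def vilt_output_size(height, width, shorter=384, longer=int(1333 / 800 * 384), size_divisor=32):
    # scale the shorter edge to `shorter`, cap the longer edge at `longer`
    scale = shorter / min(height, width)
    if height < width:
        new_h, new_w = shorter, scale * width
    else:
        new_h, new_w = scale * height, shorter
    if max(new_h, new_w) > longer:
        rescale = longer / max(new_h, new_w)
        new_h, new_w = new_h * rescale, new_w * rescale
    new_h, new_w = int(new_h + 0.5), int(new_w + 0.5)
    # round down to multiples of size_divisor
    return new_h // size_divisor * size_divisor, new_w // size_divisor * size_divisor


print(vilt_output_size(283, 400))  # (384, 512)
print(vilt_output_size(550, 469))  # (448, 384)
# padding to the batch-wise maximum in each dimension then gives (448, 512)
```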
transformers
22,429
closed
Hyperparameter search cannot report to W&B
### System Info - `transformers` version: 4.27.3 - Platform: Linux-4.15.0-206-generic-x86_64-with-glibc2.27 - Python version: 3.9.16 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 2.0.0+cu117 (False) - Tensorflow version (GPU?): 2.11.1 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction My code is an adaptation of the tutorial on [hyperparameter search](https://huggingface.co/docs/transformers/main/en/hpo_train) for text classification. Steps to reproduce the error: 1. Create `TrainingArguments` ```python training_args = TrainingArguments( output_dir="./results", per_device_eval_batch_size=16, warmup_steps=500, weight_decay=0.01, logging_dir="./logs", logging_steps=10, evaluation_strategy="epoch", # eval_steps=500, save_strategy="epoch", report_to=["wandb"], metric_for_best_model="loss", load_best_model_at_end=True, ) ``` 1. Create a `Trainer` ```python trainer = Trainer( # model=model, model=None, model_init=model_init, args=training_args, train_dataset=dataset["train"], eval_dataset=dataset["test"], tokenizer=tokenizer, data_collator=data_collator, compute_metrics=compute_metrics, ) ``` 1. Hyperparameter search ```python best_trial = trainer.hyperparameter_search( backend="wandb", hp_space=wandb_hp_space, n_trials=20, ) ``` Error: ``` wandb: Waiting for W&B process to finish... (failed 1). Press Control-C to abort syncing. 
wandb: \ 0.006 MB of 0.006 MB uploaded (0.000 MB deduped) wandb: Run history: wandb: eval/accuracy ▁▇██▇ wandb: eval/f1 ▁▇▇██ wandb: eval/loss █▄▂▁▃ wandb: eval/runtime ▃█▆▆▁ wandb: eval/samples_per_second ▆▁▃▃█ wandb: eval/steps_per_second ▆▁▃▃█ wandb: train/epoch ▁▁▂▂▂▂▂▂▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▅▆▆▆▇▇▇▇▇▇█████ wandb: train/global_step ▁▁▂▂▂▂▂▂▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▅▆▆▆▇▇▇▇▇▇█████ wandb: train/learning_rate ▁▁▂▂▂▂▃▃▃▃▄▄▄▄▅▅▅▆▆▆▆▇▇▇▇██ wandb: train/loss ████▇▇▆▆▅▅▄▄▄▄▃▃▃▃▂▂▂▂▁▁▁▁▂ wandb: train/total_flos ▁ wandb: train/train_accuracy ▁▅▆▇█ wandb: train/train_f1 ▁▅▆▇█ wandb: train/train_loss █▅▃▂▁▅ wandb: train/train_runtime ▁▁▁▁▁█ wandb: train/train_samples_per_second █████▁ wandb: train/train_steps_per_second █████▁ wandb: wandb: Run summary: wandb: eval/accuracy 0.66364 wandb: eval/f1 0.57069 wandb: eval/loss 0.84213 wandb: eval/runtime 4.6265 wandb: eval/samples_per_second 95.104 wandb: eval/steps_per_second 6.052 wandb: train/epoch 5.0 wandb: train/global_step 275 wandb: train/learning_rate 4e-05 wandb: train/loss 0.4968 wandb: train/total_flos 102513759674592.0 wandb: train/train_accuracy 0.88326 wandb: train/train_f1 0.80948 wandb: train/train_loss 0.84575 wandb: train/train_runtime 606.785 wandb: train/train_samples_per_second 14.47 wandb: train/train_steps_per_second 0.453 wandb: wandb: 🚀 View run autumn-sweep-1 at: wandb: Synced 6 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s) wandb: Find logs at: ./wandb/run-20230328_162937-ean356xn/logs Run ean356xn errored: ValueError('w is not supported, only azure_ml, comet_ml, mlflow, neptune, tensorboard, wandb, codecarbon, clearml, dagshub are supported.') wandb: ERROR Run ean356xn errored: ValueError('w is not supported, only azure_ml, comet_ml, mlflow, neptune, tensorboard, wandb, codecarbon, clearml, dagshub are supported.') ``` It appears that someone encountered a similar error when training a model (see [here](https://stackoverflow.com/questions/73244442/huggingface-trainer-cannot-report-to-wandb)). ### Expected behavior The best model of the hyperparameter search should be saved as an artifact in W&B. According to the documentation the argument `report_to` can be either a `str` or a `List[str]`. After my investigation, I found that the function [get_reporting_integration_callbacks](https://github.com/huggingface/transformers/blob/b29fd6971d9cd6ba2a824628effe243f543b8f61/src/transformers/integrations.py#L1550) does not support string. To fix the issue, I changed this [line](https://github.com/huggingface/transformers/blob/b29fd6971d9cd6ba2a824628effe243f543b8f61/src/transformers/integrations.py#L481): ` trainer.args.report_to = "wandb"` to ` trainer.args.report_to = ["wandb"]`.
03-28-2023 15:08:45
03-28-2023 15:08:45
Would you like to make a PR with your fix?
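A hypothetical sketch of the normalisation suggested by the fix above (not the actual patch): accept both a string and a list for `report_to` before dispatching the reporting callbacks.

```python
def normalize_report_to(report_to):
    if report_to in (None, "none"):
        return []
    if isinstance(report_to, str):
        return [report_to]
    return list(report_to)


print(normalize_report_to("wandb"))    # ['wandb']
print(normalize_report_to(["wandb"]))  # ['wandb']
```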
transformers
22,428
closed
Use real tokenizers if tiny version(s) creation has issue(s)
# What does this PR do? When creating tiny versions of a model loaded with real checkpoints, we might have some trouble converting the tokenizer(s) to the tiny version. One situation is: **we have a fast tokenizer with its vocabulary shrunk then saved to a local path, but the slow tokenizer loaded from this path have the original (large) vocabulary size.** This happens for at least `XLNet` and `MBart`. See the code snippet at the end. As discussed offline, **we decide to keep the original tokenizer(s) in such problematic situation(s)**. See the quote at the end. This PR therefore implements this logic. Some existing tiny models on the Hub have been updated. ### Code snippet (to show one of the problematic situations) ```python from transformers import MBartTokenizer, MBartTokenizerFast from datasets import load_dataset TARGET_VOCAB_SIZE = 1024 ds = load_dataset("wikitext", "wikitext-2-raw-v1") training_ds = ds["train"] testing_ds = ds["test"] # ckpt = "hf-internal-testing/tiny-random-MBartModel" ckpt = "facebook/mbart-large-cc25" original_fast_tokenizer = MBartTokenizerFast.from_pretrained(ckpt) new_fast_tokenizer = original_fast_tokenizer.train_new_from_iterator(training_ds["text"], TARGET_VOCAB_SIZE, show_progress=False) local_ckpt = "tiny-random-MBartModel" new_fast_tokenizer.save_pretrained(local_ckpt) new_slow_tokenizer = MBartTokenizer.from_pretrained(local_ckpt) print(f"original_fast_tokenizer.vocab_size: {original_fast_tokenizer.vocab_size}") print(f"new_fast_tokenizer.vocab_size: {new_fast_tokenizer.vocab_size}") print(f"new_slow_tokenizer.vocab_size: {new_slow_tokenizer.vocab_size}") ``` This gives: ```bash original_fast_tokenizer.vocab_size: 250027 new_fast_tokenizer.vocab_size: 1054 loaded_slow_tokenizer.vocab_size: 250027 ``` ### Offline discussion > **(from Lysandre)** From a development perspective it seems to me that having the highest number of use-cases available from tiny models would be optimal, even if we sacrifice performance. So in that case, if we would need to choose between "Lightweight tokenizer, but only the fast tokenizer file" and "Normal tokenizer, with both fast and slow files", then I would personally choose to go with the latter even if it's heavier. I'd aim for both files to be the same though, so both files as heavy, rather than one small and one large (otherwise we run in the discrepancies that Nico was describing). Going with the first one means that we might run in situations where we want to test the slow file but we can't for some specific models (leading to model-specific code), meaning that we have to find a workaround for something that would have otherwise worked. (This wouldn't be necessary if we were trying to have 100% parity between slow and fast, but that's not the case, so keeping slow files is all the more relevant, imo) However I don't feel strongly (and I won't be the one fixing tests/issues :grin:) so if you all think it'd be better to go with just the fast file in these use-cases, then I'll trust you with it.
03-28-2023 13:33:47
03-28-2023 13:33:47
_The documentation is not available anymore as the PR was closed or merged._<|||||>Since this PR also creats more slow tokenizers (the real one), we get more test failures for some model types. I will have to deal with them (by skipping them first)<|||||>Ready to be reviewed 🚀 !<|||||>I'm not sure I fully understand if this PR is desirable: if we push the original tokenizer on the Hub, we end up with a model that won't be tiny anymore since it will have a very big vocab size (most tokenizers have a vocab of 30k/50k tokens...) so then all the pipeline tests for this model will become significantly slower. I think it's best to not push the tokenizer built automatically in those cases and either try to build one manually or fix the conversion script so the issue does not appear or just not test the last models for which we can't get a small tokenizer if they are not heavily downloaded.<|||||>> won't be tiny anymore since it will have a very big vocab size Yes, this is true. However, it's not as large as the original model(s) in terms of number of layers, hidden dimension etc. The embedding matrix will be large, but getting the embeddings from input ids should be fast. The only part that will be slow down is the logits computation at the end. I can perform some measurement of running time. > build one manually or fix the conversion script so the issue In some cases, there is no way to fix: the example I showed in the PR description involves sentencepiece based tokenizers for which we are not going to get the slow tokenizer from the (converted) fast tokenizer. (not an expert here, but that's what I understand/hear). Regarding building one manually - one disadvantage is that if we ever find another new issue for that tokenizer, we will have to adding some fix(es) to that manual process. This brings some questions about: who develop that manual way for that tokenizer, where to save that manual process (in another script[s]?) etc. <|||||>Also see the end in the PR description (**Offline discussion**) if that part is skipped.<|||||>Thanks for the context. I looked at an extreme case with the "tiny" transfo-xl models (vocab size of 225k) and it's true the final model stays under a reasonable size and goes fast.<|||||>Thank you for checking on your side. I was running it too but you already done checking and approved 😃
transformers
22,427
closed
Don't hard error when cache version can't be converted to int
# What does this PR do?

As reported in #22412, if the string stored in the cache version file is not an int, every import from the Transformers library fails. This PR adds more safeguards.
03-28-2023 13:31:53
03-28-2023 13:31:53
_The documentation is not available anymore as the PR was closed or merged._<|||||>Is `ValueError` the only kind of error that can be returned when this fails?<|||||>Yup
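A hypothetical sketch of the kind of safeguard described in this PR (not the actual patch): fall back to a sane default instead of letting a corrupt cache version file break every import.

```python
import logging

logger = logging.getLogger(__name__)


def read_cache_version(version_text: str) -> int:
    try:
        return int(version_text)
    except ValueError:
        logger.warning("Could not parse the cache version file; assuming version 0.")
        return 0


print(read_cache_version("1"))     # 1
print(read_cache_version("oops"))  # 0, with a warning
```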
transformers
22,426
closed
4.27.1 breaks fp16 training of Flaubert
### System Info

- `transformers` version: 4.27.1
- Platform: Linux-4.18.0-372.26.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no

### Who can help?

@sgugger @ArthurZucker

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

Using transformers 4.26.1, the following script behaves properly (train and validation loss decreasing); with transformers >= 4.27.1, the training loss is always 0 and the validation loss is always nan. Please note that the problem doesn't occur when removing `fp16=True`.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments
from datasets import load_dataset

model_name = 'flaubert/flaubert_base_cased'
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2, problem_type='single_label_classification'
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def tokenize_function(example):
    return tokenizer(example['text'], padding=True, truncation=True)

dataset = load_dataset('imdb', split='train[:1%]')
dataset = dataset.train_test_split(test_size=0.2)
dataset = dataset.map(tokenize_function, batched=True, remove_columns=['text'])

train_args = TrainingArguments(
    'out',
    report_to=[],
    logging_strategy='steps',
    logging_steps=1,
    evaluation_strategy='epoch',
    num_train_epochs=1,
    fp16=True
)
trainer = Trainer(model=model, train_dataset=dataset['train'], eval_dataset=dataset['test'], args=train_args)
trainer.train()
```

### Expected behavior

Upgrading to >= 4.27.1 should produce training similar to 4.26.1. Thank you for your help!
03-28-2023 12:55:43
03-28-2023 12:55:43
Thanks for reporting and providing a clear reproducer! It let me pinpoint the regression to [this PR](https://github.com/huggingface/transformers/pull/21627). I think we shouldn't have touched that modeling code. Let me just consult internally and I will report back here with the next steps soon!<|||||>The PR mentions above reverts the commit that introduced the bug. This will be released in a patch (4.27.4) later today.<|||||>Awesome, thank you!<|||||>Tested on 4.27.4, this issue is fixed, thank you again!<|||||>Thanks for letting us know!
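A quick, optional sanity check that an environment already has the patch release mentioned above (the version bound reflects this thread).

```python
import transformers
from packaging import version

assert version.parse(transformers.__version__) >= version.parse("4.27.4"), (
    "fp16 Flaubert training regressed in 4.27.1; upgrade to 4.27.4 or later"
)
```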
transformers
22,425
closed
Weights not aligned for pt and jax
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.13.1 - PyTorch version (GPU?): 1.10.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.6.6 (tpu) - Jax version: 0.4.5 - JaxLib version: 0.4.4 - Using GPU in script?: (False) - Using distributed or parallel set-up in script?: (True) ### Who can help? @sanchit-gandhi since it's about the weights in jax, and @ArthurZucker @younesbelkada since the sample below is based on XLM-R ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ```python from transformers import (AutoConfig, AutoTokenizer, FlaxAutoModelForMaskedLM, AutoModelForMaskedLM) model_name = "xlm-roberta-base" config_name = model_name tokenizer_name = model_name num_labels = 1 config = AutoConfig.from_pretrained(config_name, num_labels=num_labels) tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, use_fast=False) sentence = ["This is a sentence."] def jax_inference(): data = tokenizer(sentence, return_tensors='np', return_attention_mask=True) model = FlaxAutoModelForMaskedLM.from_pretrained(model_name, config=config) embedding = model(**data, params=model.params, train=False)[0] return embedding def jax_from_pt_inference(): data = tokenizer(sentence, return_tensors='np', return_attention_mask=True) model = FlaxAutoModelForMaskedLM.from_pretrained(model_name, config=config, from_pt=True) embedding = model(**data, params=model.params, train=False)[0] return embedding def torch_inference(): data = tokenizer(sentence, return_tensors='pt', return_attention_mask=True) model = AutoModelForMaskedLM.from_pretrained(model_name, config=config) embedding = model(**data, return_dict=True).logits return embedding.cpu().detach().numpy() e1 = jax_inference() e2 = jax_from_pt_inference() e3 = torch_inference() print(e1[0, 0, :10]) print(e2[0, 0, :10]) print(e3[0, 0, :10]) ``` ### Expected behavior The above script was supposed to output the same (or very close) values, however, it would produce: ``` [64.36088 0.12701216 37.773556 26.37121 26.858221 28.791494 25.630554 21.905432 21.001484 25.389727 ] [64.36088 0.12701216 37.773556 26.37121 26.858221 28.791494 25.630554 21.905432 21.001484 25.389727 ] [64.29784 0.12513931 37.865814 26.475258 26.956318 28.914783 25.684874 21.950882 21.039997 25.494867 ] ``` The first two lines are the same (results using jax model, from jax weights or pytorch weights), however, they are different from the third line, the results produced by pytorch model. The difference is around 1~2 decimal points (e.g., `64.36 vs 64.297` and even `26.371 vs 26.475`, which isn't neglectable)
03-28-2023 12:10:13
03-28-2023 12:10:13
Hey @crystina-z! Thanks for the great code example! I see that you're running the script on TPU, could you try repeating the benchmark using the highest JAX matmul precision (see https://github.com/huggingface/transformers/issues/15754#issuecomment-1048163411 for details)? I think this should close the gap to PyTorch. Some more details about the behaviour of JAX matmul here: https://github.com/google/jax/issues/10413#issue-1212211265<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
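A sketch of the suggestion above: run the Flax forward pass under the highest matmul precision on TPU, which should close most of the gap to the PyTorch logits (at some speed cost). The context manager is the standard JAX API; the rest mirrors the reproduction script.

```python
import jax
from transformers import AutoConfig, AutoTokenizer, FlaxAutoModelForMaskedLM

model_name = "xlm-roberta-base"
config = AutoConfig.from_pretrained(model_name, num_labels=1)
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = FlaxAutoModelForMaskedLM.from_pretrained(model_name, config=config)

data = tokenizer(["This is a sentence."], return_tensors="np", return_attention_mask=True)

# Force float32-precision matmuls for this block instead of the TPU default (bfloat16 accumulation)
with jax.default_matmul_precision("float32"):
    logits = model(**data, params=model.params, train=False)[0]
print(logits[0, 0, :10])
```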
transformers
22,424
closed
[`Generate`] Add conditional generation for multimodal models
# Motivation Some multi-modal models (precisely, image to text models) can perform better if conditional text is passed. This simply means that `input_ids` created by `_prepare_decoder_input_ids_for_generation` is concatenated with `input_ids` that is passed along `model_kwargs`. This PR aims to add the support for this feature for `VisionEncoderDecoderModel`, precisely now this script should be able to run without any problem: ```python import torch import requests from PIL import Image from transformers import ViTFeatureExtractor, AutoTokenizer, VisionEncoderDecoderModel loc = "ydshieh/vit-gpt2-coco-en" feature_extractor = ViTFeatureExtractor.from_pretrained(loc) tokenizer = AutoTokenizer.from_pretrained(loc) model = VisionEncoderDecoderModel.from_pretrained(loc) model.eval() def predict(image, text): pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values input_ids = tokenizer(text, return_tensors="pt").input_ids with torch.no_grad(): output_ids = model.generate(pixel_values, input_ids=input_ids, max_length=16, num_beams=4, return_dict_in_generate=True).sequences preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True) preds = [pred.strip() for pred in preds] return preds # We will verify our results on an image of cute cats url = "http://images.cocodataset.org/val2017/000000039769.jpg" text = "an image of" with Image.open(requests.get(url, stream=True).raw) as image: preds = predict(image, text) print(preds) >>> ['an image of two cats sleeping on a bed'] ``` cc @gante Related: https://github.com/huggingface/transformers/pull/22423
03-28-2023 11:23:51
03-28-2023 11:23:51
_The documentation is not available anymore as the PR was closed or merged._<|||||>As this is slightly experimental, I ran blip slow tests that also includes conditional generation tests and they all pass, will merge!<|||||>Hi @younesbelkada , I get a following error during the training stage when providing decoder_input_ids argument. Does this modification only works for inference stage or training too ? ``` -> 3029 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing) ValueError: Expected input batch_size (18) to match target batch_size (512). ``` I used batch_size 2 and beam_size 10 For a batch of examples (during training or inference), does the **input_ids** has to be same shape with padding even though each example prefix can be different length ? Does `input_ids = tokenizer(text, return_tensors="pt").input_ids` has to exclude special_tokens by giving the argument `add_special_tokens=False` <|||||>@younesbelkada @gante @sgugger When I inspected the intermediate outputs during the training, `decoder_input_ids` being shape [2, 8] and `logits` being shape [2, 8, 64002], where batch_size is 2 and prefix length is 8. Looks like `decoder_outputs = self.decoder()` is not predicting anything else right after prefix given `decoder_input_ids`<|||||>Hey @cramraj8 -- would you be able to open a new issue, containing a short self-contained script so we can reproduce it? :)
transformers
22,423
closed
[`pipeline`] Add conditional text support for `ImageToTextPipeline`
# What does this PR do?

This PR adds conditional generation support for image-to-text models. Guiding the model with a text prompt can sometimes yield better results. It also adds `pix2struct` to the models supported by `ImageToTextPipeline`. Since most Pix2Struct models use conditional generation (for VQA) and a single class, `Pix2StructForConditionalGeneration`, wraps both the VQA and image captioning models, the simplest solution is to add conditional text support directly to `ImageToTextPipeline`. A separate `Pix2StructForVQA` class is not implemented because the model renders the question directly onto the image instead of feeding the text input into the model; a single `Pix2StructForConditionalGeneration` class is therefore enough, and the changes are handled on the processor side, which takes care of rendering the text on the image. cc @NielsRogge @Narsil
03-28-2023 09:58:30
03-28-2023 09:58:30
_The documentation is not available anymore as the PR was closed or merged._<|||||>I have doubts about this: My biggest issue is with the expected I/O. Pipeline are **defined** by I/O. In the particular case it's (image) -> (text) And here we're modifying to (image, text) -> (text). This is ok, if and only if the extra text is always purely accessory, which doesn't seem to be the case. - Can blip work **without** a prompt ? if not then it does not respect the pipeline I/O and cannot be used. - All the swap logic seems highly specific and not really great ways to handle the logic (inspecting signature is bad in general). - For instance for models that DO NOT handle extra text, the pipeline is going to start generating errors. - `padding` and `truncation` cannot be top-level parameters, we need to use `tokenizer_kwargs` instead. The reason is that padding and truncation could mean thing towards images too, and to avoid confusion it's best splitting them altogether. In general I think we can reduce the complexity added in the PR a lot by removing a lot of introduced ifs. For reference, the prompt reminds me of `hypothesis_template` within the zero-shot-classification. If a good sane default exists which can alleviate the I/O issue then this becomes a bit better.<|||||>Thanks for your comments! To reply to some of your doubts: > Can blip work without a prompt ? if not then it does not respect the pipeline I/O and cannot be used. Definitely yes, what I meant here is that text is always accessory for most of the models (Blip, Pix2Struct trained on image captioning), however some Pix2Struct models, trained on VQA needs text inputs, but the text inputs are dealt in an unusual way (the question is directly rendered on the image) > All the swap logic seems highly specific and not really great ways to handle the logic (inspecting signature is bad in general). Agreed on that, I have updated the checking logic for pix2struct, I also realized that `vision-encoder-decoder` models does not support conditional generation. However I believe that there is a fix that I can quickly address on another PR, if that PR gets merged, this pipeline would support text-conditioned image-to-text inference for all models. (EDIT: #22424) > padding and truncation cannot be top-level parameters, we need to use tokenizer_kwargs instead. The reason is that padding and truncation could mean thing towards images too, and to avoid confusion it's best splitting them altogether. Agreed, this has been removed > In general I think we can reduce the complexity added in the PR a lot by removing a lot of introduced ifs. I had to come up with this as the CI tests handles multiple input type (list of images, generators, etc.), I'll do my best to refactor this to make things simpler <|||||>@Narsil what's your opinion on my comment above? e.g. `Pix2StructForConditionalGeneration` solves both image captioning and VQA with the same model, using the same approach. You can, in addition to the input image, also feed a text prompt to either 1) guide the captioning 2) ask a question related to the image. In both cases, the model renders the prompt on top of the image to make the prediction. Should we add `Pix2StructForConditionalGeneration` to both the image-to-text and VQA pipelines?<|||||>Is there any difference in the code between vqa and captionning ? In general, pipelines are defined by I/O (input/output meaning (image, text) -> (text)). The rest is semantics, naming and goals. 
For instance NER and token-classification are the same, and alias each other.<|||||>> Is there any difference in the code between vqa and captioning? For models like BLIP-2 and Pix2Struct, the code is identical. For BLIP on the other hand, 2 different models are defined, but other than that the code is also identical (correct me if I'm wrong @younesbelkada). I think adding an optional text prompt to the `ImageToTextPipeline` makes sense; however, I wonder if that doesn't make the VQA pipeline obsolete for those models.<|||||>> however, I wonder if that doesn't make the VQA pipeline obsolete for those models Why does it? You said above that both were correct? Did I misunderstand something?<|||||>Yes, we can technically add them to both pipelines, if you are fine with that.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing in favor of #23362<|||||>@younesbelkada @NielsRogge I wonder whether this conditional text support for the ImageToTextPipeline models enables only the inference stage, or the training stage as well?
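For context, a minimal sketch of what the prompt-conditioned usage discussed above could look like in Python. The `prompt` keyword and the checkpoint name are assumptions for illustration only; the final argument name was still being debated in this thread:

```python
from transformers import pipeline

# Assumption: a BLIP captioning checkpoint and a `prompt` keyword argument.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

# Plain captioning: the pipeline's original (image) -> (text) contract.
print(captioner("path/to/image.png"))

# Prompt-conditioned captioning: the text is purely accessory and only guides generation.
print(captioner("path/to/image.png", prompt="a photography of"))
```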
transformers
22,422
closed
MBart: Fix docs and doctests
# What does this PR do? Related issue: #22397 Most fixes are on the TF side. Elaborated a sentence in the main docstring -- MBart can be used for summarization, but none of the pre-trained models were trained for summarization. Alongside this PR, TF checkpoints for MBart are being pushed to the Hub 🤗
03-28-2023 09:28:19
03-28-2023 09:28:19
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,421
closed
transformers is incompatible with Tensorflow nightly build version
### System Info Python version: 3.7 TF branch: dev (installed by `pip install tf-nightly`) transformers version: 4.27.3 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import TFBertModel ``` then got error like: ``` Traceback (most recent call last): File "/Users/weichen.xu/opt/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1126, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/Users/weichen.xu/opt/miniconda3/envs/py38/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 843, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/Users/weichen.xu/opt/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/bert/modeling_tf_bert.py", line 38, in <module> from ...modeling_tf_utils import ( File "/Users/weichen.xu/opt/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 69, in <module> from keras.engine import data_adapter ModuleNotFoundError: No module named 'keras.engine' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist File "/Users/weichen.xu/opt/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1117, in __getattr__ value = getattr(module, name) File "/Users/weichen.xu/opt/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1116, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/Users/weichen.xu/opt/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1128, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.models.bert.modeling_tf_bert because of the following error (look up to see its traceback): No module named 'keras.engine' ``` ### Expected behavior The importing command should not raise error.
03-28-2023 09:13:51
03-28-2023 09:13:51
cc @Rocketknight1 and @gante<|||||>Hey @WeichenXu123 👋 I get a slightly different (but related) error when I create the `tf-nightly` environment on my end. They may be changing a few things in the namespace (like they did with `keras.saving` in TF 2.12), so the wise decision here is to wait few days before committing to a fix :) ``` >>> from transformers import TFBertModel 2023-03-28 15:03:57.852621: E tensorflow/tsl/lib/monitoring/collection_registry.cc:81] Cannot register 2 metrics with the same name: /tensorflow/api/keras/data_adapters Traceback (most recent call last): File "/home/joao/transformers/src/transformers/utils/import_utils.py", line 1125, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1050, in _gcd_import File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 883, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/home/joao/transformers/src/transformers/models/bert/modeling_tf_bert.py", line 38, in <module> from ...modeling_tf_utils import ( File "/home/joao/transformers/src/transformers/modeling_tf_utils.py", line 69, in <module> from keras.engine import data_adapter File "/home/joao/tf_nightly_venv/lib/python3.10/site-packages/keras/engine/data_adapter.py", line 47, in <module> keras_data_adapter_gauge = tf.__internal__.monitoring.BoolGauge( File "/home/joao/tf_nightly_venv/lib/python3.10/site-packages/tensorflow/python/eager/monitoring.py", line 356, in __init__ super(BoolGauge, self).__init__('BoolGauge', _bool_gauge_methods, File "/home/joao/tf_nightly_venv/lib/python3.10/site-packages/tensorflow/python/eager/monitoring.py", line 131, in __init__ self._metric = self._metric_methods[self._label_length].create(*args) tensorflow.python.framework.errors_impl.AlreadyExistsError: Another metric with the same name already exists. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist File "/home/joao/transformers/src/transformers/utils/import_utils.py", line 1116, in __getattr__ value = getattr(module, name) File "/home/joao/transformers/src/transformers/utils/import_utils.py", line 1115, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/home/joao/transformers/src/transformers/utils/import_utils.py", line 1127, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.models.bert.modeling_tf_bert because of the following error (look up to see its traceback): Another metric with the same name already exists. ```<|||||>Seconding @gante's comment here - TF 2.12 is very recent, and the nightly branch for 2.13 is likely to have multiple changes before release. We don't want to modify `transformers` until the TF team have committed to their API changes and we see a beta or RC0 release. 
Closing this for now, but feel free to ping us and reopen this issue if a beta or release candidate version appears and the incompatibilities are still there!<|||||> > Seconding @gante's comment here - TF 2.12 is very recent, and the nightly branch for 2.13 is likely to have multiple changes before release. We don't want to modify `transformers` until the TF team have committed to their API changes and we see a beta or RC0 release. > > Closing this for now, but feel free to ping us and reopen this issue if a beta or release candidate version appears and the incompatibilities are still there! Sounds good, it is not urgent. :)
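For illustration only, the kind of guarded import that such Keras namespace moves usually end up requiring downstream. The `keras.src.engine` path is an assumption about where the nightly moved the module, not the fix that was eventually adopted:

```python
# Sketch of a version-tolerant import for `data_adapter` (module paths are assumptions).
try:
    from keras.engine import data_adapter  # layout used by Keras <= 2.12
except ImportError:
    from keras.src.engine import data_adapter  # assumed layout of newer Keras nightlies
```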
transformers
22,420
closed
[i18n-<languageCode>] Translating docs to <languageName>
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [x] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [x] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [x] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [x] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 -->
03-28-2023 09:13:21
03-28-2023 09:13:21
transformers
22,419
closed
Problem of training gpt-neox from transformer
### System Info ``` PyTorch Transformer 2x A100 (40 GB) ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I couldn't fine-tune gpt-neox 20b with the transformers library. Also, there is no official documentation on how to train or fine-tune this LLM with ease using the transformers library. https://huggingface.co/EleutherAI/gpt-neox-20b ### Expected behavior It should run.
03-28-2023 08:50:33
03-28-2023 08:50:33
Maybe you should try getting help on the [forums](https://discuss.huggingface.co/)? We use GitHub issues for bugs and feature requests only, and I don't think this is relevant here.<|||||>@sgugger Unfortunately that forum is not very helpful; if you search for 'gpt-neox', you'll find many questions without a response. Not blaming, but I'd like to know if transformers supports training of this large LM or just inference? The documentation is missing for this model, or I've missed it! <|||||>Transformers supports training and inference of all its models, but you will probably need to use other libraries like DeepSpeed or PyTorch FSDP to do such training depending on your hardware. Since this is a model that will be 80GB in float32 by itself, it's very unlikely you will be able to train it on the hardware you provided at the top. Training those models from scratch requires a lot more GPU compute (which is also why there are fewer tutorials available, as it doesn't concern as many people as smaller models). You might have more luck fine-tuning the existing model with the [peft library](https://github.com/huggingface/peft). Otherwise this [doc page](https://huggingface.co/docs/transformers/perf_train_gpu_many) will list resources you can use.
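A hedged sketch of the parameter-efficient route suggested in the last comment, using PEFT/LoRA. The LoRA hyperparameters and the `query_key_value` target module name are illustrative assumptions, not a recommended recipe, and 8-bit loading (via bitsandbytes) still needs a large GPU:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "EleutherAI/gpt-neox-20b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load the 20B model quantized to 8-bit and sharded across the available GPUs.
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_8bit=True, device_map="auto")

# Train only small low-rank adapters instead of all 20B parameters.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["query_key_value"],  # assumption: GPT-NeoX attention projection name
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a tiny fraction of the weights are trainable
```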
transformers
22,418
closed
load_from_checkpoints does not work on fine-tuning Llama 13B
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.27 - Python version: 3.10.8 - Huggingface_hub version: 0.13.3 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: yes ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm trying to finetune Llama 13B on a custom dataset. Training code: ```python model = transformers.LlamaForCausalLM.from_pretrained( model_args.model_name_or_path, cache_dir=training_args.cache_dir, use_auth_token=HF_TOKEN ) tokenizer = transformers.LlamaTokenizer.from_pretrained( model_args.model_name_or_path, cache_dir=training_args.cache_dir, model_max_length=training_args.model_max_length, padding_side="right", use_auth_token=HF_TOKEN, use_fast=False, ) training_args = TrainingArguments("test", evaluation_strategy="steps", eval_steps=100, do_eval=True) trainer = Trainer(tokenizer=tokenizer, args=training_args, **data_module, compute_metrics=compute_metrics, model_init=model_init) ``` Script command: ```sh python3 -m torch.distributed.run \ --nproc_per_node=8 \ --master_port=5102 /root/deepshard/training/train.py \ --model_name_or_path swype/deepshard-13B-raw \ --data_path /root/deepshard/datasets/data/train.jsonl \ --eval_path /root/deepshard/datasets/data/test.jsonl \ --search False \ --fp16 True \ --output_dir /root/deepshard/training/finetuned \ --num_train_epochs 3 \ --per_device_train_batch_size 2 \ --per_device_eval_batch_size 2 \ --gradient_accumulation_steps 4 \ --evaluation_strategy "steps" \ --eval_steps 1000 \ --save_strategy "steps" \ --save_steps 1000 \ --learning_rate 2e-6 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --resume_from_checkpoint True \ --lr_scheduler_type "cosine" \ --logging_steps 10 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' ``` I also try setting `resume_from_checkpoint` to the checkpoint folder, but the script still starts from epoch 0 and starts training from scratch rather than from the checkpoint. ### Expected behavior The `Trainer` should continue from the last saved checkpoint.
03-28-2023 06:39:59
03-28-2023 06:39:59
I'm not too sure how to reconcile the command you typed with the code you wrote, as the code you are showing doesn't accept all the arguments you are passing to the launching command.<|||||>@raj-swype As a debugging step, do you experience the same issue if you remove the `compute_metrics` argument from the `Trainer(...)` initialization? It is causing my Trainer to hang.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I'm confused: what metrics are you trying to compute for a text generation model? I thought you only need to set `prediction_loss_only=True`, because the usual metrics don't really work with this type of model.
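For reference, a short sketch of how resumption is normally triggered: `resume_from_checkpoint` is an argument of `Trainer.train`, and it is not picked up automatically when a fresh `TrainingArguments("test", ...)` is built inside the script, as in the snippet above. The names below reuse the objects defined in that snippet:

```python
from transformers import Trainer

trainer = Trainer(
    model_init=model_init,
    args=training_args,
    tokenizer=tokenizer,
    train_dataset=small_train_dataset,
    eval_dataset=small_test_dataset,
    data_collator=data_collator,
)

# Resume from the latest checkpoint found in `output_dir`...
trainer.train(resume_from_checkpoint=True)
# ...or from an explicit checkpoint folder (hypothetical path):
# trainer.train(resume_from_checkpoint="/root/deepshard/training/finetuned/checkpoint-1000")
```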
transformers
22,417
closed
Add important warning padding attention mask
# What does this PR do? Realase <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-28-2023 06:35:48
03-28-2023 06:35:48
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @gante and @ArthurZucker <|||||>Hey @Alainterieur85 👋 This PR has the same purpose as #21916, opened by @anruijian. @anruijian, since you've opened yours first, you have the priority here: are you planing to continue the work on #21916?<|||||>@gante Sorry for the delay. I will work on #21916. <|||||>In that case, @Alainterieur85, I won't accept these changes :) Closing the PR 🤗
transformers
22,416
closed
[Roformer] Fixing a bug in RoFormerEncoder where it was ignoring the length of past_key_values when generating as a decoder
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This pull request fixes the bug relating to the generation. When using `RoFormerForCausalLM`, the `RoFormerEncoder` generates position embeddings based on the `seq_len` of the `hidden_states`. However, when using the `generate()` method to generate text token by token, the `seq_len` is always 1 and the length of `past_key_values` is ignored. Using the solution from https://github.com/JunnYu/RoFormer_pytorch/blob/roformer_v2/src/roformer/modeling_roformer.py My setup: >- `transformers` version: 4.28.0.dev0 >- Platform: Windows-10-10.0.19045-SP0 >- Python version: 3.10.10 >- Huggingface_hub version: 0.13.3 >- PyTorch version (GPU?): 2.0.0 (False) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @JunnYu @ArthurZucker @younesbelkada
03-28-2023 06:01:15
03-28-2023 06:01:15
_The documentation is not available anymore as the PR was closed or merged._<|||||>Congrats for this first PR 🔥 And thanks for contributing 🤗
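To illustrate the failure mode fixed here, a conceptual sketch (not the actual patch): during incremental decoding, the rotary positions have to start at the length of the cached keys/values rather than at zero.

```python
import torch

def rotary_positions(hidden_states, past_key_value=None):
    seq_len = hidden_states.shape[1]  # 1 at each step of token-by-token generation
    past_len = past_key_value[0].shape[2] if past_key_value is not None else 0
    # Positions of the *new* tokens only, offset by what is already cached.
    return torch.arange(past_len, past_len + seq_len)

# Without a cache: positions 0..4; with 5 cached positions: the new token sits at position 5.
print(rotary_positions(torch.zeros(1, 5, 64)))                                  # tensor([0, 1, 2, 3, 4])
print(rotary_positions(torch.zeros(1, 1, 64), (torch.zeros(1, 8, 5, 8),) * 2))  # tensor([5])
```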
transformers
22,415
closed
Allowing adding new token as unk token for gpt2 tokenizer
What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) #https://github.com/huggingface/transformers/issues/22414 ## Before submitting - [yes ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [yes] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [yes] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ no] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-28-2023 04:54:28
03-28-2023 04:54:28
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22415). All of your documentation changes will be reflected on that endpoint.<|||||>I think we should add some tests to clarify what behavior is modified and how. It could be for just those 4 tokenizers, but still I think the effect of this PR is not entirely clear by just reading it.<|||||>Also let's make sure that the tests are all green ! <|||||>What do you need me to do to help get this merged?<|||||>Hey! As mentioned in my last comments, the CI tests need to be all green 😉 Mostly `make fixup` should help you<|||||>I remember testing for a few models and having some issues with this update in token addition, I'll have to check once the PR is ready<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,414
open
Error while loading GPT2 tokenizer with specifying "unk_token"
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-4.18.0-305.65.1.el8_4.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 1.11.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction For a certain reason, I need to modify the default unk_token of GPT2Tokenizer. Currently, it is "<|endoftext|>". When I tried to change it, I encounter problems. ```python from transformers import GPT2Tokenizer control_tokens ={"sep_token": "<|sep|>", "pad_token": "<|pad|>", "cls_token": "<|cls|>", "mask_token": "<|mask|>", "unk_token": "<|unk|>"} tokenizer = GPT2Tokenizer.from_pretrained("./tokenizer/", **control_tokens) tokenizer.encode(["<|unk|>"]) ``` , where directory ./tokenizer has all tokenizer files provided by gpt2-small: tokenizer.json, merges.txt, vocab.json error information: Traceback (most recent call last): File "./model/unit_test_customed_gpt2.py", line 451, in test_BuildMappingFileTestCase_bpe_mhp_gpt self.tokenizer.build_mapping_file(self.mapped_tokenizer, "./tokenizer/customed-mhp-gpt-bpe/mapping_%s.json"%text, max_length=32, is_chinese_vocab=False) File "/home/X/scratch/variable-text-segmentation/data_utils/sp_tokenizer.py", line 500, in build_mapping_file mapping_ids= mapping_tokenizer.encode(mapped_text,add_special_tokens=False) File "/home/lsiyang/scratch/miniconda3/envs/mix/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2302, in encode encoded_inputs = self.encode_plus( File "/home/X/scratch/miniconda3/envs/mix/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2710, in encode_plus return self._encode_plus( File "/home/X/scratch/miniconda3/envs/mix/lib/python3.8/site-packages/transformers/tokenization_utils.py", line 650, in _encode_plus return self.prepare_for_model( File "/home/X/scratch/miniconda3/envs/mix/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 3189, in prepare_for_model encoded_inputs = self.pad( File "/home/X/scratch/miniconda3/envs/mix/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2979, in pad raise ValueError( ValueError: type of None unknown: <class 'NoneType'>. Should be one of a python, numpy, pytorch or tensorflow object. **I may know the reason. When we specify a new token as unk_token via GPT2Tokenizer.from_pretrained(*, unk_token=XX), it would not first add this new token to the vocabulary but only update self.tokenizer.unk_token=XX. It makes the tokenizer able to correctly return its unk_token but actually cannot find the token id of that new unk_token in the vocab. 
The problem lies in tokenization_utils.py** ```python def _add_tokens(self, new_tokens: Union[List[str], List[AddedToken]], special_tokens: bool = False) -> int: new_tokens = [str(tok) for tok in new_tokens] tokens_to_add = [] for token in new_tokens: if not isinstance(token, str): raise TypeError(f"Token {token} is not a string but a {type(token)}.") if not special_tokens and hasattr(self, "do_lower_case") and self.do_lower_case: token = token.lower() if ( token != self.unk_token #PROBLEM! self.unk_token has been changed to the newest. So newest unk_token cannot be added. and self.convert_tokens_to_ids(token) == self.convert_tokens_to_ids(self.unk_token) and token not in tokens_to_add ): tokens_to_add.append(token) if self.verbose: logger.info(f"Adding {token} to the vocabulary") ``` **For other tokens, like sep_token, it is allowed to specify it via GPT2Tokenizer.from_pretrained(*, sep_token=XX). Even if it doesn't exist in vocab, it would add a new token to vocab.** This is also impossible. ```python from transformers import GPT2Tokenizer control_tokens ={"sep_token": "<|sep|>", "pad_token": "<|pad|>", "cls_token": "<|cls|>", "mask_token": "<|mask|>"} tokenizer = GPT2Tokenizer.from_pretrained("./tokenizer/", **control_tokens) tokenizer.add_special_tokens({"unk_token": "<|unk|>"}) tokenizer.encode(["<|unk|>"]) ``` **I think we should also allow unk_token specification before its existence, like other special tokens.** ### Expected behavior I think we should also allow unk_token specification before its existence, like other special tokens
03-28-2023 02:59:55
03-28-2023 02:59:55
Hey! Indeed this is a problem I stumbled on when integrating `Whisper`. Two things are at play for me here: 1. We should support re-assignment of the unk token, so yes PR makes sens (and I think it makes sens for all tokenizers). The following output is not good: ```python In [9]: tokenizer.all_special_ids Out[9]: [50256, None, 50257, 50258, 50259, 50260] ``` Which is what we get when trying to add this token. So I am in for the fix 2. As we can see in the traceback, when a token is OOV, we don't raise an error ourself, which ends up being a bit hard to debug. We can't really change the default behaviour for GPT2 (it's too old), but we can raise the error ourselves! ( I'll probably tackle this in another PR!) Good catch! 🔥 (cc @Narsil fyi)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I still have this issue when using : tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased') This is the output error from bark : Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. tokenizer.all_special_ids : [None, 0, 1, 2, 3] BertTokenizer(name_or_path='bert-base-multilingual-cased', vocab_size=0, model_max_length=512, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'unk_token': '<|unk|>', 'sep_token': '<|sep|>', 'pad_token': '<|pad|>', 'cls_token': '<|cls|>', 'mask_token': '<|mask|>'}, clean_up_tokenization_spaces=True) Traceback (most recent call last): File "/home/gpc2/codes_ood/Codes/TTS/bark/text_to_speech_bark.py", line 11, in <module> audio_array = generate_audio(text_prompt) File "/home/gpc2/codes_ood/Codes/TTS/bark/bark/api.py", line 107, in generate_audio semantic_tokens = text_to_semantic( File "/home/gpc2/codes_ood/Codes/TTS/bark/bark/api.py", line 25, in text_to_semantic x_semantic = generate_text_semantic( File "/home/gpc2/codes_ood/Codes/TTS/bark/bark/generation.py", line 434, in generate_text_semantic encoded_text = np.array(_tokenize(tokenizer, text)) + TEXT_ENCODING_OFFSET File "/home/gpc2/codes_ood/Codes/TTS/bark/bark/generation.py", line 356, in _tokenize return tokenizer.encode(text, add_special_tokens=False) File "/home/gpc2/anaconda3/envs/valle/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2319, in encode encoded_inputs = self.encode_plus( File "/home/gpc2/anaconda3/envs/valle/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2727, in encode_plus return self._encode_plus( File "/home/gpc2/anaconda3/envs/valle/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 652, in _encode_plus return self.prepare_for_model( File "/home/gpc2/anaconda3/envs/valle/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3206, in prepare_for_model encoded_inputs = self.pad( File "/home/gpc2/anaconda3/envs/valle/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2996, in pad raise ValueError( ValueError: type of None unknown: <class 'NoneType'>. Should be one of a python, numpy, pytorch or tensorflow object. <|||||>Thanks for reporting, as you can see the PR is still open, the bug has not been adressed yet! 
I'll take care of it, this is also related to the potential refactoring of how tokens are added.
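A minimal check sketch based on the report above, showing how the reassigned unk token ends up with no id (which later surfaces as the confusing "type of None unknown" error during padding):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2", unk_token="<|unk|>")
print(tokenizer.unk_token)                                   # "<|unk|>"
print(tokenizer.convert_tokens_to_ids(tokenizer.unk_token))  # None -> the token was never added to the vocab
print(tokenizer.all_special_ids)                             # contains None, as shown in the comment above
```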
transformers
22,413
open
Add interpolation of position encodings to BLIP-2
### Feature request ViT as implemented in Hugging Face Transformers has a feature that enables fine-tuning with a different resolution of images https://huggingface.co/docs/transformers/model_doc/vit#transformers.ViTModel.forward.interpolate_pos_encoding while the newly implemented BLIP-2 model does not. I would like to add this, following the ViT implementation. ### Motivation I was playing around with the model to see whether a different (mainly higher) resolution of input images helps downstream tasks. (Curious to get feedback on whether this feature is needed or not, for the sake of keeping the code simple.) ### Your contribution It's mostly copying & pasting `interpolate_pos_encoding` from the ViT implementation, but I have working code ready for a PR to get reviewed (and bugs addressed).
03-28-2023 02:28:21
03-28-2023 02:28:21
It would be good if the CLIP-pretrained model had `interpolate_pos_encoding` like ViT.<|||||>@amyeroberts Shall I open a pull request? Have one handy.<|||||>Hi @akkikiki, thanks for opening this issue! `interpolate_pos_encoding` was added to the ViT model to enable cross-loading of DINO weights into the architecture. In general, we try and keep the forward passes of the models as simple as possible (few if/else branches). As such, it's not something that we'll be adding to the model at the moment. Let's keep this issue open; if there are many requests for it from the community (I'll measure with 👍 on your issue description) then we can revisit. If you have your own fork with these changes, feel free to share it here so others can see and benefit from your work. <|||||>Sounds good! +1 to follow the "keep it simple (and stupid)" principle.
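For readers landing here, a conceptual sketch of what position-embedding interpolation does in the ViT implementation referenced above (simplified; shapes and names are illustrative, not the exact ViT/BLIP-2 code):

```python
import torch
import torch.nn.functional as F

def interpolate_pos_encoding(pos_embed, new_grid):
    # pos_embed: (1, 1 + old_grid**2, dim), with a leading [CLS] position embedding.
    cls_pos, patch_pos = pos_embed[:, :1], pos_embed[:, 1:]
    dim = pos_embed.shape[-1]
    old_grid = int(patch_pos.shape[1] ** 0.5)
    patch_pos = patch_pos.reshape(1, old_grid, old_grid, dim).permute(0, 3, 1, 2)
    patch_pos = F.interpolate(patch_pos, size=new_grid, mode="bicubic", align_corners=False)
    patch_pos = patch_pos.permute(0, 2, 3, 1).reshape(1, -1, dim)
    return torch.cat([cls_pos, patch_pos], dim=1)

# 224px images with 16px patches -> 14x14 grid; 384px images -> 24x24 grid.
print(interpolate_pos_encoding(torch.randn(1, 1 + 14 * 14, 768), (24, 24)).shape)
# torch.Size([1, 577, 768])
```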
transformers
22,412
closed
ValueError: invalid literal for int() with base 10: ''
### System Info - `transformers` version: 4.27.3 - Platform: macOS-13.2.1-x86_64-i386-64bit - Python version: 3.8.16 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 1.12.1 (False) - Tensorflow version (GPU?): 2.11.1 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: True - Using distributed or parallel set-up in script?: True (via Horovod) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction This might complex, but this happens very occasionally. ``` [1,0]<stderr>:Traceback (most recent call last): [1,0]<stderr>: File "run_training.py", line 12, in <module> [1,0]<stderr>: from proj.modeling import Model1, Model2 [1,0]<stderr>: File "/model_proj/proj/modeling.py", line 6, in <module> [1,0]<stderr>: from transformers import BertConfig, TFBertMainLayer [1,0]<stderr>: File "/usr/local/lib/python3.8/dist-packages/transformers/__init__.py", line 26, in <module> [1,0]<stderr>: from . import dependency_versions_check [1,0]<stderr>: File "/usr/local/lib/python3.8/dist-packages/transformers/dependency_versions_check.py", line 17, in <module> [1,0]<stderr>: from .utils.versions import require_version, require_version_core [1,0]<stderr>: File "/usr/local/lib/python3.8/dist-packages/transformers/utils/__init__.py", line 56, in <module> [1,0]<stderr>: from .hub import ( [1,0]<stderr>: File "/usr/local/lib/python3.8/dist-packages/transformers/utils/hub.py", line 1085, in <module> [1,0]<stderr>: cache_version = int(f.read()) [1,0]<stderr>:ValueError: invalid literal for int() with base 10: '' ``` 1. Just import Transformers, and the error happens, randomly. Retrying import fixes the issue, so I had ignored this for a while. However, this had broken some of my infrastructures yesterday. This must be fixed. ### Expected behavior It must be imported without errors.
03-28-2023 01:53:58
03-28-2023 01:53:58
Thanks for reporting! The PR above should fix this.<|||||>Closing the issue since the PR has been merged into the main branch :)
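Purely as an illustration of the failure (an empty or partially written cache-version file crashes `int()`), a defensive-parsing sketch; this is not the actual patch that was merged:

```python
import os

def read_cache_version(path: str) -> int:
    # Tolerate a missing, empty, or half-written file instead of crashing on int("").
    if not os.path.isfile(path):
        return 0
    with open(path) as f:
        text = f.read().strip()
    return int(text) if text.isdigit() else 0
```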
transformers
22,411
closed
Fix bug in perplexity guide calculations and update perplexity numbers. Fixes #22348
# What does this PR do? This pull request fixes the bug mentioned in issue #22348 relating to the Perplexity concept guide in the English documentation. It removes the multiplication by `trg_len` (which was incorrect), and uses `mean` outside the loop instead of sum and division. This both fixes the bug and simplifies the code. I've updated the comments in the example code to reflect the changes, as well as leaving a note about the sub-optimal nature of the code. N.B. This pull request does not touch the documentation in other languages besides English. N.B. These changes are based on my understanding of perplexity and how the Transformers library works. I'm not an expert. Finally, the perplexity numbers were updated based on the new code. I ran the example on my machine to get those new numbers. My setup: ``` PyTorch Version: 2.0.0 GPU: NVIDIA GeForce RTX 3090 CUDA Version: 11.7 CUDA Device Capability: (8, 6) CUDA Toolkit Version: 11 Transformers Version: 4.27.2 ``` Fixes #22348 ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
03-27-2023 23:16:44
03-27-2023 23:16:44
_The documentation is not available anymore as the PR was closed or merged._
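A runnable sketch consistent with the corrected aggregation described above (simplified relative to the guide, and not the exact code from this PR): collect the mean negative log-likelihood of each sliding window and exponentiate their average, instead of summing `loss * trg_len` and dividing.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

encodings = tokenizer("A short piece of text to score, repeated. " * 60, return_tensors="pt")
max_length, stride = model.config.n_positions, 512
seq_len = encodings.input_ids.size(1)

nlls, prev_end = [], 0
for begin in range(0, seq_len, stride):
    end = min(begin + max_length, seq_len)
    trg_len = end - prev_end  # number of tokens actually scored in this window
    input_ids = encodings.input_ids[:, begin:end].to(device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100  # mask the overlapping context
    with torch.no_grad():
        nlls.append(model(input_ids, labels=target_ids).loss)
    prev_end = end
    if end == seq_len:
        break

ppl = torch.exp(torch.stack(nlls).mean())
print(ppl)
```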
transformers
22,410
closed
Bump redis from 4.1.4 to 4.5.3 in /examples/research_projects/decision_transformer
Bumps [redis](https://github.com/redis/redis-py) from 4.1.4 to 4.5.3. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/redis/redis-py/releases">redis's releases</a>.</em></p> <blockquote> <h2>4.5.3</h2> <h1>Changes</h1> <p>Update urgency: HIGH: There is a critical bug that may affect a subset of users. Upgrade!</p> <h2>🐛 Bug Fixes</h2> <ul> <li><a href="https://cwe.mitre.org/data/definitions/404.html">CWE-404</a> AsyncIO Race Condition Fix (<a href="https://redirect.github.com/redis/redis-py/issues/2624">#2624</a>, <a href="https://redirect.github.com/redis/redis-py/issues/2579">#2579</a>)</li> </ul> <h2>4.5.2</h2> <h1>Changes</h1> <h2>🚀 New Features</h2> <ul> <li>Introduce AbstractConnection so that UnixDomainSocketConnection can call super().<strong>init</strong> (<a href="https://redirect.github.com/redis/redis-py/issues/2588">#2588</a>)</li> <li>Added queue_class to REDIS_ALLOWED_KEYS (<a href="https://redirect.github.com/redis/redis-py/issues/2577">#2577</a>)</li> <li>Made search document subscriptable (<a href="https://redirect.github.com/redis/redis-py/issues/2615">#2615</a>)</li> <li>Sped up the protocol parsing (<a href="https://redirect.github.com/redis/redis-py/issues/2596">#2596</a>)</li> </ul> <h2>🐛 Bug Fixes</h2> <ul> <li>Fix behaviour of async PythonParser to match RedisParser as for issue <a href="https://redirect.github.com/redis/redis-py/issues/2349">#2349</a> (<a href="https://redirect.github.com/redis/redis-py/issues/2582">#2582</a>)</li> <li>Replace async_timeout by asyncio.timeout (<a href="https://redirect.github.com/redis/redis-py/issues/2602">#2602</a>)</li> <li>Update json().arrindex() default values (<a href="https://redirect.github.com/redis/redis-py/issues/2611">#2611</a>)</li> </ul> <h2>🧰 Maintenance</h2> <ul> <li>Coverage for pypy-3.9 (<a href="https://redirect.github.com/redis/redis-py/issues/2608">#2608</a>)</li> <li>Developer Experience: Adding redis version compatibility details to the README (<a href="https://redirect.github.com/redis/redis-py/issues/2621">#2621</a>)</li> <li>Remove redundant assignment to RedisCluster.nodes_manager. 
(<a href="https://redirect.github.com/redis/redis-py/issues/2620">#2620</a>)</li> <li>Developer Experience: [types] update return type of smismember to list[int] (<a href="https://redirect.github.com/redis/redis-py/issues/2617">#2617</a>)</li> <li>Developer Experience: [docs] ConnectionPool SSL example (<a href="https://redirect.github.com/redis/redis-py/issues/2605">#2605</a>)</li> <li>Developer Experience: Fixed CredentialsProvider examples (<a href="https://redirect.github.com/redis/redis-py/issues/2587">#2587</a>)</li> <li>Developer Experience: Update README to make pip install copy-pastable on zsh (<a href="https://redirect.github.com/redis/redis-py/issues/2584">#2584</a>)</li> <li>Developer Experience: Fix for <code>lpop</code> and <code>rpop</code> return typing (<a href="https://redirect.github.com/redis/redis-py/issues/2590">#2590</a>)</li> </ul> <h2>Contributors</h2> <p>We'd like to thank all the contributors who worked on this release!</p> <p><a href="https://github.com/CrimsonGlory"><code>@​CrimsonGlory</code></a>, <a href="https://github.com/Galtozzy"><code>@​Galtozzy</code></a>, <a href="https://github.com/aksinha334"><code>@​aksinha334</code></a>, <a href="https://github.com/barshaul"><code>@​barshaul</code></a>, <a href="https://github.com/chayim"><code>@​chayim</code></a>, <a href="https://github.com/davemcphee"><code>@​davemcphee</code></a>, <a href="https://github.com/dvora-h"><code>@​dvora-h</code></a>, <a href="https://github.com/kristjanvalur"><code>@​kristjanvalur</code></a>, <a href="https://github.com/ryin1"><code>@​ryin1</code></a>, <a href="https://github.com/sileht"><code>@​sileht</code></a>, <a href="https://github.com/thebarbershop"><code>@​thebarbershop</code></a>, <a href="https://github.com/uglide"><code>@​uglide</code></a>, <a href="https://github.com/woutdenolf"><code>@​woutdenolf</code></a> and <a href="https://github.com/zakaf"><code>@​zakaf</code></a></p> <h2>4.5.1</h2> <h1>Changes</h1> <h2>🐛 Bug Fixes</h2> <ul> <li>Fix <a href="https://redirect.github.com/redis/redis-py/issues/2581">#2581</a> <code>UnixDomainSocketConnection</code> object has no attribute <code>_command_packer</code> (<a href="https://redirect.github.com/redis/redis-py/issues/2583">#2583</a>)</li> </ul> <h2>Contributors</h2> <p>We'd like to thank all the contributors who worked on this release!</p> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/redis/redis-py/commit/66a4d6b2a493dd3a20cc299ab5fef3c14baad965"><code>66a4d6b</code></a> AsyncIO Race Condition Fix (<a href="https://redirect.github.com/redis/redis-py/issues/2641">#2641</a>)</li> <li><a href="https://github.com/redis/redis-py/commit/318b114f4da9846a2a7c150e1fb702e9bebd9fdf"><code>318b114</code></a> Version 4.5.2 (<a href="https://redirect.github.com/redis/redis-py/issues/2627">#2627</a>)</li> <li><a href="https://github.com/redis/redis-py/commit/1b2f408259405d412d7530291902f9e0c8bd34b3"><code>1b2f408</code></a> Fix behaviour of async PythonParser to match RedisParser as for issue <a href="https://redirect.github.com/redis/redis-py/issues/2349">#2349</a> (...</li> <li><a href="https://github.com/redis/redis-py/commit/7d474f90453c7b90bd06c94e0250b618120a599d"><code>7d474f9</code></a> introduce AbstractConnection so that UnixDomainSocketConnection can call supe...</li> <li><a href="https://github.com/redis/redis-py/commit/c87172347584301f453c601c483126e4800257b7"><code>c871723</code></a> pypy-3.9 CI (<a href="https://redirect.github.com/redis/redis-py/issues/2608">#2608</a>)</li> <li><a href="https://github.com/redis/redis-py/commit/d63313bf6080acaf18d61e072c78303adc0d4166"><code>d63313b</code></a> add queue_class to REDIS_ALLOWED_KEYS (<a href="https://redirect.github.com/redis/redis-py/issues/2577">#2577</a>)</li> <li><a href="https://github.com/redis/redis-py/commit/c61eeb2e3b5dff1f01eb1e665f424c7e75354f56"><code>c61eeb2</code></a> Adding supported redis/library details (<a href="https://redirect.github.com/redis/redis-py/issues/2621">#2621</a>)</li> <li><a href="https://github.com/redis/redis-py/commit/25e85e51e57b7aae9eb8fc77cfb0a45a07a501a7"><code>25e85e5</code></a> fix: replace async_timeout by asyncio.timeout (<a href="https://redirect.github.com/redis/redis-py/issues/2602">#2602</a>)</li> <li><a href="https://github.com/redis/redis-py/commit/91ab12a0f1bdf0e433131e1a51578e9fa2f89718"><code>91ab12a</code></a> Remove redundant assignment. (<a href="https://redirect.github.com/redis/redis-py/issues/2620">#2620</a>)</li> <li><a href="https://github.com/redis/redis-py/commit/8bfd492240fd33489a86cd3d353e3ece1fc94c10"><code>8bfd492</code></a> Making search document subscriptable (<a href="https://redirect.github.com/redis/redis-py/issues/2615">#2615</a>)</li> <li>Additional commits viewable in <a href="https://github.com/redis/redis-py/compare/v4.1.4...v4.5.3">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=redis&package-manager=pip&previous-version=4.1.4&new-version=4.5.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
[//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
03-27-2023 22:31:33
03-27-2023 22:31:33
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,409
closed
Llama: retrocompatibility support for inner layers
# What does this PR do?
03-27-2023 18:52:41
03-27-2023 18:52:41
_The documentation is not available anymore as the PR was closed or merged._<|||||>(It seems like it is no longer needed, closing)<|||||>@gante hi, I also got the error "'NoneType' object is not subscriptable" in apply_rotary_pos_emb. Why is it no longer needed? Why not merge it?<|||||>Hi @Adam1679 -- have you followed the conversation in [this issue](https://github.com/huggingface/transformers/issues/22407)? It explains the rationale behind the change 🤗 <|||||>@gante yes, but I encountered the same issue without using the GPTQ-for-LLaMA package.<|||||>@Adam1679 then one of two things is happening: 1. You are using exclusively Hugging Face libraries -- ensure you have the latest versions and, if the error persists, please open an issue with a short snippet that reproduces the issue 2. You are using external libraries -- request their owners to update the transformers version and/or update their internal code
transformers
22,408
closed
ray hyperparameter_search - ModuleNotFoundError: No module named 'evaluate_modules'
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.4.0-1097-aws-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): 2.11.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: not explicitly, selected "ray" as `trainer.hyperparameter_search` backend on a Databricks cluster with 2 workers ### Who can help? @richardliaw, @amogkam ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ## Note I do see a similar issue https://github.com/huggingface/transformers/issues/11565, would similar fix also apply for this case? ## Code snippet ```python """ tokenizer = ... small_train_dataset = ... small_test_dataset = ... data_collator = ... """ ### import numpy as np import evaluate f1_metric = evaluate.load("f1") def compute_metrics(eval_pred): logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) return f1_metric.compute(predictions=predictions, references=labels) ### from transformers import AutoModelForSequenceClassification def model_init(): return AutoModelForSequenceClassification.from_pretrained( base_model, num_labels=2, return_dict=True) ### from transformers import TrainingArguments, Trainer training_args = TrainingArguments(output_dir=training_output_dir, evaluation_strategy="steps", eval_steps=500, save_total_limit=20, disable_tqdm=True) ### trainer = Trainer( args=training_args, tokenizer=tokenizer, train_dataset=small_train_dataset, eval_dataset=small_test_dataset, model_init=model_init, compute_metrics=compute_metrics, # uses compute_metrics defined above data_collator=data_collator, ) ### # the code that triggered error trainer.hyperparameter_search( direction="maximize", backend="ray", n_trials=10 # number of trials ) ``` ## Error Message The same error showed up for each trial (all 10 trials failed), ```sh 2023-03-24 13:08:07,642 ERROR trial_runner.py:1062 -- Trial _objective_d2895_00000: Error processing event. 
Traceback (most recent call last): File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-585a9e45-1e91-40e0-a214-8e2132580d15/lib/python3.10/site-packages/ray/tune/execution/ray_trial_executor.py", line 1276, in get_next_executor_event future_result = ray.get(ready_future) File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-585a9e45-1e91-40e0-a214-8e2132580d15/lib/python3.10/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper return func(*args, **kwargs) File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-585a9e45-1e91-40e0-a214-8e2132580d15/lib/python3.10/site-packages/ray/_private/worker.py", line 2380, in get raise value.as_instanceof_cause() ray.exceptions.RayTaskError: ray::ImplicitFunc.train() (pid=1068, ip=10.68.133.32, repr=_objective) File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-585a9e45-1e91-40e0-a214-8e2132580d15/lib/python3.10/site-packages/ray/tune/trainable/trainable.py", line 368, in train raise skipped from exception_cause(skipped) File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-585a9e45-1e91-40e0-a214-8e2132580d15/lib/python3.10/site-packages/ray/tune/trainable/function_trainable.py", line 337, in entrypoint return self._trainable_func( File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-585a9e45-1e91-40e0-a214-8e2132580d15/lib/python3.10/site-packages/ray/tune/trainable/function_trainable.py", line 654, in _trainable_func output = fn() File "/databricks/python/lib/python3.10/site-packages/transformers/integrations.py", line 332, in dynamic_modules_import_trainable return trainable(*args, **kwargs) File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-585a9e45-1e91-40e0-a214-8e2132580d15/lib/python3.10/site-packages/ray/tune/trainable/util.py", line 397, in inner fn_kwargs[k] = parameter_registry.get(prefix + k) File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-585a9e45-1e91-40e0-a214-8e2132580d15/lib/python3.10/site-packages/ray/tune/registry.py", line 244, in get return ray.get(self.references[k]) ray.exceptions.RaySystemError: System error: No module named 'evaluate_modules' traceback: Traceback (most recent call last): ModuleNotFoundError: No module named 'evaluate_modules' ``` ### Expected behavior According to the blog post (https://huggingface.co/blog/ray-tune), I would expect each trial to complete without errors.
03-27-2023 18:14:17
03-27-2023 18:14:17
Can you try moving ``import evaluate``, ``f1_metric``, and ``compute_metrics`` into ``model_init`` for now? This is a workaround that should unblock you. We need to fix this import the same way as in this previous PR: https://github.com/huggingface/transformers/pull/12749<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
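A minimal sketch of one way to apply this workaround, loading the metric lazily inside `compute_metrics` so that Ray never has to pickle an object from the dynamically generated `evaluate_modules` package (the exact placement of the lazy import is an assumption, not the fix from the linked PR):

```python
import numpy as np


def compute_metrics(eval_pred):
    # Import and load the metric inside the function so it is resolved on the
    # Ray worker process rather than captured from the driver's module scope.
    import evaluate

    f1_metric = evaluate.load("f1")
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return f1_metric.compute(predictions=predictions, references=labels)
```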
transformers
22,407
closed
modeling_llama - LlamaAttention attempts to subscript `None` position_ids
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.13.3 - Safetensors version: 0.3.0 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction When trying to convert llama weights with https://github.com/qwopqwop200/GPTQ-for-LLaMa I encountered the following: ``` ❯ CUDA_VISIBLE_DEVICES=0 python llama.py ./models/hf/13B/llama-13b c4 --wbits 4 --true-sequential --act-order --new-eval --save_safetensors llama-13b-4bit.safetensors Starting ... Ready. Traceback (most recent call last): File "./GPTQ-for-LLaMA/llama.py", line 449, in <module> quantizers = llama_sequential(model, dataloader, DEV) File "./GPTQ-for-LLaMA/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "./GPTQ-for-LLaMA/llama.py", line 100, in llama_sequential outs[j] = layer(inps[j].unsqueeze(0), attention_mask=attention_mask)[0] File "./GPTQ-for-LLaMA/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "./GPTQ-for-LLaMA/venv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 311, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( File "./GPTQ-for-LLaMA/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "./GPTQ-for-LLaMA/venv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 220, in forward query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids) File "./GPTQ-for-LLaMA/venv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 132, in apply_rotary_pos_emb gather_indices = position_ids[:, None, :, None] # [bs, 1, seq_len, 1] TypeError: 'NoneType' object is not subscriptable ``` This appears to be due to a recent change in 7dcd8703ef904adc3ac19b47f769879221c33849 - LlamaAttention passes position_ids to apply_rotary_pos_emb, but defaults them to `None` and does not generate them if missing (unlike LlamaModel, which appears to generate them). ### Expected behavior `None` position_ids should not be passed to `apply_rotary_pos_emb`. I'm not quite sure of what the right fix here is, but at a minimum, I suspect that if the caller is expected to provide them, defaulting to `None` is incorrect.
03-27-2023 17:36:11
03-27-2023 17:36:11
Sorry, I failed to autocomplete @gante 's handle on the inital ticket. Adding a comment for the tag.<|||||>Hey @cheald 👋 For context, `position_ids` is required for correct behavior with left-padding, which in turn is needed for batched generation. Having a look at the issue!<|||||>Yup. I don't have the context to grok the proper place to be creating and passing them, but it seems like an interface error, at the minimum, to make a parameter optional and then use it non-optionally.<|||||>@cheald The issue stems from the `GPTQ-for-Llama` package, which should catch all intermediary inputs for proper quantization. I've [opened an issue there](https://github.com/qwopqwop200/GPTQ-for-LLaMa/issues/89). You can follow it and make the corresponding local changes, which should work 🤗 However, the ball is on their side -- the changes we made are retrocompatible with our public API and, while we avoid creating these sort of issues, we have no bandwidth to fix problems regarding the use of internal variables/methods. Is there anything else I can help you with? :)<|||||>All good. I'd suggest that an interface change to LlamaAttention to remove the `None` default value for `position_ids` would be appropriate, making the parameter required; it seems like a bit of a landmine to have a nominally optional argument which causes an exception if it's not provided (or, perhaps, at least an explicit check and exception if they're missing). If the answer is "no, for the purposes of API compatibility", then that's fine, but at least then this ticket might help the next person to run into it! Thanks so much - I realize this is cut-myself-on-the-bleeding-edge stuff, but I appreciate the swift help!<|||||>@cheald Due to Llama's popularity, I've made an exception -- [this PR](https://github.com/huggingface/transformers/pull/22409) should make it retrocompatible. Would you be able to test it on your end? 🤗 <|||||>I'll test it in a bit. Thank you so much (for this, and for all the amazing work you do on the transformers project!)<|||||>My quantization pass is still running (it takes quite some time), but it appears this is working as intended. Thank you! :tada: <|||||>@cheald hehe it turns out it is no longer needed, as the maintainers of `GPTQ-for-Llama` have pushed a fix on their end!
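For reference, a standalone sketch of what default `position_ids` look like when a caller drives a decoder layer directly: they are simply `0..seq_len-1` per batch row, which is what `LlamaModel` builds internally when none are passed (the shapes here are illustrative only):

```python
import torch

batch_size, seq_len = 2, 8
# Default positions: one row of 0..seq_len-1 per batch element.
position_ids = torch.arange(seq_len).unsqueeze(0).expand(batch_size, -1)
print(position_ids.shape)  # torch.Size([2, 8])
# A caller invoking LlamaDecoderLayer directly would pass this as `position_ids=position_ids`.
```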
transformers
22,406
closed
[Whisper] Potential inconsistencies across whisper tokenizers (`pad_token_id`)
### System Info `transformers` version: 4.27.2 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.3 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @sanchit-gandhi ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction There seems to be a mismatch between english and multilingual versions of the whisper models, which are present on the hub: | Size | Parameters | English-only | Multilingual | |----------|------------|------------------------------------------------------|-----------------------------------------------------| | tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) | | base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) | | small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) | | medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) | | large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) | | large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) | --- For example, the tiny models: - the [english version](https://huggingface.co/openai/whisper-tiny.en/blob/main/generation_config.json) uses a pad token of 50256 - the [multilingual version](https://huggingface.co/openai/whisper-tiny/blob/main/generation_config.json) uses a pad token of 50257 I would assume that both models would use similar vocabularies? I could be mistaken though. The multilingual model might of course include additional tokens for languages, but the pad_token (and other special tokens) would probably have the same IDs? Moreover, the pad_token_ids were recently updated for multilingual versions (see [commit](https://huggingface.co/openai/whisper-tiny/commit/a8d76517e6d65d92771752dbbf5e9c0a1a5b3a0d)). ### Expected behavior Tokenizers should use the same `pad_token_id`
03-27-2023 17:34:31
03-27-2023 17:34:31
Actually... looking at the vocabularies for each tokenizer: - [multilingual](https://huggingface.co/openai/whisper-tiny/raw/main/vocab.json)'s token with id 50257 is `<|endoftext|>`, vs `""`, which has id 50256 - [english](https://huggingface.co/openai/whisper-tiny.en/raw/main/vocab.json)'s token with id 50256 is `<|endoftext|>` (and does not seem to have a `""` token) --- I assume this is intended then. Will close the issue!<|||||>It is indeed intended! The English-only and multilingual tokenizers have different vocabulary items, and hence different index: vocab mappings, so the padding token is in different indices for each.
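A quick way to verify this from the tokenizers themselves (the ids in the comments below are the values reported in this thread):

```python
from transformers import WhisperTokenizer

multilingual = WhisperTokenizer.from_pretrained("openai/whisper-tiny")
english = WhisperTokenizer.from_pretrained("openai/whisper-tiny.en")

print(multilingual.convert_tokens_to_ids("<|endoftext|>"))  # 50257 in the multilingual vocab
print(english.convert_tokens_to_ids("<|endoftext|>"))       # 50256 in the English-only vocab
print(multilingual.pad_token_id, english.pad_token_id)
```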
transformers
22,405
closed
model.generate temperature parameter is completely ineffective
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.31 - Python version: 3.10.4 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help? @gante @sg ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Hello, I am trying to generate text using different models and different temperature parameters. I have noticed, however, that while changing hyperparameters such as `num_beams` affects the output text, changing the `temperature` parameter doesn't seem to do anything, and setting a temperature to 0.0 or 1.0 (very different) always leads to the same output. This has been observed across multiple different language models. I suspect this might be a bug such that the set temperature is not shown to the model. In order to reproduce, run the example below (feel free to try with different text samples to convince yourself it's not a one-off occurrence) ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, AutoConfig from accelerate import init_empty_weights, infer_auto_device_map import torch tokenizer = AutoTokenizer.from_pretrained('bigscience/T0pp') # feel free to try a different model config = AutoConfig.from_pretrained('bigscience/T0pp') max_memory={i: "24GiB" for i in range(torch.cuda.device_count())} with init_empty_weights(): model = AutoModelForSeq2SeqLM.from_config(config) device_map = infer_auto_device_map(model, no_split_module_classes=['T5Block']) print(device_map) device_map['lm_head'] = 0 model = AutoModelForSeq2SeqLM.from_pretrained('bigscience/T0pp', device_map=device_map, load_in_8bit=True, max_memory=max_memory) text = "Complete the following story: Once upon a time there was a " input_ids = tokenizer.encode(text, return_tensors='pt').to(0) for temp in [0.0, 1.0]: beam_outputs = model.generate( input_ids, max_length=512, num_beams=5, no_repeat_ngram_size=4, temperature=temp, num_return_sequences=1, early_stopping=True, ) print(tokenizer.decode(beam_outputs[0], skip_special_tokens=True)) ``` ### Expected behavior I would expect the two printed outputs to be different. I understand that occasionally they might be the same, but I've tried with over 1,000 different inputs, and the generated outputs with `temperature=0` and `temperature=1` are ALWAYS the same which means there is something wrong
03-27-2023 16:47:42
03-27-2023 16:47:42
Hey @AndreaSottana 👋 I would recommend reading our [blog post on how to generate](https://huggingface.co/blog/how-to-generate). TL;DR -- there are several generation modes, and not all `.generate()` parameters are active for a given generation mode. In particular, the popular `temperature`, `top_p`, and `top_k` are only active when `do_sample=True` is also passed. Some tasks benefit from `do_sample=True`, while others do not. Popular tools like `ChatGPT` operate with sampling. We are aware that our `.generate()` has too many options and too little checks/examples, we are working on it 🤗 <|||||>Hi @gante Thank you very much for clarifying, I wasn't aware that some parameters were not effective when `do_sample=False`. Closing the issue for now
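A standalone sketch of the difference, using a small model purely for illustration: with `do_sample=True` the temperature actually changes the sampling distribution, while without it the call falls back to deterministic search and the value is ignored:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
input_ids = tokenizer("Once upon a time there was a", return_tensors="pt").input_ids

for temperature in [0.3, 1.5]:
    output = model.generate(
        input_ids,
        do_sample=True,        # required for temperature/top_p/top_k to take effect
        temperature=temperature,
        max_new_tokens=30,
    )
    print(temperature, tokenizer.decode(output[0], skip_special_tokens=True))
```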
transformers
22,404
closed
Trainer: missing None check
# What does this PR do? Adds a missing `None` check. This is causing the example test to fail.
03-27-2023 16:47:01
03-27-2023 16:47:01
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,403
closed
[WIP] Add GeoV model
# What does this PR do? This PR adds the 9B parameter GeoV language model trained by Georges Harik.
03-27-2023 16:30:15
03-27-2023 16:30:15
cc @ArthurZucker <|||||>[Model weights GeoV/GeoV-9b](https://huggingface.co/GeoV/GeoV-9b)<|||||>Hey @vpj feel free to ping me for any guidance ! 😉 Also if you need a review tell me<|||||>Yeah need a review. Im new to huggingface transformers. Just went by the tutorials. Let me know what else needs to be done in order to merge this. Thanks<|||||>@ArthurZucker <|||||>Sure ! Reviewing now! <|||||>@ArthurZucker I pushed a bunch of changes and replied to your comments. Can you please take a look. Thanks<|||||>Sure! Can you also follow the instruction in the failures of the CI ( for example add `geoV` to the `toctree.yml` and running `make style` to reformat the files) ? Also the `tests_tf` seems to be failing because of the import of `GeoVForCausalLM`. It should be protected let me check<|||||>I ran `make style` and thats what changes that assert statement in the reformer. It didn't do any changes to geov code.<|||||>``` FAILED tests/models/whisper/test_modeling_flax_whisper.py::FlaxWhisperModelTest::test_equivalence_pt_to_flax - AssertionError: 1.1205673e-05 not less than or equal to 1e-05 : outputs.encoder_last_hidden_state: Difference between PyTorch and Flax is 1.1205673217773438e-05 (>= 1e-05). ``` This is why torch_and_flax test is failing<|||||>@ArthurZucker Very much appreciate the help so far. Can you please help me get this PR ready by tomorrow since I won't be available for a week after tomorrow. Thank you<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22403). All of your documentation changes will be reflected on that endpoint.<|||||>``` FAILED tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_no_dist - TypeError: unsupported operand type(s) for +: 'NoneType' and 'int' FAILED tests/models/pix2struct/test_image_processing_pix2struct.py::Pix2StructImageProcessingTest::test_expected_patches - PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x7f2bc8801950> ``` This error for tests_torch<|||||>These two tests seem unrelated to your PR, pull from main and normally they should dissappear <|||||>Ok the history of the PR got a little bit messed up 😅 it's alright it can happen from time to time! You can either rebase on main starting from `[4bd65f3](https://github.com/huggingface/transformers/pull/22403/commits/4bd65f354c8366feccae95278c1f0b3a85110b3b)` (for example as it is one commit unnafected) or you can do a soft reset to your first commit, add only your modifications and force push. And then pull from main <|||||>The styling depends on the version of `black` that you are using. Seems like most of the test are now good, the last should work with a `pip install black==23.1` <|||||>Yeah messed up by doing a rebase instead of a merge<|||||>I am not sure what the check means by imports order/format, to me it looks quite similar to other files.<|||||>Ok this is ruff acting up, I use `ruff 0.0.258` ( we recently pinned the correct one)<|||||>What should I do? Do I have to install `ruff 0.0.258` and run `make style`?<|||||>Oh thanks, didn't know that. I saw this https://huggingface.co/docs/transformers/add_new_model and thought I had to create a pull request. So, just to make sure I'm clear, should I close this PR and share the model according to https://huggingface.co/docs/transformers/custom_models?<|||||>Yes! It would be the best 😉 Thanks for your comprehension! 
<|||||>Just out of curiosity, how do you choose which models to add to the repo and what goes to the hub?<|||||>Added to the hub, but it doesn't work with pipelines (text-generation). How do I register the model for `text-generation`. This is what I'm doing now ``` GeoVConfig.register_for_auto_class() GeoVModel.register_for_auto_class("AutoModel") GeoVForCausalLM.register_for_auto_class("AutoModelForCausalLM") GeoVTokenizer.register_for_auto_class() ``` Trying to load the pipeline with ``` generator = pipeline(model="GeoV/GeoV-9b", trust_remote_code=True) ``` gives the error ``` The model 'GeoVForCausalLM' is not supported for text-generation. Supported models are ... ``` Thanks<|||||>> Just out of curiosity, how do you choose which models to add to the repo and what goes to the hub? The more we grow, the more we are trying to add models to the hub! Especially if the model does not have a lot of changes compared to a model that we already support! For the issue, I think you have to update the mapping `MODEL_FOR_MASKED_LM_MAPPING` by adding your model<|||||>How can I change `MODEL_FOR_MASKED_LM_MAPPING` if I'm adding to the hub?<|||||>The same way you did for the `AUTO_CONFIG_MAPPING`. An example from [here](https://huggingface.co/THUDM/glm-2b/blob/main/config.json): ```python config.json: ... "auto_map": {   "AutoConfig": "configuration_glm.GLMConfig",   "AutoModel": "modeling_glm.GLMModel",   "AutoModelForSeq2SeqLM": "modeling_glm.GLMForConditionalGeneration",   "AutoModelForMultipleChoice": "modeling_glm.GLMForMultipleChoice",   "AutoModelForSequenceClassification": "modeling_glm.GLMForSequenceClassification"   }, ... ```<|||||>So my bad, you just need to add `AutoModelForMaskedLM` !<|||||>This is a causal lm, is it ok to add it to masked lm?<|||||>Ah sorry for Causal LM it should be `AutoModelForSeq2SeqLM`<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,402
closed
Fix llama tokenizer
# What does this PR do? Draft, but it: - Fixes the conversion script - Updates the default Llama special tokens - Fixes compatibility issues - Cleans up the Llama tokenization code - Adds tests
03-27-2023 16:02:12
03-27-2023 16:02:12
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @Narsil for visibility! <|||||>This will need to wait for #22341 <|||||>Yes, on it! <|||||>Will finish this tomorrow!<|||||>Hi! Does this PR fix the decoding part of the tokenizer? It seems like it always prefixes the output with a space. For instance, `tokenizer.decode(1)` returns ` <s>`.<|||||>Yes, it does: `print(f'\'{tokenizer.decode(tokenizer.encode("Hello world"), skip_special_tokens = True)}\'',)` outputs `'Hello world'` 😉
transformers
22,401
closed
Trainer: move Seq2SeqTrainer imports under the typing guard
# What does this PR do? (see title)
03-27-2023 15:07:20
03-27-2023 15:07:20
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,400
closed
Transformers env safetensors
# What does this PR do? Clean version of #22374, GitHub is not allowing force-pushes for some reason (probably linked to the outage of this morning). Sorry to bother you again @amyeroberts !
03-27-2023 15:01:01
03-27-2023 15:01:01
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,399
closed
Pytorch 2 compile + fsdp + transformers crash
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.10.147+-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 2.0.0+cu117 (False) - Tensorflow version (GPU?): 2.11.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.6.7 (cpu) - Jax version: 0.4.6 - JaxLib version: 0.4.6 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? text models: @ArthurZucker and @younesbelkada trainer: @sgugger PyTorch: @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Training the official "run_clm.py" script works on TPU only when: 1. base training. 2. base training + PyTorch compile. 3. base training + FSDP. But it doesn't work when I combine both FSDP + PyTorch compile. I have created an example here to reproduce the problem: https://colab.research.google.com/drive/1RmarhGBIjeWHIngO7fAp239eqt5Za8bZ?usp=sharing ### Expected behavior The script should work using both FSDP + PyTorch compile.
03-27-2023 14:42:59
03-27-2023 14:42:59
I'm not sure PyTorch XLA supports torch.compile + FSDP yet.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@agemagician, did you resolve this issue? If so, could you share the details with me?<|||||>No, it is not supported yet .
transformers
22,398
closed
Error with protobuf in v4.27.3
Hello, I'be just updated my stack to python 3.11.2 and everything works fine except, my tensorflow transformer models :-) I'm using tensorflow-cpu implementation and here is the error I am facing when loading my model. Any idea of what could be wrong ? ``` File "/usr/local/lib/python3.11/site-packages/transformers/pipelines/__init__.py", line 873, in pipeline tokenizer = AutoTokenizer.from_pretrained( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 697, in from_pretrained return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 1804, in from_pretrained return cls._from_pretrained( ^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 1958, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/transformers/models/camembert/tokenization_camembert_fast.py", line 128, in __init__ super().__init__( File "/usr/local/lib/python3.11/site-packages/transformers/tokenization_utils_fast.py", line 114, in __init__ fast_tokenizer = convert_slow_tokenizer(slow_tokenizer) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/transformers/convert_slow_tokenizer.py", line 1199, in convert_slow_tokenizer return converter_class(transformer_tokenizer).converted() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/transformers/convert_slow_tokenizer.py", line 438, in __init__ from .utils import sentencepiece_model_pb2 as model_pb2 File "/usr/local/lib/python3.11/site-packages/transformers/utils/sentencepiece_model_pb2.py", line 91, in <module> _descriptor.EnumValueDescriptor( File "/usr/local/lib/python3.11/site-packages/google/protobuf/descriptor.py", line 796, in __new__ _message.Message._CheckCalledFromGeneratedFile() TypeError: Descriptors cannot not be created directly. If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0. If you cannot immediately regenerate your protos, some other possible workarounds are: 1. Downgrade the protobuf package to 3.20.x or lower. 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower). More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates ``` If it can help, the code that trigger the issue is this one: ``` SENTIMENT_FR = pipeline( task="text-classification", # type: ignore model="cmarkea/distilcamembert-base-sentiment", # type: ignore tokenizer="cmarkea/distilcamembert-base-sentiment", # type: ignore ) ```
03-27-2023 14:11:00
03-27-2023 14:11:00
cc @Narsil and @ArthurZucker <|||||>@quertenmont Could you share your environment by running `transformers-cli env`? While @Narsil and @ArthurZucker find the time to check, for a quick fix, probably downgrade `tensorflow` and `protobuf` version. <|||||>@sgugger Didn't we upgrade the protobuf generated file in the end ? Also this happens to be a Camembert, which is BPE + spm (so subject to the bug we fixed in the merge ordering). @quertenmont this used to be extremely slow no ? (This code converts from slow to fast on the fly, and for this particular tokenizer brand, it should be excruciantingly slow, since `tokenizers` has to recreate information by doing an O(n²) search over the tokens.)<|||||>No the protobuf generated file has not been touched at all @Narsil (and we will probably need to have two versions of it to support several versions of protobuf).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Created this: https://github.com/huggingface/transformers/pull/23013 I'll try to run slow tests of tokenization on some machine in addition to the standard tests there <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
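One of the workarounds from the error message itself, sketched out: either pin protobuf (`pip install "protobuf<=3.20.3"`) or switch to the pure-Python protobuf implementation before anything imports the generated sentencepiece proto:

```python
import os

# Must be set before transformers (and its sentencepiece_model_pb2) is imported.
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

from transformers import pipeline

sentiment_fr = pipeline(
    task="text-classification",
    model="cmarkea/distilcamembert-base-sentiment",
)
```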
transformers
22,397
closed
MBart summarization code example does not work
The summarization code example does not work - https://huggingface.co/docs/transformers/main/model_doc/mbart#transformers.TFMBartForConditionalGeneration.call.example ```python from transformers import AutoTokenizer, TFMBartForConditionalGeneration, MBartConfig model = TFMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25") tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25") ARTICLE_TO_SUMMARIZE = "Meine Freunde sind cool, aber sie essen zu viel Kuchen." inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="tf") # Generate Summary summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=5) print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)) ``` > OSError: facebook/mbart-large-cc25 does not appear to have a file named tf_model.h5 but there is a file for PyTorch weights. Use `from_pt=True` to load this model from those weights. If you initialize the parameter, the model returns nothing. Even with large texts. ``` 2023-03-27 16:21:08.334458: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:306] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support. 2023-03-27 16:21:08.334881: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:272] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>) All PyTorch model weights were used when initializing TFMBartForConditionalGeneration. All the weights of TFMBartForConditionalGeneration were initialized from the PyTorch model. If your task is similar to the task the model of the checkpoint was trained on, you can already use TFMBartForConditionalGeneration for predictions without further training. Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`. [''] ```
03-27-2023 13:19:30
03-27-2023 13:19:30
CC @patrickvonplaten <|||||>cc @gante Could you add a converted TensorFlow checkpoint for this model?<|||||>@genert two issues here: 1. The documentation is inaccurate -- mbart is a translation model (see [paper](https://arxiv.org/pdf/2001.08210.pdf)), and the model in that example is best used for the pre-trained task, mask filling 2. There are no TF checkpoints Will fix both :)<|||||>TF checkpoints added to the hub and the PR above fixes the documentation (and its examples) 👍 @genert, is there anything else I can help you with?<|||||>@gante Thanks for the improvements! I will close this issue as the problems have been solved. OT: the issue arouse as I am trying to create (preferably abstractive) summary out of multilingual text (1...10 000 characters). The MBART caught my attention because of the multilingual support and its ability to create summary (according to previously flawed documentation). Nevertheless, I think I will just go with Google's pegasus model for summarisation with lack of multilingual support as trade-off. I really need to investigate how to train these models, and go deep into transformers, perhaps I can make MBART excel at summarisation task (maybe create pipeline of translating text to english, creating summary, and back to the original language).<|||||>I would advocate in favor of a pipeline of models rather than a single model. The field is evolving very fast, and it allows you to stay nimble :) (e.g. a new summarization model came out? Easy, simply replace the summarization model, no need to fine-tune for multiple languages)<|||||>Thanks for the tip, I will use the pipelines instead.
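For reference, before the TF checkpoint was pushed, the interim workaround named by the original `OSError` was to convert the PyTorch weights on the fly:

```python
from transformers import AutoTokenizer, TFMBartForConditionalGeneration

# `from_pt=True` converts the PyTorch checkpoint to TF weights at load time.
model = TFMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25", from_pt=True)
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
```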
transformers
22,396
closed
[`bnb`] Force `requires_grad` to be `False`
# What does this PR do? Addresses: https://github.com/huggingface/accelerate/pull/1237#discussion_r1146045485 Some users use `replace_8bit_linear` as a standalone function; by default, the newly created `Linear8bitLt` layers have `requires_grad` set to `True`, leading to a bug in the `bitsandbytes` library. This PR forces `requires_grad` to be `False` for these layers to avoid these issues. Related: https://github.com/huggingface/accelerate/pull/1237 All bnb slow tests pass. cc @sgugger
03-27-2023 10:53:01
03-27-2023 10:53:01
_The documentation is not available anymore as the PR was closed or merged._
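A generic sketch of the behaviour this PR enforces, shown on a plain PyTorch module rather than the int8-specific classes: after conversion, every parameter of the replaced layers should end up frozen.

```python
import torch.nn as nn


def freeze_parameters(module: nn.Module) -> nn.Module:
    # Mirrors what the PR does for the converted layers: no gradients are tracked.
    for param in module.parameters():
        param.requires_grad = False
    return module


layer = freeze_parameters(nn.Linear(16, 16))
print(all(not p.requires_grad for p in layer.parameters()))  # True
```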
transformers
22,395
closed
Whisper Prompting
### Feature request Add prompting for the Whisper model to control the style/formatting of the generated text. ### Motivation During training, Whisper can be fed a "previous context window" to condition on longer passages of text. The original OpenAI Whisper implementation provides the user with the option of passing an [`initial_prompt`](https://github.com/openai/whisper/blob/6dea21fd7f7253bfe450f1e2512a0fe47ee2d258/whisper/transcribe.py#L96) to the model. This prompt is replaces the "previous context window" during inference. By passing the prompt as the "previous context window", the Whisper model conditions its generation on whatever text is passed as the prompt. This allows the user to control aspects of the generation, such as spellings of named entities and punctuation formatting (see https://github.com/openai/whisper/discussions/963#discussioncomment-4987057). This is possibly a cheaper way of adapting the Whisper model to specific decoding constraints than fine-tuning. This notebook demonstrates prompting with the initial codebase, and explains how this can be achieved for HF's Whisper: https://colab.research.google.com/drive/14FSeaoRvgs5arOTfiMQBnQ5NaLyma7Tq?usp=sharing The proposed API for prompting would look something as follows: 1. Encode prompt text to prompt token ids (`processor.get_prompt_ids`) - this method is a wrapper around `processor.tokenizer.__call__` that **doesn't** add the special token ids: ```python prompt = "IR, Newswire" prompt_ids = processor.get_prompt_ids(prompt) ``` 2. Pass the input audio and prompt token ids to the `.generate` method to get the predicted ids: ```python pred_ids = model.generate(input_features, prompt_ids=prompt_ids) ``` 3. Decode the predicted ids and 'slice' off the prompt (we can do this by passing the `prompt_ids`): ```python pred_str = processor.batch_decode(pred_ids, prompt_ids=prompt_ids) ``` => We would need to wrap all of this `forced_decoder_ids` logic into the generate method and update the processor/tokenizer accordingly. ### Your contribution Happy to guide the integration and review any PRs!
03-27-2023 10:24:32
03-27-2023 10:24:32
cc @hollance <|||||>Hello, I'd like to pick up this issue!<|||||>Hey @mollerup23! Super cool! We would first need to update the `generate` modelling code to slide the forced decoder ids as explained in the notebook: https://github.com/huggingface/transformers/blob/d5de578c2227250d615f73a8fb88a5ce7f1743be/src/transformers/models/whisper/modeling_whisper.py#L1453 And then add a new method in the tokenizer to ignore the prompt ids. Does this sound good to you?<|||||>Hey @mollerup23 @sanchit-gandhi. Apologies, I'm not sure how picking these up works, I started working on it cause I saw there was no assignee and now have something I think is ready for review. Should I just keep it locally or push it up? Totally fine with whatever, @mollerup23 commented first.<|||||>@connor-henderson @sanchit-gandhi I have not yet started on this issue, feel free to push your commits and pick it up!<|||||>I will continue to look into what @sanchit-gandhi mentioned in the meantime.<|||||>Sounds good, thanks<|||||>Closed via https://github.com/huggingface/transformers/pull/22496
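A rough, manual sketch of the conditioning idea while the `get_prompt_ids` API was being built: the prompt is tokenized without special tokens and placed after the `<|startofprev|>` token, which is how Whisper marks the "previous context window" (this assumes the token is present in the checkpoint's added tokens, as in the original OpenAI vocabulary):

```python
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")

prompt = "IR, Newswire"
prev_token_id = processor.tokenizer.convert_tokens_to_ids("<|startofprev|>")
prompt_ids = processor.tokenizer(" " + prompt, add_special_tokens=False).input_ids
conditioning_ids = [prev_token_id] + prompt_ids
print(conditioning_ids)
```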
transformers
22,394
closed
[Pix2Struct] Add support to resize embeddings
# What does this PR do? This PR adds `resize_token_embeddings` support for Pix2Struct. This was required when I fine-tuned Pix2Struct on a key-value pair dataset (the one from [this Donut notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Donut/CORD/Fine_tune_Donut_on_a_custom_dataset_(CORD)_with_PyTorch_Lightning.ipynb)). It oftentimes helps to add additional special tokens to the language decoder. However, I noticed `tie_word_embeddings` is set to `True` in both the general config of Pix2Struct (`Pix2StructConfig`) as well as its text config (`Pix2StructTextConfig`). Printing out the weights of the decoder's embedding layer and its language modeling head seems to reveal weights aren't tied: ``` from transformers import Pix2StructForConditionalGeneration model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-base") print(model.decoder.embed_tokens.weight) print(model.decoder.lm_head.weight) ``` So before merging this PR, we probably need to update the `tie_word_embeddings` attribute in the config of the models. Cause when you would load the model with this branch, it would break. Currently you have to do: ``` from transformers import Pix2StructConfig, Pix2StructForConditionalGeneration config = Pix2StructConfig(text_config={"tie_word_embeddings": False}, tie_word_embeddings=False) model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base", config=config) ``` to make it work. The PR also fixes some typos in configuration_pix2struct.py. cc @younesbelkada
03-27-2023 08:49:31
03-27-2023 08:49:31
_The documentation is not available anymore as the PR was closed or merged._<|||||>@NielsRogge could you share a notebook on finetuning in this dataset?
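A short sketch of the fine-tuning workflow this PR enables; the extra tokens below are illustrative placeholders in the style of the Donut notebook, not part of any released checkpoint:

```python
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

processor = Pix2StructProcessor.from_pretrained("google/pix2struct-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-base")

# Add task-specific special tokens, then grow the decoder embeddings to match.
new_tokens = ["<s_menu>", "</s_menu>", "<s_price>", "</s_price>"]
processor.tokenizer.add_special_tokens({"additional_special_tokens": new_tokens})
model.resize_token_embeddings(len(processor.tokenizer))
```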
transformers
22,393
closed
(Re-)Enable Nightly + Past CI
# What does this PR do? (Re-)Enable Nightly + Past CI cc @stas00 : I don't think there is something (related to `DeepSpeed`) that really needs your review in this PR. But if you prefer, you can take a look the 2 `Dockerfile` files under `docker` (and more files if you want). Thank you. p.s. I launched a full run (without TensorFlow past version CIs) [here](https://github.com/huggingface/transformers/actions/runs/4532718828)
03-27-2023 07:31:03
03-27-2023 07:31:03
_The documentation is not available anymore as the PR was closed or merged._<|||||>The DS part looks good, @ydshieh I wonder if you want to continue testing torchdynamo at all. Users wanting to use it should be encouraged to move to torch>=2.0 instead, where it's built in. But a subject for a different PR I guess.<|||||>> The DS part looks good, @ydshieh > > I wonder if you want to continue testing torchdynamo at all. Users wanting to use it should be encouraged to move to torch>=2.0 instead, where it's built in. But a subject for a different PR I guess. From my side, it would be great if I don't have to deal with all the potential (installation/runtime) issues for such 3rd party libraries across with different torch versions (at least, not with previous torch versions). It's best to focus on the torch and torch+DeepSpeed testing results.<|||||>oh, I meant not testing torchdynamo in general transformers-wide. For sure you don't need any unrelated packages installed to test deepspeed, other its own deps. <|||||>Without TensorFlow Past CI - it takes 2.5 days to run the Nightly CI + PyTorch Past CI. I put the schedule to trigger the workflow on Sunday and Thursday at 2 AM. The TensorFlow past CI will only run under push events.
transformers
22,392
closed
Inconsistent Normalization for ViTImageProcessor when `do_resize` is False
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.31 - Python version: 3.10.9 - Huggingface_hub version: 0.13.2 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @amyeroberts ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```py from transformers import AutoImageProcessor from PIL import Image import torchvision.transforms as T im = Image.open("t.png").convert("RGB") to_tens = T.ToTensor() extractor = AutoImageProcessor.from_pretrained("./pretrained/facebook/vit-msn-small") print(extractor) # Instance of ViTImageProcessor. # When `do_resize` is True: x1 = extractor(im, return_tensors="pt").pixel_values x2 = extractor(to_tens(im), return_tensors="pt").pixel_values print(abs(x2 - x1).mean()) # Close to 0; Correct. # When `do_resize` is False: x1 = extractor(im, return_tensors="pt", do_resize=False).pixel_values x2 = extractor(to_tens(im), return_tensors="pt", do_resize=False).pixel_values print(abs(x2 - x1).mean()) # Not close to 0; Differing behaviour. # Additional multiplication of 255 to torch.Tensor input: x1 = extractor(im, return_tensors="pt", do_resize=False).pixel_values x2 = extractor(to_tens(im) * 255, return_tensors="pt", do_resize=False).pixel_values print(abs(x2 - x1).mean()) # Close to 0; Correct again. ``` ### Expected behavior Currently, when `do_resize` is False, the tensor has to be multiplied by 255 first, while when `do_resize` is True, it is not needed. The behaviour should be consistent.
03-27-2023 07:27:43
03-27-2023 07:27:43
cc @amyeroberts <|||||>Hi @Interpause, thanks for raising this issue! Indeed, this is a funny behaviour. This is happening because of the use of the PIL library to resize images and the rescaling behaviour that happens in `ToTensor`. To explain in more detail, I'll refer to the input `im` and `im_pil` and `to_tens(im)` as `im_arr` below. Where `im_pil` is a `PIL.Image.Image` with integer pixel values between 0-255, and `im_arr` an array with pixel values between 0-1. In the first case, when`do_resize` is `True`: * `im_pil` and `im_arr` are converted to numpy arrays, preserving their pixel values * When passed to `resize` the images are converted to a `PIL.Image.Image` object. `im_pil` can be converted directly. However for `im_arr`, the values have to be multiplied by 255, as PIL can only store integer pixel values between 0-255. * Images are resized then converted back to numpy arrays. `im_arr` now is a numpy array with values between 0-255, rather than the original 0-1. This shouldn't be happening - I'll try to think about the best way to handle this and open a PR. For the other cases, no conversion to `PIL` is happening and this behaviour is expected. Without rescaling by 255, the input arrays are different and different outputs are expected. Rescaling `to_tens(im)` by 255 makes them equivalent and so the same output is expected.
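Until this is handled inside the processor, a consistent way to feed an already-decoded array when `do_resize=False` is to keep the pixel values in the 0-255 range that the rescale step expects, for example by converting the PIL image with NumPy instead of `ToTensor` (which divides by 255). The checkpoint below mirrors the one used in the issue:

```python
from transformers import AutoImageProcessor
from PIL import Image
import numpy as np

processor = AutoImageProcessor.from_pretrained("facebook/vit-msn-small")
im = Image.open("t.png").convert("RGB")

arr = np.array(im)  # uint8 array in 0-255, same range as the PIL input
x1 = processor(im, return_tensors="pt", do_resize=False).pixel_values
x2 = processor(arr, return_tensors="pt", do_resize=False).pixel_values
print(abs(x2 - x1).mean())  # expected to be ~0
```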
transformers
22,391
closed
Docs: Clarify stride for upcoming token classification pipeline
I just tried out the upcoming `stride` option for token classification pipelines (#21771, very useful!) without being familiar with the non-standard use of `stride` in the underlying tokenizer settings. I think it would be helpful to also explain in the pipelines API documentation that the `stride` parameter sets the overlap and not the stride. I thought it was the stride and spent a while trying to figure out why the performance was so abysmal.
03-27-2023 06:57:58
03-27-2023 06:57:58
Sounds like something missing indeed. Would you like to open a PR with such documentation?<|||||>I'm not confident that I could hit the style that you're looking for in your docs, especially given the history behind the naming. It might be a lot simpler to document if `stride` were renamed, though, would you potentially consider renaming it for `TokenClassificationPipeline`?<|||||>cc @Narsil what do you think?<|||||>Indeed the name `stride` is not particularly well chosen, my oversight on this. Seems we have the same thing in question answering: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L361 And here: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/automatic_speech_recognition.py#L169 I think controlling the overlap is much better in general since when you sending text (or audio) you have no idea of the max_length of the truncated texts, so controlling real stride would mean requiring arithmetic with that maximum size. (stride = tokenizer,model_max_length - overlap) Given the history of that parameter I'm not sure what we should do. Documenting it better would be a start. Renaming would warrant a rename if those 2 other pipelines. My current off the bat feeling is that we simply shouldn't. It's ok if it just means something different than for the convolution operator.<|||||>For the name `stride`, I choose the same as mentioned for tokenizers: stride (int, optional) — The length of the previous first sequence to be included in the overflowing sequence This parameter is directly passed through the tokenizer in the `preprocess()` method. We can change the name of course, but to keep consistency throughout the documentation, it's better to change all names related to `stride` which in fact refer to the number of overlapping tokens from the previous chunk/sequence.
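To make the semantics concrete, a sketch of the new option on a token classification pipeline (following the linked PR, this assumes a fast tokenizer and a non-default aggregation strategy are set; the model name is only an example):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",
    aggregation_strategy="first",
    stride=64,  # 64 tokens of *overlap* between consecutive chunks, not the step size
)
long_text = "Hugging Face is based in New York City. " * 200
entities = ner(long_text)
print(len(entities))
```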
transformers
22,390
closed
ImportError: cannot import name 'hf_bucket_url' from 'transformers.file_utils'
### System Info `from transformers.file_utils import default_cache_path, hf_bucket_url` I want to import hf_bucket_url on Colab, but I got the error "ImportError: cannot import name 'hf_bucket_url' from 'transformers.file_utils' (/usr/local/lib/python3.9/dist-packages/transformers/file_utils.py)" ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ImportError: cannot import name 'hf_bucket_url' from 'transformers.file_utils' (/usr/local/lib/python3.9/dist-packages/transformers/file_utils.py) ### Expected behavior Please tell me am I doing something wrong?
03-27-2023 06:37:45
03-27-2023 06:37:45
Yes, this function was removed several versions ago. It was only relevant for downloading files before the model Hub was properly setup. You should now use the `huggingface_hub` library to manage downloads of models from the Hub.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
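The closest present-day equivalents live in `huggingface_hub`: `hf_hub_url` builds the download URL that `hf_bucket_url` used to return, and `hf_hub_download` resolves and caches the file directly.

```python
from huggingface_hub import hf_hub_download, hf_hub_url

# URL of a file hosted on the Hub (analogue of the removed hf_bucket_url helper).
url = hf_hub_url(repo_id="bert-base-uncased", filename="config.json")
print(url)

# Download (and cache) the file, returning its local path.
local_path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
print(local_path)
```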
transformers
22,389
closed
Exception: expected value at line 1 column 1
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.13.2 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @sgugger @gante ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction File "/mnt1/wcp/BEELE/BELLE-main/generate_instruction.py", line 28, in tokenizer = AutoTokenizer.from_pretrained(checkpoint) File "/home/appuser/miniconda3/envs/wcppy39/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 679, in from_pretrained return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/home/appuser/miniconda3/envs/wcppy39/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1804, in from_pretrained return cls._from_pretrained( File "/home/appuser/miniconda3/envs/wcppy39/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1958, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/home/appuser/miniconda3/envs/wcppy39/lib/python3.9/site-packages/transformers/models/bloom/tokenization_bloom_fast.py", line 118, in init super().init( File "/home/appuser/miniconda3/envs/wcppy39/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py", line 111, in init fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file) Exception: expected value at line 1 column 1 ### Expected behavior i hope the file is run
03-27-2023 05:21:43
03-27-2023 05:21:43
Hey @wccccp 👋 That exception is not due to `transformers`, but rather due to a `.json` file (or similar). There is probably something fishy with your tokenizer checkpoint. See [this](https://stackoverflow.com/questions/16573332/jsondecodeerror-expecting-value-line-1-column-1-char-0) stack overflow issue.<|||||>> 嘿@wccccp 👋 > > 该异常不是由于`transformers`,而是由于`.json`文件(或类似文件)。您的分词器检查点可能有问题。 > > 请参阅[此](https://stackoverflow.com/questions/16573332/jsondecodeerror-expecting-value-line-1-column-1-char-0)堆栈溢出问题。 you are right,the question is solute <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This is Exactly What is Happening For Me: I'm Working On My Personal Project, This Error Happens While Using The Official Tokenizer For RWKV Model using Langchain which uses rwkv pip package and tokenizer module File "/content/Intellique/main.py", line 442, in <module> main() File "/content/Intellique/main.py", line 408, in main result = execution_agent(OBJECTIVE, task["task_name"]) File "/content/Intellique/main.py", line 363, in execution_agent return call_execution_llm(prompt) File "/content/Intellique/main.py", line 290, in call_execution_llm excu_llm = rwkv_llm() File "/content/Intellique/main.py", line 42, in rwkv_llm model = RWKV(model=model_path, tokens_path="/content/Intellique/20B_tokenizer.json", strategy='cuda fp16i8 *20 -> cuda fp16') File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__ task_name = task_parts[1].strip() File "pydantic/main.py", line 1102, in pydantic.main.validate_model File "/usr/local/lib/python3.9/dist-packages/langchain/llms/rwkv.py", line 113, in validate_environment values["tokenizer"] = tokenizers.Tokenizer.from_file(values["tokens_path"]) Exception: expected value at line 1 column 1<|||||>Does Anyone Got Solution For This. @wccccp ....<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
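A quick sanity check for this error, since it usually means `tokenizer.json` (or another JSON file in the checkpoint) is empty or is an un-downloaded Git LFS pointer; the checkpoint path below is a placeholder:

```python
import json
import os

checkpoint = "path/to/checkpoint"  # placeholder: the directory passed to from_pretrained
path = os.path.join(checkpoint, "tokenizer.json")

print(os.path.getsize(path))  # a file of only a few bytes usually indicates an LFS pointer
with open(path, encoding="utf-8") as f:
    json.load(f)  # raises json.JSONDecodeError on a truncated or non-JSON file
```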
transformers
22,388
closed
Translated documentation in Italian
## What does this PR do? Italian translation of the docs on training and inference with specialized hardware in :hugs: Transformers. * updated _toctree.yml * added perf_infer_tpu.mdx * added perf_infer_special.mdx * added perf_train_tpu.mdx * added perf_train_special.mdx ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). See issue: [#17459](https://github.com/huggingface/transformers/issues/17459) @sgugger, @stevhliu, @MKhalusova and @omarespejel
03-27-2023 05:16:40
03-27-2023 05:16:40
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,387
closed
Pipeline for inference "You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset"
### System Info Transformers 4.16.2 Windows 10 Python 3.9.12 Datasets 2.2.2 @Narsil ### Who can help? @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm currently using the zero shot text classifier pipeline with datasets and batching. The "You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset" warning appears with each iteration of my loop. I am using datasets and I am batching. I can't tell if this warning is a bug or just not descriptive enough to help me diagnose the true issue. ```python # initialize pipeline classifier = pipeline("zero-shot-classification", model='MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli', device = 0, batch_size = 24) # convert pandas df to dataset dataset = Dataset.from_pandas(data) # loop through documents according to subsamples that contain target name in the text for i in tqdm(range(len(targets)), desc="Classifying docs"): target = targets[i] # define template template = 'The author of this doc {} ' + target +'.' # get a list of text samples that contain the target samples = dataset.filter(lambda text: text[targets[i]] == 1) # Use classifier to get predictions for each sample res = [] for result in classifier(KeyDataset(samples, 'text'), labels, hypothesis_template = template, multi_label = False, batch_size = 32): res.append(result) # add results to pandas df data.loc[data[target] == 1, label_col_names[i]] = pd.Series([label['labels'][0] for label in res], index=data.index[data[target] == 1]) ``` As a side note, I appear to be getting significantly worse performance when using datasets and batching vs. just converting samples to a list and classifying sequentially. I'm assuming that's just a function of my data and not related to any bug though. ### Expected behavior Batched classification without the "You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset" warning.
03-27-2023 05:11:14
03-27-2023 05:11:14
Hey, there are a few things: First: - I cannot really reproduce your example since your data is missing, meaning I'm not able to see exactly what's going on for your particular case. Second: There are 2 things at play, `streaming` vs `n-calls` and `batching` vs `no-batching`. Streaming is always better that doing n-calls for a GPU because in the streaming fashion, we can make use of torch `DataLoader` meaning using separate thread for data preparation, which should keep the GPU busier. However, this has the most significant impact when the actual GPU runtime is small (making the CPU overhead more visible). The second is batching, which is not automatically a win: https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching In your particular case, using a GTX 970 this is what I get: ```bash No batching, streaming 100%|████████████████████████████████████████████████████████████████████████████████████████| 100/100 [00:15<00:00, 6.50it/s] Batching, streaming 100%|████████████████████████████████████████████████████████████████████████████████████████| 100/100 [00:03<00:00, 32.92it/s] No batching, no streaming 8%|███████▏ | 8/100 [00:01<00:14, 6.55it/s]/home/nicolas/src/transformers/src/transformers/pipelines/base.py:1070: UserWarning: You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset warnings.warn( 100%|████████████████████████████████████████████████████████████████████████████████████████| 100/100 [00:15<00:00, 6.55it/s] ``` So it seems batching is helping (understandable here, I have extremely aligned data so no waste of padding and model seems simple enough). Script: ```python from transformers import pipeline import tqdm # initialize pipeline classifier = pipeline( "zero-shot-classification", model="MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli", device=0, ) candidate_labels = ["politics", "science", "fashion"] TOTAL = 100 SENTENCE = "This is a test" def data(): for i in range(TOTAL): yield SENTENCE print("No batching, streaming") for result in tqdm.tqdm(classifier(data(), candidate_labels=candidate_labels), total=TOTAL): pass # print(result) print("Batching, streaming") for result in tqdm.tqdm(classifier(data(), candidate_labels=candidate_labels, batch_size=24), total=TOTAL): pass # print(result) print("No batching, no streaming") for i in tqdm.tqdm(range(TOTAL)): result = classifier(SENTENCE, candidate_labels=candidate_labels) pass # print(result) ```<|||||>Note: > for result in classifier(KeyDataset(samples, 'text'), labels, hypothesis_template = template, multi_label = False, batch_size = 32): This is the line of code I'm concerned about. It's perfectly ok if there's a relatively low amount of different labels (meaning low amount of datasets being created). However, if you're creating datasets with very low amount of data, then the overhead of creating the dataset + dataloader + spawning the threads might actually kill performance here.<|||||>Thank you for your assistance, this is all very insightful. My dataset is a set of tweets with three categories, I had assumed it was overhead slowing it down but wasn't sure. That said I'm still not really clear on what is triggering this warning, and it seems to be inconsistent. Passing it via KeyDataset(), a list, or a generator like in your example all seem to trigger the warning but never consistently. 
In this image I used a generator and the warning wasn't triggered on the first two iterations of the loop, but then was triggered on the third every iteration thereafter. ![image](https://user-images.githubusercontent.com/41241150/228046456-1372fb97-1e46-4b5f-a0ce-60ebd1beda1c.png) I once passed the data as a list and the warning wasn't triggered on any iteration of the loop, but when I refreshed the data and re-ran the loop with no changes it was triggered on the second and all subsequent iterations. Below I've shared the complete code and a sample of the data if that's helpful. This version uses the generator function for batching rather than the KeyDataset() function. The warning is almost always triggered. I tried removing the classification loop from the function as well and the warning still triggered, weirdly on the 7th and 8th iteration of the loop. ```python import pandas as pd from transformers import pipeline from datasets import Dataset from tqdm import tqdm # initialize classifier classifier = pipeline("zero-shot-classification", model='MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli', device = 1, batch_size = 16) # define data streamer def data_stream(samples): for i in range(samples.num_rows): yield samples['text'][i] # classifier function with batching option def classify_tweets(targets, labels, label_columns, classifier, data, batching=False): """ Classify tweets based on given targets and labels using a HuggingFace pipeline. Args: - targets: list of targets in the data frame that will be classified - labels: list of labels that will be passed to the template - label_columns: name of the label columns - classifier: HuggingFace pipeline object - data: pandas DataFrame that contains the tweets to classify - batching: whether to use batching or not Returns: - pandas DataFrame with modified columns """ # Create label column names label_col_names = [target + '_lab' for target in targets] data = data.copy() # suppress setting with copy warning # convert to huggingface dataset for batching dataset = Dataset.from_pandas(data) if batching else None # Classify tweets for each target for i in tqdm(range(len(targets)), desc="Classifying tweets"): target = targets[i] # define template template = 'The author of this tweet {} ' + target +'.' 
if batching: samples = dataset.filter(lambda text: text[targets[i]] == 1) # Use classifier to get predictions for each sample res = [] for result in classifier(data_stream(samples), labels, hypothesis_template = template, multi_label = False, batch_size = 32): res.append(result) else: # Use classifier to get predictions from list of text samples with the target res = classifier(list(data.loc[data[target] == 1, 'text']), labels, hypothesis_template=template, multi_label=False) # Add results to dataframe data.loc[data[target] == 1, label_col_names[i]] = [label['labels'][0] for label in res] # recode results to integers for column in tqdm(label_col_names, desc="Re-coding results"): data.loc[:,column] = data[column].replace(to_replace = {'supports':-1, 'opposes':1, 'does not express an opinion about': 0}) # Fill NaN values with zero data[label_col_names] = data[label_col_names].fillna(0) # Create columns for liberal and conservative classifications data[label_columns + '_lib'] = [1 if label <= -1 else 0 for label in data[label_col_names].sum(axis = 1)] data[label_columns + '_con'] = [1 if label >= 1 else 0 for label in data[label_col_names].sum(axis = 1)] return data # define targets to be classified and labels to use targets = ['Stewart', 'Oliver', 'Maddow', 'Hayes', 'O\'Donnell', 'Klein', 'Krugman', 'Thunberg'] labels = ['supports', 'opposes', 'does not express an opinion about'] lib_df = classify_tweets(targets = targets, labels = labels, label_columns = 'libmed', classifier = classifier, data = lib_df, batching=False) ``` [libsample.csv](https://github.com/huggingface/transformers/files/11082282/libsample.csv) <|||||>The warning is generated after simply 10 different calls of the pipeline on GPU (since with streaming there's only 1 call): https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/base.py#L1069 I'll look into this more thoroughly tomorrow.<|||||>Ahh that makes sense. So my current loop will trigger the warning regardless of whether or not I'm streaming because it divides the data based on which hypotheses should be used. I'm not sure if there is a more appropriate triggering condition or if the wording of the warning could be tweaked. Might be work a look though in case there is some other poor soul out there like me thinking their data isn't properly streaming/batching. Appreciate your help!<|||||>Ok, I had to rework your example so that I could understand what was going on.: Ultimately I see similar results: ``` Batching 124it [00:24, 5.07it/s] No Batching 124it [00:32, 3.77it/s] Raw iteration| 124it [00:34, 3.63it/s] ``` In terms of management, the main thing is that your n targets are actually n different datasets. With the snippet I got I don't think it's actually an issue, but with much larger datasets iterating over the ignored values might start to become an significant overhead (especially with added targets). I think having n different datasets, and iterating on each is perfectly OK. In order to ignore the warning, you could just reset the call_count. (`classifier.call_count = 0`) I don't think adding a new parameter is worth the effort since the overhead is still there and the warning can also just be safely ignored. 
(The warning is there mostly to avoid the naive calls on each separate item which do seem slower in my tests even if not by much) ```python from transformers import pipeline import pandas as pd from datasets import Dataset from tqdm import tqdm # initialize classifier classifier = pipeline( "zero-shot-classification", model="MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli", device=0, ) # define targets to be classified and labels to use lib_df = pd.read_csv("libsample.csv") dataset = Dataset.from_pandas(lib_df) candidate_labels = ["supports", "opposes", "does not express an opinion about"] def data(dataset, target): for row in dataset: if row[target]: yield row["text"] # for target in ["Stewart", "Oliver", "Maddow", "Hayes", "O'Donnell", "Klein", "Krugman", "Thunberg"]: for target in ["Stewart"]: hypothesis_template = "The author of this tweet {} " + target + "." print("Batching") for result in tqdm( classifier( data(dataset, target), candidate_labels=candidate_labels, hypothesis_template=hypothesis_template, multi_label=False, batch_size=32, ), ): pass print("No Batching") for result in tqdm( classifier( data(dataset, target), candidate_labels=candidate_labels, hypothesis_template=hypothesis_template, multi_label=False, batch_size=1, ), ): pass # print(result) print("Raw iteration") for text in tqdm( data(dataset, target), ): result = classifier( text, candidate_labels=candidate_labels, hypothesis_template=hypothesis_template, multi_label=False, ) pass # print(result) ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
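For readers landing here with the same warning: a minimal sketch of the `call_count` reset suggested above, assuming a GPU at device 0 and a setup like the one in this thread (the `rows` list and target names below are placeholders, not the original data):

```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli",
    device=0,
)
candidate_labels = ["supports", "opposes", "does not express an opinion about"]

# placeholder data standing in for the tweets dataset used in this thread
rows = [
    {"text": "Example tweet about Stewart.", "Stewart": 1, "Oliver": 0},
    {"text": "Example tweet about Oliver.", "Oliver": 1, "Stewart": 0},
]

def stream_texts(rows, target):
    # yield only the texts flagged for the current target
    for row in rows:
        if row[target]:
            yield row["text"]

for target in ["Stewart", "Oliver"]:
    template = "The author of this tweet {} " + target + "."
    # each target is effectively its own dataset, so reset the internal counter
    # that triggers the "use a dataset" warning after 10 separate GPU calls
    classifier.call_count = 0
    for result in classifier(
        stream_texts(rows, target),
        candidate_labels=candidate_labels,
        hypothesis_template=template,
        multi_label=False,
        batch_size=32,
    ):
        pass  # collect predictions here
```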
transformers
22,386
closed
Add memory-efficient attention and optional features to Llama
This PR adds memory-efficient attention to Llama, resulting in a 30% improvement in training efficiency. We also removed some transposes to adapt to the shapes allowed by the *memory_efficient_attention* operation. Additionally, we have added hidden dropout and attention dropout to the model, which helps with better generalization during training.

Furthermore, two optional features have been added: stable embedding, used in Bloom, and shared input-output vectors, used in PALM. These features have been tested and found to improve training stability and performance.

The main changes are as follows:

```python
if xops is not None and self.training:
    attn_weights = None
    attn_output = xops.memory_efficient_attention(
        query_states, key_states, value_states, attn_bias=self.causal_mask, p=self.dropout_prob
    )
```

As we use operators from the xformers library, we need to add a dependency on xformers.

We implemented pre-training of the Llama model based on transformers + accelerate, incorporating the modifications described above: https://github.com/Bayes-Song/Open-Llama/blob/main/README_en.md
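For illustration, a self-contained sketch of the xformers call being swapped in, assuming `xformers` is installed and a GPU is available; the `(batch, seq_len, num_heads, head_dim)` layout is what `memory_efficient_attention` expects, which is why the surrounding transposes are removed. This is a standalone demonstration, not the PR diff itself:

```python
import torch
import xformers.ops as xops

batch, seq_len, num_heads, head_dim = 2, 128, 8, 64
query = torch.randn(batch, seq_len, num_heads, head_dim, device="cuda", dtype=torch.float16)
key = torch.randn_like(query)
value = torch.randn_like(query)

# lower-triangular bias keeps the decoder-only causal masking;
# p is the attention dropout probability, only relevant during training
output = xops.memory_efficient_attention(
    query, key, value, attn_bias=xops.LowerTriangularMask(), p=0.1
)
print(output.shape)  # same layout as the query: (batch, seq_len, num_heads, head_dim)
```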
03-27-2023 03:21:04
03-27-2023 03:21:04
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22386). All of your documentation changes will be reflected on that endpoint.<|||||>> Thanks for your PR. Transformers is not meant to be a modular toolbox, so we don't add every feature to every model. Llama was trained without stable embedding or shared input-output vectors, so we won't add them to the modeling code of Llama. Likewise for the dropouts. > > Since you are training new models using this code, as soon as you have checkpoints available, I would advise to make a PR with a new model (mostly copied from Llama) like we have all the variants of GPT-2 for instance. Thank you for your response. The memory_efficient_attention in xformers is actually mentioned in the original Llama paper. So, it is possible to integrate this component into the Llama training code.<|||||>@Bayes-Song Thanks for the PR Can we use this when Torch2.0 is supported? Like in https://github.com/huggingface/diffusers/pull/2303/files cc: @sgugger <|||||>If it's non-breaking and actually faster on **all** setups, we can add it yes. The PR makes other modifications for the time being, which we cannot accept as mentioned in my comment above.<|||||>Currently I have trained a new model based on the above changes, and I am adding a new model to the transformers library based on @sgugger 's suggestion. I will re-open a PR after I finish all the code.
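On the Torch 2.0 question raised above: a rough sketch of how the same idea can be expressed with PyTorch's built-in `scaled_dot_product_attention` (note the different `(batch, num_heads, seq_len, head_dim)` layout). This is an illustration only, not what was merged into transformers:

```python
import torch
import torch.nn.functional as F

batch, num_heads, seq_len, head_dim = 2, 8, 128, 64
query = torch.randn(batch, num_heads, seq_len, head_dim)
key = torch.randn_like(query)
value = torch.randn_like(query)

# PyTorch >= 2.0 dispatches to flash / memory-efficient kernels when they are available
output = F.scaled_dot_product_attention(query, key, value, is_causal=True, dropout_p=0.0)
print(output.shape)  # (batch, num_heads, seq_len, head_dim)
```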
transformers
22,385
closed
How to use the method model.generate() correctly?
### System Info

- `transformers` version: 4.19.4
- Platform: Linux-4.9.93-010.ali3000.alios7.x86_64-x86_64-with-redhat-7.2-Paladin
- Python version: 3.7.11
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed

### Who can help?

_No response_

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

## This is my code

`self.decoder.generate(inputs=A, do_sample=True, max_length=80, min_length=1, top_k=50, top_p=0.95)`

## Here `self.decoder` is the BART model and `A` is the `input_features`, not the `input_ids`.

## The official documentation says that for encoder-decoder models the `generate` method can accept `input_features`, but an error occurs. According to the error log, `input_features` input is not supported and only `input_ids` can be used.

## Error log:

`RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.FloatTensor instead (while checking arguments for embedding)`

## I want to know whether my usage is wrong or there is a bug in the source code.

### Expected behavior

I hope my code will return `output_ids`.
03-27-2023 02:08:04
03-27-2023 02:08:04
cc @gante <|||||>@zt991211, thanks for raising an issue! Could you provide a more detailed snippet which is reproducible i.e. can be directly copied and run as well as a full traceback of the error encountered? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
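For anyone hitting the same `RuntimeError`: BART's encoder embeds integer token ids, so `generate` needs `input_ids` produced by the tokenizer; float `input_features` are only accepted by speech models such as Whisper or Speech2Text. A minimal working call (checkpoint chosen only for illustration) looks roughly like this:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# encode the text into integer token ids first
inputs = tokenizer("My friends are cool but they eat too many carbs.", return_tensors="pt")

output_ids = model.generate(
    inputs.input_ids,  # integer ids, not float features
    do_sample=True,
    max_length=80,
    min_length=1,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```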
transformers
22,384
closed
ValueError: Could not load model EleutherAI/gpt-neo-2.7B with any of the following classes:
### Feature request Want to run "EleutherAI/gpt-neo-2.7B" ### Motivation Want to run "EleutherAI/gpt-neo-2.7B" ### Your contribution ```python (datasci) werner@X10DAi:~$ ipython Python 3.11.1 (main, Dec 22 2022, 17:06:07) [GCC 12.2.0] Type 'copyright', 'credits' or 'license' for more information IPython 8.7.0 -- An enhanced Interactive Python. Type '?' for help. ...: ...: # Load the ChatGPT-4 pipeline ...: chatbot = pipeline("text2text-generation", model="EleutherAI/gpt-neo- ...: 2.7B") ...: ...: # Define a function to interact with the chatbot ...: def chat(): ...: while True: ...: # Get user input ...: user_input = input("You: ") ...: ...: # Exit if user enters "exit" ...: if user_input.lower() == "exit": ...: break ...: ...: # Generate response from chatbot ...: response = chatbot(user_input, max_length=50)[0]["generated_t ...: ext"] ...: ...: # Print response ...: print("Chatbot:", response) ...: ...: # Call the chat function to start the chatbot ...: chat() 2023-03-27 08:24:43.589129: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used. 2023-03-27 08:24:43.636446: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used. 2023-03-27 08:24:43.637021: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. 2023-03-27 08:24:44.519506: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[1], line 4 1 from transformers import pipeline 3 # Load the ChatGPT-4 pipeline ----> 4 chatbot = pipeline("text2text-generation", model="EleutherAI/gpt-neo-2.7B") 6 # Define a function to interact with the chatbot 7 def chat(): File ~/.pyenv/versions/3.11.1/envs/datasci/lib/python3.11/site-packages/transformers/pipelines/__init__.py:776, in pipeline(task, model, config, tokenizer, feature_extractor, image_processor, framework, revision, use_fast, use_auth_token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs) 772 # Infer the framework from the model 773 # Forced if framework already defined, inferred if it's None 774 # Will load the correct model if possible 775 model_classes = {"tf": targeted_task["tf"], "pt": targeted_task["pt"]} --> 776 framework, model = infer_framework_load_model( 777 model, 778 model_classes=model_classes, 779 config=config, 780 framework=framework, 781 task=task, 782 **hub_kwargs, 783 **model_kwargs, 784 ) 786 model_config = model.config 787 hub_kwargs["_commit_hash"] = model.config._commit_hash File ~/.pyenv/versions/3.11.1/envs/datasci/lib/python3.11/site-packages/transformers/pipelines/base.py:271, in infer_framework_load_model(model, config, model_classes, task, framework, **model_kwargs) 268 continue 270 if isinstance(model, str): --> 271 raise ValueError(f"Could not load model {model} with any of the following classes: {class_tuple}.") 273 framework = "tf" if "keras.engine.training.Model" in str(inspect.getmro(model.__class__)) else "pt" 274 return framework, model ValueError: Could not load model EleutherAI/gpt-neo-2.7B with any of the following classes: (<class 'transformers.models.auto.modeling_tf_auto.TFAutoModelForSeq2SeqLM'>,). ```
03-27-2023 00:28:31
03-27-2023 00:28:31
Hi @hongyi-zhao, The issue is arising because the checkpoint `"EleutherAI/gpt-neo-2.7B"` is for the [GPT Neo](https://huggingface.co/docs/transformers/v4.27.2/en/model_doc/gpt_neo), which has architectures for the text generation -- [GPTNeoForCausalLM](https://huggingface.co/docs/transformers/v4.27.2/en/model_doc/gpt_neo#transformers.GPTNeoForCausalLM) -- and sequence classification -- [GPTNeoForSequenceClassification](https://huggingface.co/docs/transformers/v4.27.2/en/model_doc/gpt_neo#transformers.GPTNeoForSequenceClassification) -- tasks. The pipeline in the shared snippet is `"text2text-generation"` for which this model doesn't have a compatible class. <|||||>I am very confused about the names of these models and the matching relationships between them, so I get straight to the point where I am most concerned: will this project help me to use the latest GPT-4 or their other future newest models?<|||||>There are many cutting-edge models available and that continue to be added to transformers library. Unfortunately GPT-4 isn't one of them, as OpenAI hasn't open sourced the weights. The models can be explored on [the hub](https://huggingface.co/). For example [here are the models](https://huggingface.co/models?pipeline_tag=text2text-generation&sort=downloads) for the selected `text2text-generation` pipeline in the example above. There's more information about the [Text2TextGenerationPipeline in the docs](https://huggingface.co/docs/transformers/v4.27.2/en/main_classes/pipelines#transformers.Text2TextGenerationPipeline). <|||||>Thank you very much for your comments and explanations.<|||||>for anyone facing this issue again: I had this error when environment had the new PyTorch v2 . Uninstalling torch `v2.0` and installing torch `v1.11` solved the issue.
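To adapt the original snippet to this checkpoint, the pipeline task just needs to match the model's architecture (`GPTNeoForCausalLM`, so `text-generation`); a minimal sketch:

```python
from transformers import pipeline

# GPT-Neo is a causal language model, so use text-generation rather than text2text-generation
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-2.7B")

response = generator("You: Hello, how are you?\nChatbot:", max_length=50)[0]["generated_text"]
print(response)
```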
transformers
22,383
closed
TensorFlow: additional missing `cmake` dependencies in CI
# What does this PR do?

Adds `cmake` to the CI runs that depend on `transformers[tensorflow]`, covering all cases where it was still missing.
03-26-2023 16:36:13
03-26-2023 16:36:13
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,382
closed
Generate: support for left-padding on GPTNeoX and Llama
# What does this PR do?

As the title indicates, this adds left-padding support for GPTNeoX and Llama. It adds the `position_ids` input, propagates it all the way to the position embedding, and gathers the position embeddings according to the values in `position_ids`.

All slow tests are now passing in both models, including the newly added left-padding support test and the GPTNeoX integration test.

It also makes a few changes to Llama to make it more similar to other models 🤗
03-26-2023 16:08:34
03-26-2023 16:08:34
_The documentation is not available anymore as the PR was closed or merged._<|||||>The failing CI is fixed by #22383 :)<|||||>@ArthurZucker @sgugger woopsie, I forgot that it affected the weight loading code -- I come from a place where weight names have to be specified 👼 Reverted (`self.llama` is `self.model` again)!<|||||>It appears as if this may have broken FSDP. For example, as specified in the Alpaca repo, finetuning with `--fsdp "full_shard auto_wrap" --fsdp_transformer_layer_cls_to_wrap LlamaDecoderLayer` worked before this commit, but after it, it gives an error such as:
```python
File "/home/fsuser/.local/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 313, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/home/fsuser/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
TypeError: forward() got an unexpected keyword argument 'position_ids'
```
Reverting the commit fixes it, although perhaps the problem is with `accelerate` not supporting `position_ids`? cc: @ArthurZucker <|||||>@jquesnelle can you paste the full stack trace? It would allow us to find the root cause :D (maybe, as you mention, the problem is in accelerate... or maybe it comes from the Alpaca repo!)<|||||>I'm seeing a pretty significant performance hit on RedPajama-7b-chat that I think is due to this change. I ran the PyTorch profiler and all of the `repeat` operators in `apply_rotary_pos_emb` are expensive and run mostly on CPU. Reverting to transformers 4.27.x resolves the performance issue.<|||||>You should try the `main` branch, #22785 removed the repeat, solving this.
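As an illustration of what the added `position_ids` handling enables (a sketch under the assumption of a GPT-NeoX-style checkpoint, not part of the PR itself): batched generation with left-padding, where the tokenizer pads on the left and `generate` receives the attention mask so padded positions can be skipped:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "EleutherAI/pythia-70m"  # placeholder GPT-NeoX-style checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"  # decoder-only models should be left-padded for generation

model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompts = ["The capital of France is", "My favourite condiment is"]
inputs = tokenizer(prompts, return_tensors="pt", padding=True)

output_ids = model.generate(
    input_ids=inputs.input_ids,
    attention_mask=inputs.attention_mask,  # lets the model ignore the left padding
    max_new_tokens=20,
)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```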
transformers
22,381
closed
Changed world_size() to get_world_size() bugfix
Edited one line in `src/transformers/generation/utils.py`: changed `dist.world_size()` to `dist.get_world_size()`, since `world_size()` doesn't exist in `torch.distributed`.

# What does this PR do?

Fixes #22375: Pytorch 2 generation/utils.py, 'torch.distributed' has no attribute 'world_size'
https://github.com/huggingface/transformers/issues/22375

## Before submitting

- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

Library:
- generate: @gante
03-26-2023 11:26:47
03-26-2023 11:26:47
_The documentation is not available anymore as the PR was closed or merged._<|||||>BTW, there is a CI error due to this branch being created from an older version of `main` -- you should rebase with `main` to make our CI green <|||||>> BTW, there is a CI error due to this branch being created from an older version of `main` -- you should rebase with `main` to make our CI green

Funny, GitHub is telling me that "This branch is 1 commit ahead of huggingface:main." and also tells me the fork is already synced when I try syncing. Also, when I fetch upstream and rebase as in the contribution guidelines, I am told "Current branch changed-world-size-to-get-world-size-in-generation-utils is up to date." Maybe I missed something, but it seems the only difference in the codebase is the one-line change. Maybe it's worth it to re-run the ci/circleci: tests_torch_and_tf?<|||||>@Charlie-Bell my apologies, there is indeed a problem in `main` I've found after writing the comment above! #22383 will fix it -- apologies for the confusion 🙏
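For context, a tiny sketch of the API difference behind the one-line fix (`torch.distributed` exposes `get_world_size()`, not `world_size()`); the guard below is only illustrative and assumes a process group may or may not have been initialised:

```python
import torch.distributed as dist

def running_distributed() -> bool:
    # mirrors the kind of check generation code performs before syncing GPUs
    if dist.is_available() and dist.is_initialized():
        return dist.get_world_size() > 1  # dist.world_size() does not exist
    return False

print(running_distributed())
```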
transformers
22,380
closed
Bump tensorflow from 2.8.1 to 2.11.1 in /examples/research_projects/decision_transformer
Bumps [tensorflow](https://github.com/tensorflow/tensorflow) from 2.8.1 to 2.11.1. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/tensorflow/tensorflow/releases">tensorflow's releases</a>.</em></p> <blockquote> <h2>TensorFlow 2.11.1</h2> <h1>Release 2.11.1</h1> <p><strong>Note</strong>: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install TensorFlow in WSL2, or install tensorflow-cpu and, optionally, try the TensorFlow-DirectML-Plugin.</p> <ul> <li>Security vulnerability fixes will no longer be patched to this Tensorflow version. The latest Tensorflow version includes the security vulnerability fixes. You can update to the latest version (recommended) or patch security vulnerabilities yourself <a href="https://github.com/tensorflow/tensorflow#patching-guidelines">steps</a>. You can refer to the <a href="https://github.com/tensorflow/tensorflow/releases">release notes</a> of the latest Tensorflow version for a list of newly fixed vulnerabilities. If you have any questions, please create a GitHub issue to let us know.</li> </ul> <p>This release also introduces several vulnerability fixes:</p> <ul> <li>Fixes an FPE in TFLite in conv kernel <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-27579">CVE-2023-27579</a></li> <li>Fixes a double free in Fractional(Max/Avg)Pool <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25801">CVE-2023-25801</a></li> <li>Fixes a null dereference on ParallelConcat with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25676">CVE-2023-25676</a></li> <li>Fixes a segfault in Bincount with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25675">CVE-2023-25675</a></li> <li>Fixes an NPE in RandomShuffle with XLA enable <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25674">CVE-2023-25674</a></li> <li>Fixes an FPE in TensorListSplit with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25673">CVE-2023-25673</a></li> <li>Fixes segmentation fault in tfg-translate <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25671">CVE-2023-25671</a></li> <li>Fixes an NPE in QuantizedMatMulWithBiasAndDequantize <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25670">CVE-2023-25670</a></li> <li>Fixes an FPE in AvgPoolGrad with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25669">CVE-2023-25669</a></li> <li>Fixes a heap out-of-buffer read vulnerability in the QuantizeAndDequantize operation <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25668">CVE-2023-25668</a></li> <li>Fixes a segfault when opening multiframe gif <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25667">CVE-2023-25667</a></li> <li>Fixes an NPE in SparseSparseMaximum <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25665">CVE-2023-25665</a></li> <li>Fixes an FPE in AudioSpectrogram <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25666">CVE-2023-25666</a></li> <li>Fixes a heap-buffer-overflow in AvgPoolGrad <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25664">CVE-2023-25664</a></li> <li>Fixes a NPE in TensorArrayConcatV2 <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25663">CVE-2023-25663</a></li> <li>Fixes a Integer overflow in EditDistance <a 
href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25662">CVE-2023-25662</a></li> <li>Fixes a Seg fault in <code>tf.raw_ops.Print</code> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25660">CVE-2023-25660</a></li> <li>Fixes a OOB read in DynamicStitch <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25659">CVE-2023-25659</a></li> <li>Fixes a OOB Read in GRUBlockCellGrad <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25658">CVE-2023-25658</a></li> </ul> <h2>TensorFlow 2.11.0</h2> <h1>Release 2.11.0</h1> <h2>Breaking Changes</h2> <ul> <li> <p>The <code>tf.keras.optimizers.Optimizer</code> base class now points to the new Keras optimizer, while the old optimizers have been moved to the <code>tf.keras.optimizers.legacy</code> namespace.</p> <p>If you find your workflow failing due to this change, you may be facing one of the following issues:</p> <ul> <li><strong>Checkpoint loading failure.</strong> The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to <code>tf.keras.optimizer.legacy.XXX</code> (e.g. <code>tf.keras.optimizer.legacy.Adam</code>).</li> <li><strong>TF1 compatibility.</strong> The new optimizer, <code>tf.keras.optimizers.Optimizer</code>, does not support TF1 any more, so please use the legacy optimizer <code>tf.keras.optimizer.legacy.XXX</code>. We highly recommend <a href="https://www.tensorflow.org/guide/migrate">migrating your workflow to TF2</a> for stable support and new features.</li> <li><strong>Old optimizer API not found.</strong> The new optimizer, <code>tf.keras.optimizers.Optimizer</code>, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.</li> <li><strong>Learning rate schedule access.</strong> When using a <code>tf.keras.optimizers.schedules.LearningRateSchedule</code>, the new optimizer's <code>learning_rate</code> property returns the current learning rate value instead of a <code>LearningRateSchedule</code> object as before. If you need to access the <code>LearningRateSchedule</code> object, please use <code>optimizer._learning_rate</code>.</li> <li><strong>If you implemented a custom optimizer based on the old optimizer.</strong> Please set your optimizer to subclass <code>tf.keras.optimizer.legacy.XXX</code>. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the <a href="https://github.com/keras-team/keras/issues">Keras GitHub repo</a>.</li> <li><strong>Errors, such as <code>Cannot recognize variable...</code>.</strong> The new optimizer requires all optimizer variables to be created at the first <code>apply_gradients()</code> or <code>minimize()</code> call. 
If your workflow calls the optimizer to update different parts of the model in multiple stages, please call <code>optimizer.build(model.trainable_variables)</code> before the training loop.</li> <li><strong>Timeout or performance loss.</strong> We don't anticipate this to happen, but if you see such issues, please use the legacy optimizer, and file an issue in the Keras GitHub repo.</li> </ul> <p>The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, <code>tf.keras.optimizers.Adafactor</code>) will only be implemented based on the new <code>tf.keras.optimizers.Optimizer</code> base class.</p> </li> <li> <p><code>tensorflow/python/keras</code> code is a legacy copy of Keras since the TensorFlow v2.7 release, and will be deleted in the v2.12 release. Please remove any import of <code>tensorflow.python.keras</code> and use the public API with <code>from tensorflow import keras</code> or <code>import tensorflow as tf; tf.keras</code>.</p> </li> </ul> <h2>Major Features and Improvements</h2> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md">tensorflow's changelog</a>.</em></p> <blockquote> <h1>Release 2.11.1</h1> <p><strong>Note</strong>: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install TensorFlow in WSL2, or install tensorflow-cpu and, optionally, try the TensorFlow-DirectML-Plugin.</p> <ul> <li>Security vulnerability fixes will no longer be patched to this Tensorflow version. The latest Tensorflow version includes the security vulnerability fixes. You can update to the latest version (recommended) or patch security vulnerabilities yourself <a href="https://github.com/tensorflow/tensorflow#patching-guidelines">steps</a>. You can refer to the <a href="https://github.com/tensorflow/tensorflow/releases">release notes</a> of the latest Tensorflow version for a list of newly fixed vulnerabilities. 
If you have any questions, please create a GitHub issue to let us know.</li> </ul> <p>This release also introduces several vulnerability fixes:</p> <ul> <li>Fixes an FPE in TFLite in conv kernel <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-27579">CVE-2023-27579</a></li> <li>Fixes a double free in Fractional(Max/Avg)Pool <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25801">CVE-2023-25801</a></li> <li>Fixes a null dereference on ParallelConcat with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25676">CVE-2023-25676</a></li> <li>Fixes a segfault in Bincount with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25675">CVE-2023-25675</a></li> <li>Fixes an NPE in RandomShuffle with XLA enable <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25674">CVE-2023-25674</a></li> <li>Fixes an FPE in TensorListSplit with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25673">CVE-2023-25673</a></li> <li>Fixes segmentation fault in tfg-translate <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25671">CVE-2023-25671</a></li> <li>Fixes an NPE in QuantizedMatMulWithBiasAndDequantize <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25670">CVE-2023-25670</a></li> <li>Fixes an FPE in AvgPoolGrad with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25669">CVE-2023-25669</a></li> <li>Fixes a heap out-of-buffer read vulnerability in the QuantizeAndDequantize operation <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25668">CVE-2023-25668</a></li> <li>Fixes a segfault when opening multiframe gif <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25667">CVE-2023-25667</a></li> <li>Fixes an NPE in SparseSparseMaximum <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25665">CVE-2023-25665</a></li> <li>Fixes an FPE in AudioSpectrogram <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25666">CVE-2023-25666</a></li> <li>Fixes a heap-buffer-overflow in AvgPoolGrad <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25664">CVE-2023-25664</a></li> <li>Fixes a NPE in TensorArrayConcatV2 <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25663">CVE-2023-25663</a></li> <li>Fixes a Integer overflow in EditDistance <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25662">CVE-2023-25662</a></li> <li>Fixes a Seg fault in <code>tf.raw_ops.Print</code> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25660">CVE-2023-25660</a></li> <li>Fixes a OOB read in DynamicStitch <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25659">CVE-2023-25659</a></li> <li>Fixes a OOB Read in GRUBlockCellGrad <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25658">CVE-2023-25658</a></li> </ul> <h1>Release 2.11.0</h1> <h2>Breaking Changes</h2> <ul> <li> <p><code>tf.keras.optimizers.Optimizer</code> now points to the new Keras optimizer, and old optimizers have moved to the <code>tf.keras.optimizers.legacy</code> namespace. If you find your workflow failing due to this change, you may be facing one of the following issues:</p> <ul> <li><strong>Checkpoint loading failure.</strong> The new optimizer handles optimizer state differently from the old optimizer, which simplies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. 
If you want to keep using an old checkpoint, please change your optimizer to <code>tf.keras.optimizers.legacy.XXX</code> (e.g. <code>tf.keras.optimizers.legacy.Adam</code>).</li> <li><strong>TF1 compatibility.</strong> The new optimizer does not support TF1 any more, so please use the legacy optimizer <code>tf.keras.optimizer.legacy.XXX</code>. We highly recommend to migrate your workflow to TF2 for stable support and new features.</li> <li><strong>API not found.</strong> The new optimizer has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API</li> </ul> </li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/tensorflow/tensorflow/commit/a3e2c692c18649329c4210cf8df2487d2028e267"><code>a3e2c69</code></a> Merge pull request <a href="https://redirect.github.com/tensorflow/tensorflow/issues/60016">#60016</a> from tensorflow/fix-relnotes</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/13b85dcf966d0c94b2e5c21291be039db2dec7b9"><code>13b85dc</code></a> Fix release notes</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/48b18dbf1301f24be9f2f41189d318ce5398540a"><code>48b18db</code></a> Merge pull request <a href="https://redirect.github.com/tensorflow/tensorflow/issues/60014">#60014</a> from tensorflow/disable-test-that-ooms</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/eea48f50d6982879909bf8e0d0151bbce3f9bf4a"><code>eea48f5</code></a> Disable a test that results in OOM+segfault</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/a63258434247784605986cfc2b43cb3be846cf8a"><code>a632584</code></a> Merge pull request <a href="https://redirect.github.com/tensorflow/tensorflow/issues/60000">#60000</a> from tensorflow/venkat-patch-3</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/93dea7a67df44bde557e580dfdcde5ba0a7a344d"><code>93dea7a</code></a> Update RELEASE.md</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/a2ba9f16f0154bf93f21132878b154238d89fad6"><code>a2ba9f1</code></a> Updating Release.md with Legal Language for Release Notes</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/fae41c76bdc760454b3e5c1d3af9b8d5a5c6c548"><code>fae41c7</code></a> Merge pull request <a href="https://redirect.github.com/tensorflow/tensorflow/issues/59998">#59998</a> from tensorflow/fix-bad-cherrypick-again</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/2757416dcd4a2d00ea36512c2ffd347030c1196b"><code>2757416</code></a> Fix bad cherrypick</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/c78616f4b00125c8a563e10ce6b76bea8070bdd0"><code>c78616f</code></a> Merge pull request <a href="https://redirect.github.com/tensorflow/tensorflow/issues/59992">#59992</a> from tensorflow/fix-2.11-build</li> <li>Additional commits viewable in <a href="https://github.com/tensorflow/tensorflow/compare/v2.8.1...v2.11.1">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=tensorflow&package-manager=pip&previous-version=2.8.1&new-version=2.11.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. 
You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
03-25-2023 01:36:23
03-25-2023 01:36:23
_The documentation is not available anymore as the PR was closed or merged._<|||||>OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`. If you change your mind, just re-open this PR and I'll resolve any conflicts on it.
transformers
22,379
closed
CLIP default download location is /root/.cache/..., not current working dir like other models
### System Info

- `transformers` version: 4.27.3
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.4 (cpu)
- Jax version: 0.3.25
- JaxLib version: 0.3.25
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

### Who can help?

flax: @sanchit-gandhi

### Information

- [X] The official example scripts
- [ ] My own modified scripts

### Tasks

- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

```python
clip_model = jax.device_get(FlaxCLIPModel.from_pretrained('openai/clip-vit-large-patch14'))
```

This downloads by default if the weights don't exist locally: `Downloading flax_model.msgpack: 100% 1.71G/1.71G [00:08<00:00, 210MB/s]`

BUT, unlike all other models (that I'm using in the HF pipelines for SD-Flax), the file download location is far away from the working directory. `find / -iname 'flax_model.msgpack'` shows that the SD weights are where they should be, but CLIP's weights are off in some hidden, hashed directory: `/root/.cache/huggingface/hub/models--openai--clip-vit-large-patch14/snapshots/8d052a0f05efbaefbc9e8786ba291cfdf93e5bff/flax_model.msgpack`

Is this intended? If so, why break from the pattern of other models that download to the current working directory?

### Expected behavior

Files would download to the current working directory, e.g. something like `/content/openai/clip-vit-large-patch14/`, and by extension, plugging in a `_name_or_path` value of `'openai/clip-vit-large-patch14'` would refer to the file location as well as the hub's catalogue name (i.e. can I confidently put in a different path that I saved the weights to manually?).
03-25-2023 01:04:57
03-25-2023 01:04:57
Hi @krahnikblis, thanks for raising this issue! In the transformers library, `from_pretrained` can be used to load a model from the hub and from a local file. When `from_pretrained(path)` is called, if `path` is a local folder, these weights are loaded. If it's a checkpoint on the hub e.g. `openai/clip-vit-large-patch14`, then the checkpoint is download to the cache directory, as you've correctly noticed. If `from_pretrained(path)` is called again, then the weights are loaded from the cache. This happens for all frameworks: PyTorch, TensorFlow and Flax. For SD-Flax, am I correct in understanding this as the Stable Diffusion pipeline from the diffusers library? Could you share a more detailed snippet showing what exactly is being run? For the diffusers pipelines, if using the `pipeline.from_pretrained(model_weights)` API, then the same behaviour will happen (download to cache, can load from local) [as noted in the documentation](https://huggingface.co/docs/diffusers/using-diffusers/loading#loading-pipelines).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
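Building on the explanation above, a short sketch of the two usual ways to control where the weights end up (an explicit `cache_dir`, or saving once and reloading from a local folder); the paths below are placeholders:

```python
from transformers import FlaxCLIPModel

# Option 1: point the hub cache somewhere explicit instead of ~/.cache/huggingface
model = FlaxCLIPModel.from_pretrained(
    "openai/clip-vit-large-patch14", cache_dir="./hf_cache"
)

# Option 2: materialise the weights in (or near) the working directory and reload from there
model.save_pretrained("./openai-clip-vit-large-patch14")
model = FlaxCLIPModel.from_pretrained("./openai-clip-vit-large-patch14")
```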