repo: stringclasses (1 value)
number: int64 (1 to 25.3k)
state: stringclasses (2 values)
title: stringlengths (1 to 487)
body: stringlengths (0 to 234k)
created_at: stringlengths (19 to 19)
closed_at: stringlengths (19 to 19)
comments: stringlengths (0 to 293k)
transformers
23,967
closed
🌐 [i18n-KO] Translated `bertology.mdx` to Korean
<!-- Please keep the PR title as "🌐 [i18n-KO] Translated `bertology.mdx` to Korean" --> # What does this PR do? Translated the `bertology.mdx` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 <!-- A record is left on the main issue! When practicing with the PseudoLab repo, please remove this part, thank you! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations - [x] Grammar check - [x] Review or add new terms to the glossary - [x] Check inline TOC (e.g. `[[lowercased-header]]`) - [ ] Check live-preview for gotchas ## Who can review? (Initial) <!-- 1. Only reveal the comment below, which requests a review from the PseudoLab team members, after all the checks above are complete! --> <!-- Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. Only reveal the comment below, which requests a review from the Hugging Face staff, after the review with the PseudoLab team members is complete! --> <!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? -->
06-03-2023 02:50:11
06-03-2023 02:50:11
Missed the first header in my first commit. Closing in favor of https://github.com/huggingface/transformers/pull/23968<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23967). All of your documentation changes will be reflected on that endpoint.
transformers
23,966
closed
Support "OptimizedModule" models obtained from torch.compile() in inference pipelines
### Feature request Currently, models obtained via the [torch.compile()](https://pytorch.org/docs/stable/generated/torch.compile.html) feature introduced in PyTorch 2.0 are not supported in inference pipelines from :hugs: Transformers. ### Motivation In the same way that pipelines support optimization methods such as [accelerate](https://github.com/huggingface/accelerate) with "device_map" and [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) with "load_in_8bit", torch.compile() support would be a great addition for better inference performance in pipelines. ### Your contribution Opening this feature request :P. Unfortunately, I'm not familiar enough with the :hugs: Transformers codebase, or with the quirks one could hit with a feature as new as torch.compile(), to feel like I can tackle this task.
06-02-2023 22:45:31
06-02-2023 22:45:31
cc @Narsil <|||||>Hi @iranz15, can you share a code example of the failure? I tried: ```python pipe.model = torch.compile(pipe.model) ``` And everything seems to work OK after removing `inference_mode` and just using `torch.no_grad`. @sgugger, any preference between compilation and inference mode? I haven't played enough with either, but I remember the `inference_mode` benefits being minor; the `torch.compile` benefits are probably much higher at the moment.<|||||>I don't know `inference_mode` very well, so not sure. It also seems like it should work with a compiled model, so it might be worth reporting to the PyTorch team.<|||||>Hey @Narsil. An error/warning message shows up when using `torch.compile()` on the model before adding it to the pipeline, that is, for the following case ```python model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m") tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m") model = torch.compile(model) pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) ``` it will complain that `The model 'OptimizedModule' is not supported for $TASK_NAME ...`, but first loading the pipeline and then accessing the model and compiling it like you did seems to work fine. Surprisingly, after doing some additional tests, even when it complains about 'OptimizedModule' it seems that you can still use the pipeline without any problem. The additional errors that I was having were due to some oversights in my code that wrongly pointed me to blame `torch.compile()`. <|||||>> The additional errors that I was having were due to some oversights in my code that wrongly pointed me to blame torch.compile(). OK, thanks. The warning will trigger when arbitrary objects are sent (because it's likely an issue in user code), but the model you pass is still used, so you can use any class you want and run ONNX, Optimum, etc. too (as long as they support the same API as the original model)
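A minimal sketch combining the two snippets from this thread: build the pipeline first, then compile the model it already holds, which avoids the `OptimizedModule` warning (assumes PyTorch >= 2.0 and that the `bigscience/bloomz-560m` checkpoint can be downloaded).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
# Compiling the model held by the pipeline, rather than before building it,
# sidesteps the "The model 'OptimizedModule' is not supported ..." warning.
pipe.model = torch.compile(pipe.model)

print(pipe("Translate to English: Je t'aime.", max_new_tokens=20))
```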
transformers
23,965
closed
Auto tokenizer registration
# What does this PR do? Add a loop check over `CONFIG_MAPPING._extra_content` to look for newly registered configs. Fixes https://github.com/huggingface/transformers/issues/23338 ## Who can review? @sgugger
06-02-2023 21:51:28
06-02-2023 21:51:28
_The documentation is not available anymore as the PR was closed or merged._
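For context, a minimal sketch of the registration flow this fix targets; the config class and the reuse of the BERT tokenizer classes are illustrative only, not part of the PR.

```python
from transformers import AutoConfig, AutoTokenizer, BertTokenizer, BertTokenizerFast, PretrainedConfig


class CustomDemoConfig(PretrainedConfig):
    # Hypothetical model type, used only to illustrate registration.
    model_type = "custom-demo"


AutoConfig.register("custom-demo", CustomDemoConfig)
# After this, AutoTokenizer.from_pretrained should resolve checkpoints whose config
# declares model_type "custom-demo", which is what looping over
# CONFIG_MAPPING._extra_content makes possible.
AutoTokenizer.register(CustomDemoConfig, slow_tokenizer_class=BertTokenizer, fast_tokenizer_class=BertTokenizerFast)
```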
transformers
23,964
closed
Bump cryptography from 39.0.1 to 41.0.0 in /examples/research_projects/decision_transformer
Bumps [cryptography](https://github.com/pyca/cryptography) from 39.0.1 to 41.0.0. <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's changelog</a>.</em></p> <blockquote> <p>41.0.0 - 2023-05-30</p> <pre><code> * **BACKWARDS INCOMPATIBLE:** Support for OpenSSL less than 1.1.1d has been removed. Users on older version of OpenSSL will need to upgrade. * **BACKWARDS INCOMPATIBLE:** Support for Python 3.6 has been removed. * **BACKWARDS INCOMPATIBLE:** Dropped support for LibreSSL &lt; 3.6. * Updated the minimum supported Rust version (MSRV) to 1.56.0, from 1.48.0. * Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.1.1. * Added support for the :class:`~cryptography.x509.OCSPAcceptableResponses` OCSP extension. * Added support for the :class:`~cryptography.x509.MSCertificateTemplate` proprietary Microsoft certificate extension. * Implemented support for equality checks on all asymmetric public key types. * Added support for ``[email protected]`` encrypted keys in :func:`~cryptography.hazmat.primitives.serialization.load_ssh_private_key`. * Added support for obtaining X.509 certificate signature algorithm parameters (including PSS) via :meth:`~cryptography.x509.Certificate.signature_algorithm_parameters`. * Support signing :class:`~cryptography.hazmat.primitives.asymmetric.padding.PSS` X.509 certificates via the new keyword-only argument ``rsa_padding`` on :meth:`~cryptography.x509.CertificateBuilder.sign`. * Added support for :class:`~cryptography.hazmat.primitives.ciphers.aead.ChaCha20Poly1305` on BoringSSL. <p>.. _v40-0-2:</p> <p>40.0.2 - 2023-04-14 </code></pre></p> <ul> <li>Fixed compilation when using LibreSSL 3.7.2.</li> <li>Added some functions to support an upcoming <code>pyOpenSSL</code> release.</li> </ul> <p>.. _v40-0-1:</p> <p>40.0.1 - 2023-03-24</p> <pre><code> * Fixed a bug where certain operations would fail if an object happened to be in the top-half of the memory-space. This only impacted 32-bit systems. <p>.. _v40-0-0:</p> <p>40.0.0 - 2023-03-24 </code></pre></p> <ul> <li><strong>BACKWARDS INCOMPATIBLE:</strong> As announced in the 39.0.0 changelog, the way <code>cryptography</code> links OpenSSL has changed. This only impacts users who</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/pyca/cryptography/commit/c4d494fd3ee907316bd846e90cbf4a8df75a25ac"><code>c4d494f</code></a> 41.0.0 version bump (<a href="https://redirect.github.com/pyca/cryptography/issues/8991">#8991</a>)</li> <li><a href="https://github.com/pyca/cryptography/commit/8708245ccdeaff21d65eea68a4f8d2a7c5949a22"><code>8708245</code></a> new openssl day (<a href="https://redirect.github.com/pyca/cryptography/issues/8990">#8990</a>)</li> <li><a href="https://github.com/pyca/cryptography/commit/31436a486661cd863d4c77e40facf93fbb2d9f54"><code>31436a4</code></a> admit to the existence of nuance in HKDF (<a href="https://redirect.github.com/pyca/cryptography/issues/8987">#8987</a>)</li> <li><a href="https://github.com/pyca/cryptography/commit/91e41898e6d1d2a9a6e980c39e2f8baa2fa8a1f8"><code>91e4189</code></a> Port DSA to Rust (<a href="https://redirect.github.com/pyca/cryptography/issues/8978">#8978</a>)</li> <li><a href="https://github.com/pyca/cryptography/commit/f302d28b81607aab28d22b653da78d564824f267"><code>f302d28</code></a> Update CI for new LibreSSL releases (<a href="https://redirect.github.com/pyca/cryptography/issues/8975">#8975</a>)</li> <li><a href="https://github.com/pyca/cryptography/commit/851d8ccb340bfc93c827b9e80af939a216b34925"><code>851d8cc</code></a> Bump openssl from 0.10.52 to 0.10.53 in /src/rust (<a href="https://redirect.github.com/pyca/cryptography/issues/8986">#8986</a>)</li> <li><a href="https://github.com/pyca/cryptography/commit/0918c7236c94c29272e0790ba0227cfa9401943b"><code>0918c72</code></a> Bump coverage from 7.2.6 to 7.2.7 (<a href="https://redirect.github.com/pyca/cryptography/issues/8985">#8985</a>)</li> <li><a href="https://github.com/pyca/cryptography/commit/730a5ce11a91f40c1bb0f881ab22bc52d6cecef6"><code>730a5ce</code></a> Bump openssl-sys from 0.9.87 to 0.9.88 in /src/rust (<a href="https://redirect.github.com/pyca/cryptography/issues/8984">#8984</a>)</li> <li><a href="https://github.com/pyca/cryptography/commit/88e8c288975709228005e70301644034463d9823"><code>88e8c28</code></a> Bump BoringSSL and/or OpenSSL in CI (<a href="https://redirect.github.com/pyca/cryptography/issues/8983">#8983</a>)</li> <li><a href="https://github.com/pyca/cryptography/commit/3e24e44527a69884ca0c3247e1b5e9c8bbf590c9"><code>3e24e44</code></a> Bump once_cell from 1.17.1 to 1.17.2 in /src/rust (<a href="https://redirect.github.com/pyca/cryptography/issues/8982">#8982</a>)</li> <li>Additional commits viewable in <a href="https://github.com/pyca/cryptography/compare/39.0.1...41.0.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cryptography&package-manager=pip&previous-version=39.0.1&new-version=41.0.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
[//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
06-02-2023 20:12:15
06-02-2023 20:12:15
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23964). All of your documentation changes will be reflected on that endpoint.
transformers
23,963
closed
[docs] Performance docs tidy up, part 1
The [performance and scalability section](https://huggingface.co/docs/transformers/v4.29.1/en/performance) of the docs is difficult to navigate and to find actionable advice in, according to feedback. Some topics are missing too. As part of the 2023 docs roadmap, we planned to refactor the section to make the actionable bits easier and quicker to find. The refactor will come in several PRs to split it into more manageable chunks. This is the first part, where you'll find: - a reworked Overview for easier navigation - a reworked "Training on a single GPU" part. This part has been reorganized to have a clearer structure, the conceptual content has been clarified, and the how-to-use/enable examples are made visible, while the benchmarking comparison has been removed (it bloated the doc); instead, there are links to benchmarks. Overall, I think this section should now be easier to navigate and to find actionable pieces in, with a reasonable amount of explanation left in the doc and relevant links for more information.
06-02-2023 18:16:05
06-02-2023 18:16:05
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Also cc @lvwerra and @stas00 who contributed the original guide. Thank you for the ping, Sylvain. @lvwerra and I had two very different approaches to how to do the new documentation, and since we went with Leandro's way I will let Leandro comment on these changes.<|||||>Sorry about the delay. I have moved the "Model training anatomy" part into a separate conceptual guide, and linked the two docs. This should address most of the concerns voiced in this discussion. Other feedback has also been incorporated. Please take another look: cc @lvwerra @sgugger <|||||>Would you like to take another pass on this PR, @lvwerra?
transformers
23,962
closed
`OperatorNotAllowedInGraphError` on using TF XLA for `generate` with Tf-GPT2 model
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.4.0-1099-aws-x86_64-with-debian-buster-sid - Python version: 3.7.16 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): 2.11.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @gante ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction https://huggingface.co/blog/tf-xla-generate ### Expected behavior It should simply work w/o errors Instead got the following error: ```python --------------------------------------------------------------------------- OperatorNotAllowedInGraphError Traceback (most recent call last) ~/platypus/instant/los_angeles.py in <module> 33 print("(will be slow as it is the first call)") 34 start = time.time_ns() ---> 35 xla_generate(**tokenized_input_1) 36 end = time.time_ns() 37 print(f"Execution time -- {(end - start) / 1e6:.1f} ms\n") ~/miniconda3/envs/nanonets/lib/python3.7/site-packages/tensorflow/python/util/traceback_utils.py in error_handler(*args, **kwargs) 151 except Exception as e: 152 filtered_tb = _process_traceback_frames(e.__traceback__) --> 153 raise e.with_traceback(filtered_tb) from None 154 finally: 155 del filtered_tb ~/miniconda3/envs/nanonets/lib/python3.7/site-packages/transformers/generation/tf_utils.py in generate(self, inputs, generation_config, logits_processor, seed, **kwargs) 824 if not self.config.is_encoder_decoder: 825 if generation_config.pad_token_id is not None and tf.math.reduce_any( --> 826 inputs_tensor[:, -1] == generation_config.pad_token_id 827 ): 828 logger.warning( OperatorNotAllowedInGraphError: Using a symbolic `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature. ```
06-02-2023 15:20:12
06-02-2023 15:20:12
@gante Any way I can help in debugging this? Have not used TF in a long time; wanted to do a comparison of the latest PyTorch with TF-XLA<|||||>Hey @SushantDaga 👋 The script seems to be working fine on my end (local computer and [colab](https://colab.research.google.com/drive/1377siPIpYLno_DpyE9iodQgNBymTG2h-?usp=sharing)). Could you try running it with updated TF and `transformers`, to double-check?<|||||>Interesting. It is indeed working on Colab. The TF and Transformers versions also seem to be fine. Seems like TF installation is still not straightforward. Thank you for clearing this up. TF-XLA seems to be working better than native PyTorch<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
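For reference, a reproduction sketch following the blog post linked above (left padding and padding to a fixed multiple keep XLA from retracing); per the discussion, it runs fine with recent TF and transformers versions.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM

# Decoder-only models need left padding for batched XLA generation.
tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")

xla_generate = tf.function(model.generate, jit_compile=True)

tokenized_input = tokenizer(["TensorFlow is"], pad_to_multiple_of=8, padding=True, return_tensors="tf")
generated = xla_generate(**tokenized_input, max_new_tokens=32)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```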
transformers
23,961
closed
Fix `is_optimum_neuron_available`
# What does this PR do? #23163 introduced a new way of checking which does not seem to work in this case. I get this when running `optimum-neuron`: ``` Please use the TrainiumTrainer from optimum[neuron] instead of the Transformers library to perform training on AWS Trainium instances. More information here: https://github.com/huggingface/optimum-neuron ``` It happens [here](https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L82). Basically it means that `optimum-neuron` is found to not be available; this PR fixes that.
06-02-2023 15:08:24
06-02-2023 15:08:24
_The documentation is not available anymore as the PR was closed or merged._<|||||>No, here the check for `_is_package_available("optimum.neuron")` is done at "runtime" not "import time". For some reason `optimum.neuron` is not available at import time of this file but is available later... Maybe this is due to the fact that `optimum` is a namespace package, not sure.<|||||>Yes, I will investigate. If I find a fix on the other side, will revert back to the original version here.<|||||>I'm fine with having the workaround :-) Just flagging this for you to look at when you have some time.<|||||>Failure is due to a library pinned on main, so merging.
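A sketch of the kind of runtime check discussed here (not the exact code of the PR): resolve `optimum.neuron` when the function is called rather than when the module is imported, since `optimum` is a namespace package.

```python
import importlib.util


def is_optimum_neuron_available() -> bool:
    # Resolving at call time avoids the situation where optimum.neuron is not
    # yet discoverable when training_args.py is first imported.
    try:
        return importlib.util.find_spec("optimum.neuron") is not None
    except ModuleNotFoundError:
        # The parent package `optimum` is not installed at all.
        return False
```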
transformers
23,960
closed
FlaxT5ForConditionalGeneration: Inconsistency in Final Block Hidden State of Encoder/Decoder
### System Info I'm implementing my own T5 model in JAX and using the `FlaxT5ForConditionalGeneration` module to evaluate the results of my work. During my testing phase, I ran into an issue. Using the provided code, I noticed that the hidden states from block `0` to block `10` in my implementation are consistent with the corresponding hidden states in the transformer model (i.e., `output_flax['hidden_states'][0]` to `output_flax['hidden_states'][10]`). However, the issue arises in the final block, where the hidden state of my model doesn't match with the transformer model's corresponding hidden state (`output_flax['hidden_states'][11]`). This is strange because after I apply the RMS layer normalization on my final block hidden state to get the `final_hidden_state`, it aligns with the `final_hidden_state` of the transformer model (`output_flax['final_hidden_state']`). According to my understanding, the encoder block is replicated 12 times without any special processing in the final block. Hence, I am unclear about what could be causing this inconsistency in the final block's hidden states for both the encoder and decoder. In summary, here's what I've observed: - My hidden state aligns with `output_flax['hidden_states'][0]` to `output_flax['hidden_states'][10]`. - My hidden state doesn't match `output_flax['hidden_states'][11]` (before applying the final layer norm). - My final hidden state (after applying the layer norm) aligns with `output_flax['final_hidden_state']`. ### Who can help? _No response_ ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Here is the Python code I used for testing: ```python from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration tokenizer = AutoTokenizer.from_pretrained("t5-base") model = FlaxT5ForConditionalGeneration.from_pretrained("allenai/unifiedqa-t5-base") inputs = tokenizer( ["summarize: My friends are cool but they eat too many carbs."], return_tensors="np" ) input_ids = inputs["input_ids"] output_flax = model.encode( input_ids, output_hidden_states=True, return_dict=True, output_attentions=True ) ``` ### Expected behavior I expect that each block's hidden state in my implementation of the encoder/decoder should align with the corresponding block's hidden state in the transformer model.
06-02-2023 15:03:02
06-02-2023 15:03:02
cc @sanchit-gandhi <|||||>Hey @ztjhz - thanks for the descriptive issue! We are indeed missing a bit of functionality here - for layers 0 to 10 we append the output of the layer to our collection of hidden-states `all_hidden_states`: https://github.com/huggingface/transformers/blob/17846646f230fdf5b0d6b4d31248ccb418000acb/src/transformers/models/t5/modeling_flax_t5.py#L691-L692 However, we **don't** add the `hidden_states` to our collection `all_hidden_states` after the last layer, we simply return all of the collections we have so far: https://github.com/huggingface/transformers/blob/17846646f230fdf5b0d6b4d31248ccb418000acb/src/transformers/models/t5/modeling_flax_t5.py#L721 What we should do is append the `hidden_states` to our collection `all_hidden_states` after the final layer, as we do for FlaxBart: https://github.com/huggingface/transformers/blob/17846646f230fdf5b0d6b4d31248ccb418000acb/src/transformers/models/bart/modeling_flax_bart.py#L496-L497 Would you like to open a PR to add these lines into FlaxT5?<|||||>cc @amyeroberts here - essentially, what's happening in Flax T5 is that when `output_hidden_states` is True, we append the hidden states for layers 1 to N-1: https://github.com/huggingface/transformers/blob/17846646f230fdf5b0d6b4d31248ccb418000acb/src/transformers/models/t5/modeling_flax_t5.py#L691-L692 But then **not** for the final layer N. For this final layer, we only append the hidden state after it's gone through an extra layer norm:https://github.com/huggingface/transformers/blob/17846646f230fdf5b0d6b4d31248ccb418000acb/src/transformers/models/t5/modeling_flax_t5.py#L774 I suggested that we fix this to add the pre layer-norm hidden states to `all_hidden_states` for the Nth layer, but this would also require updating the **PyTorch** code to follow suit, since the bug is actually inherited from the PT code: https://github.com/huggingface/transformers/blob/17846646f230fdf5b0d6b4d31248ccb418000acb/src/transformers/models/t5/modeling_t5.py#L1066-L1067 For the base T5 model, which has 6 encoder layers, we go from outputting 6 encoder hidden states (hidden states from layers 1 to 5, and then the post layer-norm last hidden state) to 7 hidden states (hidden states from layers 1 to 6, and then the post layer-norm last hidden state). Since T5 is a very core model, WDYT about this update? Is it too breaking to make this fix?<|||||>@sanchit-gandhi @ztjhz Thanks for the detailed outline of the issue! T5 is by far one of our most downloaded models, and many other models in the library copy from it. I wouldn't be in favour of changing the output like this unless it was causing major issues and reported by PT users. <|||||>Thanks for chiming in @amyeroberts! I agree that it's probably too core to change in the lib, but you can certainly modify the source file locally @ztjhz and add the changes so that you have correctness on your end<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Going to close this one since it's been established that we probably can't update such a core model in the lib with a surprise breaking change. Feel free to make the changes locally @ztjhz! You can copy the Flax T5 modelling file and make all the changes you require :)
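A small check of the behaviour described above, using `t5-small` as a lighter stand-in: the last entry of `hidden_states` is the post-layer-norm output, i.e. it matches `last_hidden_state` rather than the raw output of the final block.

```python
import numpy as np
from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = FlaxT5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer(["summarize: My friends are cool but they eat too many carbs."], return_tensors="np")
out = model.encode(inputs["input_ids"], output_hidden_states=True, return_dict=True)

# Expected to print True given the appending logic discussed in this thread.
print(np.allclose(out.hidden_states[-1], out.last_hidden_state))
```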
transformers
23,959
closed
Remote code improvements
# What does this PR do? This PR adds a few improvements to the code-on-the-Hub API, mainly: - if `trust_remote_code` is unset, we ask the user to confirm or not (with a timeout of 30s for users launching a script) - in case of conflict between code on the Hub and local code (for when Falcon or MPT are added to Transformers, for instance), `trust_remote_code=False` will execute the local code and `trust_remote_code=True` the one on the Hub
06-02-2023 14:42:25
06-02-2023 14:42:25
_The documentation is not available anymore as the PR was closed or merged._
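A usage sketch of the flag described above, with Falcon as the example of a checkpoint whose modeling code lives on the Hub (note that this downloads a large checkpoint).

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    trust_remote_code=True,  # run the modeling code shipped in the Hub repo
)
# With trust_remote_code=False, the local/library implementation is preferred
# once the architecture is natively supported; leaving the flag unset now
# prompts the user interactively (with a timeout when launched from a script).
```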
transformers
23,958
closed
Incorrect typing of `debug` field in `TrainingArguments`
### System Info https://github.com/huggingface/transformers/blob/v4.29.2/src/transformers/training_args.py#L862 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction If you use `TrainingArguments` and try to pass in a list of strings to `debug`, type checkers will complain. ### Expected behavior The `debug` field should be typed as `Union[str, List[str]]` so type checkers don't complain. For my particular use case, I am using a library that serializes dataclasses based on their type annotations, and it fails to serialize if `debug` is `[]` since the type hint says it is `str`. But the `__post_init__` turns `debug=""` into `debug=[]`, so there's no way I can get instances of this class to serialize without passing in non-empty debug options. I would gladly submit a PR myself, but I am not sure if it would have implications for the command line parsers.
06-02-2023 14:16:15
06-02-2023 14:16:15
The `debug` field does not accept a list of string, but a list of `DebugOption`, so the type annotation should be more like `Union[str, List[DebugOption]]`. Happy to have a look at a PR if you want to fix it.
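A minimal sketch of the annotation shape suggested in the reply; the real field lives on `transformers.TrainingArguments` and also carries help metadata, so this is illustrative only.

```python
from dataclasses import dataclass
from typing import List, Union

from transformers.debug_utils import DebugOption


@dataclass
class PatchedArgs:
    # TrainingArguments.__post_init__ turns "" into [], hence the Union type.
    debug: Union[str, List[DebugOption]] = ""
```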
transformers
23,957
closed
find cpu_amp is incorrectly set, it's only set if self.sharded_ddp is n…
…ot None... - trainer: @sgugger
06-02-2023 13:15:16
06-02-2023 13:15:16
amp does not work if self.sharded_ddp is None<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>If Accelerate's support isn't handling mixed precision, or if there is some issue with that, raise an issue with a reproducer. As of now, no changes are required to the mixed precision handling.<|||||>> Now, mixed precision handling is moved to Accelerate. Only when Fairscale is being used is mixed precision handling supported in the trainer, as it isn't present in Accelerate. Hi, I see that in evaluation_loop and prediction, accelerate's prepare is only called when self.is_deepspeed_enabled and self.model_wrapped is self.model, so how about the non-DeepSpeed case? <|||||>> Hi, I see that in evaluation_loop and prediction, accelerate's prepare is only called when self.is_deepspeed_enabled and self.model_wrapped is self.model, so how about the non-DeepSpeed case? That makes sense, thank you! I've raised a PR addressing this as linked above.<|||||>Hi @pacman100, I have another question: any idea about the jit trace in evaluation in https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1384? It also needs the amp info.
transformers
23,956
closed
https://huggingface.co/sentence-transformers/clip-ViT-B-32 license?
Hi, the model card for clip-ViT-B-32 is missing license information. Could you please add the license info?
06-02-2023 11:59:01
06-02-2023 11:59:01
You should open a discussion on the model page directly.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,954
closed
add new mms functions to doc
This PR is a follow-up to https://github.com/huggingface/transformers/pull/23813, adding two new functions to the docstrings.
06-02-2023 10:19:52
06-02-2023 10:19:52
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,953
closed
[ASR pipeline] Check for torchaudio
# What does this PR do? Fixes https://github.com/huggingface/transformers/pull/23445#discussion_r1213368268: checks whether `torchaudio` is available to import and throws a useful error message to the user if not
06-02-2023 10:12:15
06-02-2023 10:12:15
_The documentation is not available anymore as the PR was closed or merged._
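A sketch of the kind of guard this PR adds (the exact message and call site may differ):

```python
from transformers.utils import is_torchaudio_available

if not is_torchaudio_available():
    raise ValueError(
        "This part of the ASR pipeline requires torchaudio. "
        "Please install it with `pip install torchaudio`."
    )
```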
transformers
23,952
closed
Trainer: fixed evaluate raising `KeyError` for ReduceLROnPlateau
If the `metric_for_best_model` does not start with `eval_`, the trainer always appends the `eval_` string before querying the metric value. Except in one code snippet, that only gets triggered if one uses the `ReduceLROnPlateau` scheduler. I have added the check for `eval` like in the rest of the trainer. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger
06-02-2023 09:05:50
06-02-2023 09:05:50
_The documentation is not available anymore as the PR was closed or merged._
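A minimal sketch of the guard described in the PR body, mirroring how the rest of the Trainer resolves `metric_for_best_model`:

```python
def resolve_metric(metric_for_best_model: str, metrics: dict) -> float:
    # Prepend "eval_" only when the user-supplied name does not already have it,
    # so the ReduceLROnPlateau path no longer raises a KeyError for names like "loss".
    key = metric_for_best_model
    if not key.startswith("eval_"):
        key = f"eval_{key}"
    return metrics[key]


print(resolve_metric("loss", {"eval_loss": 0.42}))       # 0.42
print(resolve_metric("eval_loss", {"eval_loss": 0.42}))  # 0.42
```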
transformers
23,951
open
`prompt_ids` does not seem to work with `repetition_penalty`
### System Info transformers== 4.30.0.dev0 ### Who can help? @connor-henderson @sanchit-gandhi Hi, Thank you for supporting prompt for inferencing with whisper (https://github.com/huggingface/transformers/pull/22496)! I've used this feature for Japanese audio files to transcribe rare words (e.g., proper nouns and domain specific words), and found it does work, that is, prompt improves WER when we set token list of proper nouns as `prompt_ids`. And I also succeeded in reducing repetition in whisper transcript with `repetition_penalty`. Whisper sometimes meaninglessly repeat words without `repetition_penalty`. However, when I set both `prompt_ids` and `repetition_penalty`, prompt does not work well. I suspect `repetition_penalty` might suppress tokens specified as `prompt_ids`. Could you please check the above behavior? ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` base_model_name_or_path = 'openai/whisper-large-v2' model = WhisperForConditionalGeneration.from_pretrained( base_model_name_or_path, load_in_8bit=True, device_map='auto', ) language = 'ja' task = 'transcribe' model = PeftModel.from_pretrained(model, str(model_dir)) # I currently use AdaLoRA-tuned model, but this is reproducible with pretrained large model. tokenizer = WhisperTokenizer.from_pretrained(base_model_name_or_path, language=language, task=task) processor = WhisperProcessor.from_pretrained(base_model_name_or_path, language=language, task=task) feature_extractor = processor.feature_extractor forced_decoder_ids = processor.get_decoder_prompt_ids(language=language, task=task) pipe = AutomaticSpeechRecognitionPipeline(model=model, tokenizer=tokenizer, feature_extractor=feature_extractor) prompt_text = 'prompt text' prompt_ids = processor.get_prompt_ids(prompt_text) with torch.cuda.amp.autocast(): outputs = pipe( str(audio_file_path), # I use in-house audio files generate_kwargs={ 'forced_decoder_ids': forced_decoder_ids, 'prompt_ids': prompt_ids, 'num_beams': 4, 'repetition_penalty': 2.0, # I change this value (1.0 ~ 2.0) }, max_new_tokens=255, ) print(outputs) ``` ### Expected behavior when `repetition_penalty=1.0`, prompt works. As I gradually increase `repetition_penalty`, words I set as prompt will not appear in transcripts.
06-02-2023 08:53:50
06-02-2023 08:53:50
Hey @kiyosumaeda! Cool to see that you're using PEFT to fine-tune Whisper and that prompting is working with the PEFT model - this is nice validation that the model doesn't exhibit catastrophic forgetting, since we don't fine-tune on the prompting task. Note that you can get quite considerable speed-ups to inference time by running the model in fp16 instead of int8 (see https://github.com/huggingface/peft/discussions/477#discussion-5213394) - this also seems to drastically reduce Whisper's propensity to hallucinate. Could you try this first as a step?<|||||>Hi @sanchit-gandhi! Thank you for the feedback and suggestion. I will try fp16 for inference to see whether prompting still works and hallucination is reduced.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Did you have any luck here @kiyosumaeda?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
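The fp16 loading suggested in the thread, as a sketch (requires a GPU and `accelerate` for `device_map="auto"`); generation kwargs such as `prompt_ids` and `repetition_penalty` are passed as in the reproduction above.

```python
import torch
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2",
    torch_dtype=torch.float16,  # fp16 instead of load_in_8bit
    device_map="auto",
)
```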
transformers
23,950
closed
Got errors when calling AutoModelForCausalLM related APIs
### System Info torch ==1.10.1+cu111 python == 3.8.5 CUDA 11.2 2 GPU == Tesla P40 (vRAM 24G) 14 core CPU == Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz RAM == 112G ubuntu 18 OS ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I'm now using 4.8.1 transformers codebase for running some LLM models. While I run into some trouble when trying to switch device to cpu mode. ``` โ€‚โ€‚device = 'cpu' model = AutoModelForCausalLM.from_pretrained( args.model_id, revision=args.revision, device_map="auto", torch_dtype=torch.float32 ).to(device) ``` method "to(device)" is what I add to call additionally comparing to the origin code, and for the reason why doing this is that model is by default loaded to the GPU while tokenizer is set to be loaded to CPU. otherwise showstop message would given as: ``` Loading checkpoint shards: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 4/4 [00:29<00:00, 7.30s/it] /opt/anaconda3/envs/py385/lib/python3.8/site-packages/transformers/generation/utils.py:1405: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`. ``` but after adding "to(device)" and rerun again another problem would show up at model loading. ``` Loading checkpoint shards: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 4/4 [00:29<00:00, 7.32s/it] Traceback (most recent call last): File "generate_cpu.py", line 139, in <module> main() File "generate_cpu.py", line 113, in main model = AutoModelForCausalLM.from_pretrained( File "/opt/anaconda3/envs/py385/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1896, in to return super().to(*args, **kwargs) File "/opt/anaconda3/envs/py385/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1145, in to return self._apply(convert) File "/opt/anaconda3/envs/py385/lib/python3.8/site-packages/torch/nn/modules/module.py", line 797, in _apply module._apply(fn) File "/opt/anaconda3/envs/py385/lib/python3.8/site-packages/torch/nn/modules/module.py", line 797, in _apply module._apply(fn) File "/opt/anaconda3/envs/py385/lib/python3.8/site-packages/torch/nn/modules/module.py", line 797, in _apply module._apply(fn) [Previous line repeated 1 more time] File "/opt/anaconda3/envs/py385/lib/python3.8/site-packages/torch/nn/modules/module.py", line 820, in _apply param_applied = fn(param) File "/opt/anaconda3/envs/py385/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1143, in convert return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking) NotImplementedError: Cannot copy out of meta tensor; no data! ``` I have no idea why this happened and it is working if just use all GPU for model and tokenizer but I just want to know how it works if using CPU for both since I wish I could leverage the CPU RAM totally which I have only over 40G VRAM in total and it seems to be slow when using GPU only by default. 
**Here attaches the target script generate.py:** ``` import argparse import torch from dialogues import DialogueTemplate, get_dialogue_template from transformers import (AutoModelForCausalLM, AutoTokenizer, GenerationConfig, set_seed) def main(): parser = argparse.ArgumentParser() parser.add_argument( "--model_id", type=str, help="Name of model to generate samples with", ) parser.add_argument( "--revision", type=str, default=None, help="The model repo's revision to use", ) parser.add_argument( "--system_prompt", type=str, default=None, help="Overrides the dialogue template's system prompt" ) args = parser.parse_args() # Set seed for reproducibility set_seed(42) prompts = [ [ { "role": "user", "content": "Develop a C++ program that reads a text file line by line and counts the number of occurrences of a specific word in the file.", } ], [ { "role": "user", "content": "Implement a Python function to find the longest common subsequence of two input strings using dynamic programming.", } ], [{"role": "user", "content": "Implement a regular expression in Python to validate an email address."}], [ { "role": "user", "content": "Write a program to find the nth Fibonacci number using dynamic programming.", } ], [ { "role": "user", "content": "Implement a binary search algorithm to find a specific element in a sorted array.", } ], [{"role": "user", "content": "Implement a queue data structure using two stacks in Python."}], [ { "role": "user", "content": "Implement a program to find the common elements in two arrays without using any extra data structures.", } ], ] try: dialogue_template = DialogueTemplate.from_pretrained(args.model_id, revision=args.revision) except Exception: print("No dialogue template found in model repo. Defaulting to the `no_system` template.") dialogue_template = get_dialogue_template("no_system") if args.system_prompt is not None: dialogue_template.system = args.system_prompt formatted_prompts = [] for prompt in prompts: dialogue_template.messages = [prompt] if isinstance(prompt, dict) else prompt formatted_prompts.append(dialogue_template.get_inference_prompt()) print("=== SAMPLE PROMPT ===") print(formatted_prompts[0]) print("=====================") device = "cpu" tokenizer = AutoTokenizer.from_pretrained(args.model_id, revision=args.revision) print(f"Special tokens: {tokenizer.special_tokens_map}") print(f"EOS token ID for generation: {tokenizer.convert_tokens_to_ids(dialogue_template.end_token)}") generation_config = GenerationConfig( temperature=0.2, top_k=50, top_p=0.95, repetition_penalty=1.2, do_sample=True, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.convert_tokens_to_ids(dialogue_template.end_token), min_new_tokens=32, max_new_tokens=256, ) model = AutoModelForCausalLM.from_pretrained( args.model_id, revision=args.revision, device_map="auto", torch_dtype=torch.float32 ).to(device) outputs = "" for idx, prompt in enumerate(formatted_prompts): batch = tokenizer(prompt, return_tensors="pt", return_token_type_ids=False).to(device) generated_ids = model.generate(**batch, generation_config=generation_config) generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=False).lstrip() outputs += generated_text + "\n\n" print(f"=== EXAMPLE {idx} ===") print() print(generated_text) print() print("======================") print() if __name__ == "__main__": main() ``` ### Expected behavior 1. No error messages is given after doing some code adaption using only CPU for both model and tokenizer. 2. 
Speed up the model loading and tokenizing process on current code if possible
06-02-2023 08:24:20
06-02-2023 08:24:20
This code cannot work. You can't use `device_map="auto"` and then move back your model to another device.<|||||>Hi @sgugger, Thanks for your comment, so how could i adapt the code for device map setup to suppport CPU loading well in this case, do you have any suggestion?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
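Following the maintainer's comment, a sketch of a plain CPU load: drop `device_map="auto"` entirely instead of calling `.to("cpu")` afterwards (the model id here is a placeholder for `args.model_id`).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your/model-id"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
# No device_map here: the weights are materialized directly on the CPU.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)
```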
transformers
23,948
closed
fixed direct input of state dict functionality
# What does this PR do? In the documentation it says that if you call from_pretrained with pretrained_model_name_or_path set to None and specify config and state_dict then you can directly input the state_dict. This branch accomplishes that. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
06-02-2023 00:36:56
06-02-2023 00:36:56
Could you please post a reproducer of the bug you encounter? The situation described in the doc works and is tested, but it looks like you are trying to use it with other extra arguments like `low_cpu_mem_usage=True`, which is definitely not a combination we support.<|||||>> Could you please post a reproducer of the bug you encounter? The situation described in the doc works and is tested, but it looks like you are trying to use it with other extra arguments like `low_cpu_mem_usage=True`, which is definitely not a combination we support. Yeah my bad should have been more specific. I needed to specify the device map which automatically sets low_cpu_mem_usage to true, thereby disabling the above behavior. <|||||>Yes, we do not support that use case. `device_map="auto"` requires loading the state dict from disk to.<|||||>> Yes, we do not support that use case. `device_map="auto"` requires loading the state dict from disk to. Thank you for your timely replies. It appears to work just as intended for me unless Iโ€™m missing something. <|||||>I directly inputted the device map as = {โ€˜โ€™ :0} though. <|||||>You misunderstand me: we offer the functionality to pass along a `state_dict` in the very basic case of no other kwargs and we do not want to complexify `from_pretrained` to support those use cases. Passing along a `state_dict` here is only possible to guarantee backward compatibility, but it is not a feature we maintain outside of the basic use case config + state_dict.<|||||>> You misunderstand me: we offer the functionality to pass along a `state_dict` in the very basic case of no other kwargs and we do not want to complexify `from_pretrained` to support those use cases. Passing along a `state_dict` here is only possible to guarantee backward compatibility, but it is not a feature we maintain outside of the basic use case config + state_dict. Ok thank you for your time. Iโ€™ll just stick to my using my fork for my use case. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
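Per the maintainer's comments, the supported combination is the basic one, with no extra kwargs such as `device_map` or `low_cpu_mem_usage`; a minimal sketch:

```python
from transformers import BertConfig, BertModel

config = BertConfig()
state_dict = BertModel(config).state_dict()

# Documented legacy path: no name/path, just config + state_dict.
model = BertModel.from_pretrained(None, config=config, state_dict=state_dict)
```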
transformers
23,947
closed
Unable to load from pretrained by inputting state_dict directly
### System Info In the documentation it says that if you set pretrained_model_name_or_path to None then you can directly input the state_dict for the model. I needed to do this for a specific use case where it doesn't make sense for me to save and then load the checkpoints as usual. Attached is a screenshot of the documentation that says this. I was able to implement this functionality with very minimal changes to the code and without interfering with anything else, so I would love to push this to the repository; I just need the proper permissions. ![State_dict](https://github.com/huggingface/transformers/assets/13650673/856d0a3a-9a52-4db9-8c15-3aba605e9587) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When loading with from_pretrained and inputting the config and state_dict but setting pretrained_model_name_or_path to None, you get a "None object does not have an 'endswith' method" error, or something along those lines. ### Expected behavior The expected behavior is that it would load the state_dict that you input. My fixed version does this.
06-02-2023 00:33:02
06-02-2023 00:33:02
Hi, after reading some more online, I don't believe that this is the proper way to do a pull request. This is my first time contributing to open source. Sorry about that. I would delete this all if I was able. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,946
closed
Getting high cossine similarity for any given pair of random text from the xlmr model
### System Info Hey guys, I am using xlm-roberta-base with transformers==4.16.0 torch==1.13.1 For any given pair of random text, I always get a high cosine similarity. ```` model_id = "xlm-roberta-base" tokenizer = XLMRobertaTokenizerFast.from_pretrained(model_id) xlmr_model = XLMRobertaModel.from_pretrained(model_id) xlmr_model.eval() def get_model_output(text, max_length=30): text_input = tokenizer( text, padding="max_length", max_length=max_length, truncation=True, return_tensors="pt", add_special_tokens=True ) print(text_input) text_embedding = xlmr_model( **text_input ) embedding = text_embedding.pooler_output.flatten().tolist() return embedding def cosine_similarity(v1, v2): return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))) text1 = "white joota with blue lace" # (white shoes with blue lace) text2 = "vcvbhjook jjjjj" embedding1 = get_model_output(text1) embedding2 = get_model_output(text2) print(cosine_similarity(embedding1, embedding2)) # 0.99 ```` I basically want to use xlmr for a multilingual ecommerce usecase but I am stuck at this point. Let me know if I am mising something or there's some problem with the artifacts Thank You! ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```` model_id = "xlm-roberta-base" tokenizer = XLMRobertaTokenizerFast.from_pretrained(model_id) xlmr_model = XLMRobertaModel.from_pretrained(model_id) xlmr_model.eval() def get_model_output(text, max_length=30): text_input = tokenizer( text, padding="max_length", max_length=max_length, truncation=True, return_tensors="pt", add_special_tokens=True ) print(text_input) text_embedding = xlmr_model( **text_input ) embedding = text_embedding.pooler_output.flatten().tolist() return embedding def cosine_similarity(v1, v2): return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))) text1 = "white joota with blue lace" # (white shoes with blue lace) text2 = "vcvbhjook jjjjj" embedding1 = get_model_output(text1) embedding2 = get_model_output(text2) print(cosine_similarity(embedding1, embedding2)) # 0.99 ```` ### Expected behavior Low cosine similarity as these are two random texts
06-01-2023 19:32:53
06-01-2023 19:32:53
Please use the [forums](https://discuss.huggingface.co/) for questions like this.<|||||>Sure, will raise it there. Closing the issue
transformers
23,945
closed
[Whisper Tokenizer] Skip special tokens when decoding with timestamps
# What does this PR do? Decoding with timestamps always returns the special tokeniser tokens, ignoring the argument `skip_special_tokens`. This PR fixes this behaviour to respect `skip_special_tokens`, both with and without timestamp decoding. Before: ```python from transformers import WhisperTokenizer tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny") encoded_input = [ 50258, 50363, 50364, 634, 575, 12525, 22618, 1968, 6144, 35617, 20084, 1756, 311, 589, 307, 534, 10281, 934, 439, 293, 50676, 50676, 393, 4411, 294, 309, 457, 707, 295, 33301, 286, 392, 6628, 13, 50836, 50257, ] tokenizer.decode(encoded_input, decode_with_timestamps=True, skip_special_tokens=True) ``` **Output:** ``` <|startoftranscript|><|notimestamps|><|0.00|> He has grave doubts whether Sir Frederick Layton's work is really Greek after all and<|6.24|><|6.24|> can discover in it but little of rocky Ithaca.<|9.44|><|endoftext|> ``` Now: ```python tokenizer.decode(encoded_input, decode_with_timestamps=True, skip_special_tokens=True) ``` **Output:** ``` <|0.00|> He has grave doubts whether Sir Frederick Layton's work is really Greek after all and<|6.24|><|6.24|> can discover in it but little of rocky Ithaca.<|9.44|> ```
06-01-2023 17:28:40
06-01-2023 17:28:40
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,944
closed
Fix `ReduceLROnPlateau` object has no attribute 'get_last_lr'
# What does this PR do? Fix bug when setting lr_scheduler to `ReduceLROnPlateau`. When selecting this scheduler, use the optimizer to get the lr. Refer to [this discussion](https://discuss.pytorch.org/t/shouldnt-reducelronplateau-super-optimizer-in-its-init/89390/2). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #23934 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
06-01-2023 16:58:31
06-01-2023 16:58:31
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for the fix! Can you run a quick `make style` on your branch? done
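For readers landing here, a minimal sketch of the approach described in the PR body: when the scheduler is `ReduceLROnPlateau` (which has no `get_last_lr()`), the current learning rate is read from the optimizer's param groups. This is an illustrative approximation, not the exact code merged in the PR.
```python
import torch
from torch.optim.lr_scheduler import ReduceLROnPlateau

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)
scheduler = ReduceLROnPlateau(optimizer, mode="min", patience=2)


def get_current_lr(optimizer, scheduler):
    # ReduceLROnPlateau does not implement get_last_lr(), so fall back
    # to the value stored in the optimizer's param groups.
    if isinstance(scheduler, ReduceLROnPlateau):
        return optimizer.param_groups[0]["lr"]
    return scheduler.get_last_lr()[0]


print(get_current_lr(optimizer, scheduler))  # 0.0005
```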
transformers
23,943
closed
Revert "Update stale.yml to use HuggingFaceBot"
Reverts huggingface/transformers#23941
06-01-2023 15:56:45
06-01-2023 15:56:45
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23943). All of your documentation changes will be reflected on that endpoint.
transformers
23,942
closed
use _make_causal_mask in clip/vit models
# What does this PR do? Uses the `_make_causal_mask` helper to build the causal attention mask, which also works for the `bfloat16` dtype. The current `_build_causal_attention_mask` relies on `torch.triu_`, which does not support this in the released PyTorch versions and was only fixed recently on main: https://github.com/pytorch/pytorch/pull/101414. Should fix https://github.com/huggingface/diffusers/issues/3453
06-01-2023 15:08:24
06-01-2023 15:08:24
_The documentation is not available anymore as the PR was closed or merged._
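For context, here is a rough, hedged sketch of how a causal attention mask can be built without `torch.triu_` so that it also works in `bfloat16`. It only illustrates the general idea behind a `_make_causal_mask`-style helper and is not the exact implementation used in the PR.
```python
import torch


def make_causal_mask(seq_len: int, dtype: torch.dtype) -> torch.Tensor:
    # Start from a fully masked matrix filled with the most negative value
    # representable in the target dtype (works for bfloat16 as well).
    mask = torch.full((seq_len, seq_len), torch.finfo(dtype).min, dtype=dtype)
    # Unmask the lower triangle: each position may attend to itself and the past.
    cond = torch.arange(seq_len)
    mask.masked_fill_(cond < (cond + 1).view(seq_len, 1), 0.0)
    return mask


print(make_causal_mask(4, torch.bfloat16))
```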
transformers
23,941
closed
Update stale.yml to use HuggingFaceBot
Updates the token used by the stale workflow so that it leverages the HuggingFaceBot triaging bot instead.
06-01-2023 14:53:49
06-01-2023 14:53:49
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23941). All of your documentation changes will be reflected on that endpoint.
transformers
23,940
closed
Make TF ESM inv_freq non-trainable like PyTorch
Minor TF-ESM fix - the `inv_freq` weight is a non-trainable buffer in PyTorch, but was trainable in TF models. This might cause small discrepancies when fine-tuning.
06-01-2023 14:36:53
06-01-2023 14:36:53
_The documentation is not available anymore as the PR was closed or merged._
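For illustration, the usual Keras counterpart of a PyTorch registered buffer is a weight tracked by the layer but created with `trainable=False`. The toy layer below is a hedged sketch of that pattern (names and formula are made up for the example; this is not the actual TF-ESM code).
```python
import tensorflow as tf


class RotaryEmbedding(tf.keras.layers.Layer):
    """Toy layer showing the buffer-like, non-trainable weight pattern."""

    def __init__(self, dim: int, **kwargs):
        super().__init__(**kwargs)
        inv_freq = 1.0 / (10000.0 ** (tf.range(0, dim, 2, dtype=tf.float32) / dim))
        # Counterpart of a PyTorch registered buffer: tracked and saved with the
        # layer's weights, but excluded from gradient updates.
        self.inv_freq = tf.Variable(inv_freq, trainable=False, name="inv_freq")


layer = RotaryEmbedding(8)
print(len(layer.trainable_weights))      # 0
print(len(layer.non_trainable_weights))  # 1
```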
transformers
23,939
closed
rename DocumentQuestionAnsweringTool parameter input to match docstring
# What does this PR do? Rename the `DocumentQuestionAnsweringTool` parameter input to match the docstring. Fixes [#23921](https://github.com/huggingface/transformers/issues/23921) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
06-01-2023 14:29:25
06-01-2023 14:29:25
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,938
closed
Add an option to reduce compile() console spam
This is a very simple PR to add an option to reduce the console spam from our `compile()` method. Briefly, Keras models expect you to pass a loss function to `compile()`. However, `transformers` models usually compute the loss internally. The solution we used was that if the user didn't pass a `loss` argument to `compile()`, we would read the model's `loss` output and use that as the loss. This was non-standard Keras behaviour, though, so we added a warning to make sure users knew what was going on. Now that we've been doing it for a while, that warning is probably just more console spam. This PR adds the option to specify `loss="auto"`, which has exactly the same behaviour but eliminates the warning. The warning also includes a line telling users that they can do that.
06-01-2023 14:14:07
06-01-2023 14:14:07
_The documentation is not available anymore as the PR was closed or merged._<|||||>I think we tried that, but then most users would miss it! I think we definitely want to make users aware of what's going on (because this is the one big place we diverge from the Keras training standard), but give them an option to disable the warning once they know.<|||||>I think proper documentation is a better solution than throwing a warning all the time. Users are pretty mad at us with those already.<|||||>This could be a good opportunity to document stuff, actually! Do you prefer a sidebar tutorial, or some text in the docstring that gets added to all of our TF models?<|||||>I think in the official examples where we actually use `torch.compile` is probably the best place.<|||||>This is TF compilation, not `torch.compile()`! All of our TF examples use it - you can't call `model.fit()` in Keras before calling `model.compile()`<|||||>Sorry I meant `model.compile`. And yes, all the examples sounds about right.<|||||>Done! I added a comment when we call `compile()` in the example scripts. I also fixed up one unnecessary use of `run_eagerly` in `run_mlm.py` - oops! I moved the warning message to `logging.info`, so it should be invisible to most users now, and they can still pass `loss="auto"` to disable it entirely if desired.
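For reference, a minimal usage sketch of the option added by this PR, assuming a Keras-compatible `transformers` TF model (the checkpoint and optimizer settings are placeholders):
```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

# loss="auto" keeps the default behaviour (use the model's internal loss output)
# but silences the informational message about it.
model.compile(optimizer=tf.keras.optimizers.Adam(3e-5), loss="auto")
```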
transformers
23,937
closed
Pin rhoknp
# What does this PR do? The recent release of `rhoknp` breaks the BERT Japanese tests, so this PR pins it.
06-01-2023 14:00:56
06-01-2023 14:00:56
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,936
closed
[don't merge] Test hfh v0.15.0.rc0
Trying to trigger the CI differently. Still stuck on the ci-... branch (see https://github.com/huggingface/transformers/actions/runs/5143134417) so I guess I did something wrong.
06-01-2023 13:45:04
06-01-2023 13:45:04
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23936). All of your documentation changes will be reflected on that endpoint.
transformers
23,935
closed
RuntimeError: unscale_() has already been called on this optimizer since the last update().
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): 2.9.2 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.3 (gpu) - Jax version: 0.4.1 - JaxLib version: 0.4.1 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: not sure, see colab below ### Who can help? @younesbelkada @pacman100 ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Run the colab: https://colab.research.google.com/drive/1ARmlaZZaKyAg6HTi57psFLPeh0hDRcPX?usp=sharing#scrollTo=Duak7T_B3VpJ @younesbelkada @pacman100 I have reinstalled from source as suggested after the fix in (https://github.com/huggingface/transformers/pull/23914/files) but I still get the error. I'm in the latest commit `transformers @ git+https://github.com/huggingface/transformers.git@fabe17a726bbf6081cfbcc975d8ac451a81f3e2d` and you can tell from the stacktrace that the line numbers are different (due to the changes to fix the problem when using QLora). The script does not use QLora (afaik) am I missing something? ### Expected behavior The call `trainer.train()` should work and instead produces the following exception: ```โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ Traceback (most recent call last) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ in <cell line: 17>:17 โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1661 in train โ”‚ โ”‚ โ”‚ โ”‚ 1658 โ”‚ โ”‚ inner_training_loop = find_executable_batch_size( โ”‚ โ”‚ 1659 โ”‚ โ”‚ โ”‚ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size โ”‚ โ”‚ 1660 โ”‚ โ”‚ ) โ”‚ โ”‚ โฑ 1661 โ”‚ โ”‚ return inner_training_loop( โ”‚ โ”‚ 1662 โ”‚ โ”‚ โ”‚ args=args, โ”‚ โ”‚ 1663 โ”‚ โ”‚ โ”‚ resume_from_checkpoint=resume_from_checkpoint, โ”‚ โ”‚ 1664 โ”‚ โ”‚ โ”‚ trial=trial, โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1995 in _inner_training_loop โ”‚ โ”‚ โ”‚ โ”‚ 1992 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ args.max_grad_norm, โ”‚ โ”‚ 1993 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ ) โ”‚ โ”‚ 1994 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ else: โ”‚ โ”‚ โฑ 1995 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ self.accelerator.clip_grad_norm_( โ”‚ โ”‚ 1996 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ model.parameters(), โ”‚ โ”‚ 1997 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ args.max_grad_norm, โ”‚ โ”‚ 1998 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ ) โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py:1817 in clip_grad_norm_ โ”‚ โ”‚ โ”‚ โ”‚ 1814 โ”‚ โ”‚ โ”‚ # `accelerator.backward(loss)` is doing that automatically. Therefore, its i โ”‚ โ”‚ 1815 โ”‚ โ”‚ โ”‚ # We cannot return the gradient norm because DeepSpeed does it. 
โ”‚ โ”‚ 1816 โ”‚ โ”‚ โ”‚ return None โ”‚ โ”‚ โฑ 1817 โ”‚ โ”‚ self.unscale_gradients() โ”‚ โ”‚ 1818 โ”‚ โ”‚ return torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=norm_type) โ”‚ โ”‚ 1819 โ”‚ โ”‚ โ”‚ 1820 โ”‚ def clip_grad_value_(self, parameters, clip_value): โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py:1780 in unscale_gradients โ”‚ โ”‚ โ”‚ โ”‚ 1777 โ”‚ โ”‚ โ”‚ for opt in optimizer: โ”‚ โ”‚ 1778 โ”‚ โ”‚ โ”‚ โ”‚ while isinstance(opt, AcceleratedOptimizer): โ”‚ โ”‚ 1779 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ opt = opt.optimizer โ”‚ โ”‚ โฑ 1780 โ”‚ โ”‚ โ”‚ โ”‚ self.scaler.unscale_(opt) โ”‚ โ”‚ 1781 โ”‚ โ”‚ โ”‚ 1782 โ”‚ def clip_grad_norm_(self, parameters, max_norm, norm_type=2): โ”‚ โ”‚ 1783 โ”‚ โ”‚ """ โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/torch/cuda/amp/grad_scaler.py:275 in unscale_ โ”‚ โ”‚ โ”‚ โ”‚ 272 โ”‚ โ”‚ optimizer_state = self._per_optimizer_states[id(optimizer)] โ”‚ โ”‚ 273 โ”‚ โ”‚ โ”‚ โ”‚ 274 โ”‚ โ”‚ if optimizer_state["stage"] is OptState.UNSCALED: โ”‚ โ”‚ โฑ 275 โ”‚ โ”‚ โ”‚ raise RuntimeError("unscale_() has already been called on this optimizer sin โ”‚ โ”‚ 276 โ”‚ โ”‚ elif optimizer_state["stage"] is OptState.STEPPED: โ”‚ โ”‚ 277 โ”‚ โ”‚ โ”‚ raise RuntimeError("unscale_() is being called after step().") โ”‚ โ”‚ 278 โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ RuntimeError: unscale_() has already been called on this optimizer since the last update(). ```
06-01-2023 12:37:00
06-01-2023 12:37:00
Hi @kafkasl I can confirm the training works if I install `accelerate` from source using your notebook. ![Screenshot 2023-06-01 at 14 47 20](https://github.com/huggingface/transformers/assets/49240599/55f8ecbd-5761-43b9-895f-a73849fd16a6) Can you add: ```bash !pip install -q git+https://github.com/huggingface/peft.git git+https://github.com/huggingface/transformers.git git+https://github.com/huggingface/accelerate.git ``` On the setup block and re-run it again?<|||||>@younesbelkada I seem to still have the issue, the accelerate version used: ``` !pip freeze | grep accelerate accelerate @ git+https://github.com/huggingface/accelerate.git@4d583ad6a1f13d1d7617e6a37f791ec01a68413a ``` how did you make it work? I just have this setup but it still fails: ``` !pip install -q bitsandbytes datasets loralib # !pip install -q git+https://github.com/huggingface/peft.git git+https://github.com/huggingface/transformers.git !pip install git+https://github.com/huggingface/peft.git git+https://github.com/huggingface/transformers.git git+https://github.com/huggingface/accelerate.git ``` what am I missing? <|||||>Had the same issue yesterday. Kept getting the error even though I updated the transformer library. I believe what helped was to `pip uninstall transformers`, restart the kernel, and then `pip install -U git+https://github.com/huggingface/transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9`.<|||||>@kafkasl I just re-ran the notebook again in a fresh colab instance and it works on my end ![Screenshot 2023-06-01 at 16 07 15](https://github.com/huggingface/transformers/assets/49240599/808d4d3d-5aa7-4410-afb4-2b734c7d66f8) Here is a copy of the notebook: https://colab.research.google.com/drive/1VmNo77ub8IVe-LGSZtjdjEzWY45aN0Tm?usp=sharing<|||||>I confirm it works although I have no idea why because the code looks the same, and I've used the same env twice. Anyway thanks for the help!<|||||>@younesbelkada I actually still receive this error when running with 4bit quantization. Using the same installations and running this notebook I run into the same error. Note that without qlora I can run just fine (ie. the notebook linked here does work but [this notebook](https://huggingface.co/dfurman/falcon-40b-chat-oasst1/blob/main/finetune_falcon40b_oasst1_with_bnb_peft.ipynb) does not). Any idea why this could be? 
Output of `pip freeze` is below for relevant libraries ```bash accelerate @ git+https://github.com/huggingface/accelerate.git@eba6eb79dc2ab652cd8b44b37165a4852768a8ac bitsandbytes==0.39.0 einops==0.6.1 loralib==0.1.1 peft @ git+https://github.com/huggingface/peft.git@7fb5f90a38cb39a31396de7e638ead9ecea692af transformers @ git+https://github.com/huggingface/transformers.git@460b844360131c99d3dd4dbd9c08545ea2e6ac9e ```<|||||>Hi @sam-h-bean Hmm, looking at the logs it seems that something is wrong with the tensorboard logging: ```bash โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ Traceback (most recent call last) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ in <cell line: 23>:23 โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1696 in train โ”‚ โ”‚ โ”‚ โ”‚ 1693 โ”‚ โ”‚ inner_training_loop = find_executable_batch_size( โ”‚ โ”‚ 1694 โ”‚ โ”‚ โ”‚ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size โ”‚ โ”‚ 1695 โ”‚ โ”‚ ) โ”‚ โ”‚ โฑ 1696 โ”‚ โ”‚ return inner_training_loop( โ”‚ โ”‚ 1697 โ”‚ โ”‚ โ”‚ args=args, โ”‚ โ”‚ 1698 โ”‚ โ”‚ โ”‚ resume_from_checkpoint=resume_from_checkpoint, โ”‚ โ”‚ 1699 โ”‚ โ”‚ โ”‚ trial=trial, โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/accelerate/utils/memory.py:132 in decorator โ”‚ โ”‚ โ”‚ โ”‚ 129 โ”‚ โ”‚ โ”‚ if batch_size == 0: โ”‚ โ”‚ 130 โ”‚ โ”‚ โ”‚ โ”‚ raise RuntimeError("No executable batch size found, reached zero.") โ”‚ โ”‚ 131 โ”‚ โ”‚ โ”‚ try: โ”‚ โ”‚ โฑ 132 โ”‚ โ”‚ โ”‚ โ”‚ return function(batch_size, *args, **kwargs) โ”‚ โ”‚ 133 โ”‚ โ”‚ โ”‚ except Exception as e: โ”‚ โ”‚ 134 โ”‚ โ”‚ โ”‚ โ”‚ if should_reduce_batch_size(e): โ”‚ โ”‚ 135 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ gc.collect() โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:2052 in _inner_training_loop โ”‚ โ”‚ โ”‚ โ”‚ 2049 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ self.state.epoch = epoch + (step + 1 + steps_skipped) / steps_in_epo โ”‚ โ”‚ 2050 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ self.control = self.callback_handler.on_step_end(args, self.state, s โ”‚ โ”‚ 2051 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โฑ 2052 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_k โ”‚ โ”‚ 2053 โ”‚ โ”‚ โ”‚ โ”‚ else: โ”‚ โ”‚ 2054 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ self.control = self.callback_handler.on_substep_end(args, self.state โ”‚ โ”‚ 2055 โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:2338 in _maybe_log_save_evaluate โ”‚ โ”‚ โ”‚ โ”‚ 2335 โ”‚ โ”‚ โ”‚ self._globalstep_last_logged = self.state.global_step โ”‚ โ”‚ 2336 โ”‚ โ”‚ โ”‚ self.store_flos() โ”‚ โ”‚ 2337 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โฑ 2338 โ”‚ โ”‚ โ”‚ self.log(logs) โ”‚ โ”‚ 2339 โ”‚ โ”‚ โ”‚ โ”‚ 2340 โ”‚ โ”‚ metrics = None โ”‚ โ”‚ 2341 โ”‚ โ”‚ if self.control.should_evaluate: โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:2700 in log โ”‚ โ”‚ โ”‚ โ”‚ 2697 โ”‚ โ”‚ โ”‚ โ”‚ 2698 โ”‚ โ”‚ output = {**logs, **{"step": self.state.global_step}} โ”‚ โ”‚ 2699 โ”‚ โ”‚ self.state.log_history.append(output) โ”‚ โ”‚ โฑ 2700 โ”‚ โ”‚ self.control = self.callback_handler.on_log(self.args, self.state, self.control, โ”‚ โ”‚ 2701 โ”‚ โ”‚ โ”‚ 2702 โ”‚ def _prepare_input(self, data: Union[torch.Tensor, Any]) -> Union[torch.Tensor, Any] โ”‚ โ”‚ 2703 โ”‚ โ”‚ """ โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer_callback.py:390 in on_log โ”‚ โ”‚ โ”‚ โ”‚ 387 โ”‚ โ”‚ โ”‚ 388 โ”‚ def on_log(self, args: TrainingArguments, state: TrainerState, control: 
TrainerContr โ”‚ โ”‚ 389 โ”‚ โ”‚ control.should_log = False โ”‚ โ”‚ โฑ 390 โ”‚ โ”‚ return self.call_event("on_log", args, state, control, logs=logs) โ”‚ โ”‚ 391 โ”‚ โ”‚ โ”‚ 392 โ”‚ def on_prediction_step(self, args: TrainingArguments, state: TrainerState, control: โ”‚ โ”‚ 393 โ”‚ โ”‚ return self.call_event("on_prediction_step", args, state, control) โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer_callback.py:397 in call_event โ”‚ โ”‚ โ”‚ โ”‚ 394 โ”‚ โ”‚ โ”‚ 395 โ”‚ def call_event(self, event, args, state, control, **kwargs): โ”‚ โ”‚ 396 โ”‚ โ”‚ for callback in self.callbacks: โ”‚ โ”‚ โฑ 397 โ”‚ โ”‚ โ”‚ result = getattr(callback, event)( โ”‚ โ”‚ 398 โ”‚ โ”‚ โ”‚ โ”‚ args, โ”‚ โ”‚ 399 โ”‚ โ”‚ โ”‚ โ”‚ state, โ”‚ โ”‚ 400 โ”‚ โ”‚ โ”‚ โ”‚ control, โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/integrations.py:655 in on_log โ”‚ โ”‚ โ”‚ โ”‚ 652 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ "This invocation of Tensorboard's writer.add_scalar() " โ”‚ โ”‚ 653 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ "is incorrect so we dropped this attribute." โ”‚ โ”‚ 654 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ ) โ”‚ โ”‚ โฑ 655 โ”‚ โ”‚ โ”‚ self.tb_writer.flush() โ”‚ โ”‚ 656 โ”‚ โ”‚ โ”‚ 657 โ”‚ def on_train_end(self, args, state, control, **kwargs): โ”‚ โ”‚ 658 โ”‚ โ”‚ if self.tb_writer: โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/torch/utils/tensorboard/writer.py:1200 in flush โ”‚ โ”‚ โ”‚ โ”‚ 1197 โ”‚ โ”‚ if self.all_writers is None: โ”‚ โ”‚ 1198 โ”‚ โ”‚ โ”‚ return โ”‚ โ”‚ 1199 โ”‚ โ”‚ for writer in self.all_writers.values(): โ”‚ โ”‚ โฑ 1200 โ”‚ โ”‚ โ”‚ writer.flush() โ”‚ โ”‚ 1201 โ”‚ โ”‚ โ”‚ 1202 โ”‚ def close(self): โ”‚ โ”‚ 1203 โ”‚ โ”‚ if self.all_writers is None: โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/torch/utils/tensorboard/writer.py:150 in flush โ”‚ โ”‚ โ”‚ โ”‚ 147 โ”‚ โ”‚ Call this method to make sure that all pending events have been written to โ”‚ โ”‚ 148 โ”‚ โ”‚ disk. โ”‚ โ”‚ 149 โ”‚ โ”‚ """ โ”‚ โ”‚ โฑ 150 โ”‚ โ”‚ self.event_writer.flush() โ”‚ โ”‚ 151 โ”‚ โ”‚ โ”‚ 152 โ”‚ def close(self): โ”‚ โ”‚ 153 โ”‚ โ”‚ """Flushes the event file to disk and close the file. โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/tensorboard/summary/writer/event_file_writer.py:125 in โ”‚ โ”‚ flush โ”‚ โ”‚ โ”‚ โ”‚ 122 โ”‚ โ”‚ Call this method to make sure that all pending events have been โ”‚ โ”‚ 123 โ”‚ โ”‚ written to disk. โ”‚ โ”‚ 124 โ”‚ โ”‚ """ โ”‚ โ”‚ โฑ 125 โ”‚ โ”‚ self._async_writer.flush() โ”‚ โ”‚ 126 โ”‚ โ”‚ โ”‚ 127 โ”‚ def close(self): โ”‚ โ”‚ 128 โ”‚ โ”‚ """Performs a final flush of the event file to disk, stops the โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/tensorboard/summary/writer/event_file_writer.py:190 in โ”‚ โ”‚ flush โ”‚ โ”‚ โ”‚ โ”‚ 187 โ”‚ โ”‚ โ”‚ if self._closed: โ”‚ โ”‚ 188 โ”‚ โ”‚ โ”‚ โ”‚ raise IOError("Writer is closed") โ”‚ โ”‚ 189 โ”‚ โ”‚ โ”‚ self._byte_queue.join() โ”‚ โ”‚ โฑ 190 โ”‚ โ”‚ โ”‚ self._writer.flush() โ”‚ โ”‚ 191 โ”‚ โ”‚ โ”‚ # Check the status again in case the background worker thread has โ”‚ โ”‚ 192 โ”‚ โ”‚ โ”‚ # failed in the meantime to avoid waiting until the next call to โ”‚ โ”‚ 193 โ”‚ โ”‚ โ”‚ # surface the error. 
โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/tensorboard/summary/writer/record_writer.py:43 in flush โ”‚ โ”‚ โ”‚ โ”‚ 40 โ”‚ โ”‚ self._writer.write(header + header_crc + data + footer_crc) โ”‚ โ”‚ 41 โ”‚ โ”‚ โ”‚ 42 โ”‚ def flush(self): โ”‚ โ”‚ โฑ 43 โ”‚ โ”‚ self._writer.flush() โ”‚ โ”‚ 44 โ”‚ โ”‚ โ”‚ 45 โ”‚ def close(self): โ”‚ โ”‚ 46 โ”‚ โ”‚ self._writer.close() โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/tensorflow/python/lib/io/file_io.py:221 in flush โ”‚ โ”‚ โ”‚ โ”‚ 218 โ”‚ data would survive an application crash but not necessarily an OS crash. โ”‚ โ”‚ 219 โ”‚ """ โ”‚ โ”‚ 220 โ”‚ if self._writable_file: โ”‚ โ”‚ โฑ 221 โ”‚ self._writable_file.flush() โ”‚ โ”‚ 222 โ”‚ โ”‚ 223 def close(self): โ”‚ โ”‚ 224 โ”‚ r"""Closes the file. โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ FailedPreconditionError: /content/drive/MyDrive/Colab Files/WM/falcon-chat-40b/runs/May30_05-06-25_3e61552a1fc1/events.out.tfevents.1685423187.3e61552a1fc1.231.0; Transport endpoint is not connected ``` I would probably try to use something else than tensorboard or change the log_steps in the trainign arguments, what do you think?<|||||>In my case I hit this due to dataset being too small. See how I can consistently reproduce when dataset has 2 entries but issue goes away once I have 5 entries: https://colab.research.google.com/drive/1jc1hab4pJBWHJNKeuScMDqMyPv0QqRro?usp=sharing<|||||>> `pip install -U git+https://github.com/huggingface/transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9`. Thanks. This worked for me.<|||||>I am still able to reproduce this double `unscale_()` issue with the original stack trace. Using [Falcon-Guanaco.ipynb](https://colab.research.google.com/drive/1BiQiw31DT7-cDp1-0ySXvvhzqomTdI-o?usp=sharing#scrollTo=mNnkgBq7Q3EU) and making the following modifications: 1) Add `dataset = dataset.shard(num_shards=80, index=0)` before constructing `SFTTrainer`. 2) Change `max_seq_length = 512` to `max_seq_length = 1024`. After these modifications the `trainer.train()` call reliably fails. Debugging I see the following steps happen before the error: 1) The `step()` call in `accelerate/optimizer.py` returns immediately because of the [self.gradient_state.sync_gradients condition](https://github.com/huggingface/accelerate/blob/543c59af224e3ea273633732319916b0698234ab/src/accelerate/optimizer.py#L128). As a result, the `optimizer_state["stage"]` is never transitioned to `OptState.READY`. 2) `optimizer_was_run` in the call from `transformers/trainer.py` is (incorrectly?) set to `True` [here](https://github.com/huggingface/transformers/blob/70c79940957fb25b54bd1b106935c756b90345eb/src/transformers/trainer.py#L1881). 3) On the next iteration, the double `unscale_()` error is raised since we call `clip_grad_norm_` every time. Edit: [Fix on my fork](https://github.com/huggingface/accelerate/commit/644038e3859f7ada492bb0053fd360d87c4b4d0a) is working.<|||||>@PhilDakin how to pip install your fork ? 
Thanks Edit : what works now is using ``` !pip install git+https://github.com/huggingface/transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9 ``` (colab)<|||||>!pip uninstall transformers restart kernel !pip install git+https://github.com/huggingface/transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9 Solves the error<|||||>> !pip uninstall transformers restart kernel !pip install git+https://github.com/huggingface/transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9 > > Solves the error do we need any other thing?<|||||>Hi everyone, I just re-ran the notebook again and it seems to work fine (c.f. screenshot below) ![Screenshot 2023-06-13 at 15 17 59](https://github.com/huggingface/transformers/assets/49240599/4b231900-8a4d-4d8f-a19a-c1c46c393134) ![Screenshot 2023-06-13 at 15 32 32](https://github.com/huggingface/transformers/assets/49240599/1afeffdc-887a-41ce-8cf0-c8db7b6d11e3) Make sure to use a fresh new environment when running the experiments <|||||>@younesbelkada no need to use previous transformer build ?<|||||>Hi @x4080 I just tried the notebook with the latest build of transformers, accelerate and trl and everything seems to work fine. I have updated the notebook accordingly: https://colab.research.google.com/drive/1BiQiw31DT7-cDp1-0ySXvvhzqomTdI-o?usp=sharing Make sure to use a fresh environment with these libraries installed<|||||>Hey I still have this issue when using huggingface trainer instead of trl. And its fixed by ```git+https://github.com/huggingface/transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9``` however as a side effect there is no adapter_config.json saved. https://drive.google.com/file/d/1jWKCRWwM1R0d00yFI8HUsakhQrbSeLFu/view?usp=share_link<|||||>Update: fixed the adapter_config saving issue by ``` from transformers import TrainerCallback class PeftSavingCallback(TrainerCallback): def on_save(self, args, state, control, **kwargs): checkpoint_path = os.path.join(args.output_dir, f"checkpoint-{state.global_step}") kwargs["model"].save_pretrained(checkpoint_path) if "pytorch_model.bin" in os.listdir(checkpoint_path): os.remove(os.path.join(checkpoint_path, "pytorch_model.bin")) ```<|||||>@younesbelkada I can confirm that using regular import works Thanks for all<|||||>@younesbelkada while it is the case that the notebook **with no modifications** runs correctly, the issue I've [described above persists](https://github.com/huggingface/transformers/issues/23935#issuecomment-1588134562). I'm maintaining a fix on my fork but don't have the contextual knowledge to confidently PR. Would you please take another look? (This is using latest of `accelerate` `trl` `transformers` installed via `git`)<|||||>I tried with different jsonl data, and the unscale_() error appears again -> its kinda related to data then, but I dont know what - anybody has similar experience ? 
Edit : I tried to add 2 more data and it works, but I recalled I add 1 data for the working data, and it error, so its kinda fuzzy right now<|||||>Here's error because of adding 1 more row of training data, the data is in the same data format - so no problem with new data either : ``` /tmp/ipykernel_29/1990004115.py:27 in <module> โ”‚ โ”‚ โ”‚ โ”‚ [Errno 2] No such file or directory: '/tmp/ipykernel_29/1990004115.py' โ”‚ โ”‚ โ”‚ โ”‚ /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1539 in train โ”‚ โ”‚ โ”‚ โ”‚ 1536 โ”‚ โ”‚ inner_training_loop = find_executable_batch_size( โ”‚ โ”‚ 1537 โ”‚ โ”‚ โ”‚ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size โ”‚ โ”‚ 1538 โ”‚ โ”‚ ) โ”‚ โ”‚ โฑ 1539 โ”‚ โ”‚ return inner_training_loop( โ”‚ โ”‚ 1540 โ”‚ โ”‚ โ”‚ args=args, โ”‚ โ”‚ 1541 โ”‚ โ”‚ โ”‚ resume_from_checkpoint=resume_from_checkpoint, โ”‚ โ”‚ 1542 โ”‚ โ”‚ โ”‚ trial=trial, โ”‚ โ”‚ โ”‚ โ”‚ /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1850 in _inner_training_loop โ”‚ โ”‚ โ”‚ โ”‚ 1847 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ args.max_grad_norm, โ”‚ โ”‚ 1848 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ ) โ”‚ โ”‚ 1849 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ else: โ”‚ โ”‚ โฑ 1850 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ self.accelerator.clip_grad_norm_( โ”‚ โ”‚ 1851 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ model.parameters(), โ”‚ โ”‚ 1852 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ args.max_grad_norm, โ”‚ โ”‚ 1853 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ ) โ”‚ โ”‚ โ”‚ โ”‚ /opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py:1913 in clip_grad_norm_ โ”‚ โ”‚ โ”‚ โ”‚ 1910 โ”‚ โ”‚ โ”‚ # `accelerator.backward(loss)` is doing that automatically. Therefore, its i โ”‚ โ”‚ 1911 โ”‚ โ”‚ โ”‚ # We cannot return the gradient norm because DeepSpeed does it. โ”‚ โ”‚ 1912 โ”‚ โ”‚ โ”‚ return None โ”‚ โ”‚ โฑ 1913 โ”‚ โ”‚ self.unscale_gradients() โ”‚ โ”‚ 1914 โ”‚ โ”‚ return torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=norm_type) โ”‚ โ”‚ 1915 โ”‚ โ”‚ โ”‚ 1916 โ”‚ def clip_grad_value_(self, parameters, clip_value): โ”‚ โ”‚ โ”‚ โ”‚ /opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py:1876 in unscale_gradients โ”‚ โ”‚ โ”‚ โ”‚ 1873 โ”‚ โ”‚ โ”‚ for opt in optimizer: โ”‚ โ”‚ 1874 โ”‚ โ”‚ โ”‚ โ”‚ while isinstance(opt, AcceleratedOptimizer): โ”‚ โ”‚ 1875 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ opt = opt.optimizer โ”‚ โ”‚ โฑ 1876 โ”‚ โ”‚ โ”‚ โ”‚ self.scaler.unscale_(opt) โ”‚ โ”‚ 1877 โ”‚ โ”‚ โ”‚ 1878 โ”‚ def clip_grad_norm_(self, parameters, max_norm, norm_type=2): โ”‚ โ”‚ 1879 โ”‚ โ”‚ """ โ”‚ โ”‚ โ”‚ โ”‚ /opt/conda/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py:275 in unscale_ โ”‚ โ”‚ โ”‚ โ”‚ 272 โ”‚ โ”‚ optimizer_state = self._per_optimizer_states[id(optimizer)] โ”‚ โ”‚ 273 โ”‚ โ”‚ โ”‚ โ”‚ 274 โ”‚ โ”‚ if optimizer_state["stage"] is OptState.UNSCALED: โ”‚ โ”‚ โฑ 275 โ”‚ โ”‚ โ”‚ raise RuntimeError("unscale_() has already been called on this optimizer sin โ”‚ โ”‚ 276 โ”‚ โ”‚ elif optimizer_state["stage"] is OptState.STEPPED: โ”‚ โ”‚ 277 โ”‚ โ”‚ โ”‚ raise RuntimeError("unscale_() is being called after step().") ``` Is there some kind of do and donts with the quantity of data ? should be even or odd ? But I dont think that makes a difference since its like all is random Edit : As expected, I remove that data and training works again Edit2 : I tried to copy the last data (that works) and training it again -> not working again, maybe there's some kind of how many data needed makes a difference ?<|||||>I'm also using jsonl and hitting this issue. I could consistently reproduce it when my dataset had 2 lines instead of 5 lines. 
I also could reproduce this issue with larger dataset when num_train_epochs was set to 3 vs 1.<|||||>@samos123 Thanks for confirmation, I thought I was crazy to watch that sometimes error and sometime its not ๐Ÿ˜„ <|||||>Another thing I noticed is that if I use num_train_epochs=1 that it works fine but the steps is way less than my dataset size. Dataset size is 282 however steps is ~80<|||||>> > I am still able to reproduce this double `unscale_()` issue with the original stack trace. > > Using [Falcon-Guanaco.ipynb](https://colab.research.google.com/drive/1BiQiw31DT7-cDp1-0ySXvvhzqomTdI-o?usp=sharing#scrollTo=mNnkgBq7Q3EU) and making the following modifications: > > 1. Add `dataset = dataset.shard(num_shards=80, index=0)` before constructing `SFTTrainer`. > 2. Change `max_seq_length = 512` to `max_seq_length = 1024`. > > After these modifications the `trainer.train()` call reliably fails. > > Debugging I see the following steps happen before the error: > > 1. The `step()` call in `accelerate/optimizer.py` returns immediately because of the [self.gradient_state.sync_gradients condition](https://github.com/huggingface/accelerate/blob/543c59af224e3ea273633732319916b0698234ab/src/accelerate/optimizer.py#L128). As a result, the `optimizer_state["stage"]` is never transitioned to `OptState.READY`. > 2. `optimizer_was_run` in the call from `transformers/trainer.py` is (incorrectly?) set to `True` [here](https://github.com/huggingface/transformers/blob/70c79940957fb25b54bd1b106935c756b90345eb/src/transformers/trainer.py#L1881). > 3. On the next iteration, the double `unscale_()` error is raised since we call `clip_grad_norm_` every time. > > Edit: [Fix on my fork](https://github.com/huggingface/accelerate/commit/644038e3859f7ada492bb0053fd360d87c4b4d0a) is working. Hello, able to reproduce this. cc @muellerzr Reason: Gradient Accumulation in trainer is happening across the epochs because of `total_batched_samples `. However, Accelerate resets step at the end of an epoch leading to `sync_gradients` being `False` and optimizer not being run and when the next time `clip_grad_norm_ ` is called, it leads to `unscale_() has already been called on this optimizer since the last update().` ``` def _do_sync(self): "Sets the right `sync_gradients` context and either resets or increases `self.step`" if self.gradient_state.end_of_dataloader: self.step = 0 self.gradient_state._set_sync_gradients(True) else: self.step += 1 self.gradient_state._set_sync_gradients((self.step % self.gradient_state.num_steps) == 0) ```<|||||>Adapter size is 443 bytes, so it does not work. But this error has disappeared when I installed recommended - https://colab.research.google.com/drive/1BiQiw31DT7-cDp1-0ySXvvhzqomTdI-o?usp=sharing#scrollTo=i-tTvEF1RT3y Still, have no idea why adapter is not saved. <|||||>Hello everyone, the above PR #24415 should resolve the issues with grad_acc around epoch boundaries.<|||||>> Had the same issue yesterday. Kept getting the error even though I updated the transformer library. I believe what helped was to `pip uninstall transformers`, restart the kernel, and then `pip install -U git+https://github.com/huggingface/transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9`. @leoplusx thanks, this solution worked for me<|||||>> Had the same issue yesterday. Kept getting the error even though I updated the transformer library. 
I believe what helped was to `pip uninstall transformers`, restart the kernel, and then `pip install -U git+https://github.com/huggingface/transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9`. worked for me, thanks!
transformers
23,934
closed
'ReduceLROnPlateau' object has no attribute 'get_last_lr'
### System Info - `transformers` version: 4.29.2 - Platform: Linux-4.18.0-425.19.2.el8_7.x86_64-x86_64-with-glibc2.28 - Python version: 3.9.5 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes (cuda 11.7) - Using distributed or parallel set-up in script?: No ### Who can help? @sgug ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I run the trainer in the default way (training a LM from scratch); using a cosine LR scheduler works fine, but the recently added `'reduce_lr_on_plateau'` seems to be incompatible with my current setup. ```python from datasets import DatasetDict from transformers import ( DataCollatorForLanguageModeling, PreTrainedTokenizerFast, Trainer, TrainingArguments, ) def initialize_trainer( model: AutoModelForMaskedLM, tokenizer: PreTrainedTokenizerFast, data_collator: DataCollatorForLanguageModeling, datasets: DatasetDict, model_init = None, **config, ): args = TrainingArguments(**config) trainer = Trainer( model=model, tokenizer=tokenizer, args=args, data_collator=data_collator, train_dataset=datasets["train"], eval_dataset=datasets["valid"], ) return trainer trainer = initialize_trainer( model, tokenizer, data_collator, datasets, output_dir='checkpoints', save_steps=10_000, eval_steps=100, logging_steps=100, per_device_train_batch_size=64, per_device_eval_batch_size=64, gradient_accumulation_steps=8, weight_decay=0.1, lr_scheduler_type='reduce_lr_on_plateau', learning_rate=5e-4, num_train_epochs=1, fp16=True, max_grad_norm=0.5, group_by_length=True, auto_find_batch_size=False, do_eval=True, evaluation_strategy='steps', report_to="wandb", ) trainer.train() ``` ```python >>> โ”‚ ~/.local/lib/python3.9/site-packages/transformers/trainer_pt_utils.py:854 in โ”‚ โ”‚ _get_learning_rate โ”‚ โ”‚ โ”‚ โ”‚ 851 โ”‚ โ”‚ โ”‚ else: โ”‚ โ”‚ 852 โ”‚ โ”‚ โ”‚ โ”‚ raise โ”‚ โ”‚ 853 โ”‚ else: โ”‚ โ”‚ โฑ 854 โ”‚ โ”‚ last_lr = self.lr_scheduler.get_last_lr()[0] โ”‚ โ”‚ 855 โ”‚ โ”‚ if torch.is_tensor(last_lr): โ”‚ โ”‚ 856 โ”‚ โ”‚ โ”‚ last_lr = last_lr.item() โ”‚ โ”‚ 857 โ”‚ return last_lr โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ AttributeError: 'ReduceLROnPlateau' object has no attribute 'get_last_lr' ``` ### Expected behavior I expect the scheduler to work without any additional steps, but maybe an extra argument is required? Or is my torch version incompatible with the way it is currently implemented?
06-01-2023 11:28:32
06-01-2023 11:28:32
This might be a bug indeed. Will have a look when I have some time but if someone wants to open a PR to fix this, they are more than welcome to do so!
transformers
23,933
open
Streamable way in model.generate
### Feature request As LLMs become bigger and bigger, generation gets quite slow, which means a long wait for the whole response. Would it be OK to add a `model.stream_generate` method, or something similar, for users to use? Some community projects have implemented such a thing, but that can be a problem too: users don't know which implementation is reliable to use, so it would be better for an official way to be introduced. I noticed there is some work from HF like text_generation, but what users might want is simply a tiny API alongside `model.generate` rather than a whole lib. ### Motivation Inference with huge models, for instance. ### Your contribution not yet
06-01-2023 11:27:29
06-01-2023 11:27:29
cc @gante <|||||>"streamer (BaseStreamer, optional) โ€” Streamer object that will be used to stream the generated sequences. Generated tokens are passed through streamer.put(token_ids) and the streamer is responsible for any further processing. kwargs โ€” Ad hoc parametrization of generate_config and/or additional model-specific kwargs that will be forwarded to the forward function of the model. If the model is an encoder-decoder model, encoder specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with decoder_." https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/text_generation#:~:text=be%20much%20smaller.-,streamer%20(BaseStreamer%2C%20optional)%20%E2%80%94%20Streamer%20object%20that%20will%20be%20used,prefixed%20and%20decoder%20specific%20kwargs%20should%20be%20prefixed%20with%20decoder_.,-Returns There is already an argument in the generate function for that<|||||>https://github.com/flozi00/atra/blob/375bd740c37fb42d35048ae33ae414841f22938a/atra/text_utils/chat.py#LL98C7-L98C7 There is an working implementation with gradio for example<|||||>That's indeed already supported: https://huggingface.co/docs/transformers/main/en/generation_strategies#streaming<|||||>Thank u all. From the official doc, seems it was printting to console by default: ![image](https://github.com/huggingface/transformers/assets/21303438/93a020fb-f6aa-4cb3-a228-cef02b3f2d08) How can I yeild the decoded text one by one? From the link Flozi post above, I notice they using a thread to start generate, So in my case, what is the standared way to got it work as I needed?<|||||>Hey @lucasjinreal ๐Ÿ‘‹ As written above, atm there are two options (see [their docstrings](https://huggingface.co/docs/transformers/main/en/internal/generation_utils#transformers.TextStreamer) for examples): 1. Stream the text directly into the console 2. Spin up a parallel thread and use it to stream the text strings This is quite inconvenient (threads ๐Ÿคฎ) and feature-limited (streaming tokens would be nice!). The next step in the plan is to remove the threading part -- have a look at [this comment](https://github.com/huggingface/transformers/issues/23640#issuecomment-1585762715)<|||||>@gante hello, I made it by using thread, when this was removed, please pin me if you can ^.-
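For anyone finding this thread later, a small sketch of the thread-based pattern discussed above, using the existing `TextIteratorStreamer`. The checkpoint and generation settings are placeholders.
```python
from threading import Thread

from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("An increasing sequence: one,", return_tensors="pt")
streamer = TextIteratorStreamer(tokenizer, skip_special_tokens=True)

# generate() blocks, so it runs in a background thread while the main thread
# consumes the streamer, an iterator that yields decoded text chunks.
thread = Thread(target=model.generate, kwargs=dict(**inputs, streamer=streamer, max_new_tokens=20))
thread.start()
for new_text in streamer:
    print(new_text, end="", flush=True)
thread.join()
```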
transformers
23,932
closed
Effectively allow `encoder_outputs` input to be a tuple in pix2struct
Be consistent with the type hint https://github.com/huggingface/transformers/blob/fabe17a726bbf6081cfbcc975d8ac451a81f3e2d/src/transformers/models/pix2struct/modeling_pix2struct.py#L1656 (which should rather be `Optional[Union[Tuple[Tuple[torch.FloatTensor]], OrderedDict]]` IMO) and follow what is done with other architectures. Otherwise, currently, with `return_dict=True`, an error is raised as we later try to access `encoder_outputs.last_hidden_state`: https://github.com/huggingface/transformers/blob/fabe17a726bbf6081cfbcc975d8ac451a81f3e2d/src/transformers/models/pix2struct/modeling_pix2struct.py#L1768-L1781 This is blocking for the ONNX export support.
06-01-2023 08:57:56
06-01-2023 08:57:56
_The documentation is not available anymore as the PR was closed or merged._
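For context, the pattern used in other architectures (and followed by this PR) is roughly the one below: wrap a plain tuple into a `BaseModelOutput` before attributes such as `last_hidden_state` are accessed. This is a hedged sketch, not the exact pix2struct diff.
```python
from transformers.modeling_outputs import BaseModelOutput


def wrap_encoder_outputs(encoder_outputs, return_dict: bool):
    # Accept both tuples and ModelOutput objects for `encoder_outputs`,
    # as the type hint advertises.
    if return_dict and not isinstance(encoder_outputs, BaseModelOutput):
        encoder_outputs = BaseModelOutput(
            last_hidden_state=encoder_outputs[0],
            hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
            attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
        )
    return encoder_outputs
```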
transformers
23,931
closed
GPU memory not completely freed after one BERT/RoBERTa forward pass
### System Info Hello. I am batch processing a list of sentences to solely extract a contextualized word embedding for each one (for the same word, this is just an example). I have to iterate this for different words and sentences, but after one forward pass and all the "del" and "torch.cuda.empty_cache()", the GPU memory has still "residuals" I can't delete in any way. I even tried to delete the model. I am not sure whether the one below is the correct deleting order. Any suggestion is **highly** appreciated. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoTokenizer, AutoModelForMaskedLM import torch model = AutoModelForMaskedLM.from_pretrained("roberta-base", output_hidden_states = True,).to("cuda") tokenizer = AutoTokenizer.from_pretrained("roberta-base") model.eval() context_sentences = ["Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec porttitor erat urna, sit amet vulputate purus rutrum in. Donec et laoreet velit."] * 100 toks = tokenizer(context_sentences, return_tensors="pt").input_ids.to("cuda") with torch.no_grad(): out = model(toks) del context_sentences del model del out del toks del tokenizer torch.cuda.empty_cache() ``` Result: `NVIDIA GeForce RTX 3070 Laptop GPU memory: 878 MiB/8192 MiB` ### Expected behavior Expected: `NVIDIA GeForce RTX 3070 Laptop GPU memory: 0 MiB/8192 MiB`
06-01-2023 08:20:08
06-01-2023 08:20:08
Hi @halixness In the past I have experienced that, and adding `gc.collect()` before and after the `torch.cuda.empty_cache()` seemed to help. Can you quickly try that out? ๐Ÿ™ <|||||>> `gc.collect()` @younesbelkada Hi! Unfortunately that did not help :( ``` with torch.no_grad(): out = model(toks) del context_sentences del model del out del toks del tokenizer gc.collect() torch.cuda.empty_cache() gc.collect() ``` Also tried before `torch.no_grad()`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
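One hedged way to narrow this down is to compare PyTorch's own memory counters with what nvidia-smi reports after the cleanup above: live tensors and the allocator cache are counted by PyTorch, while the CUDA context itself is not.
```python
import gc

import torch

gc.collect()
torch.cuda.empty_cache()

# Memory held by live tensors vs. memory cached by PyTorch's allocator.
# Neither counter includes the CUDA context, which nvidia-smi does count.
print(f"allocated: {torch.cuda.memory_allocated() / 2**20:.1f} MiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 2**20:.1f} MiB")
```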
transformers
23,930
open
Two tokenizer initialization methods result in inconsistent segmentation results for special words
### System Info transformers==4.17.0 torch==1.10.0 python==3.7.3 ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` # xlm-roberta-base directory: git clone https://huggingface.co/xlm-roberta-base from transformers import XLMRobertaTokenizer tokenizer_a = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base/') tokenizer_b = XLMRobertaTokenizer('xlm-roberta-base/sentencepiece.bpe.model') t = 'texta<s>textb' print(tokenizer_a.tokenize(t)) print(tokenizer_b.tokenize(t)) ``` ### Expected behavior ``` # what I expect is that both outputs: ['โ–text', 'a', '<s>', 'โ–text', 'b'] ['โ–text', 'a', '<s>', 'โ–text', 'b'] # However, in reality, their outputs are as follows: ['โ–text', 'a', '<s>', 'โ–text', 'b'] ['โ–text', 'a', '<', 's', '>', 'text', 'b'] ``` Why these two tokenizers have different segmentation results for special words?
06-01-2023 08:14:36
06-01-2023 08:14:36
Hey! Which version of transformers are you using? I tried to reproduce this but it did not work. Download the sentencepiece model from the official repo, and it worked on main~<|||||>> Hey! Which version of transformers are you using? I tried to reproduce this but it did not work. Download the sentencepiece model from the official repo, and it worked on main~ my transformers version is 4.17.0. <|||||>Ok, I can confirm that this is not working for me either! Thanks for reporting. This is most probably a problem related to `add_tokens` in the init: ```python >>> tokenizer_a.unique_no_split_tokens ['</s>', '<mask>', '<pad>', '<s>', '<unk>'] >>> tokenizer_b.unique_no_split_tokens ``` <|||||>Linking this with #23909 as the core problem is similar: - Adding a special token does not recreate the Trie (when the special tokens are initialized) - `from_pretrained` calls `added_tokens = tokenizer.sanitize_special_tokens()` which is when the special tokens are added to `unique_no_split` - Initialising a model from a sentencepiece vocab file does not initialize the Trie (used to split the tokens), since the special tokens are not sanitized. <|||||>I've had to take on a few model addition, the fix PR will be updated soon! It requires a bit more refactoring than initially thought!
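Until the refactor mentioned above lands, a possible workaround suggested by the explanation is to re-register the special tokens on the directly initialized tokenizer so they end up in `unique_no_split_tokens`. This is an untested sketch and may not cover every case.
```python
from transformers import XLMRobertaTokenizer

tokenizer_b = XLMRobertaTokenizer("xlm-roberta-base/sentencepiece.bpe.model")
# Re-register the special tokens so they are treated as no-split tokens,
# mimicking what `from_pretrained` does via `sanitize_special_tokens()`.
tokenizer_b.sanitize_special_tokens()
print(tokenizer_b.tokenize("texta<s>textb"))
```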
transformers
23,929
closed
Fix doc string nits
# What does this PR do? Fixes a couple docstring nits discovered while working on another PR
06-01-2023 06:18:52
06-01-2023 06:18:52
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,928
open
[Feature Request] Add timestamp prediction for TF Whisper
### System Info Latest Version. On google Colab ### Who can help? @sanchit-gandhi @connor-henderson ### Information I am trying to convert tensorflow whisper to tflite but turns out that TFWhisper doesnt want to output timestamp tokens. - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` import tensorflow as tf # Importing necessary classes from transformers from transformers import WhisperProcessor, WhisperFeatureExtractor, TFWhisperForConditionalGeneration, WhisperTokenizer # Importing necessary functions from datasets from datasets import load_dataset # Creating an instance of AutoProcessor from the pretrained model feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-tiny.en") tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny.en", predict_timestamps=True) processor = WhisperProcessor(feature_extractor, tokenizer) # Creating an instance of TFWhisperForConditionalGeneration from the pretrained model model = TFWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en") # Loading dataset ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") # Inputs inputs = processor(ds[0]["audio"]["array"], return_tensors="tf") input_features = inputs.input_features # Generating Transcription generated_ids = model.generate(input_features=input_features, return_timestamps=True) transcription = processor.tokenizer.decode(generated_ids[0], decode_with_timestamps=True) print(transcription) ``` <|startoftranscript|><|notimestamps|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.<|endoftext|> ### Expected behavior While the same tokenizer with ```predict_timestamps=True``` works as expected in pytorch: ``` import torch from transformers import WhisperForConditionalGeneration model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en") ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") inputs = processor(ds[0]["audio"]["array"], return_tensors="pt") input_features = inputs.input_features generated_ids = model.generate(inputs=input_features, return_timestamps=True) transcription = processor.tokenizer.decode(generated_ids[0], decode_with_timestamps=True) transcription ``` <|startoftranscript|><|0.00|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.<|5.44|><|endoftext|>
06-01-2023 05:57:16
06-01-2023 05:57:16
Will be closed by https://github.com/huggingface/transformers/pull/21334<|||||>@ArthurZucker do you maybe have some time to finish https://github.com/huggingface/transformers/pull/21334? Alternatively we can open this one up to the community if not!<|||||>Yep let's open it to the community I'm a bit short on time ๐Ÿ˜“ <|||||>If you come across this feature request and are interested in having a go, that's awesome, it's great to see! Feel free to resume the PR @ArthurZucker started at #21334 - it provides the scaffold you need to add this feature!<|||||>Sure, Iโ€™ll dig it up and share tomorrow โ€ฆ it is only transcribing for now but with timestamps. <|||||>@nyadla-sys https://colab.research.google.com/drive/1qXcgILcA-HPEYqAYPrxQQ1TRwXerErDk?usp=sharing<|||||>Hey @nyadla-sys - cool to see that you're using the TF model for inference! Could I respectfully ask that we try and keep the GitHub issue thread relevant to the issue being discussed? For other TF / TFLite issues, you can either open a new issue or open a post on the forum: https://discuss.huggingface.co Thanks!<|||||>> Hey @nyadla-sys - cool to see that you're using the TF model for inference! Could I respectfully ask that we try and keep the GitHub issue thread relevant to the issue being discussed? For other TF / TFLite issues, you can either open a new issue or open a post on the forum: https://discuss.huggingface.co > > Thanks! removed my comments ,sorry to spam<|||||>May I please try to tackle this with @0525hhgus on the weekends? Thank you and I hope you all have a great weekend!<|||||>Of course! Feel free to continue @ArthurZucker's PR https://github.com/huggingface/transformers/pull/21334 - it's already in a good state regarding TF Whisper timestamps. You just need to do the TF Whisper part, the Flax part has been merged already :) Alternatively you can open a new PR and copy across the relevant code changes if you're more comfortable doing that. Feel free to tag myself and Arthur in any PR you work on - we're on hand to help with questions / queries!
transformers
23,927
closed
4.29.0 bug
### System Info The dp mode of 4.29.0 seems to have a bug: during the forward pass, the dtype of the model is changed to torch.int64, which causes the torch.finfo call in the get_extended_attention_mask function to raise an error. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction no ### Expected behavior no
06-01-2023 05:45:21
06-01-2023 05:45:21
That's why there is the patch 4.29.2<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,926
closed
Possible bug in `forced_decoder_ids` in `modeling_whisper.py`
### System Info N/A ### Who can help? @sanchit-gandhi @connor-henderson Hi, thank you for the great work! https://github.com/huggingface/transformers/blob/796162c51298547c357b20cc33d64cbcf77d0241/src/transformers/models/whisper/modeling_whisper.py#L1649 might have a bug (which is introduced in this [PR](https://github.com/huggingface/transformers/pull/22496)). According to a comparison with openAI's whisper, I think the order of the elements should be as follows: ```python forced_decoder_ids = [ # Slicing the text prompt ids in a manner consistent with the OpenAI implementation # to accomodate context space for the prefix (see https://github.com/openai/whisper/blob/c09a7ae299c4c34c5839a76380ae407e7d785914/whisper/decoding.py#L599) generation_config.decoder_start_token_id, *text_prompt_ids[-self.config.max_length // 2 - 1 :], *[token for _rank, token in non_prompt_forced_decoder_ids], ] ``` ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction N/A ### Expected behavior N/A
06-01-2023 04:06:29
06-01-2023 04:06:29
Hey @akuzeee! Cool to see you've been going through the Transformers Whisper codebase! We actually update the decoder start token id prior to setting the forced decoder ids: https://github.com/huggingface/transformers/blob/796162c51298547c357b20cc33d64cbcf77d0241/src/transformers/models/whisper/modeling_whisper.py#L1636-L1637 The decoder start token id is set to the `<|startofprev|>` token (or `self.tokenizer.sot_prev` in the Whisper codebase), and is always forced as the beginning of sequence (BOS) token during generation (i.e. the first token in our sequence). So the decoder start token id is handled outside of the forced decoder ids. We then add the `<|startoftranscript|>` token after the prompts, which indicates the end of the prompt and start of the generation. It seems like the OpenAI codebase has this `<|startoftranscript|>` token included as part of their `tokens`, i.e. looking at the code from https://github.com/openai/whisper/blob/c09a7ae299c4c34c5839a76380ae407e7d785914/whisper/decoding.py#L597: ```python tokens = ( [self.tokenizer.sot_prev] # this is equivalent to us setting the sot prev start token id as the decoder start token id + prompt_tokens[-(self.n_ctx // 2 - 1) :] # this is the same as our code + tokens # these tokens include the bos decoder start token id ) ```<|||||>Thank you for the detailed and kind explanation! I understand now.
transformers
23,925
closed
AttributeError: 'Wav2Vec2Processor' object has no attribute 'set_lang'
### System Info - `transformers` version: 4.28.1 - Platform: Linux-4.15.0-20-generic-x86_64-with-glibc2.10 - Python version: 3.8.0 - Huggingface_hub version: 0.13.4 - Safetensors version: not installed - PyTorch version (GPU?): 1.12.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import Wav2Vec2ForCTC, AutoProcessor ckpt = "./mms-1b-all/" processor = AutoProcessor.from_pretrained(ckpt) model = Wav2Vec2ForCTC.from_pretrained(ckpt) # requires only 3GB of CPU RAM target_lang = "esp" processor.set_lang("esp") model.load_adapter("esp") # This will load a file called "adapter.esp.bin" from: https://huggingface.co/patrickvonplaten/mms-1b-all , cache it and replace the adapter model.to("cuda") audio, sr = sf.read("/home/lenovo/ไธ‹่ฝฝ/audio.flac") #/home/lenovo/project/fairseq/content/audio_samples/1.wav inputs = processor(audio, sampling_rate=sr, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits transcription = processor.batch_decode(logits.argmax(-1))[0] print(f"Transcription: {transcription}") ### Expected behavior Fix this error, and output the correct language, such as 'kor'
06-01-2023 02:49:41
06-01-2023 02:49:41
The corresponding PR has not been merged yet.<|||||>[PR](https://github.com/huggingface/transformers/pull/23813) merged. Also feel free to check out: - https://huggingface.co/docs/transformers/main/en/model_doc/mms - https://github.com/huggingface/transformers/pull/23813 - https://huggingface.co/facebook/mms-1b-all<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
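For anyone landing here before updating, a minimal sketch of the post-merge usage following the MMS docs linked above (MMS uses ISO 639-3 codes, e.g. "kor" for Korean and "spa" for Spanish; treat the exact kwargs as assumptions if your version differs):

```python
from transformers import AutoProcessor, Wav2Vec2ForCTC

ckpt = "facebook/mms-1b-all"

# load the processor and model directly in the target language
processor = AutoProcessor.from_pretrained(ckpt, target_lang="kor")
model = Wav2Vec2ForCTC.from_pretrained(ckpt, target_lang="kor", ignore_mismatched_sizes=True)

# switching languages later: swap the tokenizer vocabulary and the adapter weights together
processor.tokenizer.set_target_lang("spa")
model.load_adapter("spa")
```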
transformers
23,924
closed
Training "microsoft/beit-large-finetuned-ade-640-640" on my dataset present some worry can not to solve
### System Info win10 anaconda python3.8 ### Who can help? PyTorch: @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction "import timm import torch from easydict import EasyDict from torch import nn import pandas as pd metadata = pd.read_json("metadata.json", orient="colunms") dict_all = metadata.sample_pairs.loc[metadata.index[22:]] index_organ_pd = pd.DataFrame() index_list = [] organ_list = [] for num in range(len(dict_all)): index_list.append(dict_all.index[num]) organ_list.append(dict_all[num]['organ']) index_organ_pd["index"] = index_list index_organ_pd["organ"] = organ_list # key_list = list(index_organ_pd.organ.value_counts().index) # id2label = {int(num): key_list[num] for num in range(len(key_list))} id2label = {1:"BG", 2:"CA", 255:"UNK"} label2id = {v: k for k, v in id2label.items()} import glob import os train_path = 'D:\\workplace\\workplace_python\\jupyter_lab\\timm_model\\dataset_oce\\train\\tissue\\img' train_lablepath = 'D:\\workplace\\workplace_python\\jupyter_lab\\timm_model\\dataset_oce\\train\\tissue\\label' test_path = 'D:\\workplace\\workplace_python\\jupyter_lab\\timm_model\\dataset_oce\\test\\tissue\\img' test_lablepath = 'D:\\workplace\\workplace_python\\jupyter_lab\\timm_model\\dataset_oce\\test\\tissue\\label' train_tissuePath = glob.glob(os.path.join(train_path, '*.jpg')) train_tissuelabelPath = glob.glob(os.path.join(train_lablepath, '*.jpg')) test_tissuePath = glob.glob(os.path.join(test_path, '*.jpg')) test_tissuelabelPath = glob.glob(os.path.join(test_lablepath, '*.jpg')) import PIL import cv2 import numpy as np def label_imgpro(tmp_pic): if len(tmp_pic[tmp_pic[:]==2].reshape(-1,1))/len(tmp_pic.reshape(-1,1))>=0.2: return 2 else: return 1 def get_data(train_tissuePath,train_tissueLabelPath,index_organ_pd): train_ds={} image=[] annotation=[] scene_category =[] for num in range(len(train_tissuePath)): image.append(PIL.Image.open(train_tissuePath[num])) annotation.append(PIL.Image.open(train_tissueLabelPath[num]).convert('L')) # tmp_key = np.array(index_organ_pd[index_organ_pd["index"]==train_tissuePath[num].split("\\")[-1].split(".")[0]]["organ"])[0] # print(tmp_key) # tmp_value = label2id.get(tmp_key) # print(tmp_value) scene_category.append(label_imgpro(np.array(PIL.Image.open(train_tissueLabelPath[num]).convert('L')))) train_ds["image"]=image train_ds["annotation"]=annotation train_ds["scene_category"]=scene_category return train_ds train_dict = get_data(train_tissuePath,train_tissuelabelPath,index_organ_pd) test_dict = get_data(test_tissuePath,test_tissuelabelPath,index_organ_pd) train_dict = get_data(train_tissuePath, train_tissuelabelPath, index_organ_pd) test_dict = get_data(test_tissuePath, test_tissuelabelPath, index_organ_pd) import datasets train_ds = datasets.Dataset.from_dict(train_dict) test_ds = datasets.Dataset.from_dict(test_dict) from transformers import AutoImageProcessor checkpoint = "microsoft/beit-large-finetuned-ade-640-640" # checkpoint = "nvidia/mit-b0" image_processor = AutoImageProcessor.from_pretrained(checkpoint, do_reduce_labels=True) # from transformers import BeitImageProcessor # image_processor = BeitImageProcessor(do_reduce_labels=True) from torchvision.transforms import ColorJitter jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1) def train_transforms(example_batch): images = [jitter(x) for x in 
example_batch["image"]] labels = [x for x in example_batch["annotation"]] inputs = image_processor(images, labels) return inputs def val_transforms(example_batch): images = [x for x in example_batch["image"]] labels = [x for x in example_batch["annotation"]] inputs = image_processor(images, labels) return inputs train_ds.set_transform(train_transforms) test_ds.set_transform(val_transforms) import evaluate metric = evaluate.load("mean_iou") num_labels = len(id2label) # num_labels = 3 def compute_metrics(eval_pred): with torch.no_grad(): logits, labels = eval_pred logits_tensor = torch.from_numpy(logits) logits_tensor = nn.functional.interpolate( logits_tensor, size=labels.shape[-2:], mode="bilinear", align_corners=False, ).argmax(dim=1) pred_labels = logits_tensor.detach().cpu().numpy() metrics = metric.compute( predictions=pred_labels, references=labels, num_labels=num_labels, ignore_index=255, reduce_labels=False, ) for key, value in metrics.items(): if type(value) is np.ndarray: metrics[key] = value.tolist() return metrics from transformers import AutoModelForSemanticSegmentation, TrainingArguments, Trainer from transformers import BeitForSemanticSegmentation model = AutoModelForSemanticSegmentation.from_pretrained(checkpoint, id2label=id2label, label2id=label2id, ignore_mismatched_sizes=True) # model = BeitForSemanticSegmentation.from_pretrained(checkpoint, id2label=id2label, label2id=label2id) # from transformers import BertConfig, BertModel # Download model and configuration from huggingface.co and cache. # model = BertModel.from_pretrained("bert-base-uncased") training_args = TrainingArguments( output_dir="beit-b0-oce", learning_rate=6e-5, num_train_epochs=50, per_device_train_batch_size=2, per_device_eval_batch_size=2, save_total_limit=3, evaluation_strategy="steps", save_strategy="steps", save_steps=20, eval_steps=20, logging_steps=1, eval_accumulation_steps=5, remove_unused_columns=False ) trainer = Trainer( model=model, args=training_args, train_dataset=train_ds, eval_dataset=test_ds, compute_metrics=compute_metrics ) trainer.train() " I want to train my model about beit on my dataset that is my code ### Expected behavior "0%| | 0/9100 [00:00<?, ?it/s]Traceback (most recent call last): File "D:\workplace\workplace_python\jupyter_lab\timm_model\beit_mode finetuned.py", line 188, in <module> trainer.train() File "D:\anaconda\envs\timm_model\lib\site-packages\transformers\trainer.py", line 1664, in train return inner_training_loop( File "D:\anaconda\envs\timm_model\lib\site-packages\transformers\trainer.py", line 1940, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "D:\anaconda\envs\timm_model\lib\site-packages\transformers\trainer.py", line 2735, in training_step loss = self.compute_loss(model, inputs) File "D:\anaconda\envs\timm_model\lib\site-packages\transformers\trainer.py", line 2767, in compute_loss outputs = model(**inputs) File "D:\anaconda\envs\timm_model\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "D:\anaconda\envs\timm_model\lib\site-packages\transformers\models\beit\modeling_beit.py", line 1276, in forward loss = self.compute_loss(logits, auxiliary_logits, labels) File "D:\anaconda\envs\timm_model\lib\site-packages\transformers\models\beit\modeling_beit.py", line 1194, in compute_loss main_loss = loss_fct(upsampled_logits, labels) File "D:\anaconda\envs\timm_model\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) 
File "D:\anaconda\envs\timm_model\lib\site-packages\torch\nn\modules\loss.py", line 1174, in forward return F.cross_entropy(input, target, weight=self.weight, File "D:\anaconda\envs\timm_model\lib\site-packages\torch\nn\functional.py", line 3029, in cross_entropy return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing) IndexError: Target 62 is out of bounds. 0%| | 0/9100 [01:16<?, ?it/s]" this is my worry,I can not find where is the โ€˜62โ€™ out of bounds.
06-01-2023 02:21:25
06-01-2023 02:21:25
Please use the [forums](https://discuss.huggingface.co/) to debug your code as we keep the issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
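That said, the `IndexError: Target 62 is out of bounds` above almost always means the label masks contain pixel values outside `num_labels`; a quick sanity check along these lines (paths follow the script above, adjust as needed) usually locates the culprit:

```python
import glob

import numpy as np
from PIL import Image

# hypothetical path, matching the training script above
label_paths = glob.glob(r"dataset_oce\train\tissue\label\*.jpg")

values = set()
for path in label_paths:
    mask = np.array(Image.open(path).convert("L"))
    values.update(np.unique(mask).tolist())

print(sorted(values))
# every value must be a key of id2label or equal ignore_index (255);
# JPEG-compressed masks often contain stray grey levels such as 62,
# so masks should be stored as PNG or remapped before training
```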
transformers
23,923
open
Adding support for 3D deep learning models.
### Feature request Hi. I am planning to add a new pipeline and a model for 3D deep learning tasks that work on point clouds for classification and detection, as there is no support for 3D data right now. I just wanted to confirm whether the process will be similar to the guides for adding a new pipeline and model to Hugging Face Transformers, or whether there are more complexities I have not thought about. And is it going to be too much work to add GPU support and batching? ### Motivation I have been working with 3D deep learning and wanted to implement the whole process from scratch. So, why not contribute to Hugging Face so other people can use and build upon it? ### Your contribution Submitting a PR
06-01-2023 01:39:06
06-01-2023 01:39:06
Hi @VikasSoni1, thanks for adding this feature request! Pipelines are typically built around groups of models that tackle the same task and share inputs/outputs using the `AutoXxx` API. We don't currently have models which operate on point clouds or 3D data (additions always welcome!), so there isn't a defined API yet; these should be defined before adding such a pipeline. The great thing about open source is that anyone can fork this repo and build upon it. If you've developed your own pipelines, please feel free to share them here for the community to find 🤗
transformers
23,922
closed
Modify device_map behavior when loading a model using from_pretrained
# What does this PR do? Change how `device_map` works in the `from_pretrained` function when we load the model: - Training setup: we don't need to pass the `device_map` anymore ('cpu' by default or torch.cuda.local_device() for 4/8 bit model) or we can set `device_map` to the device we want our whole model to be placed on('cpu', 'cuda:0', 0, `torch.device('cpu')`...). - Inference setup: pass custom device_map or device_map in `["auto", "balanced", "balanced_low_0", "sequential"]` Fixes [1412](https://github.com/huggingface/accelerate/issues/1412) ## Who can review? @sgugger @younesbelkada
05-31-2023 23:11:05
05-31-2023 23:11:05
_The documentation is not available anymore as the PR was closed or merged._
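A rough sketch of the two setups this PR describes, as I read the intent (the exact accepted values are an assumption until the PR is final):

```python
import torch
from transformers import AutoModelForCausalLM

# training setup: no device_map needed, or a single device for the whole model
model = AutoModelForCausalLM.from_pretrained("gpt2", device_map=torch.device("cpu"))

# inference setup: an automatic map shards the model across the available devices
model = AutoModelForCausalLM.from_pretrained("gpt2", device_map="auto")
```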
transformers
23,921
closed
DocumentQuestionAnsweringTool docstring is incorrect
### System Info N/A ### Who can help? https://github.com/huggingface/transformers/blob/fabe17a726bbf6081cfbcc975d8ac451a81f3e2d/src/transformers/tools/document_question_answering.py#LL32C9-L34C80 is wrong. The inputs are image: "Image", question: str. I was trying transformers agent with a task like ``` agent.run("Given the pdf at <insert URL>, tell me <insert question>") ``` and it was failing with `TypeError: DocumentQuestionAnsweringTool.encode() got an unexpected keyword argument 'document'` because of the docstring issue. ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction N/A ### Expected behavior N/A
05-31-2023 22:46:30
05-31-2023 22:46:30
Thanks for flagging! Would you like to open a PR to fix this? The proper fix should be to rename the input `image` to `document` (not change the description).<|||||>Here's a draft https://github.com/huggingface/transformers/pull/23939. Please correct the rest for me if it's not quite right. Thanks!<|||||>Fixed by https://github.com/huggingface/transformers/pull/23939
transformers
23,920
closed
[PushToHub] Make it possible to upload folders
# What does this PR do? For some models, it may be that we have several files with the same name, e.g. for the new InstructBLIP model (#23460), the processor consists of 2 tokenizers (because the model internally uses 2 different text models). Both of these tokenizers require files with the same name, like `tokenizer_config.json`. Hence, it would be nice to create subfolders in the model repos to store for instance all files of one particular tokenizer (similar to how the Diffusers library does this). For InstructBLIP, I created a separate `qformer_tokenizer` folder for this as can be seen [here](https://huggingface.co/nielsr/instructblip-vicuna-7b/tree/main). I had to adapt the `save_pretrained` and `from_pretrained` methods of `InstructBlipProcessor` to save the files to a separate "qformer_tokenizer" folder, and read them back in. I guess those are very specific to InstructBLIP given that the name of the folder is pretty custom. However, `push_to_hub` currently doesn't support uploading folders with files. This PR adds this functionality.
05-31-2023 18:57:13
05-31-2023 18:57:13
_The documentation is not available anymore as the PR was closed or merged._
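For context, the underlying `huggingface_hub` client already supports this; a minimal sketch of pushing a nested folder directly (repo id and paths are placeholders):

```python
from huggingface_hub import upload_folder

# uploads every file under ./qformer_tokenizer into a qformer_tokenizer/ subfolder of the repo
upload_folder(
    repo_id="nielsr/instructblip-vicuna-7b",  # placeholder repo id
    folder_path="./qformer_tokenizer",
    path_in_repo="qformer_tokenizer",
    commit_message="Upload QFormer tokenizer files",
)
```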
transformers
23,919
closed
Skip device placement for past key values in decoder models
# What does this PR do? This PR skips the device placement for the `past_key_values` in big models, which is responsible for a lot of time lost according to the analysis in [this issue](https://github.com/huggingface/accelerate/issues/1394). The idea is that the past key values (one per layer) are all generated on the device of the layer they correspond to and never need to be moved. Needs to be tested with Accelerate at https://github.com/huggingface/accelerate/pull/1491 (until this PR is merged).
05-31-2023 18:25:10
05-31-2023 18:25:10
_The documentation is not available anymore as the PR was closed or merged._<|||||>Accelerate handles both strings and arrays :-) <|||||>thanks boss!
transformers
23,918
open
FP-16 training producing nans on t5-large/flan-t5-xl
### System Info This was an issue a while back but seems to have resurfaced - https://discuss.huggingface.co/t/t5-fp16-issue-is-fixed/3139 I have tested the exact following code on `t5-small` and `t5-base` and they work fine. However, when using `t5-large` and/or `flan-t5-xl`, the model produces nan outputs. This is solely a result of using half precision (ignore the multiple GPUs, strategy etc, I have tested with every other variation): ``` trainer = pl.Trainer( precision="16", accelerator='gpu', strategy='auto', devices=4,) ``` I am using `transformers == 4.28.1` and `lightning == 2.0.0` Any ideas/help appreciated Thanks! ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` trainer = pl.Trainer( precision="16", accelerator='gpu', strategy='auto', devices=4,) ``` ### Expected behavior Nans!!!
05-31-2023 18:03:54
05-31-2023 18:03:54
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @cassianlewis, thanks for reporting this issue. There's been a few updates recently regarding handling of loading of model weights and their precision. Could you update your transformers version to the latest release and let us know if the issue still persists? Could you also provide some more details about the other strategies tested (when this behaviour doesn't occur)? cc @younesbelkada <|||||>Hi @amyeroberts, I think the issue is more with the model itself. See https://www.philschmid.de/fine-tune-flan-t5-deepspeed `When fine-tuning T5 models we cannot use fp16 since it leads to overflow issues, see: [#4586](https://github.com/huggingface/transformers/issues/4586), [#10830](https://github.com/huggingface/transformers/issues/10830), [#10956](https://github.com/huggingface/transformers/pull/10956)` (t5 was trained with bf16) FYI this behaviour doesn't occur when I remove the `precision="16"` line <|||||>@cassianlewis Thanks for providing these details and links. Looking at the linked issues and Philip's blog, unfortunately I don't think there's much that can be done to resolve this. We can leave this issue open for now for people to share any updates or solutions they may have. <|||||>Hi @cassianlewis I think it is expected that pure fp16 training leads to unstable behaviour for some models and in some configuration; have you tried to train the models in bf16 precision?<|||||>@younesbelkada My Tesla T4 doesn't currently support bf16 (you need Ampere architecture), so I will get back to you on that <|||||>I was running into the same issue, and I found a few things that helped (but did make training time much slower). 1. Specifically, I found that I needed to add the line `@torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)` to the forward method of this module https://github.com/huggingface/transformers/blob/2ab75add4b30c2fc44a8bf575156d448d9ed87a7/src/transformers/models/t5/modeling_t5.py#L301-L314 Of course, this greatly slows things down since `fp16` won't be utilized at all in this module anymore (and this would slow down other methods like bf16 training). I think if there is a way to enforce that `hidden_linear` should always stay in `fp32` then the `torch.cuda.amp...` statement would not be needed, but I was unable to figure out how to get that to work. 2. Another thing I changed were these blocks in the code https://github.com/huggingface/transformers/blob/2ab75add4b30c2fc44a8bf575156d448d9ed87a7/src/transformers/models/t5/modeling_t5.py#L707-L713 to use the alternate if statement ```python if torch.isinf(hidden_states).any(): ... ``` since I found it was not guaranteed that `hidden_states` would be in `torch.float16` during fp16 training, but infinity values could still be present.
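If the hardware allows it, the simplest workaround is still bf16 mixed precision instead of fp16; a minimal sketch with the Lightning trainer used above, plus the equivalent flag for the HF `Trainer` (flag names assume Lightning 2.0 and a recent `transformers`):

```python
import pytorch_lightning as pl  # or: import lightning.pytorch as pl
from transformers import TrainingArguments

# bf16 has the same dynamic range as fp32, so T5's feed-forward blocks do not overflow
trainer = pl.Trainer(precision="bf16-mixed", accelerator="gpu", devices=4)

# equivalent setting for the Hugging Face Trainer
args = TrainingArguments(output_dir="out", bf16=True)
```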
transformers
23,917
closed
Update the update metadata job to use upload_folder
# What does this PR do? This PR updates the `update_metadata` script to stop using `Repository` and use `upload_folder` in its place.
05-31-2023 17:42:23
05-31-2023 17:42:23
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,916
closed
device_map issue with M1/M2 macOS
### System Info Apple M2 Max (12-core CPU, 38-core GPU), macOS. I am having issues with every modification of the code snippet below; would you please tell me how I can correct it? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-40b-instruct", trust_remote_code=True)
model = model.to("mps")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct")

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    # device=torch.device("mps"),
    # device_map="auto",
)
``` ### Expected behavior The snippet above (the same code as in the Reproduction section) loads the model and runs the pipeline on the MPS device.
05-31-2023 16:57:37
05-31-2023 16:57:37
This does not work yet indeed., this is known and is on our roadmap.<|||||>Hi, Thanks for the reply. I hope this will come into play. I purchased very expensive MacOS to continue on ML, but apparently I can not use it for now to implement this type of things. Is there a way to import pdf and then chat with it using this falcon-40b model? Thanks Regards > On May 31, 2023, at 1:29 PM, Sylvain Gugger ***@***.***> wrote: > > > This does not work yet indeed., this is known and is on our roadmap. > > โ€” > Reply to this email directly, view it on GitHub <https://github.com/huggingface/transformers/issues/23916#issuecomment-1570629684>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/ARF6VBUWSXW76NC2SMHKAI3XI55VZANCNFSM6AAAAAAYVX5XTI>. > You are receiving this because you authored the thread. > <|||||>Hi @phdykd , we have added the support with M1/M2 MacOS. Let us know if works. <|||||>Hi @SunMarc, Please share the link with me. Thanks for your support.<|||||>Hi @phdykd , you just need to install the latest version of accelerate (main branch).
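In the meantime, the pipeline can already be pinned to the Metal backend directly without a `device_map`; a sketch below (the checkpoint is a placeholder, falcon-40b will not fit in unified memory, and fp16 support on MPS still has rough edges):

```python
import torch
import transformers

pipe = transformers.pipeline(
    "text-generation",
    model="tiiuae/falcon-7b-instruct",  # placeholder: pick something that fits in RAM
    torch_dtype=torch.float16,
    trust_remote_code=True,
    device="mps",  # run on the Apple GPU
)
print(pipe("Hello, my name is", max_new_tokens=20)[0]["generated_text"])
```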
transformers
23,915
closed
Deepspeed Integration multi-gpu example does not work as written
### System Info ds_report shows this ``` -------------------------------------------------- DeepSpeed C++/CUDA extension op report -------------------------------------------------- NOTE: Ops not installed will be just-in-time (JIT) compiled at runtime if needed. Op compatibility means that your system meet the required dependencies to JIT install the op. -------------------------------------------------- JIT compiled ops requires ninja ninja .................. [OKAY] -------------------------------------------------- op name ................ installed .. compatible -------------------------------------------------- async_io ............... [NO] ....... [OKAY] cpu_adagrad ............ [NO] ....... [OKAY] cpu_adam ............... [NO] ....... [OKAY] fused_adam ............. [NO] ....... [OKAY] fused_lamb ............. [NO] ....... [OKAY] quantizer .............. [NO] ....... [OKAY] random_ltd ............. [NO] ....... [OKAY] [WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.0 [WARNING] using untested triton version (2.0.0), only 1.0.0 is known to be compatible sparse_attn ............ [NO] ....... [NO] spatial_inference ...... [NO] ....... [OKAY] transformer ............ [NO] ....... [OKAY] stochastic_transformer . [NO] ....... [OKAY] transformer_inference .. [NO] ....... [OKAY] utils .................. [NO] ....... [OKAY] -------------------------------------------------- DeepSpeed general environment info: torch install path ............... ['/opt/conda/lib/python3.8/site-packages/torch'] torch version .................... 2.0.1+cu117 deepspeed install path ........... ['/opt/conda/lib/python3.8/site-packages/deepspeed'] deepspeed info ................... 0.9.2, unknown, unknown torch cuda version ............... 11.7 torch hip version ................ None nvcc version ..................... 11.6 deepspeed wheel compiled w. ...... torch 2.0, cuda 11.7 ``` I am running in a container built by: ``` FROM nvcr.io/nvidia/pytorch:22.04-py3 RUN pip install git+https://github.com/huggingface/peft.git RUN pip install "transformers==4.29.1" "datasets==2.9.0" "accelerate==0.19.0" "evaluate==0.4.0" loralib --upgrade --quiet RUN pip install bitsandbytes rouge-score tensorboard py7zr einops RUN pip install deepspeed --upgrade RUN pip install jupyter RUN pip uninstall -y apex RUN pip uninstall -y apex ``` And then adding in the transformers and examples as described when running examples. I am running this on a system with 2 x 40G A100s. ### Who can help? @pacman100 @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am following the deepspeed integration example here: https://huggingface.co/docs/transformers/main/en/main_classes/deepspeed I have a latest clone of transformers have installed transformers and the pytorch examples a described in the examples documentation. 
Then I run this snippet from the example: ``` deepspeed examples/pytorch/translation/run_translation.py \ --deepspeed tests/deepspeed/ds_config_zero3.json \ --model_name_or_path t5-small --per_device_train_batch_size 1 \ --output_dir output_dir --overwrite_output_dir --fp16 \ --do_train --max_train_samples 500 --num_train_epochs 1 \ --dataset_name wmt16 --dataset_config "ro-en" \ --source_lang en --target_lang ro ``` When I do I get the complaint: ``` Traceback (most recent call last): File "examples/pytorch/translation/run_translation.py", line 666, in <module> main() File "examples/pytorch/translation/run_translation.py", line 581, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1661, in train return inner_training_loop( File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1788, in _inner_training_loop model, self.optimizer, self.lr_scheduler = self.accelerator.prepare( File "/opt/conda/lib/python3.8/site-packages/accelerate/accelerator.py", line 1139, in prepare result = self._prepare_deepspeed(*args) File "/opt/conda/lib/python3.8/site-packages/accelerate/accelerator.py", line 1360, in _prepare_deepspeed raise ValueError( ValueError: You cannot specify an optimizer in the config file and in the code at the same time. Please remove the optimizer from the config file or create `accelerate.utils.DummyOptim` in the code. ``` I tried removing the optimizer stanza, then it complains about bits in the scheduler, etc. At that point I felt like I was in the weeds. I suspect there is a better ds_config for this reference. Basically, I am trying to run a basic deepspeed test to verify my installation. This is using the unmodified config and the run_translation. ### Expected behavior That the example works as described in the documentation,
05-31-2023 16:28:13
05-31-2023 16:28:13
Looking into this<|||||>hello, I'm unable to reproduce the issue when using deepspeed 0.9.3, transformers pr #23914 and main branch of accelerate<|||||>Awesome, I'll try updating to those versions and see if I can get it working. Thanks!<|||||>I am using these versions and I still get: `ValueError: You cannot specify an optimizer in the config file and in the code at the same time. Please remove the optimizer from the config file or create `accelerate.utils.DummyOptim` in the code.` I am using these versions. accelerate: 0.20.0.dev0 deepspeed: 0.9.3+e02b8d0b transformers: 4.30.0.dev0 Am I missing another lib?<|||||>Hello, with exact same library versions, I am able to run the above `run_translation.py` example. Are you making any changes to the example code? Command: ``` deepspeed examples/pytorch/translation/run_translation.py --deepspeed tests/deepspeed/ds_config_zero3.json --model_name_or_path t5-small --per_device_train_batch_size 1 --output_dir output_dir --overwrite_output_dir --fp16 --do_train --max_train_samples 500 --num_train_epochs 1 --dataset_name wmt16 --dataset_config "ro-en" --source_lang en --target_lang ro ``` ``` accelerate 0.20.0.dev0 /home/sourab/accelerate deepspeed 0.9.3+e02b8d0b /home/sourab/DeepSpeed transformers 4.30.0.dev0 /home/sourab/transformers ``` ![Screenshot 2023-06-01 at 8 55 34 PM](https://github.com/huggingface/transformers/assets/13534540/b673f397-4ed6-4d82-b675-816a31e2f254) <|||||>After cleaning/reinstalling I can confirm it works! Thanks for the quick turnaround. :)
transformers
23,914
closed
remove the extra `accelerator.prepare`
# What does this PR do? Remove the extra `accelerator.prepare` that slipped in with multiple updates from main 😅 1. Fixes #23905
05-31-2023 16:25:11
05-31-2023 16:25:11
_The documentation is not available anymore as the PR was closed or merged._<|||||>I still get the same error @younesbelkada I have reinstalled from source but I still get the error. I'm in the latest commit `transformers @ git+https://github.com/huggingface/transformers.git@fabe17a726bbf6081cfbcc975d8ac451a81f3e2d` and you can tell from the stacktrace that the line numbers are different (due to the changes merged here https://github.com/huggingface/transformers/pull/23914/files) ```โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ Traceback (most recent call last) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ in <cell line: 17>:17 โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1661 in train โ”‚ โ”‚ โ”‚ โ”‚ 1658 โ”‚ โ”‚ inner_training_loop = find_executable_batch_size( โ”‚ โ”‚ 1659 โ”‚ โ”‚ โ”‚ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size โ”‚ โ”‚ 1660 โ”‚ โ”‚ ) โ”‚ โ”‚ โฑ 1661 โ”‚ โ”‚ return inner_training_loop( โ”‚ โ”‚ 1662 โ”‚ โ”‚ โ”‚ args=args, โ”‚ โ”‚ 1663 โ”‚ โ”‚ โ”‚ resume_from_checkpoint=resume_from_checkpoint, โ”‚ โ”‚ 1664 โ”‚ โ”‚ โ”‚ trial=trial, โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1995 in _inner_training_loop โ”‚ โ”‚ โ”‚ โ”‚ 1992 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ args.max_grad_norm, โ”‚ โ”‚ 1993 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ ) โ”‚ โ”‚ 1994 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ else: โ”‚ โ”‚ โฑ 1995 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ self.accelerator.clip_grad_norm_( โ”‚ โ”‚ 1996 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ model.parameters(), โ”‚ โ”‚ 1997 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ args.max_grad_norm, โ”‚ โ”‚ 1998 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ ) โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py:1817 in clip_grad_norm_ โ”‚ โ”‚ โ”‚ โ”‚ 1814 โ”‚ โ”‚ โ”‚ # `accelerator.backward(loss)` is doing that automatically. Therefore, its i โ”‚ โ”‚ 1815 โ”‚ โ”‚ โ”‚ # We cannot return the gradient norm because DeepSpeed does it. 
โ”‚ โ”‚ 1816 โ”‚ โ”‚ โ”‚ return None โ”‚ โ”‚ โฑ 1817 โ”‚ โ”‚ self.unscale_gradients() โ”‚ โ”‚ 1818 โ”‚ โ”‚ return torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=norm_type) โ”‚ โ”‚ 1819 โ”‚ โ”‚ โ”‚ 1820 โ”‚ def clip_grad_value_(self, parameters, clip_value): โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py:1780 in unscale_gradients โ”‚ โ”‚ โ”‚ โ”‚ 1777 โ”‚ โ”‚ โ”‚ for opt in optimizer: โ”‚ โ”‚ 1778 โ”‚ โ”‚ โ”‚ โ”‚ while isinstance(opt, AcceleratedOptimizer): โ”‚ โ”‚ 1779 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ opt = opt.optimizer โ”‚ โ”‚ โฑ 1780 โ”‚ โ”‚ โ”‚ โ”‚ self.scaler.unscale_(opt) โ”‚ โ”‚ 1781 โ”‚ โ”‚ โ”‚ 1782 โ”‚ def clip_grad_norm_(self, parameters, max_norm, norm_type=2): โ”‚ โ”‚ 1783 โ”‚ โ”‚ """ โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/torch/cuda/amp/grad_scaler.py:275 in unscale_ โ”‚ โ”‚ โ”‚ โ”‚ 272 โ”‚ โ”‚ optimizer_state = self._per_optimizer_states[id(optimizer)] โ”‚ โ”‚ 273 โ”‚ โ”‚ โ”‚ โ”‚ 274 โ”‚ โ”‚ if optimizer_state["stage"] is OptState.UNSCALED: โ”‚ โ”‚ โฑ 275 โ”‚ โ”‚ โ”‚ raise RuntimeError("unscale_() has already been called on this optimizer sin โ”‚ โ”‚ 276 โ”‚ โ”‚ elif optimizer_state["stage"] is OptState.STEPPED: โ”‚ โ”‚ 277 โ”‚ โ”‚ โ”‚ raise RuntimeError("unscale_() is being called after step().") โ”‚ โ”‚ 278 โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ RuntimeError: unscale_() has already been called on this optimizer since the last update(). ``` am I missing something? I'm just running this colab demo from a live event yesterday https://colab.research.google.com/drive/1ARmlaZZaKyAg6HTi57psFLPeh0hDRcPX?usp=sharing#scrollTo=Duak7T_B3VpJ<|||||>please check: https://github.com/huggingface/transformers/issues/23935#issuecomment-1571989596<|||||>Hi may I know what is the correct way to use accelerate in Trainer now? right now I manually do accelerate.prepare after initializing trainer by using `trainer.model_wrapped, trainer.optimizer, trainer.lr_scheduler = accelerator.prepare(trainer.model_wrapped, trainer.optimizer, trainer.lr_scheduler)`. Is that not needed at all?<|||||>> Hi may I know what is the correct way to use accelerate in Trainer now? right now I manually do accelerate.prepare after initializing trainer by using trainer.model_wrapped, trainer.optimizer, trainer.lr_scheduler = accelerator.prepare(trainer.model_wrapped, trainer.optimizer, trainer.lr_scheduler). Is that not needed at all? Not needed now as it is being done internally in Trainer. It will prepare based on either the Trainer arguments or the accelerate config <|||||>Which argument should I use for that? Last time i checked there is nothing related<|||||>any help? I just checked doc and there is still nothing accelerate related<|||||>Hello @aliencaocao, you can use the Trainer as it was used previously and expect no changes from user's point of view. You can now use accelerate launcher directly too. Here are the relevant docs: https://huggingface.co/docs/transformers/main/en/main_classes/trainer#using-accelerate-launcher-with-trainer
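In other words, nothing accelerate-specific is needed in user code any more; a minimal sketch (model and dataset are placeholders), where distribution, mixed precision, etc. come from the `TrainingArguments` or the `accelerate launch` config:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

ds = load_dataset("imdb", split="train[:1%]").map(
    lambda x: tokenizer(x["text"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

# no accelerator.prepare() anywhere: the Trainer wraps model/optimizer/scheduler internally
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=8),
    train_dataset=ds,
)
trainer.train()
```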
transformers
23,913
closed
Empty circleci config
# What does this PR do? This makes sure there is always a circleCI config (and thus always a run_tests workflow) so we can detect it in the branch protection rules.
05-31-2023 15:59:04
05-31-2023 15:59:04
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23913). All of your documentation changes will be reflected on that endpoint.
transformers
23,912
closed
Re-enable squad test
# What does this PR do? This re-enables the squad test which was skipped while Accelerate was getting fixed. Will merge when all is green. This also fixes the `[all-test]` directive which was broken by the recent changes for examples in the test fetcher.
05-31-2023 15:24:40
05-31-2023 15:24:40
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23912). All of your documentation changes will be reflected on that endpoint.
transformers
23,911
closed
Upgrade safetensors version
# What does this PR do? Upgrades `safetensors` version in the requirements to avoid the following error message: (I had 0.3.0) ```python src/transformers/models/gpt2/modeling_gpt2.py:38: in <module> from ...modeling_utils import PreTrainedModel, SequenceSummary src/transformers/modeling_utils.py:40: in <module> from .pytorch_utils import ( # noqa: F401 src/transformers/pytorch_utils.py:19: in <module> from safetensors.torch import storage_ptr, storage_size E ImportError: cannot import name 'storage_ptr' from 'safetensors.torch' (/home/zach_mueller_huggingface_co/miniconda3/envs/accelerate/lib/python3.9/site-packages/safetensors/torch.py) ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
05-31-2023 15:07:02
05-31-2023 15:07:02
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23911). All of your documentation changes will be reflected on that endpoint.
transformers
23,910
closed
[`RWKV`] Fix RWKV 4bit
# What does this PR do? Fixes RWKV 4bit inference. Fixes https://github.com/huggingface/transformers/issues/23848 ```python from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig model_id = "RWKV/rwkv-4-1b5-pile" model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, device_map={"":0}) tokenizer = AutoTokenizer.from_pretrained(model_id) generation_config = GenerationConfig(max_new_tokens=20, pad_token_id=tokenizer.eos_token_id) question = "Hello my name is Younes" inputs = tokenizer(question, return_tensors="pt").to(0) output_int8 = model.generate((inputs["input_ids"]), generation_config=generation_config) print(tokenizer.decode(output_int8[0], skip_special_tokens=True)) >>> Hello my name is Younes and I am from the United States. I am in the process of applying to the University of California, ``` The fix is similar to 8bit linear layers, one needs to rescale the quantization statistics of the fp4 linear layers cc @sgugger
05-31-2023 15:05:31
05-31-2023 15:05:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,909
open
🚨🚨 🚨🚨 [WIP][Tokenizer] Attempt to fix add_token issues 🚨🚨 🚨🚨
# What does this PR do? Adresses a lot of issues related to `add_tokens`, also adds more refine testing to make sure this does not happen again. - Adding a token with `add_tokens` ignores the arguments if the token is an `AddedToken`. reported in #20734 and #14770, #21120, #16334 - Adding a token does not automatically adds it to the `unique_no_split_token`. Reported in #23818 , #23851, #11531 but also #23250. Also linked to #22940 , should allow us to re-factor the way T5 tokenizes the inputs (convert_token_to_ids should not have a bunch of regex for special tokens) (also #9747) - Adding a token that is already in the vocabulary does not add it. This is debatable, but if someone explicitly adds it, it means he does not want to split it. Reported in #23459 - There is no support for `single_word` in `slow`. Reported in #14770 - Initialising a model from a vocab file does not initialize the Trie. `from_pretrained` calls `added_tokens = tokenizer.sanitize_special_tokens()` which is when the tokens are added to no_unique_split. reported in #23930 One more: #22935 cc @Narsil for visibility - [ ] Default `AddedTokens` class is the same for `tokenizers` and `transformers`, but in transformers since by default we use to always strip left and right....... it's a mess - [ ] TODO Tokens that are normalized should have two versions added to the `trie()`: for spm, both `<s>` and `โ–<s>` should point to the special tokens, as the prefix `โ–` has always been removed for special tokens.
05-31-2023 15:03:34
05-31-2023 15:03:34
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23909). All of your documentation changes will be reflected on that endpoint.<|||||>TODO: the test should be for all the `TokenMixin` rather than comparing slow and fast as the behaviour should work in any case.<|||||>Is work on this still going on? I'm interested in working on tokenization.<|||||>It is! Had to focus on a new model for a while but this is close to being over <|||||>Cool! So, I think the best way to do this is that I fork your branch, and then do PRs there? Any changes you like would then propagate to this PR? Is that the preferred way of collaborating?<|||||>Haha sorry what I meant is that Iโ€™ll take care of this one probably today, tomorrow so no need to dive ! Otherwise forking transformers is always the best solution IMO, then add others as remotes.<|||||>Ah ok, my bad! I totally misunderstood. Good luck with the PR ๐Ÿ˜ธ <|||||>Will come back to this starting next week ๐Ÿ”ฅ <|||||>Another TODO: if someone wants to `unset` one of the special token, it should be removed from the added_tokens_encoder and added_tokens_decoder. Some things to think about here. Also `added_token_encoder` should just always be linked to the decoder, which will be the only one that exists
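For reference while testing, the behaviour being targeted can be pinned down with `AddedToken`; the fast tokenizers already honour these flags, and part of what this PR tracks is making the slow (Python) tokenizers do the same (a sketch of the intended end state, not a guarantee of current behaviour):

```python
from tokenizers import AddedToken
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")

# lstrip/rstrip/single_word/normalized should survive add_tokens, and the token
# should land in the no-split trie so it is never re-tokenized into pieces
tok.add_tokens([AddedToken("<ent>", lstrip=False, rstrip=False, single_word=True)])

print(tok.tokenize("a <ent> b"))           # expected: ['a', '<ent>', 'b']
print(tok.convert_tokens_to_ids("<ent>"))  # the newly assigned id
```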
transformers
23,908
closed
[Flax Whisper] Update decode docstring
# What does this PR do? Fixes docstring for the `.decode` method of `FlaxWhisperForConditionalGeneration`
05-31-2023 14:40:51
05-31-2023 14:40:51
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,907
closed
[XLMModel, FlaubertModel] Compatibility with torch make_fx
### System Info - `transformers` version: 4.29.2 - Platform: Linux - Python version: 3.8.16 - PyTorch version: 2.0.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am trying to get a torch.fx representation for some Transformer models and I noticed that some of them are not compatible with `make_fx` from PyTorch. To reproduce the error: ```python from torch.fx.experimental.proxy_tensor import make_fx from transformers import XLMModel model = XLMModel.from_pretrained('xlm-mlm-en-2048', torchscript=True) inp = model.dummy_inputs['input_ids'] model.eval() fx_g = make_fx(model)(inp) ``` which fails with the following: ``` RuntimeError: It appears that you're trying to get value out of a tracing tensor with aten._local_scalar_dense.default - erroring out! It's likely that this is caused by data-dependent control flow or similar. It may be possible to trace this with dynamic shapes; try setting tracing_mode='symbolic' in your make_fx call. ``` This error is present in XLMModel and FlaubertModel because of the line: ```python assert lengths.max().item() <= slen ``` It seems that make_fx does not support value-based control flow. Removing the various occurrences of this line, or executing the code with the -O optimizer flag (that disables all the asserts) eliminate the issue, but neither solution is ideal. Another element to note is that capturing the graph through dynamo also fails (or outputs multiple graphs) but because of the `.item()` call. ### Expected behavior The full FX representation of FlaubertModel/XLMModel using torch make_fx.
05-31-2023 14:23:28
05-31-2023 14:23:28
We can have a look at a PR with a fix.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,906
closed
Move import check to before state reset
# What does this PR do? This PR moves the reset of the `AcceleratorState` to be under the check for if it is available, and guards the resetting better. Fixes # (issue) Fixes https://github.com/huggingface/transformers/issues/23898 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
05-31-2023 14:04:00
05-31-2023 14:04:00
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @pacman100 so it's on your radar too that we can do this :) <|||||>How about adding accelerate as a required dependency in the setup.py? <|||||>torch is not a hard dependency of Transformers.<|||||>Which is why it's part of the `[torch]` reqs :)
transformers
23,905
closed
RuntimeError: unscale_() has already been called on this optimizer since the last update()
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: Yes, T4 - Using distributed or parallel set-up in script?: No ### Who can help? @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce the behavior: 1. Run https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing ### Expected behavior Last cell in the notebook should run the training. Instead, it's showing the error message `RuntimeError: unscale_() has already been called on this optimizer since the last update().`
05-31-2023 13:51:05
05-31-2023 13:51:05
The notebook was working on the 25th May Looking at the traceback: ```bash โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ Traceback (most recent call last) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ in <cell line: 23>:23 โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1661 in train โ”‚ โ”‚ โ”‚ โ”‚ 1658 โ”‚ โ”‚ inner_training_loop = find_executable_batch_size( โ”‚ โ”‚ 1659 โ”‚ โ”‚ โ”‚ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size โ”‚ โ”‚ 1660 โ”‚ โ”‚ ) โ”‚ โ”‚ โฑ 1661 โ”‚ โ”‚ return inner_training_loop( โ”‚ โ”‚ 1662 โ”‚ โ”‚ โ”‚ args=args, โ”‚ โ”‚ 1663 โ”‚ โ”‚ โ”‚ resume_from_checkpoint=resume_from_checkpoint, โ”‚ โ”‚ 1664 โ”‚ โ”‚ โ”‚ trial=trial, โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:2008 in _inner_training_loop โ”‚ โ”‚ โ”‚ โ”‚ 2005 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ args.max_grad_norm, โ”‚ โ”‚ 2006 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ ) โ”‚ โ”‚ 2007 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ else: โ”‚ โ”‚ โฑ 2008 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ self.accelerator.clip_grad_norm_( โ”‚ โ”‚ 2009 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ model.parameters(), โ”‚ โ”‚ 2010 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ args.max_grad_norm, โ”‚ โ”‚ 2011 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ ) โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py:1878 in clip_grad_norm_ โ”‚ โ”‚ โ”‚ โ”‚ 1875 โ”‚ โ”‚ โ”‚ # `accelerator.backward(loss)` is doing that automatically. Therefore, its i โ”‚ โ”‚ 1876 โ”‚ โ”‚ โ”‚ # We cannot return the gradient norm because DeepSpeed does it. โ”‚ โ”‚ 1877 โ”‚ โ”‚ โ”‚ return None โ”‚ โ”‚ โฑ 1878 โ”‚ โ”‚ self.unscale_gradients() โ”‚ โ”‚ 1879 โ”‚ โ”‚ return torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=norm_type) โ”‚ โ”‚ 1880 โ”‚ โ”‚ โ”‚ 1881 โ”‚ def clip_grad_value_(self, parameters, clip_value): โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py:1841 in unscale_gradients โ”‚ โ”‚ โ”‚ โ”‚ 1838 โ”‚ โ”‚ โ”‚ for opt in optimizer: โ”‚ โ”‚ 1839 โ”‚ โ”‚ โ”‚ โ”‚ while isinstance(opt, AcceleratedOptimizer): โ”‚ โ”‚ 1840 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ opt = opt.optimizer โ”‚ โ”‚ โฑ 1841 โ”‚ โ”‚ โ”‚ โ”‚ self.scaler.unscale_(opt) โ”‚ โ”‚ 1842 โ”‚ โ”‚ โ”‚ 1843 โ”‚ def clip_grad_norm_(self, parameters, max_norm, norm_type=2): โ”‚ โ”‚ 1844 โ”‚ โ”‚ """ โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/torch/cuda/amp/grad_scaler.py:275 in unscale_ โ”‚ โ”‚ โ”‚ โ”‚ 272 โ”‚ โ”‚ optimizer_state = self._per_optimizer_states[id(optimizer)] โ”‚ โ”‚ 273 โ”‚ โ”‚ โ”‚ โ”‚ 274 โ”‚ โ”‚ if optimizer_state["stage"] is OptState.UNSCALED: โ”‚ โ”‚ โฑ 275 โ”‚ โ”‚ โ”‚ raise RuntimeError("unscale_() has already been called on this optimizer sin โ”‚ โ”‚ 276 โ”‚ โ”‚ elif optimizer_state["stage"] is OptState.STEPPED: โ”‚ โ”‚ 277 โ”‚ โ”‚ โ”‚ raise RuntimeError("unscale_() is being called after step().") ``` I think one of the recent PR of accelerate + trainer has touched something by mistake. My gut feeling is that maybe it needs a proper gradient accumulation support. 
I can have a look at it, I am curious if @pacman100 or @sgugger have any thoughts also on that<|||||>Also cc @muellerzr <|||||>Looking into this <|||||>Have the same issue, watching<|||||>Yes, narrowing down on it, will be raising a PR in 10 minutes <|||||>@tehistarkderek in the meantime, if you need to do something now you can replace the `pip install git+https:...` for transformers with: ```python pip install git+https://github.com/huggingface/transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9 ```<|||||>Hello @younesbelkada, can you confirm if the above PR fixes the issue? I've checked it on my end and it is working as expected <|||||>It works like charm @pacman100 , thanks so much for taking care of this!<|||||>@younesbelkada I have reinstalled from source but I still get the error. I'm in the latest commit `transformers @ git+https://github.com/huggingface/transformers.git@fabe17a726bbf6081cfbcc975d8ac451a81f3e2d` and you can tell from the stacktrace that the line numbers are different (due to the changes merged here https://github.com/huggingface/transformers/pull/23914/files) ```โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ Traceback (most recent call last) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ in <cell line: 17>:17 โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1661 in train โ”‚ โ”‚ โ”‚ โ”‚ 1658 โ”‚ โ”‚ inner_training_loop = find_executable_batch_size( โ”‚ โ”‚ 1659 โ”‚ โ”‚ โ”‚ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size โ”‚ โ”‚ 1660 โ”‚ โ”‚ ) โ”‚ โ”‚ โฑ 1661 โ”‚ โ”‚ return inner_training_loop( โ”‚ โ”‚ 1662 โ”‚ โ”‚ โ”‚ args=args, โ”‚ โ”‚ 1663 โ”‚ โ”‚ โ”‚ resume_from_checkpoint=resume_from_checkpoint, โ”‚ โ”‚ 1664 โ”‚ โ”‚ โ”‚ trial=trial, โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1995 in _inner_training_loop โ”‚ โ”‚ โ”‚ โ”‚ 1992 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ args.max_grad_norm, โ”‚ โ”‚ 1993 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ ) โ”‚ โ”‚ 1994 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ else: โ”‚ โ”‚ โฑ 1995 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ self.accelerator.clip_grad_norm_( โ”‚ โ”‚ 1996 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ model.parameters(), โ”‚ โ”‚ 1997 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ args.max_grad_norm, โ”‚ โ”‚ 1998 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ ) โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py:1817 in clip_grad_norm_ โ”‚ โ”‚ โ”‚ โ”‚ 1814 โ”‚ โ”‚ โ”‚ # `accelerator.backward(loss)` is doing that automatically. Therefore, its i โ”‚ โ”‚ 1815 โ”‚ โ”‚ โ”‚ # We cannot return the gradient norm because DeepSpeed does it. 
โ”‚ โ”‚ 1816 โ”‚ โ”‚ โ”‚ return None โ”‚ โ”‚ โฑ 1817 โ”‚ โ”‚ self.unscale_gradients() โ”‚ โ”‚ 1818 โ”‚ โ”‚ return torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=norm_type) โ”‚ โ”‚ 1819 โ”‚ โ”‚ โ”‚ 1820 โ”‚ def clip_grad_value_(self, parameters, clip_value): โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py:1780 in unscale_gradients โ”‚ โ”‚ โ”‚ โ”‚ 1777 โ”‚ โ”‚ โ”‚ for opt in optimizer: โ”‚ โ”‚ 1778 โ”‚ โ”‚ โ”‚ โ”‚ while isinstance(opt, AcceleratedOptimizer): โ”‚ โ”‚ 1779 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ opt = opt.optimizer โ”‚ โ”‚ โฑ 1780 โ”‚ โ”‚ โ”‚ โ”‚ self.scaler.unscale_(opt) โ”‚ โ”‚ 1781 โ”‚ โ”‚ โ”‚ 1782 โ”‚ def clip_grad_norm_(self, parameters, max_norm, norm_type=2): โ”‚ โ”‚ 1783 โ”‚ โ”‚ """ โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/torch/cuda/amp/grad_scaler.py:275 in unscale_ โ”‚ โ”‚ โ”‚ โ”‚ 272 โ”‚ โ”‚ optimizer_state = self._per_optimizer_states[id(optimizer)] โ”‚ โ”‚ 273 โ”‚ โ”‚ โ”‚ โ”‚ 274 โ”‚ โ”‚ if optimizer_state["stage"] is OptState.UNSCALED: โ”‚ โ”‚ โฑ 275 โ”‚ โ”‚ โ”‚ raise RuntimeError("unscale_() has already been called on this optimizer sin โ”‚ โ”‚ 276 โ”‚ โ”‚ elif optimizer_state["stage"] is OptState.STEPPED: โ”‚ โ”‚ 277 โ”‚ โ”‚ โ”‚ raise RuntimeError("unscale_() is being called after step().") โ”‚ โ”‚ 278 โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ RuntimeError: unscale_() has already been called on this optimizer since the last update(). ``` am I missing something? I'm just running this colab demo from a live event yesterday https://colab.research.google.com/drive/1ARmlaZZaKyAg6HTi57psFLPeh0hDRcPX?usp=sharing#scrollTo=Duak7T_B3VpJ<|||||>Hi @kafkasl Please check: https://github.com/huggingface/transformers/issues/23935#issuecomment-1571989596
transformers
23,904
open
save_pretrained 4-bit models with bitsandbytes
With the latest version of bitsandbytes (0.39.0) library, isn't it possible to serialize 4-bit models then? Thus this section should be updated to allow the user to save these models. https://github.com/huggingface/transformers/blob/68d53bc7178866821282f45732c1e465f5160fa6/src/transformers/modeling_utils.py#LL1704C36-L1704C36 I'm mainly looking at this article to see what got produced lately in the area: https://huggingface.co/blog/4bit-transformers-bitsandbytes I aim to quantize a model to 4-bit, enabling it to be used in e.g., GPT4all and other CPU platforms.
05-31-2023 13:46:47
05-31-2023 13:46:47
cc @younesbelkada <|||||>Hi @westn Thanks for the issue, I don't think 4bit models are serializable yet. Let me double check that with the author of bitsandbytes and get back to you<|||||>Also note that 4bit / 8bit is also applicable to GPU / CUDA devices, you cannot run quantized models with bitsandbytes on a CPU device<|||||>Hi @westn Currently it is not possible to save 4bit models but this is in the roadmap of bitsandbytes for the next releases. We will keep you posted!<|||||>Hi, @younesbelkada , Whether 4bits/8bits models can be saved now?<|||||>Hi @jameswu2014 Thanks for the heads up, currently 4bit saving is not possible, however, 8bit saving is possible, check: https://huggingface.co/docs/transformers/main_classes/quantization#push-quantized-models-on-the-hub
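While waiting on 4-bit serialization, the 8-bit path from the docs linked above already works; a minimal sketch (repo names are placeholders, a CUDA GPU is required, and the docs ask for a recent bitsandbytes):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/opt-350m"
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# the 8-bit weights and their quantization state are pushed together
model.push_to_hub("my-username/opt-350m-8bit")  # placeholder repo
tokenizer.push_to_hub("my-username/opt-350m-8bit")
```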
transformers
23,903
closed
[doc build action] Token from secret
null
05-31-2023 13:42:26
05-31-2023 13:42:26
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23903). All of your documentation changes will be reflected on that endpoint.<|||||>superceded by #24013
transformers
23,902
closed
OSError when loading pszemraj/flan-t5-large-grammar-synthesis from the Hugging Face Hub
D:\gramformer>python -m uvicorn nova_grammar_corrector:app --reload โ†[32mINFOโ†[0m: Will watch for changes in these directories: ['D:\\gramformer '] โ†[32mINFOโ†[0m: Uvicorn running on โ†[1mhttp://127.0.0.1:8000โ†[0m (Press CTRL+ C to quit) โ†[32mINFOโ†[0m: Started reloader process [โ†[36mโ†[1m13980โ†[0m] using โ†[36mโ†[1m StatReloadโ†[0m Downloading spiece.model: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 792k/792k [00:00<00:00, 39.6MB/s] C:\Python37\lib\site-packages\huggingface_hub\file_download.py:133: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store du plicated files but your machine does not support them in C:\Users\devblr\.cache\ huggingface\hub. Caching files will still work but in a degraded version that mi ght require more space on your disk. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For more details, see h ttps://huggingface.co/docs/huggingface_hub/how-to-cache#limitations. To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see th is article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-you r-device-for-development warnings.warn(message) Downloading (.)cial_tokens_map.json: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2.20k/2.20k [00:00<?, ?B/s] Downloading (.)okenizer_config.json: 100%|โ–ˆ| 2.56k/2.56k [00:00<00:00, 164kB/s] Downloading (.)lve/main/config.json: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 892/892 [00:00<?, ?B/s] Downloading pytorch_model.bin: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3.13G/3.13G [00:26<00:00, 118MB/s] Process SpawnProcess-1: Traceback (most recent call last): File "C:\Python37\lib\site-packages\transformers\modeling_utils.py", line 446, in load_state_dict return torch.load(checkpoint_file, map_location="cpu") File "C:\Python37\lib\site-packages\torch\serialization.py", line 789, in load return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args ) File "C:\Python37\lib\site-packages\torch\serialization.py", line 1131, in _lo ad result = unpickler.load() File "C:\Python37\lib\site-packages\torch\serialization.py", line 1101, in per sistent_load load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location)) File "C:\Python37\lib\site-packages\torch\serialization.py", line 1079, in loa d_tensor storage = zip_file.get_storage_from_record(name, numel, torch.UntypedStorage ).storage().untyped() RuntimeError: [enforce fail at C:\actions-runner\_work\pytorch\pytorch\builder\w indows\pytorch\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not en ough memory: you tried to allocate 11534336 bytes. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Python37\lib\site-packages\transformers\modeling_utils.py", line 450, in load_state_dict if f.read(7) == "version": File "C:\Python37\lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 1827: cha racter maps to <undefined> During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Python37\lib\multiprocessing\process.py", line 297, in _bootstrap self.run() File "C:\Python37\lib\multiprocessing\process.py", line 99, in run self._target(*self._args, **self._kwargs) File "C:\Python37\lib\site-packages\uvicorn\_subprocess.py", line 76, in subpr ocess_started target(sockets=sockets) File "C:\Python37\lib\site-packages\uvicorn\server.py", line 61, in run return asyncio.run(self.serve(sockets=sockets)) File "C:\Python37\lib\asyncio\runners.py", line 43, in run return loop.run_until_complete(main) File "C:\Python37\lib\asyncio\base_events.py", line 583, in run_until_complete return future.result() File "C:\Python37\lib\site-packages\uvicorn\server.py", line 68, in serve config.load() File "C:\Python37\lib\site-packages\uvicorn\config.py", line 473, in load self.loaded_app = import_from_string(self.app) File "C:\Python37\lib\site-packages\uvicorn\importer.py", line 21, in import_f rom_string module = importlib.import_module(module_str) File "C:\Python37\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "D:\gramformer\nova_grammar_corrector.py", line 273, in <module> ngc = nova_grammar_corrector(models=1, use_gpu=False) File "D:\gramformer\nova_grammar_corrector.py", line 161, in __init__ self.correction_model = T5ForConditionalGeneration.from_pretrained(corre ction_model_tag, use_auth_token=False) File "C:\Python37\lib\site-packages\transformers\modeling_utils.py", line 2542 , in from_pretrained state_dict = load_state_dict(resolved_archive_file) File "C:\Python37\lib\site-packages\transformers\modeling_utils.py", line 463, in load_state_dict f"Unable to load weights from pytorch checkpoint file for '{checkpoint_file} `"OSError:`` Unable to load weights from pytorch checkpoint file for 'C:\Users\devbl r/.cache\huggingface\hub\models--pszemraj--flan-t5-large-grammar-synthesis\snaps hots\d45c90f835904f6c3fdf320e74fa6e894e960871\pytorch_model.bin' at 'C:\Users\de vblr/.cache\huggingface\hub\models--pszemraj--flan-t5-large-grammar-synthesis\sn apshots\d45c90f835904f6c3fdf320e74fa6e894e960871\pytorch_model.bin'. If you trie d to load a PyTorch model from a TF 2.0 checkpoint, please set `from_tf=True.```
05-31-2023 13:33:20
05-31-2023 13:33:20
You do not have enough CPU RAM to open the model checkpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
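As a side note, a common way to lower peak CPU memory during loading is the `low_cpu_mem_usage` flag; a sketch of that workaround (not part of the original thread, and it assumes the `accelerate` package is installed):

```python
from transformers import T5ForConditionalGeneration

# Load checkpoint shards directly into the model instead of first building
# a full state dict in CPU RAM (requires accelerate)
model = T5ForConditionalGeneration.from_pretrained(
    "pszemraj/flan-t5-large-grammar-synthesis",
    low_cpu_mem_usage=True,
)
```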
transformers
23,901
closed
Ensure banned_mask and indices are on the same device
# What does this PR do? Ensure banned_mask and indices are on the same device. Fixes #23900. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante, @sgugger
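For context, the gist of the fix is to allocate the helper tensors on the same device as the banned-token indices instead of the default device; a rough, illustrative sketch (not the exact diff from this PR, and the names are made up):

```python
import torch

def banned_tokens_mask(banned_positions: torch.Tensor, scores: torch.Tensor) -> torch.Tensor:
    # Key point: create the values tensor on banned_positions' device so that
    # sparse-tensor construction never mixes CPU and CUDA tensors
    values = torch.ones(banned_positions.shape[0], device=banned_positions.device)
    mask = torch.sparse_coo_tensor(
        banned_positions.t(), values, size=scores.shape, device=banned_positions.device
    )
    return mask.to_dense().bool()
```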
05-31-2023 13:13:12
05-31-2023 13:13:12
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,900
closed
NoBadWordsLogitsProcessor: banned_mask and indices are not on the same device
### System Info
- transformers version: 4.29.2
- Platform: Linux-6.1.22-aufs-1-x86_64-with-glibc2.31
- Python version: 3.9.2
- PyTorch version (GPU?): 2.0.1+cu117 (True)

### Who can help?
_No response_

### Information
- [x] The official example scripts
- [x] My own modified scripts

### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction
```python
import torch
import transformers
from transformers import AutoModelForCausalLM, set_seed
from transformers import GenerationConfig, LogitsProcessorList, NoBadWordsLogitsProcessor

set_seed(0)
torch.set_default_device("cuda")

model_name_or_path = 'facebook/opt-125m'
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map='auto')
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name_or_path, add_prefix_space=True)

human = 'give me a sentence, more than 20 words, containing word pig'
input_ids = tokenizer(human, return_tensors='pt').input_ids
input_ids = input_ids.to('cuda')

bad_words = ['pigs', 'piggy', 'pig', 'piggies', 'piggish']
bad_words_ids = [tokenizer(bad_word, add_special_tokens=False).input_ids for bad_word in bad_words]
processor = LogitsProcessorList([NoBadWordsLogitsProcessor(bad_words_ids=bad_words_ids, eos_token_id=model.config.eos_token_id)])

generation_config = GenerationConfig.from_pretrained(model_name_or_path, max_length=100, do_sample=True)
outputs = model.generate(input_ids, generation_config=generation_config, logits_processor=processor)
output_str = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(output_str)
```

### Expected behavior
Run success
05-31-2023 13:10:43
05-31-2023 13:10:43
The reason is the PyTorch default device:
```python
import torch

torch.set_default_device("cuda")
print(torch.cuda.is_available())

banned_mask = torch.LongTensor([0, 1, 2])
indices = torch.ones(len(banned_mask))
print(f"banned_mask.device: {banned_mask.device}")
print(f"indices.device: {indices.device}")
```
![image](https://github.com/huggingface/transformers/assets/45619430/389aca59-98c4-4d93-bcaf-434f4cca95b9)
transformers
23,899
closed
add conditional statement for auxiliary loss calculation
# What does this PR do? Missing conditional statement before computing auxiliary loss for BeitForSemanticSegmentation. File: https://github.com/huggingface/transformers/blob/main/src/transformers/models/beit/modeling_beit.py Function: BeitForSemanticSegmentation -> compute_loss (Line no.: 1183 - 1198) ```python def compute_loss(self, logits, auxiliary_logits, labels): # upsample logits to the images' original size upsampled_logits = nn.functional.interpolate( logits, size=labels.shape[-2:], mode="bilinear", align_corners=False ) if auxiliary_logits is not None: upsampled_auxiliary_logits = nn.functional.interpolate( auxiliary_logits, size=labels.shape[-2:], mode="bilinear", align_corners=False ) # compute weighted loss loss_fct = CrossEntropyLoss(ignore_index=self.config.semantic_loss_ignore_index) main_loss = loss_fct(upsampled_logits, labels) auxiliary_loss = loss_fct(upsampled_auxiliary_logits, labels) loss = main_loss + self.config.auxiliary_loss_weight * auxiliary_loss return loss ``` When training BeitForSemanticSegmentation with config["use_auxiliary_head"] set to False, the following error occurs, ```bash File "/home/jovyan/segmentation/train.py", line 311, in <module> trainer.train() File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 1664, in train return inner_training_loop( File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 1940, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2735, in training_step loss = self.compute_loss(model, inputs) File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2767, in compute_loss outputs = model(**inputs) File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/transformers/models/beit/modeling_beit.py", line 1276, in forward loss = self.compute_loss(logits, auxiliary_logits, labels) File "/usr/local/lib/python3.10/dist-packages/transformers/models/beit/modeling_beit.py", line 1195, in compute_loss auxiliary_loss = loss_fct(upsampled_auxiliary_logits, labels) UnboundLocalError: local variable 'upsampled_auxiliary_logits' referenced before assignment ``` Possible solution: add a conditional statement before computing the auxiliary loss. ```python def compute_loss(self, logits, auxiliary_logits, labels): # upsample logits to the images' original size upsampled_logits = nn.functional.interpolate( logits, size=labels.shape[-2:], mode="bilinear", align_corners=False ) if auxiliary_logits is not None: upsampled_auxiliary_logits = nn.functional.interpolate( auxiliary_logits, size=labels.shape[-2:], mode="bilinear", align_corners=False ) # compute weighted loss loss_fct = CrossEntropyLoss(ignore_index=self.config.semantic_loss_ignore_index) main_loss = loss_fct(upsampled_logits, labels) loss = main_loss if auxiliary_logits is not None: auxiliary_loss = loss_fct(upsampled_auxiliary_logits, labels) loss += self.config.auxiliary_loss_weight * auxiliary_loss return loss ``` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). 
Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. -->
05-31-2023 13:01:53
05-31-2023 13:01:53
Thanks for letting me know! I have made the required changes.<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
23,898
closed
NameError: name 'AcceleratorState' is not defined
### System Info This script fails on 68d53bc7178866821282f45732c1e465f5160fa6 but passes on de9255de27abfcae4a1f816b904915f0b1e23cd9 Hopefully the problem is pretty clear from the message. ``` (/home/ezyang/local/debug/pytorch-env) [[email protected] ~/local/debug]$ pp python transformers/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py --dataset_name="common_voice" --model_name_or_path="facebook/wav2vec2-large-xlsr-53" --dataset_config_name="tr" --output_dir="./wav2vec2-common_voice-tr-demo-dist" --preprocessing_num_workers="16" --overwrite_output_dir --num_train_epochs="15" --per_device_train_batch_size="4" --gradient_accumulation_steps="1" --learning_rate="3e-4" --warmup_steps="500" --evaluation_strategy="steps" --text_column_name="sentence" --save_steps="400" --eval_steps="100" --logging_steps="1" --layerdrop="0.0" --save_total_limit="3" --freeze_feature_encoder --gradient_checkpointing --chars_to_ignore , ? . ! - \; \: \" โ€œ % โ€˜ โ€ ๏ฟฝ --fp16 --group_by_length --do_train --do_eval --torch_compile True Traceback (most recent call last): File "/data/users/ezyang/debug/transformers/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py", line 775, in <module> main() File "/data/users/ezyang/debug/transformers/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py", line 380, in main model_args, data_args, training_args = parser.parse_args_into_dataclasses() File "/home/ezyang/local/debug/pytorch-env/lib/python3.10/site-packages/transformers/hf_argparser.py", line 346, in parse_args_into_dataclasses obj = dtype(**inputs) File "<string>", line 111, in __init__ File "/home/ezyang/local/debug/pytorch-env/lib/python3.10/site-packages/transformers/training_args.py", line 1340, in __post_init__ and (self.device.type != "cuda") File "/home/ezyang/local/debug/pytorch-env/lib/python3.10/site-packages/transformers/training_args.py", line 1764, in device return self._setup_devices File "/home/ezyang/local/debug/pytorch-env/lib/python3.10/site-packages/transformers/utils/generic.py", line 54, in __get__ cached = self.fget(obj) File "/home/ezyang/local/debug/pytorch-env/lib/python3.10/site-packages/transformers/training_args.py", line 1670, in _setup_devices AcceleratorState._reset_state() NameError: name 'AcceleratorState' is not defined ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction `python transformers/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py --dataset_name="common_voice" --model_name_or_path="facebook/wav2vec2-large-xlsr-53" --dataset_config_name="tr" --output_dir="./wav2vec2-common_voice-tr-demo-dist" --preprocessing_num_workers="16" --overwrite_output_dir --num_train_epochs="15" --per_device_train_batch_size="4" --gradient_accumulation_steps="1" --learning_rate="3e-4" --warmup_steps="500" --evaluation_strategy="steps" --text_column_name="sentence" --save_steps="400" --eval_steps="100" --logging_steps="1" --layerdrop="0.0" --save_total_limit="3" --freeze_feature_encoder --gradient_checkpointing --chars_to_ignore , ? . ! - \; \: \" โ€œ % โ€˜ โ€ ๏ฟฝ --fp16 --group_by_length --do_train --do_eval --torch_compile True` ### Expected behavior doesn't error this way
05-31-2023 12:37:17
05-31-2023 12:37:17
I think you don't have accelerate installed, which is now a required dependency for the `Trainer`: `pip install accelerate`.<|||||>Ah ok, in that case, probably a requirements.txt just needs to get updated somewhere. (Also, naively, I would have expected an ImportError if I had a missing dependency, not a NameError)<|||||>Thanks @ezyang, with https://github.com/huggingface/transformers/pull/23906 we'll raise an ImportError properly :)
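Illustratively, the kind of guard that turns this into a clear `ImportError` looks roughly like the following (a sketch only; the helper name is made up, see the linked PR for the actual change):

```python
try:
    from accelerate.state import AcceleratorState  # noqa: F401
    _accelerate_available = True
except ImportError:
    _accelerate_available = False


def require_accelerate() -> None:
    # Fail fast with an actionable message instead of a NameError later on
    if not _accelerate_available:
        raise ImportError(
            "Using the Trainer requires the `accelerate` package: run `pip install accelerate`."
        )
```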
transformers
23,897
closed
Fix image segmentation tool bug
# What does this PR do? Currently tools using the ImageSegmentation tool fail because the parameters for the image processor are overridden with the input image dimensions. This results in incompatible input dimensions being passed to the model. This PR removes this logic in the `encode` method and removes the resizing in the tests which only happened for the segmentation tool and hid the issue. Fixes #23328 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
05-31-2023 10:39:56
05-31-2023 10:39:56
@sgugger @LysandreJik The fix is pretty simple, but in the spirit of Chesterton's fence, I wasn't sure/couldn't remember the reason for the `self.pre_processor.image_processor.size = ...` logic so wanted to ask you both in case this is breaking assumptions elsewhere. <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>This seems reasonable! I had run into issues where I couldn't segment images with different sizes when implementing it. I have tried with different flavors of the following ![image](https://github.com/huggingface/transformers/assets/30755778/0e895dc8-c4ce-41fe-a864-4d298571d4ac) and it seems to work well even without specifying the size. Your change looks good to me @amyeroberts; are you aware of sizes that may not work with this model?<|||||>@LysandreJik It *should* all be OK if the image processor matches the model: there shouldn't be any sizes that don't work because the image processor will resize as needed. In terms of sizes that won't work:
* We can't input images with either height or width smaller than the model's patch size (16 by default)
* Images where `(image_height // patch_size) * (image_width // patch_size) > max_sequence_length`
* Non-square images where `(image_height // patch_size) != (image_width // patch_size)`

Just looking high-level at the model, I think it could be reworked to accept non-square images, but I haven't dug into it deeply.
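Those three constraints can be written down as a quick check; a hypothetical helper (the function name and the default `max_seq_len` are assumptions, not taken from the PR):

```python
def fits_model_constraints(height: int, width: int, patch_size: int = 16, max_seq_len: int = 1024) -> bool:
    # Each side must be at least one patch wide
    if height < patch_size or width < patch_size:
        return False
    # The flattened patch grid must fit into the model's sequence length
    if (height // patch_size) * (width // patch_size) > max_seq_len:
        return False
    # The model as-is expects a square patch grid
    return (height // patch_size) == (width // patch_size)
```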
transformers
23,896
closed
[`core`] Add no split modules support for all models
# What does this PR do?
Fixes: #23816 and other related issues.

## Motivation
With the increasing usage of the code-on-the-Hub feature and more and more models pushed in that direction, I am convinced that it would be great to easily support accelerate loading of these models out of the box, without having to open PRs on the Hub and wait for authors' approvals on each repo (as the changes need to be duplicated across all repos, which can be redundant).

## Description
This PR addresses this and adds support for `no_split_modules` in the `from_pretrained` method, allowing superusers to load models using accelerate (for example, for loading models in 8-bit / 4-bit) when the code is hosted on the Hub.

## To reproduce:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'mosaicml/mpt-7b-instruct'
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,
    device_map="auto",
    no_split_modules=["MPTBlock"],
    trust_remote_code=True,
)

prompt = "What is the boiling point of Nitrogen?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(0)

out = model.generate(input_ids)
print(tokenizer.decode(out[0], skip_special_tokens=True))
>>> What is the boiling point of Nitrogen?\n The boiling point of Nitrogen is 77.37
```
cc @ArthurZucker @sgugger Once the PoC is approved, I will add a section to the docs.
05-31-2023 10:10:19
05-31-2023 10:10:19
_The documentation is not available anymore as the PR was closed or merged._<|||||>I don't think this is a good idea and I don't want `from_pretrained` to end up with 96 arguments. It's something that's very internal to the model, so it should definitely be in the model code. If a user opens a PR they can directly use the updated code by specifying a revision argument.<|||||>Sounds good to me! Closing the PR
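For reference, using code from an open Hub PR via the `revision` argument looks roughly like this (the repo name and PR number are placeholders):

```python
from transformers import AutoModelForCausalLM

# Hub PRs are exposed as git refs of the form "refs/pr/<number>"
model = AutoModelForCausalLM.from_pretrained(
    "some-org/some-remote-code-model",
    revision="refs/pr/3",
    trust_remote_code=True,
)
```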
transformers
23,895
open
[`bnb`] Fix blip2 4bit
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/23839 Indeed, for models such as Blip2 that have the LM head inside a submodule (and not directly at the top level of the model), the LM head does get converted to 4-bit / 8-bit, leading to unexpected behavior for 4-bit models. The PR fixes this by making sure to consider the last term after `.` when creating `modules_not_to_convert`. cc @sgugger
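A rough sketch of that idea (illustrative only, not the exact diff): match modules to keep in full precision by the last component of their qualified name, so that `language_model.lm_head` is treated like `lm_head`.

```python
def should_keep_in_full_precision(module_name: str, modules_to_not_convert: list[str]) -> bool:
    # "language_model.lm_head" -> "lm_head"
    last_component = module_name.split(".")[-1]
    return module_name in modules_to_not_convert or last_component in modules_to_not_convert
```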
05-31-2023 09:37:22
05-31-2023 09:37:22
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23895). All of your documentation changes will be reflected on that endpoint.<|||||>It should be all good, I have verified the slow tests pass for 8-bit and 4-bit. Let me know if there is anything in particular I should look at. Per my understanding this only affects Blip2 as it is the only model (from what I know) that has an LM head as part of a submodule.<|||||>Hmm, getting some gibberish output with the fix; need to investigate more<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,894
closed
[`bnb`] add warning when no linear
# What does this PR do? This PR adds a new warning for users in case `replace_with_bnb_linear` hasn't replaced any module; this avoids any confusion for users. Fixes https://github.com/huggingface/transformers/issues/23807 cc @sgugger
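A sketch of what such a check can look like (hypothetical helper; the exact wording of the warning is made up):

```python
import logging

logger = logging.getLogger(__name__)

def warn_if_nothing_was_replaced(has_been_replaced: bool) -> None:
    # Surface a clear warning when quantization silently did nothing,
    # e.g. because the model contains no nn.Linear layers to convert
    if not has_been_replaced:
        logger.warning(
            "No linear modules were found in your model: the model will not be quantized. "
            "Please double-check the model architecture."
        )
```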
05-31-2023 09:18:07
05-31-2023 09:18:07
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,893
closed
Dataset loads train data with values, but training reports num_examples = 0
I have usinng load_dataset load data ``` >> training data samples: 51184 400 ['input', 'output', 'input_ids', 'labels', 'attention_mask'] >> training data samples: 51184 400 ['input', 'output', 'input_ids', 'labels', 'attention_mask'] ``` it actually have values, not 0, but training I got error: ``` IndexError: Invalid key: 38100 is out of bounds for size 0 ``` And the training info: ``` ***** Running training ***** Num examples = 0 Num Epochs = 3 Instantaneous batch size per device = 2 Total train batch size (w. parallel, distributed & accumulation) = 128 Gradient Accumulation steps = 32 Total optimization steps = 1,197 Number of trainable parameters = 19,988,480 ``` Why nuim examples got 0? the training examples logged are actually normal: ``` >>> tokenizer example: [1, 450, 1494, 338, 263, 14983, 1546, 385, 319, 29902, 20255, 2000, 4007, 22137, 322, 263, 5199, 1404, 2000, 4911, 29889, 450, 20255, 338, 13052, 296, 29892, 7134, 519, 322, 1248, 568, 304, 1234, 5155, 310, 1404, 29889, 13, 13, 2659, 29901, 235, 178, 169, 234, 190, 137, 233, 146, 146, 235, 194, 179, 30287, 30557, 30847, 31502, 30768, 31138, 30783, 31529, 31629, 30354, 30505, 30413, 30980, 30594, 31016, 30503, 30413, 30980, 30874, 232, 193, 135, 30429, 30210, 31462, 30705, 31174, 30448, 234, 170, 178, 30748, 30805, 31466, 31565, 31935, 234, 181, 149, 31412, 31138, 31951, 31217, 30874, 232, 193, 135, 30210, 233, 169, 133, 234, 145, 138, 13, 13, 7900, 22137, 29901, 30783, 31529, 31629, 30354, 30505, 30413, 30980, 30594, 31016, 30503, 30413, 30980, 30874, 232, 193, 135, 30429, 30210, 31462, 30705, 31174, 30448, 234, 170, 178, 30748, 30805, 31466, 31565, 31935, 234, 181, 149, 31412, 31138, 31951, 31217, 30874, 232, 193, 135, 30210, 233, 169, 133, 234, 145, 138, 30214, 30392, 31180, 30319, 31074, 30415, 30275, 30874, 232, 193, 135, 234, 170, 178, 30748, 30545, 30210, 31359, 30346, 30667, 30687, 30267, 13, 13, 31529, 31629, 30354, 30392, 31180, 30319, 31074, 30415, 30275, 30406, 30805, 233, 146, 146, 235, 194, 179, 31935, 234, 181, 149, 31531, 31613, 30210, 30354, 30415, 31629, 30354, 30267, 232, 177, 134, 30682, 30651, 30406, 30805, 31466, 31565, 31935, 234, 181, 149, 30505, 233, 162, 147, 30287, 30594, 232, 139, 190, 30210, 233, 169, 133, 234, 145, 138, 30748, 31454, 30214, 31594, 31325, 31835, 30495, 31935, 234, 181, 149, 30682, 30815, 30505, 232, 150, 173, 30755, 30544, 31424, 30267, 31529, 31629, 30354, 236, 157, 146, 234, 160, 131, 30594, 31016, 31462, 30705, 30214, 31570, 31389, 30682, 30651, 30406, 30805, 233, 146, 146, 235, 194, 179, 31935, 234, 181, 149, 30505, 30594, 31016, 30429, 30210, 31462, 30705, 30267, 13, 13, 30874, 232, 193, 135, 234, 170, 178, 30748, 30545, 31107, 30406, 31529, 31629, 30354, 236, 157, 146, 30594, 31016, 31462, 30705, 30210, 31141, 30952, 30214, 30768, 31138, 30783, 31529, 31629, 30354, 30505, 30413, 30980, 30594, 31016, 30503, 30413, 30980, 30874, 232, 193, 135, 30429, 30210, 31462, 30705, 31174, 30448, 234, 170, 178, 30748, 30805, 31466, 31565, 31935, 234, 181, 149, 31412, 31138, 31951, 31217, 30874, 232, 193, 135, 30210, 233, 169, 133, 234, 145, 138, 30267, 13, 13, 232, 136, 186, 30988, 30805, 31639, 30214, 30672, 31381, 30682, 30651, 233, 141, 141, 30874, 232, 193, 135, 234, 170, 178, 30748, 30748, 30573, 31977, 30502, 233, 176, 168, 236, 173, 167, 30383, 13, 13, 29896, 29889, 29871, 30783, 31529, 31629, 30354, 30505, 31951, 31217, 30874, 232, 193, 135, 30429, 30210, 31462, 30705, 31174, 30448, 234, 170, 178, 30748, 30267, 30810, 30682, 30651, 30768, 31138, 30783, 31529, 31629, 30354, 30505, 
31951, 31217, 30874, 232, 193, 135, 30429, 30210, 30683, 30959, 31174, 30448, 234, 170, 178, 30748, 30805, 31195, 31424, 30267, 13, 29906, 29889, 29871, 30783, 31951, 31217, 30874, 232, 193, 135, 30429, 30210, 234, 170, 178, 30748, 30959, 31733, 30936, 31174, 30448, 234, 170, 178, 30748, 30214, 31466, 31565, 30544, 30505, 31951, 30502, 30594, 31016, 30940, 30429, 31935, 234, 181, 149, 31412, 31138, 31951, 31217, 30874, 232, 193, 135, 30210, 233, 169, 133, 234, 145, 138, 30267, 13, 13, 30768, 31138, 30810, 31977, 30502, 233, 176, 168, 236, 173, 167, 30214, 30672, 31381, 31238, 30682, 30651, 31466, 31565, 30544, 31935, 234, 181, 149, 31412, 31138, 31951, 31217, 30874, 232, 193, 135, 30210, 233, 169, 133, 30267, 30878, 30822, 30214, 30672, 31381, 30682, 30651, 31419, 235, 193, 134, 30810, 31959, 233, 169, 133, 234, 145, 138, 30214, 31666, 31333, 233, 142, 172, 233, 169, 133, 13, 13, 2659, 29901, 31201, 236, 138, 141, 30287, 30557, 31529, 31629, 30354, 13, 13, 7900, 22137, 29901, 31529, 31629, 30354, 30392, 31180, 30319, 31074, 30415, 30275, 30406, 30805, 233, 146, 146, 235, 194, 179, 31935, 234, 181, 149, 31531, 31613, 30210, 30354, 2] ```
05-31-2023 09:14:23
05-31-2023 09:14:23
It's impossible for us to help you without knowinbg the code you run.<|||||>When I use **torch.compile(),** with PyTorch 2.0, Trainer seems to remove all the columns in the datasets (Pdb) [INFO|trainer.py:565] 2023-06-08 14:00:46,576 >> max_steps is given, it will override any value given in num_train_epochs [INFO|trainer.py:622] 2023-06-08 14:00:46,576 >> Using cuda_amp half precision backend [INFO|trainer.py:763] 2023-06-08 14:00:46,577 >> **The following columns in the training set don't have a corresponding argument in `OptimizedModule.forward` and have been ignored: output, input, attention_mask, input_ids, labels. If output, input, attention_mask, input_ids, labels are not expected by `OptimizedModule.forward`, you can safely ignore this message**. Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]/home/jeff/anaconda3/envs/torch2.0/lib/python3.10/site-packages/transformers/optimization.py:407: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning warnings.warn( [INFO|trainer.py:1779] 2023-06-08 14:00:51,297 >> ***** Running training ***** [INFO|trainer.py:1780] 2023-06-08 14:00:51,297 >> Num examples = 0 [INFO|trainer.py:1781] 2023-06-08 14:00:51,297 >> Num Epochs = 26 [INFO|trainer.py:1782] 2023-06-08 14:00:51,297 >> Instantaneous batch size per device = 2 [INFO|trainer.py:1783] 2023-06-08 14:00:51,297 >> Total train batch size (w. parallel, distributed & accumulation) = 128 [INFO|trainer.py:1784] 2023-06-08 14:00:51,297 >> Gradient Accumulation steps = 16 [INFO|trainer.py:1785] 2023-06-08 14:00:51,297 >> Total optimization steps = 4,000 [INFO|trainer.py:1786] 2023-06-08 14:00:51,298 >> Number of trainable parameters = 1,767,049,216 <|||||>updates: - when I remove **model=torch.compile(model)**, everything works well - when I use **model=torch.compile(model)** with option **--remove_unused_columns False** , and remove useless unpad columns, it comes with a new error **Dynamo only supports FSDP with use_orig_params=True** <|||||>The problem of the labels removes with `torch.compile` is being addressed in #24066 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
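For readers hitting the same thing: the workaround mentioned above maps to a single Trainer flag; a sketch (the other arguments are placeholders):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    # Keep input_ids/labels/etc. even when Trainer cannot introspect the
    # forward() signature of a torch.compile()-wrapped (OptimizedModule) model
    remove_unused_columns=False,
)
```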
transformers
23,892
closed
Error while training VisionEncoderDecoderModel ValueError: one or more references are empty strings
I was trying to train a VisionEncoderDecoderModel and I got the below error. For decoder I'm using bert-base-multilingual-cased and encoder is google/vit-base-patch16-224. How to solve this error? Thanks in advace!! ``` โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ Traceback (most recent call last) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ in <cell line: 13>:13 โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1662 in train โ”‚ โ”‚ โ”‚ โ”‚ 1659 โ”‚ โ”‚ inner_training_loop = find_executable_batch_size( โ”‚ โ”‚ 1660 โ”‚ โ”‚ โ”‚ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size โ”‚ โ”‚ 1661 โ”‚ โ”‚ ) โ”‚ โ”‚ โฑ 1662 โ”‚ โ”‚ return inner_training_loop( โ”‚ โ”‚ 1663 โ”‚ โ”‚ โ”‚ args=args, โ”‚ โ”‚ 1664 โ”‚ โ”‚ โ”‚ resume_from_checkpoint=resume_from_checkpoint, โ”‚ โ”‚ 1665 โ”‚ โ”‚ โ”‚ trial=trial, โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:2006 in _inner_training_loop โ”‚ โ”‚ โ”‚ โ”‚ 2003 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ self.state.epoch = epoch + (step + 1 + steps_skipped) / steps_in_epo โ”‚ โ”‚ 2004 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ self.control = self.callback_handler.on_step_end(args, self.state, s โ”‚ โ”‚ 2005 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โฑ 2006 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_k โ”‚ โ”‚ 2007 โ”‚ โ”‚ โ”‚ โ”‚ else: โ”‚ โ”‚ 2008 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ self.control = self.callback_handler.on_substep_end(args, self.state โ”‚ โ”‚ 2009 โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:2287 in _maybe_log_save_evaluate โ”‚ โ”‚ โ”‚ โ”‚ 2284 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ ) โ”‚ โ”‚ 2285 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ metrics.update(dataset_metrics) โ”‚ โ”‚ 2286 โ”‚ โ”‚ โ”‚ else: โ”‚ โ”‚ โฑ 2287 โ”‚ โ”‚ โ”‚ โ”‚ metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) โ”‚ โ”‚ 2288 โ”‚ โ”‚ โ”‚ self._report_to_hp_search(trial, self.state.global_step, metrics) โ”‚ โ”‚ 2289 โ”‚ โ”‚ โ”‚ โ”‚ 2290 โ”‚ โ”‚ if self.control.should_save: โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer_seq2seq.py:159 in evaluate โ”‚ โ”‚ โ”‚ โ”‚ 156 โ”‚ โ”‚ ) โ”‚ โ”‚ 157 โ”‚ โ”‚ self._gen_kwargs = gen_kwargs โ”‚ โ”‚ 158 โ”‚ โ”‚ โ”‚ โ”‚ โฑ 159 โ”‚ โ”‚ return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix โ”‚ โ”‚ 160 โ”‚ โ”‚ โ”‚ 161 โ”‚ def predict( โ”‚ โ”‚ 162 โ”‚ โ”‚ self, โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:2993 in evaluate โ”‚ โ”‚ โ”‚ โ”‚ 2990 โ”‚ โ”‚ start_time = time.time() โ”‚ โ”‚ 2991 โ”‚ โ”‚ โ”‚ โ”‚ 2992 โ”‚ โ”‚ eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else se โ”‚ โ”‚ โฑ 2993 โ”‚ โ”‚ output = eval_loop( โ”‚ โ”‚ 2994 โ”‚ โ”‚ โ”‚ eval_dataloader, โ”‚ โ”‚ 2995 โ”‚ โ”‚ โ”‚ description="Evaluation", โ”‚ โ”‚ 2996 โ”‚ โ”‚ โ”‚ # No point gathering the predictions if there are no metrics, otherwise we d โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:3281 in evaluation_loop โ”‚ โ”‚ โ”‚ โ”‚ 3278 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ EvalPrediction(predictions=all_preds, label_ids=all_labels, inputs=a โ”‚ โ”‚ 3279 โ”‚ โ”‚ โ”‚ โ”‚ ) โ”‚ โ”‚ 3280 โ”‚ โ”‚ โ”‚ else: โ”‚ โ”‚ โฑ 3281 โ”‚ โ”‚ โ”‚ โ”‚ metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, lab โ”‚ โ”‚ 3282 โ”‚ โ”‚ else: โ”‚ โ”‚ 3283 โ”‚ โ”‚ โ”‚ metrics = {} โ”‚ โ”‚ 3284 โ”‚ โ”‚ in compute_metrics:29 โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/datasets/metric.py:453 in 
compute โ”‚ โ”‚ โ”‚ โ”‚ 450 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ 451 โ”‚ โ”‚ โ”‚ inputs = {input_name: self.data[input_name] for input_name in self.features} โ”‚ โ”‚ 452 โ”‚ โ”‚ โ”‚ with temp_seed(self.seed): โ”‚ โ”‚ โฑ 453 โ”‚ โ”‚ โ”‚ โ”‚ output = self._compute(**inputs, **compute_kwargs) โ”‚ โ”‚ 454 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ 455 โ”‚ โ”‚ โ”‚ if self.buf_writer is not None: โ”‚ โ”‚ 456 โ”‚ โ”‚ โ”‚ โ”‚ self.buf_writer = None โ”‚ โ”‚ โ”‚ โ”‚ /root/.cache/huggingface/modules/datasets_modules/metrics/cer/46482e3826224451c26c9b51d8d193d38a โ”‚ โ”‚ 4226daab693df497d2e397b623274e/cer.py:149 in _compute โ”‚ โ”‚ โ”‚ โ”‚ 146 โ”‚ โ”‚ incorrect = 0 โ”‚ โ”‚ 147 โ”‚ โ”‚ total = 0 โ”‚ โ”‚ 148 โ”‚ โ”‚ for prediction, reference in zip(predictions, references): โ”‚ โ”‚ โฑ 149 โ”‚ โ”‚ โ”‚ measures = jiwer.compute_measures( โ”‚ โ”‚ 150 โ”‚ โ”‚ โ”‚ โ”‚ reference, โ”‚ โ”‚ 151 โ”‚ โ”‚ โ”‚ โ”‚ prediction, โ”‚ โ”‚ 152 โ”‚ โ”‚ โ”‚ โ”‚ truth_transform=cer_transform, โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/jiwer/measures.py:306 in compute_measures โ”‚ โ”‚ โ”‚ โ”‚ 303 โ”‚ โ”‚ ) โ”‚ โ”‚ 304 โ”‚ ) โ”‚ โ”‚ 305 โ”‚ โ”‚ โ”‚ โฑ 306 โ”‚ output = process_words( โ”‚ โ”‚ 307 โ”‚ โ”‚ reference=truth, โ”‚ โ”‚ 308 โ”‚ โ”‚ hypothesis=hypothesis, โ”‚ โ”‚ 309 โ”‚ โ”‚ reference_transform=truth_transform, โ”‚ โ”‚ โ”‚ โ”‚ /usr/local/lib/python3.10/dist-packages/jiwer/process.py:159 in process_words โ”‚ โ”‚ โ”‚ โ”‚ 156 โ”‚ if isinstance(hypothesis, str): โ”‚ โ”‚ 157 โ”‚ โ”‚ hypothesis = [hypothesis] โ”‚ โ”‚ 158 โ”‚ if any(len(t) == 0 for t in reference): โ”‚ โ”‚ โฑ 159 โ”‚ โ”‚ raise ValueError("one or more references are empty strings") โ”‚ โ”‚ 160 โ”‚ โ”‚ โ”‚ 161 โ”‚ # pre-process reference and hypothesis by applying transforms โ”‚ โ”‚ 162 โ”‚ ref_transformed = _apply_transform( โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ ValueError: one or more references are empty strings ```
05-31-2023 09:08:58
05-31-2023 09:08:58
cc @younesbelkada and @amyeroberts <|||||>@VallabhMahajan1 Could you share a reproducible code snippet and information about the running environment (run `transformers-cli env` in the terminal and copy-paste the output)? From the traceback, it seems the issue is coming in the metric calculation when using `Trainer`. I'm able to build and run a small example with the checkpoints you shared on the `main` branch: ```python from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel import requests from PIL import Image import torch encoder_checkpoint = "google/vit-base-patch16-224" decoder_checkpoint = "bert-base-multilingual-cased" image_processor = AutoImageProcessor.from_pretrained(encoder_checkpoint) tokenizer = AutoTokenizer.from_pretrained(decoder_checkpoint) model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained( encoder_pretrained_model_name_or_path=encoder_checkpoint, decoder_pretrained_model_name_or_path=decoder_checkpoint, ) # load image from the IAM dataset url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") # training model.config.decoder_start_token_id = tokenizer.cls_token_id model.config.pad_token_id = tokenizer.pad_token_id model.config.vocab_size = model.config.decoder.vocab_size pixel_values = image_processor(image, return_tensors="pt").pixel_values text = "hello world" labels = tokenizer(text, return_tensors="pt").input_ids outputs = model(pixel_values=pixel_values, labels=labels) loss = outputs.loss ```<|||||>Thanks for the reply. I was trying to train trocr model. Below is the code snippet. I'm not sure but I guess we are got this error in compute matrix function. ``` - `transformers` version: 4.28.0 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ``` ``` from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased") feature_extractor=ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k") processor = TrOCRProcessor(feature_extractor = feature_extractor, tokenizer = tokenizer) model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained("google/vit-base-patch16-224", "bert-base-multilingual-cased") cer_metric = load_metric("cer") def compute_metrics(pred): labels_ids = pred.label_ids pred_ids = pred.predictions pred_str = processor.batch_decode(pred_ids, skip_special_tokens=True) labels_ids[labels_ids == -100] = processor.tokenizer.pad_token_id label_str = processor.batch_decode(labels_ids, skip_special_tokens=True) cer = cer_metric.compute(predictions=pred_str, references=label_str) return {"cer": cer} from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments training_args = Seq2SeqTrainingArguments( predict_with_generate=True, evaluation_strategy="steps", num_train_epochs=1, per_device_train_batch_size=16, per_device_eval_batch_size=16, fp16=True, output_dir="./", logging_steps=2, save_strategy="no", eval_steps=100, ) from transformers import default_data_collator # instantiate trainer trainer = Seq2SeqTrainer( model=model, tokenizer=processor.tokenizer, args=training_args, compute_metrics=compute_metrics, 
train_dataset=train_dataset, eval_dataset=eval_dataset, data_collator=default_data_collator, ) trainer.train() ```<|||||>cc @younesbelkada and @amyeroberts<|||||>@VallabhMahajan1 Thank you for providing a code snippet. However, the code snippet is incomplete: `train_dataset` and `eval_dataset` are not defined. If you can't provide these datasets, you can try to use public datasets (for example, on HF's dataset Hub) which are similar to your own datasets. In any case, please use a small dataset (or take a small slice from the large dataset). Without a self-contained code snippet to reproduce, we are not able to help. Thank you. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
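As a side note, taking a small slice of a public dataset for a reproducer can be done directly in `load_dataset`; a sketch (the dataset name is just an example, not the one used in this thread):

```python
from datasets import load_dataset

# Only prepare the first 100 training examples
small_train = load_dataset("imdb", split="train[:100]")
```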
transformers
23,891
closed
fix(configuration_llama): add `keys_to_ignore_at_inference` to `LlamaConfig`
# What does this PR do? Fixes #23890 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker and @younesbelkada
05-31-2023 08:50:59
05-31-2023 08:50:59
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23891). All of your documentation changes will be reflected on that endpoint.
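For readers skimming this record, the change described in the title amounts to a one-line class attribute on the config; a simplified sketch with all other config fields omitted (not the full file):

```python
from transformers import PretrainedConfig

class LlamaConfig(PretrainedConfig):
    model_type = "llama"
    # Tells downstream utilities (Trainer evaluation, generation) to drop the
    # cached key/values from model outputs at inference time
    keys_to_ignore_at_inference = ["past_key_values"]
```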
transformers
23,890
closed
`LlamaConfig` is missing default `keys_to_ignore_at_inference`
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction When I use `transformers.Trainer` to train a `LlamaForSequenceClassification` and provide a `compute_metrics` function, the program failed and returned the following error message: ```bash Sizes of tensors must match except in dimension 0. Expected size 218 but got size 131 for tensor number 1 in the list. File "/home/user/Miniconda3/envs/safe-rlhf/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 75, in torch_pad_and_concatenate return torch.cat((tensor1, tensor2), dim=0) File "/home/user/Miniconda3/envs/safe-rlhf/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 116, in nested_concat return torch_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index) File "/home/user/Miniconda3/envs/safe-rlhf/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 114, in <genexpr> return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors)) File "/home/user/Miniconda3/envs/safe-rlhf/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 114, in nested_concat return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors)) File "/home/user/Miniconda3/envs/safe-rlhf/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 114, in <genexpr> return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors)) File "/home/user/Miniconda3/envs/safe-rlhf/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 114, in nested_concat return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors)) File "/home/user/Miniconda3/envs/safe-rlhf/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 114, in <genexpr> return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors)) File "/home/user/Miniconda3/envs/safe-rlhf/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 114, in nested_concat return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors)) File "/home/user/Miniconda3/envs/safe-rlhf/lib/python3.10/site-packages/transformers/trainer.py", line 3235, in evaluation_loop preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100) File "/home/user/Miniconda3/envs/safe-rlhf/lib/python3.10/site-packages/transformers/trainer.py", line 3029, in evaluate output = eval_loop( File "/home/user/Miniconda3/envs/safe-rlhf/lib/python3.10/site-packages/transformers/trainer.py", line 2300, in _maybe_log_save_evaluate metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) File 
"/home/user/Miniconda3/envs/safe-rlhf/lib/python3.10/site-packages/transformers/trainer.py", line 2019, in _inner_training_loop self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/home/user/Miniconda3/envs/safe-rlhf/lib/python3.10/site-packages/transformers/trainer.py", line 1664, in train return inner_training_loop( File "/home/user/Projects/classification/train.py", line 193, in train trainer.train() File "/home/user/Projects/classification/train.py", line 200, in <module> train() RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 218 but got size 131 for tensor number 1 in the list. ``` I have identified that the error occurred because the `LlamaConfig` is missing the default `keys_to_ignore_at_inference`, which resulted in the `logits` term incorrectly including the `past_key_values`. Then, when the `Trainer` attempts to `nested_concat` the `logits`, it encounters a dimension mismatch within the tuple. The related code in the `evaluation_loop` to get `logits` term is as follows: ```python if isinstance(outputs, dict): logits = tuple(v for k, v in outputs.items() if k not in ignore_keys + ["loss"]) else: logits = outputs[1:] ``` Because in almost all cases, `logits` should not include `past_key_values`, it is reasonable to add default `keys_to_ignore_at_inference` in the `LlamaConfig`. ### Expected behavior Add default `keys_to_ignore_at_inference` in the `LlamaConfig`.
05-31-2023 08:49:48
05-31-2023 08:49:48
transformers
23,889
closed
Behaviour between slow and fast LLaMa tokenizer not equivalent
### System Info Transformers v4.29.2 ### Who can help? @ArthurZucker ### Reproduction For a new model (#23460), I'd like to get equivalent behaviour between the slow and fast LLaMa tokenizers. The code of the slow tokenizer was taken from the [original code](https://github.com/salesforce/LAVIS/blob/59273f651b9bffb193d1b12a235e909e9f826dda/lavis/models/blip2_models/blip2_vicuna_instruct.py#L82-L89), and now I'd like to translate this to the fast tokenizer as well. However, as can be seen below, behaviour is not equivalent: ``` from transformers import LlamaTokenizer, LlamaTokenizerFast import torch tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-7b", truncation_side="left") tokenizer.add_special_tokens({"pad_token": "[PAD]"}) tokenizer.add_special_tokens({"bos_token": "</s>"}) tokenizer.add_special_tokens({"eos_token": "</s>"}) tokenizer.add_special_tokens({"unk_token": "</s>"}) fast_tokenizer = LlamaTokenizerFast.from_pretrained("huggyllama/llama-7b", truncation_side="left") fast_tokenizer.add_special_tokens({"pad_token": "[PAD]"}) fast_tokenizer.add_special_tokens({"bos_token": "</s>"}) fast_tokenizer.add_special_tokens({"eos_token": "</s>"}) fast_tokenizer.add_special_tokens({"unk_token": "</s>"}) prompt = "What is unusual about this image?" encoding = tokenizer(prompt, return_tensors="pt") fast_encoding = fast_tokenizer(prompt, return_tensors="pt") for k,v in encoding.items(): assert torch.allclose(fast_encoding[k], v) => this assertion fails since the input_ids differ: tensor([[ 2, 1724, 338, 22910, 1048, 445, 1967, 29973]]) tensor([[ 1, 1724, 338, 22910, 1048, 445, 1967, 29973]]) ``` ### Expected behavior I'd expect that the assertion above passes.
05-31-2023 08:03:26
05-31-2023 08:03:26
Thanks for reporting, will have a look <|||||>Okay, what's happening here is that you are adding tokens that are already present in the vocabulary of the model. `</s>` is `2`.
- fast: When you add the `bos_token` it is not added as it already exists, but the content is updated with the new value for the fast tokenizer.
- slow: the token id is properly updated, but the `post_processor` is not. This was fixed in #23855 <|||||>The reproducer still behaves this way on the latest version of transformers because you are relying on adding the token, which should be ignored but is not. The content on the Rust side is modified. Use this: `fast_tokenizer.bos_token = "</s>"` <|||||>(this will update the processor)<|||||>Thanks for taking a look! However I'm using the latest version of Transformers, have added `fast_tokenizer.bos_token = "</s>"`, but the assertion still fails for me.<|||||>Reproduced on main branch, here's a Colab notebook: https://colab.research.google.com/drive/1KA_mliTsvjnhOCO3SApVJkgVd2HEeVQZ?usp=sharing.<|||||>Actually, with fast tokenizers there is no logic to properly update the template processor if it exists. The default has always been to initialize the model with the correct tokens, meaning `fast_tokenizer = LlamaTokenizerFast.from_pretrained("huggyllama/llama-7b", truncation_side="left", bos_token="</s>")` is what you should be using. The template processor gets updated only if you change `add_bos` and `add_eos`; otherwise the logic is a bit complicated, as we would have to overload the parent setters for bos_token as well as bos_token_id to update the template processing. I am not in favor of that, so I am leaving it as is and will improve the docs for changing bos and eos in the fast tokenizer.<|||||>Hmm ok, so there's no way to have an equivalent fast tokenizer that makes the script above pass? The reason is that for the new InstructBLIP model (https://github.com/huggingface/transformers/pull/23460), the processor class (`InstructBlipProcessor`) would normally use the `AutoTokenizer` class to load files from the hub. And as the `AutoTokenizer` API uses the fast tokenizer by default, I'm currently not getting results equivalent to the slow one.<|||||>No way, no. I am not in favor of introducing a very hacky behaviour when the fix should be in Rust in that case. The following works:
```python
from transformers import LlamaTokenizer, LlamaTokenizerFast
import torch

fast_tokenizer = LlamaTokenizerFast.from_pretrained("huggyllama/llama-7b", truncation_side="left", bos_token="</s>", unk_token="</s>")
fast_tokenizer.add_special_tokens({"pad_token": "[PAD]"})

tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-7b", truncation_side="left", bos_token="</s>", unk_token="</s>")
tokenizer.add_special_tokens({"pad_token": "[PAD]"})

prompt = "What is unusual about this image?"
encoding = tokenizer(prompt, return_tensors="pt")
fast_encoding = fast_tokenizer(prompt, return_tensors="pt")

for k, v in encoding.items():
    assert torch.allclose(fast_encoding[k], v)
```<|||||>Also, once you have a tokenizer ready, you can save it and it should have the correct post-processor.<|||||>Ok thanks a lot, it now works fine and I can use the fast tokenizer.
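A sketch of that last suggestion (the path is a placeholder): save the configured fast tokenizer once so the post-processor is persisted, then reload it wherever it is needed.

```python
# Persist the tokenizer configured above, post-processor included
fast_tokenizer.save_pretrained("./llama-tokenizer-s-bos")

# Later (e.g. inside a processor), reload it with AutoTokenizer
from transformers import AutoTokenizer
reloaded = AutoTokenizer.from_pretrained("./llama-tokenizer-s-bos")
```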
transformers
23,888
open
[`config.to_dict()`] update test and models that have a composition
# What does this PR do? Addresses #23876. Most of the configurations that are a `composition` are not JSON serializable.
- A test exists, but is usually not run (`ConfigTester` is scarcely used on the full config)
- The test does test the case when `torch_dtype = torch.float16`
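A minimal illustration of the kind of failure this guards against (plain Python/torch behaviour, not code from the PR): a raw `torch.dtype` left inside a config dict cannot be dumped to JSON.

```python
import json
import torch

# What a non-sanitized to_dict() of a nested/composite config can end up containing
config_dict = {"text_config": {"torch_dtype": torch.float16}}

json.dumps(config_dict)  # raises TypeError: Object of type dtype is not JSON serializable
```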
05-31-2023 07:47:30
05-31-2023 07:47:30
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23888). All of your documentation changes will be reflected on that endpoint.<|||||>3 model configs that were not tested need a better update to pass the `test_config`; will work on this once I am back from lunch! - blip2 - pix2struct - clap
transformers
23,887
closed
MarkupLM: feature_extraction_markuplm.py only extracts SearchableText
### System Info
- `transformers` version: 4.29.2
- Platform: Linux-6.1.22-aufs-1-x86_64-with-glibc2.31
- Python version: 3.9.2
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

### Who can help?
_No response_

### Information
- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction
Run some MarkupLM code that uses the extractor over HTML that uses template tags, etc. (for example, pages of a wiki.js wiki). See that you will miss much of the content.

### Expected behavior
feature_extraction_markuplm.py should extract all *Text bs4 types, not just SearchableText.
05-31-2023 07:44:00
05-31-2023 07:44:00
Please follow the issue template. There is no reproducer and the expected behavior is very unclear.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
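Since the report ships no reproducer, here is a hedged sketch of one along the lines it describes; the HTML snippet is made up, and whether the template text actually goes missing will depend on the installed bs4 version and on how the extractor filters node types:

```python
from transformers import MarkupLMFeatureExtractor

html = (
    "<html><body>"
    "<p>Visible paragraph text.</p>"
    "<template><p>Text that lives inside a template tag.</p></template>"
    "</body></html>"
)

feature_extractor = MarkupLMFeatureExtractor()
encoding = feature_extractor(html)

# If only one bs4 string type is kept, the template text will not show up in the nodes.
print(encoding["nodes"])
print(encoding["xpaths"])
```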
transformers
23,886
closed
MarkupLM: TypeError: unsupported operand type(s) for +: 'Tensor' and 'NoneType' in modeling_markuplm.py", line 217
### System Info - `transformers` version: 4.29.2 - Platform: Linux-6.1.22-aufs-1-x86_64-with-glibc2.31 - Python version: 3.9.2 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Run the code above or similar, probably for an input larger than 512 or 1024 tokens: tokenizer = MarkupLMTokenizerFast.from_pretrained("microsoft/markuplm-large-finetuned-websrc") model = MarkupLMForQuestionAnswering.from_pretrained("microsoft/markuplm-large-finetuned-websrc") feature_extractor = MarkupLMFeatureExtractor() feature_processor = MarkupLMProcessor(feature_extractor, tokenizer) ... encoding = feature_processor(html_content, questions=question, return_tensors="pt") with torch.no_grad(): outputs = model(**encoding) I tried to extend the model's max tokens size as [suggested here](https://discuss.huggingface.co/t/fine-tuning-bert-with-sequences-longer-than-512-tokens/12652), but that doesn't seem to help. ### Expected behavior outputs = model(**encoding) should create outputs
05-31-2023 07:07:12
05-31-2023 07:07:12
We cannot help without a clear reproducer of the issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
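There is no runnable reproducer here either, but since the traceback points at the position embeddings and the report mentions inputs beyond 512 or 1024 tokens, one hedged workaround sketch is to let the processor truncate to the model's maximum length; this assumes `MarkupLMProcessor` forwards `truncation`/`max_length` to its tokenizer and that losing the tail of the page is acceptable:

```python
import torch
from transformers import (
    MarkupLMFeatureExtractor,
    MarkupLMForQuestionAnswering,
    MarkupLMProcessor,
    MarkupLMTokenizerFast,
)

tokenizer = MarkupLMTokenizerFast.from_pretrained("microsoft/markuplm-large-finetuned-websrc")
model = MarkupLMForQuestionAnswering.from_pretrained("microsoft/markuplm-large-finetuned-websrc")
processor = MarkupLMProcessor(MarkupLMFeatureExtractor(), tokenizer)

html_content = "<html><body><p>A very long page ...</p></body></html>"  # placeholder
question = "What is this page about?"                                   # placeholder

# Truncation keeps the sequence inside the position-embedding table instead of overflowing it.
encoding = processor(html_content, questions=question, return_tensors="pt",
                     truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**encoding)
print(outputs.start_logits.shape, outputs.end_logits.shape)
```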
transformers
23,885
closed
inhomogeneous shape after 1 dimensions
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.4.204-ql-generic-12.0-19-x86_64-with-glibc2.31 - Python version: 3.11.3 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1 (True) ### Who can help? @sanchit-gandhi @sgugger @muellerzr ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Learned about audio classification: https://huggingface.co/tasks/audio-classification directed me to the steps: Followed steps provided by doc: https://huggingface.co/docs/transformers/v4.15.0/examples : git clone https://github.com/huggingface/transformers cd transformers pip install . cd /transformers/examples/pytorch/audio-classification pip install -r requirements.txt Followed steps provided by doc: https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/README.md cd `transformers/examples/pytorch/audio-classification` python `run_audio_classification.py \` --model_name_or_path facebook/wav2vec2-base \ --dataset_name superb \ --dataset_config_name ks \ --output_dir wav2vec2-base-ft-keyword-spotting \ --overwrite_output_dir \ --remove_unused_columns False \ --do_train \ --do_eval \ --fp16 \ --learning_rate 3e-5 \ --max_length_seconds 1 \ --attention_mask False \ --warmup_ratio 0.1 \ --num_train_epochs 5 \ --per_device_train_batch_size 32 \ --gradient_accumulation_steps 4 \ --per_device_eval_batch_size 32 \ --dataloader_num_workers 4 \ --logging_strategy steps \ --logging_steps 10 \ --evaluation_strategy epoch \ --save_strategy epoch \ --load_best_model_at_end True \ --metric_for_best_model accuracy \ --save_total_limit 3 \ --seed 0 \ --push_to_hub ### Errors: (1) python3.11/site-packages/transformers/feature_extraction_utils.py", line 166, in convert_to_tensors tensor = as_tensor(value) ^^^^^^^^^^^^^^^^ **ValueError:** `setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (32,) + inhomogeneous part. ` (2) python3.11/site-packages/transformers/feature_extraction_utils.py", line 172, in convert_to_tensors raise ValueError( **ValueError:** `Unable to create tensor, you should probably activate padding with 'padding=True' to have batched tensors with the same length.` ### Expected behavior get a fine-tuned model with the example model and dataset
05-31-2023 06:08:02
05-31-2023 06:08:02
What version of NumPy are you using? If it's 1.24 can you downgrade it to 1.23 and try again? <|||||>@hollance, worked thanks<|||||>Alternatively, you can run `transformers` on main following the updates from #23162<|||||>When I downgrade numpy I gt a new error: ``` TypeError: `pad_width` must be of integral type. ``` ``` TypeError: Caught TypeError in DataLoader worker process 0. Original Traceback (most recent call last): File "/home/rave/mambaforge/envs/slickformer/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop data = fetcher.fetch(index) File "/home/rave/mambaforge/envs/slickformer/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch data.append(next(self.dataset_iter)) File "/home/rave/mambaforge/envs/slickformer/lib/python3.9/site-packages/torch/utils/data/datapipes/_hook_iterator.py", line 144, in __next__ return self._get_next() File "/home/rave/mambaforge/envs/slickformer/lib/python3.9/site-packages/torch/utils/data/datapipes/_hook_iterator.py", line 132, in _get_next result = next(self.iterator) File "/home/rave/mambaforge/envs/slickformer/lib/python3.9/site-packages/torch/utils/data/datapipes/_hook_iterator.py", line 215, in wrap_next result = next_func(*args, **kwargs) File "/home/rave/mambaforge/envs/slickformer/lib/python3.9/site-packages/torch/utils/data/datapipes/datapipe.py", line 369, in __next__ return next(self._datapipe_iter) File "/home/rave/mambaforge/envs/slickformer/lib/python3.9/site-packages/torch/utils/data/datapipes/_hook_iterator.py", line 185, in wrap_generator response = gen.send(request) File "/home/rave/slickformer/ceruleanml/data_pipeline.py", line 321, in __iter__ inputs = self.processor(images=[sample_dict['image']], segmentation_maps=[semantic_mask], task_inputs=["semantic"], reduce_labels=False, return_tensors="pt") File "/home/rave/mambaforge/envs/slickformer/lib/python3.9/site-packages/transformers/models/mask2former/image_processing_mask2former.py", line 542, in __call__ return self.preprocess(images, segmentation_maps=segmentation_maps, **kwargs) File "/home/rave/mambaforge/envs/slickformer/lib/python3.9/site-packages/transformers/models/mask2former/image_processing_mask2former.py", line 718, in preprocess encoded_inputs = self.encode_inputs( File "/home/rave/mambaforge/envs/slickformer/lib/python3.9/site-packages/transformers/models/mask2former/image_processing_mask2former.py", line 870, in encode_inputs masks = [ File "/home/rave/mambaforge/envs/slickformer/lib/python3.9/site-packages/transformers/models/mask2former/image_processing_mask2former.py", line 871, in <listcomp> self._pad_image(image=mask, output_size=pad_size, constant_values=ignore_index) for mask in masks File "/home/rave/mambaforge/envs/slickformer/lib/python3.9/site-packages/transformers/models/mask2former/image_processing_mask2former.py", line 740, in _pad_image padded_image = pad( File "/home/rave/mambaforge/envs/slickformer/lib/python3.9/site-packages/transformers/image_transforms.py", line 714, in pad image = np.pad(image, padding, mode="constant", constant_values=constant_values) File "<__array_function__ internals>", line 180, in pad File "/home/rave/mambaforge/envs/slickformer/lib/python3.9/site-packages/numpy/lib/arraypad.py", line 740, in pad raise TypeError('`pad_width` must be of integral type.') TypeError: `pad_width` must be of integral type. 
This exception is thrown by __iter__ of Mask2FormerSemanticProcessorDP(kwargs={}, sample_dicts=StackConvertLabelsToTensor) ```<|||||>@rbavery Could you open a new issue for this linking to this one (#23885)? It seems the error is arising from the use of the image transformers library, whereas this issue concerns audio models. It helps us keep track of what's been resolved and avoids Sanchit and Matthijs getting notifications whilst we try to solve this one :)
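For context on the first error, here is a standalone illustration of the NumPy behaviour change behind it (no transformers code involved): from NumPy 1.24 onwards, building an array out of variable-length sequences raises the exact `ValueError` quoted above instead of silently creating an object array, which is why pinning NumPy below 1.24 or picking up the fix from #23162 helps:

```python
import numpy as np

# Two "audio clips" of different lengths, like an unpadded batch.
ragged_batch = [np.zeros(16000, dtype=np.float32), np.zeros(8000, dtype=np.float32)]

try:
    np.array(ragged_batch)  # NumPy >= 1.24 refuses ragged input
except ValueError as err:
    print(err)  # "... has an inhomogeneous shape after 1 dimensions ..."
```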
transformers
23,884
closed
(End2End RAG) Finetuning Script (finetune_rag_ray_end2end.sh) not working for HF Dataset
### System Info Python 3.9.16 Transformers 4.13.0 WSL ### Who can help? @shamanez @ArthurZucker ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Note: I have gotten all test scripts to work. I'm running the below modified `finetune_rag_ray_end2end.sh` script. I specifically have the `wiki_dpr/psgs_w100.multiset.compressed` version of the `wiki_dpr` dataset cached (with train generated). ```bash # Sample script to finetune RAG using Ray for distributed retrieval. # Add parent directory to python path to access lightning_base.py export PYTHONPATH="../":"${PYTHONPATH}" #creates the custom knowlegebase # python use_own_knowledge_dataset.py \ # --csv_path /DIR/SQUAD-KB/squad-kb.csv \ # --output_dir /DIR/SQUAD-KB # Start a single-node Ray cluster. ray start --head python finetune_rag.py \ --data_dir ./ft_data \ --output_dir ./model_checkpoints \ --model_name_or_path facebook/rag-token-base \ --model_type rag_token \ --fp16 \ --gpus 4 \ --profile \ --do_train \ --end2end \ --do_predict \ --n_val -1 \ --train_batch_size 32 \ --eval_batch_size 32 \ --max_source_length 300 \ --max_target_length 25 \ --val_max_target_length 25 \ --test_max_target_length 25 \ --label_smoothing 0.1 \ --dropout 0.1 \ --attention_dropout 0.1 \ --weight_decay 0.001 \ --adam_epsilon 1e-08 \ --max_grad_norm 0.1 \ --lr_scheduler polynomial \ --learning_rate 3e-05 \ --num_train_epochs 10 \ --warmup_steps 500 \ --gradient_accumulation_steps 8 \ --distributed_retriever ray \ --num_retrieval_workers 4 \ --index_name hf \ --index_path /path/to/.hf_cache/datasets/wiki_dpr/psgs_w100.multiset.compressed/0.0.0/74d4bff38a7c18a9498fafef864a8ba7129e27cb8d71b22f5e14d84cb17edd54/psgs_w100.multiset.IVF4096_HNSW128_PQ128-IP-train.faiss \ --context_encoder_name facebook/dpr-ctx_encoder-multiset-base \ --index_gpus 2 \ --gpu_order [2,3,0,1] \ --indexing_freq 500 # Stop the Ray cluster. ray stop ``` Since this is an official HF dataset, my reading of the below usage of the `finetune_rag.py` is that I should use the argument `--index_name hf`. Usage of `finetune_rag.py`: ``` --index_name INDEX_NAME Name of the index to use: 'hf' for a canonical dataset from the datasets library (default), 'custom' for a local index, or 'legacy' for the orignal one) --passages_path PASSAGES_PATH Path to the dataset of passages for custom index. More info about custom indexes in the RagRetriever documentation as well as in `examples/rag/use_own_knowledge_dataset.py` --index_path INDEX_PATH Path to the faiss index for custom index. More info about custom indexes in the RagRetriever documentation as well as in `examples/rag/use_own_knowledge_dataset.py` ``` By the wording of the usage, I would think I need only specify `--index_name hf` without `--index_path`. 
But this results in ``` Traceback (most recent call last): File "/.../rag_end2end/finetune_rag.py", line 820, in <module> main(args) File "/.../rag_end2end/finetune_rag.py", line 758, in main model: GenerativeQAModule = GenerativeQAModule(args) File "/.../rag_end2end/finetune_rag.py", line 122, in __init__ retriever = RagRayDistributedRetriever.from_pretrained( File "/.../rag_end2end/distributed_ray_retriever.py", line 159, in from_pretrained index = cls._build_index(config) File "/.../miniconda3/envs/qa3/lib/python3.9/site-packages/transformers/models/rag/retrieval_rag.py", line 403, in _build_index return CanonicalHFIndex( File "/fs/nexus-scratch/yzhang42/miniconda3/envs/qa3/lib/python3.9/site-packages/transformers/models/rag/retrieval_rag.py", line 260, in __init__ raise ValueError("Please provide `index_name` or `index_path`.") ValueError: Please provide `index_name` or `index_path`. ``` Specifying `--index_name hf` and `--index_path /path/to/.hf_cache/datasets/wiki_dpr/psgs_w100.multiset.compressed/0.0.0/74d4bff38a7c18a9498fafef864a8ba7129e27cb8d71b22f5e14d84cb17edd54/psgs_w100.multiset.IVF4096_HNSW128_PQ128-IP-train.faiss` results in the same behavior. Specifying Specifying `--index_name custom` and `--index_path [path]` results in trying to load a dummy knowledge base from the `test_finetune.sh` script. ``` Loading passages from /fs/nexus-scratch/yzhang42/rag_end2end/test_run/dummy-kb/my_knowledge_dataset Traceback (most recent call last): File "/.../rag_end2end/finetune_rag.py", line 820, in <module> main(args) File "/.../rag_end2end/finetune_rag.py", line 758, in main model: GenerativeQAModule = GenerativeQAModule(args) File "/.../rag_end2end/finetune_rag.py", line 122, in __init__ retriever = RagRayDistributedRetriever.from_pretrained( File "/.../rag_end2end/distributed_ray_retriever.py", line 159, in from_pretrained index = cls._build_index(config) File "/.../miniconda3/envs/qa3/lib/python3.9/site-packages/transformers/models/rag/retrieval_rag.py", line 397, in _build_index return CustomHFIndex.load_from_disk( File "/.../miniconda3/envs/qa3/lib/python3.9/site-packages/transformers/models/rag/retrieval_rag.py", line 320, in load_from_disk dataset = load_from_disk(dataset_path) File "/.../miniconda3/envs/qa3/lib/python3.9/site-packages/datasets/load.py", line 1886, in load_from_disk raise FileNotFoundError(f"Directory {dataset_path} not found") FileNotFoundError: Directory /.../rag_end2end/test_run/dummy-kb/my_knowledge_dataset not found ``` I'm confused as to how this hardcoded value for `dataset_path` even persisted from my run of `test_finetune.sh` Running `--index_name legacy` results in the follow regardless of specifying `--index_path` ``` OSError: Can't load '/.../.hf_cache/datasets/wiki_dpr/psgs_w100.multiset.compressed/0.0.0/psgs_w100.tsv.pkl'. Make sure that: - '/.../.hf_cache/datasets/wiki_dpr/psgs_w100.multiset.compressed/0.0.0' is a correct remote path to a directory containing a file named psgs_w100.tsv.pkl - or '/.../.hf_cache/datasets/wiki_dpr/psgs_w100.multiset.compressed/0.0.0' is the correct path to a directory containing a file named psgs_w100.tsv.pkl. ``` Doing `find /.../.hf_cache/datasets/wiki_dpr -name "psgs_w100.tsv.pkl"` finds nothing. ### Expected behavior I'm confused as to what the proper arguments should be. Design wise, I think `--index_path=/.../wiki_dpr/psgs_w100.multiset.compressed` should work for my purposes. 
I've thought of saving the dataset in question and treating it like the dummy dataset used in `test_finetuning.sh` but I do not have the storage to copy the dataset. I also do not understand how the `dummy-kb` is persisting as an argument in the `finetune_rag_ray_end2end.sh` script.
05-31-2023 05:16:07
05-31-2023 05:16:07
Testing the below hard-coding to get expected behavior: `finetune_rag.sh` arguments: ``` --index_name custom \ --index_path /.../.hf_cache/datasets/wiki_dpr/psgs_w100.multiset.compressed/0.0.0/74d4bff38a7c18a9498fafef864a8ba7129e27cb8d71b22f5e14d84cb17edd54/psgs_w100.multiset.IVF4096_HNSW128_PQ128-IP-train.faiss ``` `retrieval_rag.py` hard-coding: ```py class CustomHFIndex(HFIndexBase): def __init__(self, vector_size: int, dataset, index_path=None): super().__init__(vector_size, dataset, index_initialized=index_path is None) self.index_path = index_path @classmethod def load_from_disk(cls, vector_size, dataset_path, index_path): # print("HERE", dataset_path) # print("here2", index_path) # raise # exit() logger.info(f"Loading passages from {dataset_path}") if dataset_path is None or index_path is None: raise ValueError( "Please provide ``dataset_path`` and ``index_path`` after calling ``dataset.save_to_disk(dataset_path)`` " "and ``dataset.get_index('embeddings').save(index_path)``." ) # dataset = load_from_disk(dataset_path) dataset = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train') # CHANGED return cls(vector_size=vector_size, dataset=dataset, index_path=index_path) ``` <|||||>This is not a script we maintain.<|||||>Yeah, you need to use the custom index. But you can try to get this to work with the HF index.
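A hedged sketch of the non-hard-coded route, i.e. turning the canonical `wiki_dpr` split into the two artifacts that `--index_name custom` expects (`--passages_path` and `--index_path`); the output paths are placeholders, it assumes the compressed config loads with its FAISS index attached as `embeddings` (which the `.faiss` file in the reporter's cache suggests), and as the reporter notes it does require enough disk space to write the passages out:

```python
from datasets import load_dataset

# The compressed multiset config ships with a FAISS index registered as "embeddings".
dataset = load_dataset("wiki_dpr", "psgs_w100.multiset.compressed", split="train")

passages_path = "/path/to/wiki_dpr_passages"  # placeholder, later passed as --passages_path
index_path = "/path/to/wiki_dpr_index.faiss"  # placeholder, later passed as --index_path

dataset.get_index("embeddings").save(index_path)  # the call retrieval_rag.py's error message asks for
dataset.drop_index("embeddings")                  # attached indexes cannot be saved to disk
dataset.save_to_disk(passages_path)
```

The finetuning script would then be launched with `--index_name custom --passages_path ... --index_path ...` instead of patching `retrieval_rag.py`.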
transformers
23,883
closed
Fix bug leading to missing token in GPTSanJapaneseTokenizer
# Fixes the missing comma (,) token in the GPTSanJapaneseTokenizer The comma character "," is a token in the default vocabulary file used by GPTSanJapaneseTokenizer. However, the vocab file parser has a bug that deletes this token and so the GPTSanJapaneseTokenizer encodes commas as the <byte 44> token instead of a unique token. This fixes that bug. @tanreinama @younesbelkada @ArthurZucker
05-31-2023 05:07:46
05-31-2023 05:07:46
Hey! Could you make sure to run `make fix-copies`! Otherwise looks good to me<|||||>😅 sorry, you can remove the `copied from` to get rid of the issue with make fix-copies overwriting<|||||>Remove this: `# Copied from transformers.models.gpt_neox_japanese.tokenization_gpt_neox_japanese.load_vocab_and_emoji`<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23883). All of your documentation changes will be reflected on that endpoint.
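A hedged spot-check for the fix, assuming the `Tanrei/GPTSAN-japanese` checkpoint used in the model docs: before the fix the comma falls back to the byte token, after it the comma should map to its own vocabulary id and round-trip cleanly:

```python
from transformers import GPTSanJapaneseTokenizer

tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")

tokens = tokenizer.tokenize(",")
ids = tokenizer(",")["input_ids"]
print(tokens, ids)

# The round-trip should give the comma back rather than a <byte 44> placeholder.
print(tokenizer.decode(ids))
```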
transformers
23,882
closed
Please add an example for run_speech_recognition_ctc for eval only
### System Info - `transformers` version: 4.28.1 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.11.2 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: NO - Using distributed or parallel set-up in script?: NO ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Remove any training output directory. 2. Delete the cache. 3. Modify any of the examples, by removing --do_train. 4. Run. You should run into an exception: ``` File "/home/xekri/eo-reco/run_speech_recognition_ctc.py", line 824, in <module> main() File "/home/xekri/eo-reco/run_speech_recognition_ctc.py", line 575, in main vocab_dict = create_vocabulary_from_data( File "/home/xekri/eo-reco/run_speech_recognition_ctc.py", line 351, in create_vocabulary_from_data remove_columns=datasets["train"].column_names, File "/home/xekri/.env/lib/python3.10/site-packages/datasets/dataset_dict.py", line 57, in __getitem__ return super().__getitem__(k) KeyError: 'train' ``` Note: line numbers may be different, since I modified the file with debug. ### Expected behavior 1. An example that clearly shows a working evaluation-only set of arguments. 2. No exception during evaluation-only.
05-31-2023 03:32:32
05-31-2023 03:32:32
cc @sanchit-gandhi <|||||>Hey @RobertBaruch, thanks for the issue! I would advocate against using a training script for evaluation only. The reason is that when we use a training script with HF Trainer, we still initialise optimiser states for our model, even if we're not doing any training. For common optimisers, such as Adam or AdamW, there are two optimiser states per trainable model parameter. This means for a 1GB model, we have 2GB of optimiser states, both of which are on the GPU. So immediately, 2GB of GPU memory is consumed by optimiser states we don't actually use. What I would encourage you to do instead is run inference using `pipeline`. In doing so, you'll only put the model weights on the GPU, so you'll have more GPU memory and thus be able to run larger batch sizes. Here's a code snippet that achieves this: ```python from datasets import load_dataset from transformers import pipeline from transformers.pipelines.pt_utils import KeyDataset from tqdm.auto import tqdm pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0) dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") all_preds = [] for out in tqdm(pipe(KeyDataset(dataset, "file"), batch_size=8)): all_preds.append(out) print(all_preds) ``` It's adapted from the docs, where you can read more about pipeline: https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline<|||||>What I really want is the ability to compute the loss, CER, and WER for the predictions, which I don't think pipeline gives?<|||||>Well, I could get CER and WER just by comparing the result with the target via `jiwer`. But a loss would be nice to see.<|||||>Hey @RobertBaruch - in this case, you can use the `model` + `processor` API.
Does this docstring answer your question? https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2ForCTC.forward.example<|||||>Definitely, thank you!
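A hedged sketch of the `model` + `processor` route for getting a loss alongside WER/CER, loosely following the linked docstring; the dummy LibriSpeech split is only for illustration, and `jiwer` is used since it already came up in the thread:

```python
import torch
from datasets import load_dataset
from jiwer import cer, wer
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").eval()

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[0]

inputs = processor(sample["audio"]["array"],
                   sampling_rate=sample["audio"]["sampling_rate"],
                   return_tensors="pt")
labels = processor.tokenizer(sample["text"], return_tensors="pt").input_ids

with torch.no_grad():
    outputs = model(**inputs, labels=labels)  # passing labels makes the model return the CTC loss

prediction = processor.batch_decode(outputs.logits.argmax(dim=-1))[0]
print(outputs.loss.item(), wer(sample["text"], prediction), cer(sample["text"], prediction))
```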
transformers
23,880
open
Whisper with Elastic Weight Consolidation
### Feature request After fine-tuning Whisper on a specific language, its ASR performance on previously supported languages deteriorates, a phenomenon known as catastrophic forgetting. Therefore, something such as EWC needs to be used to overcome this problem. Here is the EWC paper: _https://arxiv.org/pdf/1612.00796.pdf_ ### Motivation When I fine-tuned Whisper large-v2 with 10 hours of Af language data, its WER on languages like Be and Is degraded to close to 90%. However, the WER of these languages under the pre-fine-tuning model is around 40%. So I hope to use EWC to overcome or mitigate this problem. ### Your contribution I believe in the professional ability of the Hugging Face team, and I can provide data support for it.
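For reference, a minimal, framework-agnostic sketch of the EWC penalty from the linked paper (a quadratic anchor on the pre-fine-tuning weights, weighted by a diagonal Fisher estimate); how this would be wired into a Whisper fine-tuning loop is left open, and `lambda_ewc` is an arbitrary placeholder:

```python
import torch

def ewc_penalty(model, ref_params, fisher, lambda_ewc=1.0):
    """EWC regularizer: 0.5 * lambda * sum_i F_i * (theta_i - theta*_i)^2."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, param in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (param - ref_params[name]) ** 2).sum()
    return 0.5 * lambda_ewc * penalty

# During fine-tuning on the new language the total loss would then look like
#   loss = asr_loss + ewc_penalty(model, ref_params, fisher)
# where ref_params holds frozen copies of the pre-fine-tuning weights and fisher holds
# squared gradients of the original loss accumulated over a sample of old-language data.
```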
05-31-2023 02:00:56
05-31-2023 02:00:56
As far as we know, apart from the EWC, other ways to overcome catastrophic forgetting include synaptic Intelligence, knowledge distillation, learning without forgetting (LWF), less-forgetting learning(LFL) and other methods. If these methods can be implemented, it will be helpful for Whisper to further develop its research and development.<|||||>Hi @LYPinASR I would love to implement this feature with some of your guidance <|||||>> Hi @LYPinASR I would love to implement this feature with some of your guidance I would love to talk to you about my understanding of this feature. <|||||>I > > Hi @LYPinASR I would love to implement this feature with some of your guidance > > I would love to talk to you about my understanding of this feature. I will be happy to talk to you. <|||||>> I > > > > Hi @LYPinASR I would love to implement this feature with some of your guidance > > > I would love to talk to you about my understanding of this feature. > > I will be happy to talk to you. I have implemented some of the code in modelling_whisper.py, but it doesn't work well. # coding=utf-8 # Copyright 2022 The OpenAI Authors and The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ PyTorch Whisper model.""" import math import random from typing import Optional, Tuple, Union import numpy as np import torch import torch.utils.checkpoint from torch import nn from torch.nn import CrossEntropyLoss, MSELoss import copy from ...activations import ACT2FN from ...generation.logits_process import WhisperTimeStampLogitsProcessor from ...modeling_outputs import ( BaseModelOutput, BaseModelOutputWithPastAndCrossAttentions, Seq2SeqLMOutput, Seq2SeqModelOutput, SequenceClassifierOutput, ) from ...modeling_utils import PreTrainedModel from ...utils import add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings from .configuration_whisper import WhisperConfig from .tokenization_whisper import TASK_IDS, TO_LANGUAGE_CODE logger = logging.get_logger(__name__) _CONFIG_FOR_DOC = "WhisperConfig" _CHECKPOINT_FOR_DOC = "openai/whisper-tiny" WHISPER_PRETRAINED_MODEL_ARCHIVE_LIST = [ "openai/whisper-base", # See all Whisper models at https://huggingface.co/models?filter=whisper ] # Copied from transformers.models.bart.modeling_bart.shift_tokens_right def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int): """ Shift input ids one token to the right. 
""" shifted_input_ids = input_ids.new_zeros(input_ids.shape) shifted_input_ids[:, 1:] = input_ids[:, :-1].clone() shifted_input_ids[:, 0] = decoder_start_token_id if pad_token_id is None: raise ValueError("self.model.config.pad_token_id has to be defined.") # replace possible -100 values in labels by `pad_token_id` shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id) return shifted_input_ids # Copied from transformers.models.bart.modeling_bart._make_causal_mask def _make_causal_mask( input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0 ): """ Make causal mask used for bi-directional self-attention. """ bsz, tgt_len = input_ids_shape mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device) mask_cond = torch.arange(mask.size(-1), device=device) mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0) mask = mask.to(dtype) if past_key_values_length > 0: mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1) return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length) # Copied from transformers.models.bart.modeling_bart._expand_mask def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None): """ Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`. """ bsz, src_len = mask.size() tgt_len = tgt_len if tgt_len is not None else src_len expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype) inverted_mask = 1.0 - expanded_mask return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min) # Copied from transformers.models.wav2vec2.modeling_wav2vec2._compute_mask_indices def _compute_mask_indices( shape: Tuple[int, int], mask_prob: float, mask_length: int, attention_mask: Optional[torch.LongTensor] = None, min_masks: int = 0, ) -> np.ndarray: """ Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method for ASR](https://arxiv.org/abs/1904.08779). Note that this method is not optimized to run on TPU and should be run on CPU as part of the preprocessing during training. Args: shape: The shape for which to compute masks. This should be of a tuple of size 2 where the first element is the batch size and the second element is the length of the axis to span. mask_prob: The percentage of the whole axis (between 0 and 1) which will be masked. The number of independently generated mask spans of length `mask_length` is computed by `mask_prob*shape[1]/mask_length`. Note that due to overlaps, `mask_prob` is an upper bound and the actual percentage will be smaller. mask_length: size of the mask min_masks: minimum number of masked spans attention_mask: A (right-padded) attention mask which independently shortens the feature axis of each batch dimension. 
""" batch_size, sequence_length = shape if mask_length < 1: raise ValueError("`mask_length` has to be bigger than 0.") if mask_length > sequence_length: raise ValueError( f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length}" f" and `sequence_length`: {sequence_length}`" ) # epsilon is used for probabilistic rounding epsilon = np.random.rand(1).item() def compute_num_masked_span(input_length): """Given input length, compute how many spans should be masked""" num_masked_span = int(mask_prob * input_length / mask_length + epsilon) num_masked_span = max(num_masked_span, min_masks) # make sure num masked span <= sequence_length if num_masked_span * mask_length > sequence_length: num_masked_span = sequence_length // mask_length # make sure num_masked span is also <= input_length - (mask_length - 1) if input_length - (mask_length - 1) < num_masked_span: num_masked_span = max(input_length - (mask_length - 1), 0) return num_masked_span # compute number of masked spans in batch input_lengths = ( attention_mask.sum(-1).detach().tolist() if attention_mask is not None else [sequence_length for _ in range(batch_size)] ) # SpecAugment mask to fill spec_aug_mask = np.zeros((batch_size, sequence_length), dtype=bool) spec_aug_mask_idxs = [] max_num_masked_span = compute_num_masked_span(sequence_length) if max_num_masked_span == 0: return spec_aug_mask for input_length in input_lengths: # compute num of masked spans for this input num_masked_span = compute_num_masked_span(input_length) # get random indices to mask spec_aug_mask_idx = np.random.choice( np.arange(input_length - (mask_length - 1)), num_masked_span, replace=False ) # pick first sampled index that will serve as a dummy index to pad vector # to ensure same dimension for all batches due to probabilistic rounding # Picking first sample just pads those vectors twice. 
if len(spec_aug_mask_idx) == 0: # this case can only happen if `input_length` is strictly smaller then # `sequence_length` in which case the last token has to be a padding # token which we can use as a dummy mask id dummy_mask_idx = sequence_length - 1 else: dummy_mask_idx = spec_aug_mask_idx[0] spec_aug_mask_idx = np.concatenate( [spec_aug_mask_idx, np.ones(max_num_masked_span - num_masked_span, dtype=np.int32) * dummy_mask_idx] ) spec_aug_mask_idxs.append(spec_aug_mask_idx) spec_aug_mask_idxs = np.array(spec_aug_mask_idxs) # expand masked indices to masked spans spec_aug_mask_idxs = np.broadcast_to( spec_aug_mask_idxs[:, :, None], (batch_size, max_num_masked_span, mask_length) ) spec_aug_mask_idxs = spec_aug_mask_idxs.reshape(batch_size, max_num_masked_span * mask_length) # add offset to the starting indexes so that indexes now create a span offsets = np.arange(mask_length)[None, None, :] offsets = np.broadcast_to(offsets, (batch_size, max_num_masked_span, mask_length)).reshape( batch_size, max_num_masked_span * mask_length ) spec_aug_mask_idxs = spec_aug_mask_idxs + offsets # ensure that we cannot have indices larger than sequence_length if spec_aug_mask_idxs.max() > sequence_length - 1: spec_aug_mask_idxs[spec_aug_mask_idxs > sequence_length - 1] = sequence_length - 1 # scatter indices to mask np.put_along_axis(spec_aug_mask, spec_aug_mask_idxs, 1, -1) return spec_aug_mask class WhisperPositionalEmbedding(nn.Embedding): def __init__(self, num_positions: int, embedding_dim: int, padding_idx: Optional[int] = None): super().__init__(num_positions, embedding_dim) def forward(self, input_ids, past_key_values_length=0): return self.weight[past_key_values_length : past_key_values_length + input_ids.shape[1]] class WhisperAttention(nn.Module): """Multi-headed attention from 'Attention Is All You Need' paper""" def __init__( self, embed_dim: int, num_heads: int, dropout: float = 0.0, is_decoder: bool = False, bias: bool = True, ): super().__init__() self.embed_dim = embed_dim self.num_heads = num_heads self.dropout = dropout self.head_dim = embed_dim // num_heads if (self.head_dim * num_heads) != self.embed_dim: raise ValueError( f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim}" f" and `num_heads`: {num_heads})." 
) self.scaling = self.head_dim**-0.5 self.is_decoder = is_decoder self.k_proj = nn.Linear(embed_dim, embed_dim, bias=False) self.v_proj = nn.Linear(embed_dim, embed_dim, bias=bias) self.q_proj = nn.Linear(embed_dim, embed_dim, bias=bias) self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias) # Copied from transformers.models.bart.modeling_bart.BartAttention._shape with BART->whisper def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int): return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous() # Copied from transformers.models.bart.modeling_bart.BartAttention.forward with BART->whisper def forward( self, hidden_states: torch.Tensor, key_value_states: Optional[torch.Tensor] = None, past_key_value: Optional[Tuple[torch.Tensor]] = None, attention_mask: Optional[torch.Tensor] = None, layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: """Input shape: Batch x Time x Channel""" # if key_value_states are provided this layer is used as a cross-attention layer # for the decoder is_cross_attention = key_value_states is not None bsz, tgt_len, _ = hidden_states.size() # get query proj query_states = self.q_proj(hidden_states) * self.scaling # get key, value proj # `past_key_value[0].shape[2] == key_value_states.shape[1]` # is checking that the `sequence_length` of the `past_key_value` is the same as # the provided `key_value_states` to support prefix tuning if ( is_cross_attention and past_key_value is not None and past_key_value[0].shape[2] == key_value_states.shape[1] ): # reuse k,v, cross_attentions key_states = past_key_value[0] value_states = past_key_value[1] elif is_cross_attention: # cross_attentions key_states = self._shape(self.k_proj(key_value_states), -1, bsz) value_states = self._shape(self.v_proj(key_value_states), -1, bsz) elif past_key_value is not None: # reuse k, v, self_attention key_states = self._shape(self.k_proj(hidden_states), -1, bsz) value_states = self._shape(self.v_proj(hidden_states), -1, bsz) key_states = torch.cat([past_key_value[0], key_states], dim=2) value_states = torch.cat([past_key_value[1], value_states], dim=2) else: # self_attention key_states = self._shape(self.k_proj(hidden_states), -1, bsz) value_states = self._shape(self.v_proj(hidden_states), -1, bsz) if self.is_decoder: # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states. # Further calls to cross_attention layer can then reuse all cross-attention # key/value_states (first "if" case) # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of # all previous decoder key/value_states. 
Further calls to uni-directional self-attention # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case) # if encoder bi-directional self-attention `past_key_value` is always `None` past_key_value = (key_states, value_states) proj_shape = (bsz * self.num_heads, -1, self.head_dim) query_states = self._shape(query_states, tgt_len, bsz).view(*proj_shape) key_states = key_states.reshape(*proj_shape) value_states = value_states.reshape(*proj_shape) src_len = key_states.size(1) attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): raise ValueError( f"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is" f" {attn_weights.size()}" ) if attention_mask is not None: if attention_mask.size() != (bsz, 1, tgt_len, src_len): raise ValueError( f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}" ) attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) attn_weights = nn.functional.softmax(attn_weights, dim=-1) if layer_head_mask is not None: if layer_head_mask.size() != (self.num_heads,): raise ValueError( f"Head mask for a single layer should be of size {(self.num_heads,)}, but is" f" {layer_head_mask.size()}" ) attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len) attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) if output_attentions: # this operation is a bit awkward, but it's required to # make sure that attn_weights keeps its gradient. # In order to do so, attn_weights have to be reshaped # twice and have to be reused in the following attn_weights_reshaped = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) attn_weights = attn_weights_reshaped.view(bsz * self.num_heads, tgt_len, src_len) else: attn_weights_reshaped = None attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) attn_output = torch.bmm(attn_probs, value_states) if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): raise ValueError( f"`attn_output` should be of size {(bsz * self.num_heads, tgt_len, self.head_dim)}, but is" f" {attn_output.size()}" ) attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim) attn_output = attn_output.transpose(1, 2) # Use the `embed_dim` from the config (stored in the class) rather than `hidden_state` because `attn_output` can be # partitioned across GPUs when using tensor-parallelism. 
attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) attn_output = self.out_proj(attn_output) return attn_output, attn_weights_reshaped, past_key_value # Copied from transformers.models.mbart.modeling_mbart.MBartEncoderLayer with MBart->Whisper class WhisperEncoderLayer(nn.Module): def __init__(self, config: WhisperConfig): super().__init__() self.embed_dim = config.d_model self.self_attn = WhisperAttention( embed_dim=self.embed_dim, num_heads=config.encoder_attention_heads, dropout=config.attention_dropout, ) self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim) self.dropout = config.dropout self.activation_fn = ACT2FN[config.activation_function] self.activation_dropout = config.activation_dropout self.fc1 = nn.Linear(self.embed_dim, config.encoder_ffn_dim) self.fc2 = nn.Linear(config.encoder_ffn_dim, self.embed_dim) self.final_layer_norm = nn.LayerNorm(self.embed_dim) def forward( self, hidden_states: torch.Tensor, attention_mask: torch.Tensor, layer_head_mask: torch.Tensor, output_attentions: bool = False, ) -> torch.Tensor: """ Args: hidden_states (`torch.FloatTensor`): input to the layer of shape `(seq_len, batch, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size `(encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. """ residual = hidden_states hidden_states = self.self_attn_layer_norm(hidden_states) hidden_states, attn_weights, _ = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) hidden_states = residual + hidden_states residual = hidden_states hidden_states = self.final_layer_norm(hidden_states) hidden_states = self.activation_fn(self.fc1(hidden_states)) hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training) hidden_states = self.fc2(hidden_states) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) hidden_states = residual + hidden_states if hidden_states.dtype == torch.float16 and ( torch.isinf(hidden_states).any() or torch.isnan(hidden_states).any() ): clamp_value = torch.finfo(hidden_states.dtype).max - 1000 hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value) outputs = (hidden_states,) if output_attentions: outputs += (attn_weights,) return outputs # Copied from transformers.models.mbart.modeling_mbart.MBartDecoderLayer with MBart->Whisper class WhisperDecoderLayer(nn.Module): def __init__(self, config: WhisperConfig): super().__init__() self.embed_dim = config.d_model self.self_attn = WhisperAttention( embed_dim=self.embed_dim, num_heads=config.decoder_attention_heads, dropout=config.attention_dropout, is_decoder=True, ) self.dropout = config.dropout self.activation_fn = ACT2FN[config.activation_function] self.activation_dropout = config.activation_dropout self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim) self.encoder_attn = WhisperAttention( self.embed_dim, config.decoder_attention_heads, dropout=config.attention_dropout, is_decoder=True, ) self.encoder_attn_layer_norm = nn.LayerNorm(self.embed_dim) 
self.fc1 = nn.Linear(self.embed_dim, config.decoder_ffn_dim) self.fc2 = nn.Linear(config.decoder_ffn_dim, self.embed_dim) self.final_layer_norm = nn.LayerNorm(self.embed_dim) def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, layer_head_mask: Optional[torch.Tensor] = None, cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_value: Optional[Tuple[torch.Tensor]] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, ) -> torch.Tensor: """ Args: hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. encoder_hidden_states (`torch.FloatTensor`): cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size `(encoder_attention_heads,)`. cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of size `(decoder_attention_heads,)`. past_key_value (`Tuple(torch.FloatTensor)`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. """ residual = hidden_states hidden_states = self.self_attn_layer_norm(hidden_states) # Self Attention # decoder uni-directional self-attention cached key/values tuple is at positions 1,2 self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None # add present self-attn cache to positions 1,2 of present_key_value tuple hidden_states, self_attn_weights, present_key_value = self.self_attn( hidden_states=hidden_states, past_key_value=self_attn_past_key_value, attention_mask=attention_mask, layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) hidden_states = residual + hidden_states # Cross-Attention Block cross_attn_present_key_value = None cross_attn_weights = None if encoder_hidden_states is not None: residual = hidden_states hidden_states = self.encoder_attn_layer_norm(hidden_states) # cross_attn cached key/values tuple is at positions 3,4 of present_key_value tuple cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None hidden_states, cross_attn_weights, cross_attn_present_key_value = self.encoder_attn( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, layer_head_mask=cross_attn_layer_head_mask, past_key_value=cross_attn_past_key_value, output_attentions=output_attentions, ) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) hidden_states = residual + hidden_states # add cross-attn to positions 3,4 of present_key_value tuple present_key_value = present_key_value + cross_attn_present_key_value # Fully Connected residual = hidden_states hidden_states = self.final_layer_norm(hidden_states) hidden_states = self.activation_fn(self.fc1(hidden_states)) 
hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training) hidden_states = self.fc2(hidden_states) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) hidden_states = residual + hidden_states outputs = (hidden_states,) if output_attentions: outputs += (self_attn_weights, cross_attn_weights) if use_cache: outputs += (present_key_value,) return outputs class WhisperPreTrainedModel(PreTrainedModel): config_class = WhisperConfig base_model_prefix = "model" main_input_name = "input_features" supports_gradient_checkpointing = True _no_split_modules = ["WhisperEncoderLayer", "WhisperDecoderLayer"] def _init_weights(self, module): std = self.config.init_std if isinstance(module, (nn.Linear, nn.Conv1d)): module.weight.data.normal_(mean=0.0, std=std) if module.bias is not None: module.bias.data.zero_() elif isinstance(module, nn.Embedding): module.weight.data.normal_(mean=0.0, std=std) if module.padding_idx is not None: module.weight.data[module.padding_idx].zero_() def _set_gradient_checkpointing(self, module, value=False): if isinstance(module, (WhisperDecoder, WhisperEncoder)): module.gradient_checkpointing = value def _get_feat_extract_output_lengths(self, input_lengths: torch.LongTensor): """ Computes the output length of the convolutional layers """ input_lengths = (input_lengths - 1) // 2 + 1 return input_lengths WHISPER_START_DOCSTRING = r""" This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`WhisperConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. """ WHISPER_INPUTS_DOCSTRING = r""" Args: input_features (`torch.FloatTensor` of shape `(batch_size, feature_size, sequence_length)`): Float values mel features extracted from the raw speech waveform. Raw speech waveform can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_features`, the [`AutoFeatureExtractor`] should be used for extracting the mel features, padding and conversion into a tensor of type `torch.FloatTensor`. See [`~WhisperFeatureExtractor.__call__`] attention_mask (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing *SpecAugment* data augmentation on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) decoder_input_ids (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using [`WhisperTokenizer`]. See [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. 
[What are decoder input IDs?](../glossary#decoder-input-ids) Whisper uses the `decoder_start_token_id` as the starting token for `decoder_input_ids` generation. If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`). decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. If you want to change padding behavior, you should read [`modeling_whisper._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the BART paper](https://arxiv.org/abs/1910.13461) for more information on the default strategy. head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. decoder_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. encoder_outputs (`tuple(tuple(torch.FloatTensor)`, *optional*): Tuple consists of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`) `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. decoder_inputs_embeds (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded representation. If `past_key_values` is used, optionally only the last `decoder_inputs_embeds` have to be input (see `past_key_values`). This is useful if you want more control over how to convert `decoder_input_ids` indices into associated vectors than the model's internal embedding lookup matrix. use_cache (`bool`, *optional*): If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). 
output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. """ WHISPER_ENCODER_INPUTS_DOCSTRING = r""" Args: input_features (`torch.FloatTensor` of shape `(batch_size, feature_size, sequence_length)`): Float values mel features extracted from the raw speech waveform. Raw speech waveform can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_features`, the [`AutoFeatureExtractor`] should be used for extracting the mel features, padding and conversion into a tensor of type `torch.FloatTensor`. See [`~WhisperFeatureExtractor.__call__`] head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. encoder_outputs (`tuple(tuple(torch.FloatTensor)`, *optional*): Tuple consists of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`) `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) is a sequence of hidden-states at the output of the last layer of the encoder. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. """ class WhisperEncoder(WhisperPreTrainedModel): """ Transformer encoder consisting of *config.encoder_layers* self attention layers. Each layer is a [`WhisperEncoderLayer`]. 
Args: config: WhisperConfig """ def __init__(self, config: WhisperConfig): super().__init__(config) self.dropout = config.dropout self.layerdrop = config.encoder_layerdrop embed_dim = config.d_model self.num_mel_bins = config.num_mel_bins self.padding_idx = config.pad_token_id self.max_source_positions = config.max_source_positions self.embed_scale = math.sqrt(embed_dim) if config.scale_embedding else 1.0 self.conv1 = nn.Conv1d(self.num_mel_bins, embed_dim, kernel_size=3, padding=1) self.conv2 = nn.Conv1d(embed_dim, embed_dim, kernel_size=3, stride=2, padding=1) self.embed_positions = nn.Embedding(self.max_source_positions, embed_dim) self.layers = nn.ModuleList([WhisperEncoderLayer(config) for _ in range(config.encoder_layers)]) self.layer_norm = nn.LayerNorm(config.d_model) self.gradient_checkpointing = False # Initialize weights and apply final processing self.post_init() def _freeze_parameters(self): for param in self.parameters(): param.requires_grad = False self._requires_grad = False def get_input_embeddings(self) -> nn.Module: return self.conv1 def set_input_embeddings(self, value: nn.Module): self.conv1 = value def forward( self, input_features, attention_mask=None, head_mask=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): r""" Args: input_features (`torch.LongTensor` of shape `(batch_size, feature_size, sequence_length)`): Float values of mel features extracted from the raw speech waveform. Raw speech waveform can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_features`, the [`AutoFeatureExtractor`] should be used for extracting the mel features, padding and conversion into a tensor of type `torch.FloatTensor`. See [`~WhisperFeatureExtractor.__call__`] attention_mask (`torch.Tensor`)`, *optional*): Whisper does not support masking of the `input_features`, this argument is preserved for compatibility, but it is not used. By default the silence in the input log mel spectrogram are ignored. head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
""" output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict inputs_embeds = nn.functional.gelu(self.conv1(input_features)) inputs_embeds = nn.functional.gelu(self.conv2(inputs_embeds)) inputs_embeds = inputs_embeds.permute(0, 2, 1) embed_pos = self.embed_positions.weight hidden_states = inputs_embeds + embed_pos hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None # check if head_mask has a correct number of layers specified if desired if head_mask is not None: assert head_mask.size()[0] == ( len(self.layers) ), f"The head_mask should be specified for {len(self.layers)} layers, but it is for {head_mask.size()[0]}." for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description) dropout_probability = random.uniform(0, 1) if self.training and (dropout_probability < self.layerdrop): # skip the layer layer_outputs = (None, None) else: if self.gradient_checkpointing and self.training: def create_custom_forward(module): def custom_forward(*inputs): return module(*inputs, output_attentions) return custom_forward layer_outputs = torch.utils.checkpoint.checkpoint( create_custom_forward(encoder_layer), hidden_states, None, (head_mask[idx] if head_mask is not None else None), ) else: layer_outputs = encoder_layer( hidden_states, None, layer_head_mask=(head_mask[idx] if head_mask is not None else None), output_attentions=output_attentions, ) hidden_states = layer_outputs[0] if output_attentions: all_attentions = all_attentions + (layer_outputs[1],) hidden_states = self.layer_norm(hidden_states) if output_hidden_states: encoder_states = encoder_states + (hidden_states,) if not return_dict: return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None) return BaseModelOutput( last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions ) class WhisperDecoder(WhisperPreTrainedModel): """ Transformer decoder consisting of *config.decoder_layers* layers. 
Each layer is a [`WhisperDecoderLayer`] Args: config: WhisperConfig """ def __init__(self, config: WhisperConfig): super().__init__(config) self.dropout = config.dropout self.layerdrop = config.decoder_layerdrop self.padding_idx = config.pad_token_id self.max_target_positions = config.max_target_positions self.max_source_positions = config.max_source_positions self.embed_scale = math.sqrt(config.d_model) if config.scale_embedding else 1.0 self.embed_tokens = nn.Embedding(config.vocab_size, config.d_model, self.padding_idx) self.embed_positions = WhisperPositionalEmbedding(self.max_target_positions, config.d_model) self.layers = nn.ModuleList([WhisperDecoderLayer(config) for _ in range(config.decoder_layers)]) self.layer_norm = nn.LayerNorm(config.d_model) self.gradient_checkpointing = False # Initialize weights and apply final processing self.post_init() def get_input_embeddings(self): return self.embed_tokens def set_input_embeddings(self, value): self.embed_tokens = value def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length): # create causal mask # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] combined_attention_mask = None if input_shape[-1] > 1: combined_attention_mask = _make_causal_mask( input_shape, inputs_embeds.dtype, device=inputs_embeds.device, past_key_values_length=past_key_values_length, ) if attention_mask is not None: # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]) combined_attention_mask = ( expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask ) return combined_attention_mask def forward( self, input_ids=None, attention_mask=None, encoder_hidden_states=None, head_mask=None, cross_attn_head_mask=None, past_key_values=None, inputs_embeds=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): r""" Args: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [`WhisperTokenizer`]. See [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. [What are input IDs?](../glossary#input-ids) attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, *optional*): Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): Mask to nullify selected heads of the attention modules in encoder to avoid performing cross-attention on hidden heads. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. 
past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. """ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict # retrieve input_ids and inputs_embeds if input_ids is not None and inputs_embeds is not None: raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time") elif input_ids is not None: input_shape = input_ids.size() input_ids = input_ids.view(-1, input_shape[-1]) elif inputs_embeds is not None: input_shape = inputs_embeds.size()[:-1] else: raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds") # past_key_values_length past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0 if inputs_embeds is None: inputs_embeds = self.embed_tokens(input_ids) attention_mask = self._prepare_decoder_attention_mask( attention_mask, input_shape, inputs_embeds, past_key_values_length ) # embed positions if input_ids is not None: positions = self.embed_positions(input_ids, past_key_values_length=past_key_values_length) else: positions = self.embed_positions(inputs_embeds, past_key_values_length=past_key_values_length) hidden_states = inputs_embeds + positions hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) if self.gradient_checkpointing and self.training: if use_cache: logger.warning_once( "`use_cache = True` is incompatible with gradient checkpointing. Setting `use_cache = False`..." 
) use_cache = False # decoder layers all_hidden_states = () if output_hidden_states else None all_self_attns = () if output_attentions else None all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None next_decoder_cache = () if use_cache else None # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): if attn_mask is not None: assert attn_mask.size()[0] == (len(self.layers)), ( f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" f" {head_mask.size()[0]}." ) for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description) if output_hidden_states: all_hidden_states += (hidden_states,) dropout_probability = random.uniform(0, 1) if self.training and (dropout_probability < self.layerdrop): continue past_key_value = past_key_values[idx] if past_key_values is not None else None if self.gradient_checkpointing and self.training: def create_custom_forward(module): def custom_forward(*inputs): # None for past_key_value return module(*inputs, output_attentions, use_cache) return custom_forward layer_outputs = torch.utils.checkpoint.checkpoint( create_custom_forward(decoder_layer), hidden_states, attention_mask, encoder_hidden_states, None, # encoder attention mask head_mask[idx] if head_mask is not None else None, cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None, None, # past_key_value ) else: layer_outputs = decoder_layer( hidden_states, attention_mask=attention_mask, encoder_hidden_states=encoder_hidden_states, layer_head_mask=(head_mask[idx] if head_mask is not None else None), cross_attn_layer_head_mask=( cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None ), past_key_value=past_key_value, output_attentions=output_attentions, use_cache=use_cache, ) hidden_states = layer_outputs[0] if use_cache: next_decoder_cache += (layer_outputs[3 if output_attentions else 1],) if output_attentions: all_self_attns += (layer_outputs[1],) if encoder_hidden_states is not None: all_cross_attentions += (layer_outputs[2],) hidden_states = self.layer_norm(hidden_states) # add hidden states from the last decoder layer if output_hidden_states: all_hidden_states += (hidden_states,) next_cache = next_decoder_cache if use_cache else None if not return_dict: return tuple( v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns, all_cross_attentions] if v is not None ) return BaseModelOutputWithPastAndCrossAttentions( last_hidden_state=hidden_states, past_key_values=next_cache, hidden_states=all_hidden_states, attentions=all_self_attns, cross_attentions=all_cross_attentions, ) @add_start_docstrings( "The bare Whisper Model outputting raw hidden-states without any specific head on top.", WHISPER_START_DOCSTRING, ) class WhisperModel(WhisperPreTrainedModel): _keys_to_ignore_on_load_missing = [r"proj_out.weight"] def __init__(self, config: WhisperConfig): super().__init__(config) self.encoder = WhisperEncoder(config) self.decoder = WhisperDecoder(config) # Initialize weights and apply final processing self.post_init() def get_input_embeddings(self): return self.decoder.embed_tokens def set_input_embeddings(self, value): self.decoder.embed_tokens = value def get_encoder(self): return self.encoder def get_decoder(self): return self.decoder def freeze_encoder(self): """ Calling this function will 
disable the gradient computation for the Whisper encoder so that its parameters will not be updated during training. """ self.encoder._freeze_parameters() def _mask_input_features( self, input_features: torch.FloatTensor, attention_mask: Optional[torch.LongTensor] = None, ): """ Masks extracted features along time axis and/or along feature axis according to [SpecAugment](https://arxiv.org/abs/1904.08779). """ # `config.apply_spec_augment` can set masking to False if not getattr(self.config, "apply_spec_augment", True): return input_features # generate indices & apply SpecAugment along time axis batch_size, hidden_size, sequence_length = input_features.size() if self.config.mask_time_prob > 0 and self.training: # generate indices & apply SpecAugment along time axis mask_time_indices = _compute_mask_indices( (batch_size, sequence_length), mask_prob=self.config.mask_time_prob, mask_length=self.config.mask_time_length, attention_mask=attention_mask, min_masks=self.config.mask_time_min_masks, ) mask_time_indices = torch.tensor(mask_time_indices, device=input_features.device, dtype=torch.bool) mask_time_indices = mask_time_indices[:, None].expand(-1, hidden_size, -1) input_features[mask_time_indices] = 0 if self.config.mask_feature_prob > 0 and self.training: # generate indices & apply SpecAugment along feature axis mask_feature_indices = _compute_mask_indices( (batch_size, hidden_size), mask_prob=self.config.mask_feature_prob, mask_length=self.config.mask_feature_length, min_masks=self.config.mask_feature_min_masks, ) mask_feature_indices = torch.tensor(mask_feature_indices, device=input_features.device, dtype=torch.bool) input_features[mask_feature_indices] = 0 return input_features @add_start_docstrings_to_model_forward(WHISPER_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=Seq2SeqModelOutput, config_class=_CONFIG_FOR_DOC) def forward( self, input_features: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.LongTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, head_mask: Optional[torch.Tensor] = None, decoder_head_mask: Optional[torch.Tensor] = None, cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, decoder_inputs_embeds: Optional[Tuple[torch.FloatTensor]] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple[torch.Tensor], Seq2SeqModelOutput]: r""" Returns: Example: ```python >>> import torch >>> from transformers import AutoFeatureExtractor, WhisperModel >>> from datasets import load_dataset >>> model = WhisperModel.from_pretrained("openai/whisper-base") >>> feature_extractor = AutoFeatureExtractor.from_pretrained("openai/whisper-base") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt") >>> input_features = inputs.input_features >>> decoder_input_ids = torch.tensor([[1, 1]]) * model.config.decoder_start_token_id >>> last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state >>> list(last_hidden_state.shape) [1, 2, 512] ```""" output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( 
output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict if encoder_outputs is None: input_features = self._mask_input_features(input_features, attention_mask=attention_mask) encoder_outputs = self.encoder( input_features, head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True elif return_dict and not isinstance(encoder_outputs, BaseModelOutput): encoder_outputs = BaseModelOutput( last_hidden_state=encoder_outputs[0], hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None, attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None, ) # decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn) decoder_outputs = self.decoder( input_ids=decoder_input_ids, attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], head_mask=decoder_head_mask, cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) if not return_dict: return decoder_outputs + encoder_outputs return Seq2SeqModelOutput( last_hidden_state=decoder_outputs.last_hidden_state, past_key_values=decoder_outputs.past_key_values, decoder_hidden_states=decoder_outputs.hidden_states, decoder_attentions=decoder_outputs.attentions, cross_attentions=decoder_outputs.cross_attentions, encoder_last_hidden_state=encoder_outputs.last_hidden_state, encoder_hidden_states=encoder_outputs.hidden_states, encoder_attentions=encoder_outputs.attentions, ) @add_start_docstrings( "The Whisper Model with a language modeling head. 
Can be used for automatic speech recognition.", WHISPER_START_DOCSTRING, ) class WhisperForConditionalGeneration(WhisperPreTrainedModel): base_model_prefix = "model" _keys_to_ignore_on_load_missing = [ r"encoder.version", r"decoder.version", r"proj_out.weight", ] _keys_to_ignore_on_save = [ r"proj_out.weight", ] def __init__(self, config: WhisperConfig): super().__init__(config) self.model = WhisperModel(config) #print(self.model) self.model_old = copy.deepcopy(self.model) self.model_old.eval() self.freeze_model(self.model_old) self._means = {} for n, p in self.model_old.encoder.named_parameters(): self._means[n] = p.data self.proj_out = nn.Linear(config.d_model, config.vocab_size, bias=False) # Initialize weights and apply final processing self.post_init() def freeze_model(self, model): for param in model.parameters(): param.requires_grad = False return def _diag_fisher(self, input_features, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, decoder_inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict, labels,): precision_matrices = {} for n, p in self.model.encoder.named_parameters(): #p.data.zero_() precision_matrices[n] = 0*p.data # self.model.train() # # # input_features = input_features.to("cuda") # input_features = torch.autograd.Variable(input_features,volatile=False) # labels = torch.autograd.Variable(labels,volatile=False) # self.model.zero_grad() output=self.model( input_features, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_outputs, decoder_attention_mask=decoder_attention_mask, head_mask=head_mask, decoder_head_mask=decoder_head_mask, cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, decoder_inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) logits = self.proj_out(output[0]) loss_fct = CrossEntropyLoss() # move labels to correct device to enable PP labels = labels.to(logits.device) loss_fisher = loss_fct(logits.view(-1, self.config.vocab_size), labels.reshape(-1)) #print(loss) #loss.requires_grad_(True) loss_fisher.backward() for n, p in self.model.encoder.named_parameters(): #if p.grad is not None: #print(p.grad.data) precision_matrices[n].data += p.grad.data ** 2 precision_matrices = {n: p/4 for n, p in precision_matrices.items()} for n, p in self.model.encoder.named_parameters(): precision_matrices[n]=torch.autograd.Variable(precision_matrices[n],requires_grad=False) return precision_matrices def freeze_model(self, model): for param in model.parameters(): param.requires_grad = False return def get_encoder(self): return self.model.get_encoder() def get_decoder(self): return self.model.get_decoder() def resize_token_embeddings(self, new_num_tokens: int) -> nn.Embedding: new_embeddings = super().resize_token_embeddings(new_num_tokens) return new_embeddings def get_output_embeddings(self): return self.proj_out def set_output_embeddings(self, new_embeddings): self.proj_out = new_embeddings def get_input_embeddings(self) -> nn.Module: return self.model.get_input_embeddings() def freeze_encoder(self): """ Calling this function will disable the gradient computation for the Whisper encoder so that its parameters will not be updated during training. 
""" self.model.encoder._freeze_parameters() @add_start_docstrings_to_model_forward(WHISPER_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=Seq2SeqLMOutput, config_class=_CONFIG_FOR_DOC) def forward( self, input_features: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.LongTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, head_mask: Optional[torch.Tensor] = None, decoder_head_mask: Optional[torch.Tensor] = None, cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, decoder_inputs_embeds: Optional[Tuple[torch.FloatTensor]] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple[torch.Tensor], Seq2SeqLMOutput]: r""" labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`. Returns: Example: ```python >>> import torch >>> from transformers import AutoProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>> processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = processor(ds[0]["audio"]["array"], return_tensors="pt") >>> input_features = inputs.input_features >>> generated_ids = model.generate(inputs=input_features) >>> transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] >>> transcription ' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.' 
```""" return_dict = return_dict if return_dict is not None else self.config.use_return_dict if labels is not None: if decoder_input_ids is None and decoder_inputs_embeds is None: decoder_input_ids = shift_tokens_right( labels, self.config.pad_token_id, self.config.decoder_start_token_id ) if self.model.training: _precision_matrices = self._diag_fisher(input_features, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, decoder_inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict, labels) outputs = self.model( input_features, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_outputs, decoder_attention_mask=decoder_attention_mask, head_mask=head_mask, decoder_head_mask=decoder_head_mask, cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, decoder_inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) lm_logits = self.proj_out(outputs[0]) #print(outputs.encoder_hidden_states) loss = None loss_ewc = 0 asr_loss = None if labels is not None: loss_fct_asr = CrossEntropyLoss() # move labels to correct device to enable PP labels = labels.to(lm_logits.device) asr_loss = loss_fct_asr(lm_logits.view(-1, self.config.vocab_size), labels.reshape(-1)) #ewc if self.model.training: for n, p in self.model.encoder.named_parameters(): #print(_precision_matrices[n]) #print(p) #print(self._means[n]) _loss = _precision_matrices[n] * (p - self._means[n].to(lm_logits.device)) ** 2 #print("1") loss_ewc += _loss.sum() #print(loss_ewc) loss = asr_loss + 2 * loss_ewc if not return_dict: output = (lm_logits,) + outputs[1:] return ((loss,) + output) if loss is not None else output return Seq2SeqLMOutput( loss=loss, logits=lm_logits, past_key_values=outputs.past_key_values, decoder_hidden_states=outputs.decoder_hidden_states, decoder_attentions=outputs.decoder_attentions, cross_attentions=outputs.cross_attentions, encoder_last_hidden_state=outputs.encoder_last_hidden_state, encoder_hidden_states=outputs.encoder_hidden_states, encoder_attentions=outputs.encoder_attentions, ) def generate( self, inputs: Optional[torch.Tensor] = None, generation_config=None, logits_processor=None, stopping_criteria=None, prefix_allowed_tokens_fn=None, synced_gpus=False, return_timestamps=None, task=None, language=None, is_multilingual=None, **kwargs, ): """ Generates sequences of token ids for models with a language modeling head. <Tip warning={true}> Most generation-controlling parameters are set in `generation_config` which, if not passed, will be set to the model's default generation configuration. You can override any `generation_config` by passing the corresponding parameters to generate(), e.g. `.generate(inputs, num_beams=4, do_sample=True)`. For an overview of generation strategies and code examples, check out the [following guide](./generation_strategies). </Tip> Parameters: inputs (`torch.Tensor` of varying shape depending on the modality, *optional*): The sequence used as a prompt for the generation or as model inputs to the encoder. If `None` the method initializes it with `bos_token_id` and a batch size of 1. For decoder-only models `inputs` should of in the format of `input_ids`. For encoder-decoder models *inputs* can represent any of `input_ids`, `input_values`, `input_features`, or `pixel_values`. 
generation_config (`~generation.GenerationConfig`, *optional*): The generation configuration to be used as base parametrization for the generation call. `**kwargs` passed to generate matching the attributes of `generation_config` will override them. If `generation_config` is not provided, the default will be used, which had the following loading priority: 1) from the `generation_config.json` model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit [`~generation.GenerationConfig`]'s default values, whose documentation should be checked to parameterize generation. logits_processor (`LogitsProcessorList`, *optional*): Custom logits processors that complement the default logits processors built from arguments and generation config. If a logit processor is passed that is already created with the arguments or a generation config an error is thrown. This feature is intended for advanced users. stopping_criteria (`StoppingCriteriaList`, *optional*): Custom stopping criteria that complement the default stopping criteria built from arguments and a generation config. If a stopping criteria is passed that is already created with the arguments or a generation config an error is thrown. This feature is intended for advanced users. prefix_allowed_tokens_fn (`Callable[[int, torch.Tensor], List[int]]`, *optional*): If provided, this function constraints the beam search to allowed tokens only at each step. If not provided no constraint is applied. This function takes 2 arguments: the batch ID `batch_id` and `input_ids`. It has to return a list with the allowed tokens for the next generation step conditioned on the batch ID `batch_id` and the previously generated tokens `inputs_ids`. This argument is useful for constrained generation conditioned on the prefix, as described in [Autoregressive Entity Retrieval](https://arxiv.org/abs/2010.00904). synced_gpus (`bool`, *optional*, defaults to `False`): Whether to continue running the while loop until max_length (needed for ZeRO stage 3) return_timestamps (`bool`, *optional*): Whether to return the timestamps with the text. This enables the `WhisperTimestampsLogitsProcessor`. task (`bool`, *optional*): Task to use for generation, either "translate" or "transcribe". The `model.config.forced_decoder_ids` will be updated accordingly. language (`bool`, *optional*): Language token to use for generation, can be either in the form of `<|en|>`, `en` or `english`. You can find all the possible language tokens in the `model.generation_config.lang_to_id` dictionary. is_multilingual (`bool`, *optional*): Whether or not the model is multilingual. kwargs: Ad hoc parametrization of `generate_config` and/or additional model-specific kwargs that will be forwarded to the `forward` function of the model. If the model is an encoder-decoder model, encoder specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with *decoder_*. Return: [`~utils.ModelOutput`] or `torch.LongTensor`: A [`~utils.ModelOutput`] (if `return_dict_in_generate=True` or when `config.return_dict_in_generate=True`) or a `torch.FloatTensor`. 
If the model is *not* an encoder-decoder model (`model.config.is_encoder_decoder=False`), the possible [`~utils.ModelOutput`] types are: - [`~generation.GreedySearchDecoderOnlyOutput`], - [`~generation.SampleDecoderOnlyOutput`], - [`~generation.BeamSearchDecoderOnlyOutput`], - [`~generation.BeamSampleDecoderOnlyOutput`] If the model is an encoder-decoder model (`model.config.is_encoder_decoder=True`), the possible [`~utils.ModelOutput`] types are: - [`~generation.GreedySearchEncoderDecoderOutput`], - [`~generation.SampleEncoderDecoderOutput`], - [`~generation.BeamSearchEncoderDecoderOutput`], - [`~generation.BeamSampleEncoderDecoderOutput`] """ if generation_config is None: generation_config = self.generation_config if return_timestamps is not None: if not hasattr(generation_config, "no_timestamps_token_id"): raise ValueError( "You are trying to return timestamps, but the generation config is not properly set." "Make sure to initialize the generation config with the correct attributes that are needed such as `no_timestamps_token_id`." "For more details on how to generate the approtiate config, refer to https://github.com/huggingface/transformers/issues/21878#issuecomment-1451902363" ) generation_config.return_timestamps = return_timestamps else: generation_config.return_timestamps = False if language is not None: language = language.lower() generation_config.language = language if task is not None: generation_config.task = task forced_decoder_ids = [] if task is not None or language is not None: if hasattr(generation_config, "language"): if generation_config.language in generation_config.lang_to_id.keys(): language_token = generation_config.language elif generation_config.language in TO_LANGUAGE_CODE.keys(): language_token = f"<|{TO_LANGUAGE_CODE[generation_config.language]}|>" elif generation_config.language in TO_LANGUAGE_CODE.values(): language_token = f"<|{generation_config.language}|>" else: is_language_code = len(generation_config.language) == 2 raise ValueError( f"Unsupported language: {generation_config.language}. Language should be one of:" f" {list(TO_LANGUAGE_CODE.values()) if is_language_code else list(TO_LANGUAGE_CODE.keys())}." ) forced_decoder_ids.append((1, generation_config.lang_to_id[language_token])) else: forced_decoder_ids.append((1, None)) # automatically detect the language if hasattr(generation_config, "task"): if generation_config.task in TASK_IDS: forced_decoder_ids.append((2, generation_config.task_to_id[generation_config.task])) else: raise ValueError( f"The `{generation_config.task}`task is not supported. 
The task should be one of `{TASK_IDS}`" ) else: forced_decoder_ids.append((2, generation_config.task_to_id["transcribe"])) # defaults to transcribe if hasattr(generation_config, "no_timestamps_token_id") and not generation_config.return_timestamps: idx = forced_decoder_ids[-1][0] + 1 if forced_decoder_ids else 1 forced_decoder_ids.append((idx, generation_config.no_timestamps_token_id)) # Legacy code for backward compatibility elif hasattr(self.config, "forced_decoder_ids") and self.config.forced_decoder_ids is not None: forced_decoder_ids = self.config.forced_decoder_ids elif ( hasattr(self.generation_config, "forced_decoder_ids") and self.generation_config.forced_decoder_ids is not None ): forced_decoder_ids = self.generation_config.forced_decoder_ids if generation_config.return_timestamps: logits_processor = [WhisperTimeStampLogitsProcessor(generation_config)] if len(forced_decoder_ids) > 0: generation_config.forced_decoder_ids = forced_decoder_ids return super().generate( inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, **kwargs, ) def prepare_inputs_for_generation( self, decoder_input_ids, past_key_values=None, use_cache=None, encoder_outputs=None, attention_mask=None, **kwargs, ): # cut decoder_input_ids if past is used if past_key_values is not None: decoder_input_ids = decoder_input_ids[:, -1:] return { "encoder_outputs": encoder_outputs, "past_key_values": past_key_values, "decoder_input_ids": decoder_input_ids, "use_cache": use_cache, "decoder_attention_mask": None, } # @staticmethod def _reorder_cache(past_key_values, beam_idx): reordered_past = () for layer_past in past_key_values: reordered_past += (tuple(past_state.index_select(0, beam_idx) for past_state in layer_past),) return reordered_past @add_start_docstrings( """ Whisper Encoder Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting. """, WHISPER_ENCODER_INPUTS_DOCSTRING, ) class WhisperForAudioClassification(WhisperPreTrainedModel): def __init__(self, config): super().__init__(config) self.encoder = WhisperEncoder(config) num_layers = config.num_hidden_layers + 1 # transformer layers + input embeddings if config.use_weighted_layer_sum: self.layer_weights = nn.Parameter(torch.ones(num_layers) / num_layers) self.projector = nn.Linear(config.hidden_size, config.classifier_proj_size) self.classifier = nn.Linear(config.classifier_proj_size, config.num_labels) # Initialize weights and apply final processing self.post_init() def freeze_encoder(self): """ Calling this function will disable the gradient computation for the Whisper encoder so that its parameters will not be updated during training. Only the projection layers and classification head will be updated. 
""" self.encoder._freeze_parameters() def get_input_embeddings(self) -> nn.Module: return self.encoder.get_input_embeddings() def set_input_embeddings(self, value: nn.Module): self.encoder.set_input_embeddings(value) @add_start_docstrings_to_model_forward(WHISPER_ENCODER_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=SequenceClassifierOutput, config_class=_CONFIG_FOR_DOC) def forward( self, input_features: Optional[torch.LongTensor] = None, head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple[torch.Tensor], SequenceClassifierOutput]: r""" labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). Returns: Example: ```python >>> import torch >>> from transformers import AutoFeatureExtractor, WhisperForAudioClassification >>> from datasets import load_dataset >>> feature_extractor = AutoFeatureExtractor.from_pretrained("sanchit-gandhi/whisper-medium-fleurs-lang-id") >>> model = WhisperForAudioClassification.from_pretrained("sanchit-gandhi/whisper-medium-fleurs-lang-id") >>> ds = load_dataset("google/fleurs", "all", split="validation", streaming=True) >>> sample = next(iter(ds)) >>> inputs = feature_extractor( ... sample["audio"]["array"], sampling_rate=sample["audio"]["sampling_rate"], return_tensors="pt" ... ) >>> input_features = inputs.input_features >>> with torch.no_grad(): ... logits = model(input_features).logits >>> predicted_class_ids = torch.argmax(logits).item() >>> predicted_label = model.config.id2label[predicted_class_ids] >>> predicted_label 'af_za' ```""" output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict if encoder_outputs is None: encoder_outputs = self.encoder( input_features, head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) if self.config.use_weighted_layer_sum: hidden_states = torch.stack(encoder_outputs, dim=1) norm_weights = nn.functional.softmax(self.layer_weights, dim=-1) hidden_states = (hidden_states * norm_weights.view(-1, 1, 1)).sum(dim=1) else: hidden_states = encoder_outputs[0] hidden_states = self.projector(hidden_states) pooled_output = hidden_states.mean(dim=1) logits = self.classifier(pooled_output) loss = None if labels is not None: loss_fct = CrossEntropyLoss() # move labels to correct device to enable PP labels = labels.to(logits.device) loss = loss_fct(logits.view(-1, self.config.num_labels), labels.view(-1)) if not return_dict: output = (logits,) + encoder_outputs[1:] return ((loss,) + output) if loss is not None else output return SequenceClassifierOutput( loss=loss, logits=logits, hidden_states=encoder_outputs.hidden_states, attentions=encoder_outputs.attentions, )
transformers
23,879
closed
Distillation training for Arabic language
### System Info I encountered two issues while attempting to run the `binarized_data.py` and `train.py` scripts for the Knowledge Distillation of BERT Language Model on the Arabic Language project. Below are the details of each issue: 1. In the `binarized_data.py` script, I had to modify line 83 to make it work. The original line is: ```python dp_file = f"{args.dump_file}.{args.tokenizer_name}.pickle" ``` However, I had to remove the `tokenizer_name` variable and change the line to: ```python dp_file = f"{args.dump_file}.pickle" ``` This change was necessary because the Arabic BERT model name, "asafaya/bert-large-arabic," contains a forward slash ("/"), which caused errors when it was concatenated into the dump file name via the `tokenizer_name` variable. 2. In the `train.py` script, I made a modification on line 258. The original line is: ```python args.max_model_input_size = tokenizer.max_model_input_sizes[args.teacher_name] ``` However, I had to change it to: ```python args.max_model_input_size = tokenizer.max_model_input_sizes['bert-large-uncased'] ``` This modification was necessary because I am using different model configurations than those listed in the folder. It would be helpful if the script could be modified to automatically work with the intended config, allowing for more flexibility. Apart from these script modifications, I made the necessary changes to the config files to match the different models I am using. This is expected, since I am using a model with a different config than the one listed in the folder; perhaps the script could be modified to download and locate the necessary config file automatically. Please let me know if there are any further clarifications needed or if you require additional information to address these issues. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Here is a link to the Google Colab notebook that has the problem: https://colab.research.google.com/drive/1OqSvRNMl0-Z7ScCd6hLbPHMO-ZXT3WEw?usp=sharing ### Expected behavior Training should start smoothly, and the script should be able to handle model names which contain '/'
05-31-2023 01:54:22
05-31-2023 01:54:22
Please use the [forums](https://discuss.huggingface.co/) for such questions. This is not a maintained example.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
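Instead of dropping the tokenizer name from the dump path entirely, a small sanitization step keeps that information while avoiding the slash problem described above. This is only a sketch against the reporter's modified line, not the maintained distillation script; the argument names and defaults follow the reporter's description:

```python
import argparse
from pathlib import Path

parser = argparse.ArgumentParser()
parser.add_argument("--dump_file", default="data/binarized_text")
parser.add_argument("--tokenizer_name", default="asafaya/bert-large-arabic")
args = parser.parse_args([])  # defaults only, for illustration

# Replace the path separator in the Hub model id so it can live inside a filename.
safe_name = args.tokenizer_name.replace("/", "--")
dp_file = f"{args.dump_file}.{safe_name}.pickle"

Path(dp_file).parent.mkdir(parents=True, exist_ok=True)
print(dp_file)  # data/binarized_text.asafaya--bert-large-arabic.pickle
```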
transformers
23,878
closed
[i18n] Translated `attention.mdx` to Korean
# What does this PR do? Translated attention.mdx file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. [[lowercased-header]]) - [x] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review?(Initial) Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review?(Final) @sgugger, @ArthurZucker, @eunseojo May you please review this PR? <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-31-2023 01:53:01
05-31-2023 01:53:01
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,877
open
Cannot reproduce results for Pix2struct on InfographicVQA
I am using the `pix2struct-infographics-vqa-base` and `pix2struct-infographics-vqa-large` models here and doing inference on InfographicsVQA. However, I get 29.53 ANLS for the base model and 34.31 ANLS for the large model, which do not match the 38.2 and 40.0 reported in the original paper. Could anyone help with this? Here is my inference code: ``` import requests from PIL import Image import torch from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-infographics-vqa-base").to("cuda") processor = Pix2StructProcessor.from_pretrained("google/pix2struct-infographics-vqa-base") image_url = "https://blogs.constantcontact.com/wp-content/uploads/2019/03/Social-Media-Infographic.png" image = Image.open(requests.get(image_url, stream=True).raw) question = "Which social platform has heavy female audience?" inputs = processor(images=image, text=question, return_tensors="pt").to("cuda") predictions = model.generate(**inputs) pred = processor.decode(predictions[0], skip_special_tokens=True) gt = 'pinterest' print(pred) ```
05-30-2023 22:10:21
05-30-2023 22:10:21
cc @younesbelkada <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Gentle ping @younesbelkada <|||||>Hi everyone, sadly I won't have the bandwidth to properly dig into this right now. @Lizw14, do you still face the same issue when using the main branch of `transformers`? ``` pip install git+https://github.com/huggingface/transformers.git ```<|||||>@Lizw14, quickly going back to the issue: can you double-check that you used the same hyperparameters as the ones presented in the paper? For example, what sequence length are you using? In what precision do you load the model (fp32, fp16, bf16, int8)? Ideally, can you share the full script you use to reproduce the results of the paper? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
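As a concrete reference point for the questions above about sequence length and precision: in the Hugging Face implementation, the image sequence length is controlled by `max_patches` in the processor call and the answer length by the generation arguments. The sketch below shows where those knobs sit; the specific values (2048 patches, 64 new tokens, 4 beams, fp32) are illustrative assumptions, not the paper's settings.

```python
import requests
import torch
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

model_id = "google/pix2struct-infographics-vqa-base"
model = Pix2StructForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.float32).to("cuda")
processor = Pix2StructProcessor.from_pretrained(model_id)

image_url = "https://blogs.constantcontact.com/wp-content/uploads/2019/03/Social-Media-Infographic.png"
image = Image.open(requests.get(image_url, stream=True).raw)
question = "Which social platform has heavy female audience?"

# max_patches sets the effective image sequence length seen by the encoder.
inputs = processor(images=image, text=question, max_patches=2048, return_tensors="pt").to("cuda")

# Generation settings also affect scores when answers get truncated.
predictions = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(processor.decode(predictions[0], skip_special_tokens=True))
```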
transformers
23,876
open
`.to_dict` does not correctly serialize `torch.dtype` in some cases (e.g., vision models)
### System Info - `transformers` version: 4.29.1 - Platform: Windows-10 - Python version: 3.8.3 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger @ArthurZucker @amyeroberts ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Running: ```python import json from transformers import AutoConfig json_data = AutoConfig.from_pretrained('openai/clip-vit-base-patch16').to_dict() json.dumps(json_data, indent=4) ``` Results in ``` TypeError: Object of type dtype is not JSON serializable ``` --- I have identified this problem with the following models: - `clip` - `sam` - `vision-encoder-decoder` ### Expected behavior torch dtypes should be converted to a string. I believe this is due to these configs redefining their `to_dict` method, without calling `dict_torch_dtype_to_str` on the top-level object. https://github.com/huggingface/transformers/blob/de9255de27abfcae4a1f816b904915f0b1e23cd9/src/transformers/models/clip/configuration_clip.py#L397-L408
05-30-2023 22:02:51
05-30-2023 22:02:51
Hey! Indeed, the `PretrainedConfig` class calls `dict_torch_dtype_to_str`, and the `text_config` and `vision_config` inherit from it, so they work fine; it is the parent's own `torch_dtype` attribute that is not converted, because the overridden `to_dict` doesn't call `dict_torch_dtype_to_str` on the top-level object. Thanks for reporting. The configs should be automatically tested IMO; this is currently not the case. It seems that for BLIP, only the text config is tested, which is why this does not fail in the tests. 10 models or more are concerned (mostly when `is_composition=True`). I'll open a PR to fix this.<|||||>Commenting to prevent it being closed as stale.<|||||>Yep, sorry, I'll try to get to the original fix taking the comment into account!
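Until a fix lands, one user-side workaround is to let `json` stringify any object it cannot serialize, which turns a `torch.dtype` such as `torch.float32` into the string `"torch.float32"`:

```python
import json
from transformers import AutoConfig

config_dict = AutoConfig.from_pretrained("openai/clip-vit-base-patch16").to_dict()

# `default` is called for any object json cannot serialize, e.g. torch.dtype,
# so torch.float32 is emitted as the string "torch.float32".
print(json.dumps(config_dict, indent=4, default=str))
```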
transformers
23,875
closed
Changed "perplexity" to "eval_perplexity"
# What does this PR do? Modifies training scripts in examples/pytorch/language-modelling so that `perplexity` is correctly logged to wandb. Since the metrics don't contain an eval_ prefix in the metrics dictionary they are not logged. <!-- Remove if not applicable --> Fixes # https://github.com/huggingface/transformers/issues/23593 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-30-2023 21:44:10
05-30-2023 21:44:10
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23875). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
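For anyone applying the same change to their own copy of the example scripts, the essence is to compute perplexity from `eval_loss` and store it under an `eval_`-prefixed key so evaluation loggers treat it as an eval metric. A minimal, self-contained sketch (the surrounding `Trainer` plumbing from `run_clm.py`-style scripts is omitted):

```python
import math
from typing import Dict

def add_eval_perplexity(metrics: Dict[str, float]) -> Dict[str, float]:
    """Store perplexity under an `eval_`-prefixed key so eval loggers pick it up."""
    try:
        perplexity = math.exp(metrics["eval_loss"])
    except OverflowError:
        perplexity = float("inf")
    metrics["eval_perplexity"] = perplexity
    return metrics

# Example with a fake evaluation result:
print(add_eval_perplexity({"eval_loss": 2.3026}))  # eval_perplexity ~= 10.0
```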
transformers
23,874
closed
Code formatting issue with Korean translation of quicktour.mdx
### System Info N/A ### Reproduction In the Korean translation of quicktour.mdx found [here](https://github.com/huggingface/transformers/blob/main/docs/source/ko/quicktour.mdx), there is a small formatting issue in the bash commands to install PyTorch and TensorFlow. ![image](https://github.com/huggingface/transformers/assets/36463300/3f750ef3-ba13-4977-b8e5-52ec7d8d396e) ### Expected behavior - The install commands should be rendered as code blocks in the markdown file. - The formatting can be fixed by adding a newline/return between the two commands like this: ```bash pip install torch``` ```bash pip install tensorflow``` Furthermore, these commands can be simplified to one line by using the following syntax, which will install both PyTorch and TensorFlow: ```!pip install torch tensorflow```
05-30-2023 19:59:15
05-30-2023 19:59:15
Hey! Thanks for reporting, feel free to open a PR and ping me ๐Ÿ˜‰ <|||||>It looks like this issue was resolved on the docs website https://huggingface.co/docs/transformers/v4.30.0/ko/quicktour. Closing this issue
transformers
23,873
closed
RWKV bug for 8-bit model fine-tuning.
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younesbelkada @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The 8-bit model inference works successfully, but after fine-tuning, the model fails when inferring it again. Reproduction: ``` import torch from transformers import AutoTokenizer, RwkvForCausalLM, GenerationConfig from torch.optim import AdamW model = RwkvForCausalLM.from_pretrained("RWKV/rwkv-raven-1b5", device_map="auto", torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-1b5") optim = AdamW(model.parameters(), lr=1e-4) ctx = "Hello my name is Bob" inputs = tokenizer(ctx, return_tensors="pt").to(0) model(inputs["input_ids"]) # ok model.train() outputs = model(inputs["input_ids"], labels=inputs["input_ids"]) loss = outputs.loss loss.backward() optim.step() model.eval() model(inputs["input_ids"]) # failed ``` or see my colab code as follows: https://colab.research.google.com/drive/1l_vNHPd9_Z40dPkhIj5CxgrLhIn1Edyc?usp=sharing ### Expected behavior After fine-tuning, the model should still work properly.
05-30-2023 19:58:35
05-30-2023 19:58:35
Hi @LetianLee Thanks for the issue! In fact, you cannot train a model that has been purely loaded in 8bit. In order to apply fine tuning using 8bit / 4bit models, you need to add adapters on top of the model and train these adapters only. Please check out the official example of PEFT: https://github.com/huggingface/peft/blob/main/examples/int8_training/Finetune_opt_bnb_peft.ipynb and adapt it to your needs. You may need to manually specify `target_modules=["key", "value", "receptance"]` when defining the `LoraConfig`. Please let us know how it goes<|||||>Hi @younesbelkada , Thank you for your kind reply and explanation. Since this is the case, I will close this ticket as it is not an issue related to Hugging Face/Transformers. Thank you very much for providing the relevant tutorial. I will now proceed to try the Lora fine-tuning. Thanks!<|||||>Thanks so much @LetianLee !
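To make the suggestion above concrete, here is a sketch of attaching LoRA adapters to the 8-bit RWKV model and training only those. The hyperparameters (rank, alpha, dropout) are illustrative, and the target module names follow the comment above rather than any official recipe.

```python
from transformers import AutoTokenizer, RwkvForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training

model = RwkvForCausalLM.from_pretrained("RWKV/rwkv-raven-1b5", device_map="auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-1b5")

# Freeze the int8 base weights and cast norms for training stability.
model = prepare_model_for_int8_training(model)

lora_config = LoraConfig(
    r=8,                     # illustrative rank
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["key", "value", "receptance"],  # names suggested in the comment above
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

inputs = tokenizer("Hello my name is Bob", return_tensors="pt").to(0)
outputs = model(**inputs, labels=inputs["input_ids"])
outputs.loss.backward()  # gradients flow only into the LoRA adapter weights
```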
transformers
23,872
closed
Raise error if loss can't be calculated - ViT MIM
# What does this PR do? Currently, `ViTForMaskedImageModeling` will fail when calculating the reconstruction loss if a patch size other than 16 is chosen. This is because the decoder head is parametrized by `config.encoder_stride`, which controls the resolution of the upsampled image. By default, `config.patch_size = config.encoder_stride = 16`. If a user updates the patch size but not the encoder stride to match, the reconstructed image will have a different resolution. This PR adds a warning for the user before the forward pass, explaining why the loss calculation won't work. Fixes #23832 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
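To make the resolution mismatch concrete, here is a small illustrative snippet; the concrete values are assumptions chosen for the example, not taken from the PR:

```python
# Illustration of the mismatch: with patch_size != encoder_stride, the
# decoder's upsampled reconstruction no longer matches the input size.
from transformers import ViTConfig

config = ViTConfig(image_size=224, patch_size=8, encoder_stride=16)

patches_per_side = config.image_size // config.patch_size        # 28
reconstructed_size = patches_per_side * config.encoder_stride    # 448

# The reconstruction loss would compare a 448x448 output to 224x224 pixel
# values, so it cannot be computed; with patch_size == encoder_stride == 16,
# both sides are 224 and the loss works as expected.
print(reconstructed_size, config.image_size)
```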
05-30-2023 19:51:34
05-30-2023 19:51:34
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,871
closed
Support shared tensors
# What does this PR do? - Fixes #23868 We can uniquely hash the storage by computing `data_ptr()` and `nbytes()` since storages are 1D contiguous buffers. We use that to find tensors that share the same storage. And if we do find them, we put those within the same "block" relying on underlying code to optimize serialization.
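A rough sketch of the detection idea described above (not the actual safetensors/transformers implementation): tensors that view the same underlying storage share a device, a storage pointer, and a byte size, so grouping by that triple reveals tied weights.

```python
# Rough sketch only: group tensor names by (device, storage pointer, nbytes).
from collections import defaultdict

from transformers import GPT2Config, GPT2LMHeadModel


def find_shared_tensor_groups(state_dict):
    groups = defaultdict(list)
    for name, tensor in state_dict.items():
        # untyped_storage() on recent torch, storage() on older versions
        storage = (
            tensor.untyped_storage()
            if hasattr(tensor, "untyped_storage")
            else tensor.storage()
        )
        groups[(tensor.device, storage.data_ptr(), storage.nbytes())].append(name)
    return [names for names in groups.values() if len(names) > 1]


model = GPT2LMHeadModel(GPT2Config())
print(find_shared_tensor_groups(model.state_dict()))
# e.g. [['transformer.wte.weight', 'lm_head.weight']] when embeddings are tied
```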
05-30-2023 18:02:57
05-30-2023 18:02:57
_The documentation is not available anymore as the PR was closed or merged._<|||||>Failing test seems unrelated?<|||||>Nice 🔥 <|||||>Seems like this requires the latest version of safetensors, otherwise you get ```python E RuntimeError: Failed to import transformers.modeling_utils because of the following error (look up to see its traceback): E cannot import name 'storage_ptr' from 'safetensors.torch' (/opt/conda/envs/py39/lib/python3.9/site-packages/safetensors/torch.py) ``` should probably update the setup? <|||||>Yes @muellerzr made a PR: #23911 <|||||>Woops! Thank you @muellerzr !
transformers
23,870
closed
Importing transformers 4.29.2 slows down PyTorch DataLoader's multi-processing significantly
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.15.0-70-generic-x86_64-with-glibc2.35 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <no> no - Using distributed or parallel set-up in script?: <yes> yes ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The issue was first reported to ```PyTorch```; I then found it is caused by ```transformers``` [Original Issue](https://github.com/pytorch/pytorch/issues/102494) The code below takes 23.6 seconds with only 2 CPU cores fully used, even though transformers is imported but never actually used. ``` python import transformers # imported but not used import torch import torchvision.datasets as datasets import torchvision.transforms as transforms trans = transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]) dataset = datasets.FakeData(size=10000, transform=trans) loader = torch.utils.data.DataLoader( dataset, batch_size=128, shuffle=True, num_workers=12, sampler=None) i = 0 for d in loader: print("Batch {}".format(i)) i += 1 # takes 23.6 seconds ``` By importing ```torch``` before ```transformers```, the CPU is fully used and the loop takes only 5.4 seconds. ``` python import torch import torchvision.datasets as datasets import torchvision.transforms as transforms import transformers trans = transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]) dataset = datasets.FakeData(size=10000, transform=trans) loader = torch.utils.data.DataLoader( dataset, batch_size=128, shuffle=True, num_workers=12, sampler=None) i = 0 for d in loader: print("Batch {}".format(i)) i += 1 # takes only 5.4 seconds ``` ### Expected behavior The aforementioned issue happens with ```transformers 4.29.2```. I tested 4.26.1 as well and it works fine. I expect the multi-processing DataLoader to fully use my CPU so that data processing is faster.
05-30-2023 16:53:44
05-30-2023 16:53:44
Both take the same time on my side, so it's not just Transformers but some external library causing the problem. Could you share your full env?<|||||>> Both take the same time on my side, so it's not just Transformers but some external library causing the problem. Could you share your full env? Thanks for your reply! Here is the env generated by Pytorch env script: ``` PyTorch version: 2.0.1 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.2 LTS (x86_64) GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.35 Python version: 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.15.0-70-generic-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090 Ti Nvidia driver version: 515.65.01 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 43 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 16 On-line CPU(s) list: 0-15 Vendor ID: AuthenticAMD Model name: AMD Ryzen 7 3700X 8-Core Processor CPU family: 23 Model: 113 Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 1 Stepping: 0 Frequency boost: enabled CPU max MHz: 3600.0000 CPU min MHz: 2200.0000 BogoMIPS: 7199.26 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es Virtualisation: AMD-V L1d cache: 256 KiB (8 instances) L1i cache: 256 KiB (8 instances) L2 cache: 4 MiB (8 instances) L3 cache: 32 MiB (2 instances) NUMA node(s): 1 NUMA node0 CPU(s): 0-15 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.24.3 [pip3] torch==2.0.1 [pip3] torchaudio==2.0.2 [pip3] torchvision==0.15.2 [pip3] triton==2.0.0 [conda] blas 1.0 mkl [conda] ffmpeg 4.3 
hf484d3e_0 pytorch [conda] mkl 2023.1.0 h6d00ec8_46342 [conda] mkl-service 2.4.0 py310h5eee18b_1 [conda] mkl_fft 1.3.6 py310h1128e8f_1 [conda] mkl_random 1.2.2 py310h1128e8f_1 [conda] numpy 1.24.3 py310h5f9d8c6_1 [conda] numpy-base 1.24.3 py310hb5e798b_1 [conda] pytorch 2.0.1 py3.10_cuda11.7_cudnn8.5.0_0 pytorch [conda] pytorch-cuda 11.7 h778d358_5 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchaudio 2.0.2 py310_cu117 pytorch [conda] torchtriton 2.0.0 py310 pytorch [conda] torchvision 0.15.2 py310_cu117 pytorch ``` Here is my conda environment: ``` name: pt2hfpy310 channels: - pytorch - huggingface - nvidia - conda-forge - defaults dependencies: - _libgcc_mutex=0.1=main - _openmp_mutex=5.1=1_gnu - abseil-cpp=20211102.0=h27087fc_1 - aiosignal=1.3.1=pyhd8ed1ab_0 - anyio=3.5.0=py310h06a4308_0 - argon2-cffi=21.3.0=pyhd3eb1b0_0 - argon2-cffi-bindings=21.2.0=py310h7f8727e_0 - arrow-cpp=11.0.0=py310h7516544_0 - asttokens=2.0.5=pyhd3eb1b0_0 - async-timeout=4.0.2=pyhd8ed1ab_0 - attrs=23.1.0=pyh71513ae_1 - aws-c-common=0.4.57=he6710b0_1 - aws-c-event-stream=0.1.6=h2531618_5 - aws-checksums=0.1.9=he6710b0_0 - aws-sdk-cpp=1.8.185=hce553d0_0 - babel=2.11.0=py310h06a4308_0 - backcall=0.2.0=pyhd3eb1b0_0 - beautifulsoup4=4.12.2=py310h06a4308_0 - blas=1.0=mkl - bleach=4.1.0=pyhd3eb1b0_0 - boost-cpp=1.65.1=0 - bottleneck=1.3.5=py310ha9d4c09_0 - brotli=1.0.9=he6710b0_2 - brotlipy=0.7.0=py310h7f8727e_1002 - bzip2=1.0.8=h7b6447c_0 - c-ares=1.19.0=h5eee18b_0 - ca-certificates=2023.01.10=h06a4308_0 - certifi=2023.5.7=py310h06a4308_0 - cffi=1.15.1=py310h5eee18b_3 - charset-normalizer=2.0.4=pyhd3eb1b0_0 - click=8.0.4=py310h06a4308_0 - comm=0.1.2=py310h06a4308_0 - contourpy=1.0.5=py310hdb19cb5_0 - cryptography=39.0.1=py310h9ce1e76_0 - cuda-cudart=11.7.99=0 - cuda-cupti=11.7.101=0 - cuda-libraries=11.7.1=0 - cuda-nvrtc=11.7.99=0 - cuda-nvtx=11.7.91=0 - cuda-runtime=11.7.1=0 - cycler=0.11.0=pyhd3eb1b0_0 - dataclasses=0.8=pyh6d0b6a4_7 - datasets=2.12.0=py_0 - dbus=1.13.18=hb2f20db_0 - debugpy=1.5.1=py310h295c915_0 - decorator=5.1.1=pyhd3eb1b0_0 - defusedxml=0.7.1=pyhd3eb1b0_0 - dill=0.3.6=pyhd8ed1ab_1 - entrypoints=0.4=py310h06a4308_0 - executing=0.8.3=pyhd3eb1b0_0 - expat=2.4.9=h6a678d5_0 - ffmpeg=4.3=hf484d3e_0 - filelock=3.9.0=py310h06a4308_0 - fontconfig=2.14.1=h52c9d5c_1 - fonttools=4.25.0=pyhd3eb1b0_0 - freetype=2.12.1=h4a9f257_0 - frozenlist=1.3.3=py310h5eee18b_0 - fsspec=2023.5.0=pyh1a96a4e_0 - gflags=2.2.2=he1b5a44_1004 - giflib=5.2.1=h5eee18b_3 - glib=2.69.1=he621ea3_2 - glog=0.5.0=h48cff8f_0 - gmp=6.2.1=h295c915_3 - gmpy2=2.1.2=py310heeb90bb_0 - gnutls=3.6.15=he1e5248_0 - grpc-cpp=1.46.1=h33aed49_1 - gst-plugins-base=1.14.1=h6a678d5_1 - gstreamer=1.14.1=h5eee18b_1 - huggingface_hub=0.14.1=py_0 - icu=58.2=hf484d3e_1000 - idna=3.4=py310h06a4308_0 - importlib-metadata=6.0.0=py310h06a4308_0 - importlib_metadata=6.0.0=hd3eb1b0_0 - intel-openmp=2023.1.0=hdb19cb5_46305 - ipykernel=6.19.2=py310h2f386ee_0 - ipython=8.12.0=py310h06a4308_0 - ipython_genutils=0.2.0=pyhd3eb1b0_1 - ipywidgets=8.0.4=py310h06a4308_0 - jedi=0.18.1=py310h06a4308_1 - jinja2=3.1.2=py310h06a4308_0 - joblib=1.1.1=py310h06a4308_0 - jpeg=9e=h5eee18b_1 - json5=0.9.6=pyhd3eb1b0_0 - jsonschema=4.17.3=py310h06a4308_0 - jupyter=1.0.0=py310h06a4308_8 - jupyter_client=8.1.0=py310h06a4308_0 - jupyter_console=6.6.3=py310h06a4308_0 - jupyter_core=5.3.0=py310h06a4308_0 - jupyter_server=1.23.4=py310h06a4308_0 - jupyterlab=3.5.3=py310h06a4308_0 - jupyterlab_pygments=0.1.2=py_0 - jupyterlab_server=2.22.0=py310h06a4308_0 - 
jupyterlab_widgets=3.0.5=py310h06a4308_0 - keyutils=1.6.1=h166bdaf_0 - kiwisolver=1.4.4=py310h6a678d5_0 - krb5=1.19.3=h3790be6_0 - lame=3.100=h7b6447c_0 - lcms2=2.12=h3be6417_0 - ld_impl_linux-64=2.38=h1181459_1 - lerc=3.0=h295c915_0 - libbrotlicommon=1.0.9=h166bdaf_7 - libbrotlidec=1.0.9=h166bdaf_7 - libbrotlienc=1.0.9=h166bdaf_7 - libclang=10.0.1=default_hb85057a_2 - libcublas=11.10.3.66=0 - libcufft=10.7.2.124=h4fbf590_0 - libcufile=1.6.1.9=0 - libcurand=10.3.2.106=0 - libcurl=7.87.0=h91b91d3_0 - libcusolver=11.4.0.1=0 - libcusparse=11.7.4.91=0 - libdeflate=1.17=h5eee18b_0 - libedit=3.1.20191231=he28a2e2_2 - libev=4.33=h516909a_1 - libevent=2.1.12=h8f2d780_0 - libffi=3.4.4=h6a678d5_0 - libgcc-ng=11.2.0=h1234567_1 - libgomp=11.2.0=h1234567_1 - libiconv=1.16=h7f8727e_2 - libidn2=2.3.4=h5eee18b_0 - libllvm10=10.0.1=hbcb73fb_5 - libnghttp2=1.46.0=hce63b2e_0 - libnpp=11.7.4.75=0 - libnvjpeg=11.8.0.2=0 - libpng=1.6.39=h5eee18b_0 - libpq=12.9=h16c4e8d_3 - libprotobuf=3.20.3=he621ea3_0 - libsodium=1.0.18=h7b6447c_0 - libssh2=1.10.0=ha56f1ee_2 - libstdcxx-ng=11.2.0=h1234567_1 - libtasn1=4.19.0=h5eee18b_0 - libthrift=0.15.0=hcc01f38_0 - libtiff=4.5.0=h6a678d5_2 - libunistring=0.9.10=h27cfd23_0 - libuuid=1.41.5=h5eee18b_0 - libwebp=1.2.4=h11a3e52_1 - libwebp-base=1.2.4=h5eee18b_1 - libxcb=1.15=h7f8727e_0 - libxkbcommon=1.0.1=hfa300c1_0 - libxml2=2.9.14=h74e7548_0 - libxslt=1.1.35=h4e12654_0 - lxml=4.9.1=py310h1edc446_0 - lz4-c=1.9.4=h6a678d5_0 - markupsafe=2.1.1=py310h7f8727e_0 - matplotlib=3.7.1=py310h06a4308_1 - matplotlib-base=3.7.1=py310h1128e8f_1 - matplotlib-inline=0.1.6=py310h06a4308_0 - mistune=0.8.4=py310h7f8727e_1000 - mkl=2023.1.0=h6d00ec8_46342 - mkl-service=2.4.0=py310h5eee18b_1 - mkl_fft=1.3.6=py310h1128e8f_1 - mkl_random=1.2.2=py310h1128e8f_1 - mpc=1.1.0=h10f8cd9_1 - mpfr=4.0.2=hb69a4c5_1 - multidict=6.0.2=py310h5eee18b_0 - multiprocess=0.70.14=py310h06a4308_0 - munkres=1.1.4=py_0 - nbclassic=0.5.5=py310h06a4308_0 - nbclient=0.5.13=py310h06a4308_0 - nbconvert=6.5.4=py310h06a4308_0 - nbformat=5.7.0=py310h06a4308_0 - ncurses=6.4=h6a678d5_0 - nest-asyncio=1.5.6=py310h06a4308_0 - nettle=3.7.3=hbbd107a_1 - networkx=2.8.4=py310h06a4308_1 - notebook=6.5.4=py310h06a4308_0 - notebook-shim=0.2.2=py310h06a4308_0 - nspr=4.33=h295c915_0 - nss=3.74=h0370c37_0 - numexpr=2.8.4=py310h85018f9_1 - numpy=1.24.3=py310h5f9d8c6_1 - numpy-base=1.24.3=py310hb5e798b_1 - openh264=2.1.1=h4ff587b_0 - openssl=1.1.1t=h7f8727e_0 - orc=1.7.4=hb3bc3d3_1 - packaging=23.0=py310h06a4308_0 - pandas=1.5.3=py310h1128e8f_0 - pandocfilters=1.5.0=pyhd3eb1b0_0 - parso=0.8.3=pyhd3eb1b0_0 - pcre=8.45=h295c915_0 - pexpect=4.8.0=pyhd3eb1b0_3 - pickleshare=0.7.5=pyhd3eb1b0_1003 - pillow=9.4.0=py310h6a678d5_0 - pip=23.0.1=py310h06a4308_0 - platformdirs=2.5.2=py310h06a4308_0 - ply=3.11=py310h06a4308_0 - prometheus_client=0.14.1=py310h06a4308_0 - prompt-toolkit=3.0.36=py310h06a4308_0 - prompt_toolkit=3.0.36=hd3eb1b0_0 - protobuf=3.20.3=py310h6a678d5_0 - psutil=5.9.0=py310h5eee18b_0 - ptyprocess=0.7.0=pyhd3eb1b0_2 - pure_eval=0.2.2=pyhd3eb1b0_0 - pyarrow=11.0.0=py310h468efa6_0 - pycparser=2.21=pyhd3eb1b0_0 - pygments=2.15.1=py310h06a4308_1 - pyopenssl=23.0.0=py310h06a4308_0 - pyparsing=3.0.9=py310h06a4308_0 - pyqt=5.15.7=py310h6a678d5_1 - pyrsistent=0.18.0=py310h7f8727e_0 - pysocks=1.7.1=py310h06a4308_0 - python=3.10.11=h7a1cb2a_2 - python-dateutil=2.8.2=pyhd8ed1ab_0 - python-fastjsonschema=2.16.2=py310h06a4308_0 - python-xxhash=3.0.0=py310h5764c6d_1 - python_abi=3.10=2_cp310 - pytorch=2.0.1=py3.10_cuda11.7_cudnn8.5.0_0 - 
pytorch-cuda=11.7=h778d358_5 - pytorch-mutex=1.0=cuda - pytz=2023.3=pyhd8ed1ab_0 - pyyaml=6.0=py310h5eee18b_1 - pyzmq=25.0.2=py310h6a678d5_0 - qt-main=5.15.2=h327a75a_7 - qt-webengine=5.15.9=hd2b0992_4 - qtconsole=5.4.2=py310h06a4308_0 - qtpy=2.2.0=py310h06a4308_0 - qtwebkit=5.212=h4eab89a_4 - re2=2022.04.01=h27087fc_0 - readline=8.2=h5eee18b_0 - regex=2022.7.9=py310h5eee18b_0 - requests=2.29.0=py310h06a4308_0 - sacremoses=master=py_0 - send2trash=1.8.0=pyhd3eb1b0_1 - sentencepiece=0.1.99=py310hdb19cb5_0 - setuptools=66.0.0=py310h06a4308_0 - sip=6.6.2=py310h6a678d5_0 - six=1.16.0=pyhd3eb1b0_1 - snappy=1.1.9=h295c915_0 - sniffio=1.2.0=py310h06a4308_1 - soupsieve=2.4=py310h06a4308_0 - sqlite=3.41.2=h5eee18b_0 - stack_data=0.2.0=pyhd3eb1b0_0 - sympy=1.11.1=py310h06a4308_0 - tbb=2021.8.0=hdb19cb5_0 - terminado=0.17.1=py310h06a4308_0 - tinycss2=1.2.1=py310h06a4308_0 - tk=8.6.12=h1ccaba5_0 - tokenizers=0.11.4=py310h3dcd8bd_1 - toml=0.10.2=pyhd3eb1b0_0 - tomli=2.0.1=py310h06a4308_0 - torchaudio=2.0.2=py310_cu117 - torchtriton=2.0.0=py310 - torchvision=0.15.2=py310_cu117 - tornado=6.2=py310h5eee18b_0 - tqdm=4.65.0=py310h2f386ee_0 - traitlets=5.7.1=py310h06a4308_0 - typing-extensions=4.5.0=py310h06a4308_0 - typing_extensions=4.5.0=py310h06a4308_0 - tzdata=2023c=h04d1e81_0 - urllib3=1.26.15=py310h06a4308_0 - utf8proc=2.6.1=h27cfd23_0 - wcwidth=0.2.5=pyhd3eb1b0_0 - webencodings=0.5.1=py310h06a4308_1 - websocket-client=0.58.0=py310h06a4308_4 - wheel=0.38.4=py310h06a4308_0 - widgetsnbextension=4.0.5=py310h06a4308_0 - xxhash=0.8.0=h7f98852_3 - xz=5.4.2=h5eee18b_0 - yaml=0.2.5=h7b6447c_0 - yarl=1.7.2=py310h5764c6d_2 - zeromq=4.3.4=h2531618_0 - zipp=3.11.0=py310h06a4308_0 - zlib=1.2.13=h5eee18b_0 - zstd=1.5.5=hc292b87_0 - pip: - aiohttp==3.8.4 - dataclasses-json==0.5.7 - greenlet==2.0.2 - langchain==0.0.180 - marshmallow==3.19.0 - marshmallow-enum==1.5.1 - mpmath==1.2.1 - mypy-extensions==1.0.0 - openai==0.27.7 - openapi-schema-pydantic==1.2.4 - pydantic==1.10.8 - pyqt5-sip==12.11.0 - sqlalchemy==2.0.15 - tenacity==8.2.2 - transformers==4.29.2 - typing-inspect==0.9.0 prefix: /home/tai/miniconda3/envs/pt2hfpy310 ``` <|||||>This is really puzzling as `import transformers` does not really do anything (it's when you import a specific object that the code of a module is actually executed), so I don't see what could cause this slowdown.<|||||>@sgugger Yeah, it's really puzzling. I think ```import transformers``` would run the codes inside the ```transformers/__init__.py``` before actually using it. ZailiWang said it may be because "that transformers have another openmp dependency and the new openmp lib flushed llvm-openmp invoked by torch" in [anohter issue](https://github.com/pytorch/pytorch/issues/102494#issuecomment-1568727409).<|||||>We do not have an openmp dependency. And if you look at the transformers __init__ you will see that nothing is done there.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,869
closed
Editing issue with pickle def with lambda function
# What does this PR do? In this PR, I address the problem of pickling the constant LR scheduler, which fails during the process (potentially during multi-GPU training, as observed in my case) due to the presence of a lambda function within it. Fixes #23865 (issue)
05-30-2023 16:52:44
05-30-2023 16:52:44
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,868
closed
Avoid saving tied weights with sharded checkpoints
It seems that when sharding a checkpoint we untie weights which makes them take more space ```python import torch from transformers import GPT2LMHeadModel, GPT2Config config = GPT2Config() model = GPT2LMHeadModel(config) assert id(model.transformer.wte.weight) == id(model.lm_head.weight) model.save_pretrained("gpt2-tied-weights") config.tie_word_embeddings = False model = GPT2LMHeadModel(config) assert id(model.transformer.wte.weight) != id(model.lm_head.weight) model.save_pretrained("gpt2-untied-weights") config = GPT2Config() model = GPT2LMHeadModel(config) assert id(model.transformer.wte.weight) == id(model.lm_head.weight) model.save_pretrained("gpt2-tied-weights-sharded", max_shard_size="100MB") config.tie_word_embeddings = False model = GPT2LMHeadModel(config) assert id(model.transformer.wte.weight) != id(model.lm_head.weight) model.save_pretrained("gpt2-untied-weights-sharded", max_shard_size="100MB") ``` When checking the space taken by these checkpoints: ``` $ du -sh gpt2* 475M gpt2-tied-weights 622M gpt2-tied-weights-sharded # MUST BE 475M 622M gpt2-untied-weights 622M gpt2-untied-weights-sharded ``` cc @ArthurZucker @thomasw21
05-30-2023 16:10:28
05-30-2023 16:10:28
I don't think that developing the logic that would avoid this is really worth the time it will require, but we can leave this issue as a reference point.<|||||>Hum I don't think it should take that long. I can have a go at it. I do think it's an easy win.
transformers
23,867
closed
[wip: test doc-builder]
Closes #23625 Testing https://github.com/huggingface/doc-builder/pull/373
05-30-2023 16:07:36
05-30-2023 16:07:36
_The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,866
closed
merge main
null
05-30-2023 16:05:20
05-30-2023 16:05:20
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23866). All of your documentation changes will be reflected on that endpoint.
transformers
23,865
closed
Possible pickle issues
https://github.com/huggingface/transformers/blob/af2aac51fc1c59237ff7228908ace2cd8fc0d9a6/src/transformers/optimization.py#L49 Attempting to pickle this function can raise an error because the lambda function it uses cannot be pickled. I suggest the following solution: ``` def get_constant_lambda(_): return 1 def get_constant_schedule(optimizer: Optimizer, last_epoch: int = -1): """ Create a schedule with a constant learning rate, using the learning rate set in optimizer. Args: optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate. last_epoch (`int`, *optional*, defaults to -1): The index of the last epoch when resuming training. Return: `torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule. """ return LambdaLR(optimizer, get_constant_lambda, last_epoch=last_epoch) ```
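For reference, a quick standalone demonstration of the underlying limitation (assuming it is run as a plain script):

```python
# pickle serializes functions by reference to their qualified name, so a
# module-level named function works while a lambda does not.
import pickle


def get_constant_lambda(_):
    return 1


pickle.dumps(get_constant_lambda)  # works: looked up by name at unpickle time

try:
    pickle.dumps(lambda _: 1)
except (pickle.PicklingError, AttributeError) as exc:
    print("lambda cannot be pickled:", exc)
```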
05-30-2023 15:50:32
05-30-2023 15:50:32
Would you like to open a PR with your fix?<|||||>yes, is it okay?