Dataset columns:
- repo: string (1 distinct value)
- number: int64 (1 to 25.3k)
- state: string (2 distinct values)
- title: string (length 1 to 487)
- body: string (length 0 to 234k, nullable)
- created_at: string (length 19)
- closed_at: string (length 19)
- comments: string (length 0 to 293k)
transformers
22,678
closed
[WIP] 🌐 [i18n-KO] Translated `tasks/translation.mdx` to Korean
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" --> # What does this PR do? Translated the `tasks/translation.mdx` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 <!-- This leaves a record on the main issue! Please remove it when practicing in the PseudoLab repo, thanks! :smile: --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <!-- As a pre-submission checklist, it might be good to also wrap a PseudoLab-specific checklist in <details>. --> ## Who can review? <!-- Only uncomment the line below, which requests a review from the Hugging Face staff, after the PseudoLab team members have finished their review! --> <!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
04-09-2023 08:40:38
04-09-2023 08:40:38
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22678). All of your documentation changes will be reflected on that endpoint.<|||||>Closed in favor of https://github.com/huggingface/transformers/pull/22805
transformers
22,677
closed
Where does Hugging Face's transformers save models?
### Feature request A clear instruction on where the downloaded weights are saved on the system. ### Motivation Running the below code downloads a model - does anyone know what folder it downloads it to? ``` !pip install -q transformers from transformers import pipeline model = pipeline('fill-mask') ``` ### Your contribution Same issue with diffusers.
04-09-2023 06:41:15
04-09-2023 06:41:15
This is described in the [installation page of the doc](https://huggingface.co/docs/transformers/installation#cache-setup).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
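For quick reference on the issue above, here is a minimal sketch of how to inspect and redirect the download location; the custom cache path is an assumption for illustration, and the environment variable has to be set before `transformers` is imported.

```python
import os

# Optionally redirect the cache (assumed custom path) *before* importing transformers.
os.environ["TRANSFORMERS_CACHE"] = os.path.expanduser("~/my_hf_cache")

from transformers import pipeline
from transformers.utils import TRANSFORMERS_CACHE

print(TRANSFORMERS_CACHE)      # resolved cache directory (default is under ~/.cache/huggingface)
model = pipeline("fill-mask")  # downloaded weights end up inside that directory
```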
transformers
22,676
closed
Model parallelism: Moving labels to the same device as logits for BridgeTower models
As suggested in https://github.com/huggingface/transformers/issues/22561, this moves the labels to the same device as the logits for the BridgeTower models. @sgugger Can you please review?
04-09-2023 05:11:55
04-09-2023 05:11:55
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,675
open
Going above version 4.21.3 gives UnicodeDecodeError
### System Info - `transformers` version: 4.27.2 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.10.6 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. update stable diffusion webui to a version that requires Transformers 4.22.0 or above 2. start stable diffusion 3. when loading the model, this error appears ![Screenshot 2023-04-08 232922](https://user-images.githubusercontent.com/15109972/230743894-dbc3598c-15b9-4222-ad80-51cef9eb4fb8.png) ### Expected behavior load the model normally like it does for version 4.21.3 and below
04-08-2023 21:51:01
04-08-2023 21:51:01
cc @amyeroberts and @younesbelkada <|||||>Hi @emidio90, thanks for raising this issue. Could you share some more information about how to reproduce this error? In particular, is this the [Stable Diffusion WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) repo being referred to? What model and settings are being used? And which task is being run? <|||||>Hello @amyeroberts , I confirm that is the repo I'm referring to. This happens with any model, when starting WebUI locally on my PC by clicking on webui-user.bat. The console arguments are only --xformers. I'm sorry, I don't know how to find the specific task being run, but I suppose it's the model loading that happens when launching WebUI. This started to happen when I updated to a commit that changed the required version of Transformers to 4.25. So now, to make SD work, I have to manually change requirements_version.txt to pin transformers==4.21.3<|||||>@emidio90 Great, thanks for the additional info!
transformers
22,674
closed
fix bug of CLAP dataloader
Fix the bug of the CLAP data loader (https://github.com/LAION-AI/CLAP/issues/62) # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
04-08-2023 21:35:04
04-08-2023 21:35:04
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @younesbelkada and @ArthurZucker <|||||>Just an FYI that the slow tests aren't actually being run for CLAP. See also https://github.com/huggingface/transformers/pull/22834
transformers
22,673
closed
Loading FlaxHybridCLIP trained model
### System Info - `transformers` version: 4.27.4 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.13.4 - PyTorch version (GPU?): 1.9.0+cpu (False) - Tensorflow version (GPU?): 2.9.1 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.8 (cpu) - Jax version: 0.4.8 - JaxLib version: 0.4.7 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> Models: FlaxHybridCLIP ### Who can help? @sanchit-gandhi @patrickvonplaten, @patil-suraj ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm wondering how to import a trained FlaxHybridCLIP model from a folder that contains the following files - config.json - flax_model.msgpack I attempted to load it using the below: ``` if args.run_from_checkpoint is not None: with open(f"{args.run_from_checkpoint}/config.json", "r") as fp: config_dict = json.load(fp) config_dict["vision_config"]["model_type"] = "clip" config = HybridCLIPConfig(**config_dict) model = FlaxHybridCLIP.from_pretrained( args.run_from_checkpoint, seed=training_args.seed, dtype=getattr(jnp, model_args.dtype), config=config, freeze_backbones=args.freeze_backbones ) ``` But, I encountered the following error: ``` `text_config` is `None`. Initializing the `CLIPTextConfig` with default values. `vision_config` is `None`. initializing the `CLIPVisionConfig` with default values. loading weights file freeze/18/flax_model.msgpack Traceback (most recent call last): File "run_hybrid_clip.py", line 831, in <module> main() File "run_hybrid_clip.py", line 528, in main model = FlaxHybridCLIP.from_pretrained( File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/modeling_flax_utils.py", line 807, in from_pretrained model = cls(config, *model_args, _do_init=_do_init, **model_kwargs) File "/home/ubuntu/modeling_hybrid_clip.py", line 148, in __init__ module = self.module_class(config=config, dtype=dtype, **kwargs) TypeError: __init__() got an unexpected keyword argument '_do_init' ``` I used the modified Italian hybrid CLIP scripts [here](https://github.com/clip-italian/clip-italian/tree/master/hybrid_clip) ### Expected behavior to load successfully and fine-tune with unfrozen backbone Thanks
04-08-2023 17:28:34
04-08-2023 17:28:34
Hey @alhuri! Sorry for the late reply here! It looks like the clip-italian repo assumes an older version of transformers. The modelling code would need to be updated to use transformers==4.27.4, namely the [`FlaxHybridCLIP`](https://github.com/clip-italian/clip-italian/blob/8c75204be0d747c0ab150973fd8cd8556ca2f444/hybrid_clip/modeling_hybrid_clip.py#L133) class. The required changes can be found in this PR: https://github.com/huggingface/transformers/pull/16148 My recommendation would be reaching out to the clip-italian authors here via GitHub and discussing this with them!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing since this issue is related to the Italian CLIP repo (not transformers!)
transformers
22,672
closed
add `**kwargs` argument in some functions in `tokenization_utils.py`
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Despite `**kwargs` being set in the `prepare_for_model` function in `tokenization_utils_base.py`, there was an issue where `**kwargs` were not being set in some functions within `tokenization_utils.py`, resulting in `**kwargs` not being propagated to the `prepare_for_model` function. This PR eliminates the need to copy multiple functions to set `**kwargs` when customizing the `prepare_for_model function, we can make a more concise code. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
04-08-2023 17:14:13
04-08-2023 17:14:13
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22672). All of your documentation changes will be reflected on that endpoint.<|||||>Can you elaborate and give us a sample of code that fails before this PR? Thanks.<|||||>Thanks for the reply! In my case, I need to consider adding custom special tags within the `def build_inputs_with_special_tokens(...)`, for example, whether to add `<Language ID>` in addition to `<eos>` and `<bos>` tags. As I need to switch frequently, I decided to control this by an argument like the following code. ```python def build_inputs_with_special_tokens( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, prepend: bool = True ) -> List[int]: ``` ```python tokenizer('Hi world.', add_special_tokens=True, prepend=False) # {'input_ids': [4218, 1381, 6, 2], 'attention_mask': [1, 1, 1, 1]} tokenizer('Hi world.', add_special_tokens=True, prepend=True) # {'input_ids': [806, 4218, 1381, 6, 2], 'attention_mask': [1, 1, 1, 1, 1]} ``` The reason why the `def prepare_for_tokenization(...)` function did not handle this is that these tags need to be treated as special_tokens, just like `<eos>` and `<bos>` tags. `def build_inputs_with_special_tokens(...)` is executed by `def prepare_for_model(...)`, so I rewrite that function slightly. ```python def prepare_for_model( ... prepend: bool = True, #add this line **kwargs, ) -> BatchEncoding: ... # Add special tokens if add_special_tokens: sequence = self.build_inputs_with_special_tokens(ids, pair_ids, prepend = prepend) token_type_ids = self.create_token_type_ids_from_sequences(ids, pair_ids, prepend = prepend) ... ``` I would like it to work with this modification only, but the functions using `def prepare_for_model(...)` ( `def _encode_plus(...)` and `def _batch_prepare_for_model(...)` and `def _batch_encode_plus(...)` in `tokenisation_utils.py`), `** kwargs` are not passed on, which is causing it not to work. The following code is an example of this issue in `def _encode_plus(...)` ```python def _encode_plus( self, text: Union[TextInput, PreTokenizedInput, EncodedInput], text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]] = None, add_special_tokens: bool = True, padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int] = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs, ) -> BatchEncoding: ... return self.prepare_for_model( first_ids, pair_ids=second_ids, add_special_tokens=add_special_tokens, padding=padding_strategy.value, truncation=truncation_strategy.value, max_length=max_length, stride=stride, pad_to_multiple_of=pad_to_multiple_of, return_tensors=return_tensors, prepend_batch_axis=True, return_attention_mask=return_attention_mask, return_token_type_ids=return_token_type_ids, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_length=return_length, verbose=verbose, **kwargs, # ADD THIS LINE!!!!! 
) ``` Therefore, by adding this PR fix, `**kwargs` will be passed to `def prepare_for_model(...)` and solve this problem. In addition, although `**kwargs` are set to `def prepare_for_model(...)` in the code, there was no mechanism in place to use `**kwargs` simply, which is resolved in this PR. Overall, this PR resolves the issue of `**kwargs` not being propagated in some functions within `tokenization_utils.py`, providing a more efficient and streamlined way to customize the `def prepare_for_model(...)` function. Thank you for reviewing this PR!!!!<|||||>I'm a bit wary about passing all those kwargs since there could be unrelated/mispelled arguments that the user wouldn't get an error about then. The easiest is probably for you copy paste those methods and do the change in your subclass of the tokenizer, since, if I understand correctly, you are writing your own subclass of the tokenizer with an overloaded method.<|||||>Thank you for your comments! I explain that there is no need to worry about this PR. > I'm a bit wary about passing all those kwargs since there could be unrelated/mispelled arguments that the user wouldn't get an error about then. As all the functions fixed in this PR already have the `**kwargs` argument, the worry is a problem that could well occur in existing systems. This is not a new problem caused by this PR. **It is not an attempt to set new `**kwargs` in `def prepare_for_model(...)`, but a PR to leverage the `**kwargs` already set in `def prepare_for_model(...)`.** > The easiest is probably for you copy paste those methods and do the change in your subclass of the tokenizer If only a few functions with few codes need to be edited, it would certainly be the easiest way. However, in this case, to utilize the `**kwargs` in `def prepare_for_model(...)`, it would require copy-pasting hundreds of lines of code with minimal modifications. From code maintenance, it is important to avoid such duplications as much as possible. The changes made by this PR are not breaking changes, so they will work on all existing models. Additionally, the worry you mentioned is not specific to this PR, as it is a potential issue that could also occur in existing systems. For the above reasons, respecting your worry, but I still believe that the benefits of utilizing `**kwargs` in `def prepare_for_model(...)` outweigh the risks. I hope that this PR will contribute to the continued improvement and maintenance of the system. Thank you for your comments and looking forward to your positive feedback.<|||||>cc @LysandreJik if you have a different opinion.<|||||>Hey @yusuke1997, thank you for your PR! I would err on the side of caution here with having `**kwargs` on every method. When we add `"**kwargs` to a method, we're forfeiting argument validation in favor of convenience, which is likely to bite us later. A very simple example of a bug this PR would introduce can be seen with the following script, where I misspelled `return_token_type_ids`: ```python In [1]: from transformers import GPT2TokenizerFast In [2]: tok = GPT2TokenizerFast.from_pretrained('gpt2') In [3]: tok("hey", return_token_type_id=False) ``` On `main`, this errors with `TypeError: PreTrainedTokenizerFast._batch_encode_plus() got an unexpected keyword argument 'return_token_type_id'`. On this PR, this doesn't error and returns ``` Out[3]: {'input_ids': [20342], 'attention_mask': [1]} ``` which is false. 
--- Your `prepend` approach can still work, but we would strongly recommend that you do it on a per-architecture approach than to try and patch a global method. If you're using GPT2 for example, then I would advise overriding the `build_inputs_with_special_tokens` within this tokenizer directly.<|||||>Thank you @LysandreJik for a very helpful review!! I checked. Certainly, on `main` `PreTrainedTokenizerFast` has a mechanism to raise an error when input misspelled arguments. This PR eliminates such errors and it is able to handle any type of input. I would like to reiterate that the purpose of this PR was to utilize `def prepare_for_model(...)`, and upon further review of the code, I found that `def prepare_for_model(...)` is not being utilized within `PreTrainedTokenizerFast`. The modification made for `PreTrainedTokenizerFast` was superfluous. I retract the changes. However, I still believe that the changes made to `tokenization_utils.py` are meaningful, non-breaking changes, and safe. This is because even in the `main` `PretrainedTokenizer`, misspelled or invalid arguments will pass with only a warning raised. ```python from transformers import GPT2Tokenizer tok = GPT2Tokenizer.from_pretrained('gpt2') print(tok("hey", return_token_type_id=False)) # Keyword arguments {'return_token_type_id': False} not recognized. # {'input_ids': [20342], 'attention_mask': [1]} ``` **This warning is no different in `main` and in this PR.** To avoid this warning, we need to pop the desired argument to use in `def prepare_for_tokenization(...)`. This part is also no different in the current `main` and after PR. **The purpose of this PR is to utilize the `**kwargs` in `def prepare_for_model(...)` function.** If it works by simply overriding `def build_inputs_with_special_tokens(...)`, it couldn't be easier than that. However, currently, just adding a new argument to `def build_inputs_with_special_tokens(...)` requires a number of function changes. This restriction of not being able to utilize the `def prepare_for_model(...)` arguments apply not only to this function but also to all the functions executed by `def prepare_for_model(...)`, I think it is causing potential drawbacks. **Furthermore, I confirmed that there are no destructive elements, such as error displays or other potentially harmful changes, involved in implementing such a mechanism. And any input to the already set `**kwargs` in `def prepare_for_model(...)` will not have any adverse effects.** So, I believe it is reasonable to establish a mechanism for passing `**kwargs` in order to utilize `def prepare_for_model(...)` (and functions executed by it). Therefore, I think there is no need to fear the part that you are cautious about. If any concerns have been resolved, I would sincerely hope you will think about again it. Thank you very much for your attention and time!! <|||||>In addition, Your point about the error not being raised is actually due to the different handling of unrelated/misspelled arguments between `PretrainedTokenizer` and `PretrainedTokenizerFast`. I am of the opinion that **if `PretrainedTokenizer` is set up to generate warnings instead of errors, then `PretrainedTokenizerFast` should also generate warnings instead of errors**, and vice versa. In other words, the behavior of both is different, and this is what the PR has revealed. What do you think about this point? What would be the best solution? For the purpose of furthering my understanding of this matter, would you kindly explain it to me? 
I am genuinely curious.<|||||>Sorry (miss-clicked the tag)! A small part of my answer is that this looks good to me, but at the same time I think that we are at a point where we want to refactor a lot of the API regarding the **kwargs that are scattered everywhere. The second main concern can come from this, and also maintenance of this in the long run. But both @sgugger and @LysandreJik have a better understanding on the specifics, will let them decide πŸ€— <|||||> @sgugger and @LysandreJik, I'm sorry to bother you. What do you think about it? If the above discussion has cleared up any concerns, then I sincerely hope that you will consider making a positive decision.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,671
closed
add GPTNeoXForSequenceClassification
# What does this PR do? This PR adds GPTNeoXForSequenceClassification. Would you be able to check it? Thank you in advance for this cool OSS! ref: https://github.com/huggingface/transformers/pull/11906 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @sgugger @ArthurZucker @younesbelkada
04-08-2023 14:46:25
04-08-2023 14:46:25
_The documentation is not available anymore as the PR was closed or merged._
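A minimal usage sketch of the class this PR adds; the checkpoint name is an assumption for illustration (any GPT-NeoX checkpoint should work), and the classification head is freshly initialized, so the logits are untrained.

```python
import torch
from transformers import AutoTokenizer, GPTNeoXForSequenceClassification

checkpoint = "EleutherAI/pythia-70m"  # assumed small GPT-NeoX checkpoint, for illustration only
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = GPTNeoXForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
model.config.pad_token_id = tokenizer.eos_token_id  # GPT-NeoX has no pad token by default

inputs = tokenizer("This library is great!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, num_labels)
print(logits.shape)
```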
transformers
22,670
closed
🌐 [i18n-KO] Translated `training.mdx` to Korean
# What does this PR do? Translated the `training.mdx` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd May you please review this PR? @sgugger, @ArthurZucker, @eunseojo <!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? --> <!-- Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd -->
04-08-2023 12:47:54
04-08-2023 12:47:54
_The documentation is not available anymore as the PR was closed or merged._<|||||>Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd - fixed some ambiguous translations of the autoclass_tutorial doc. (from https://github.com/huggingface/transformers/pull/22533) - translated training doc.<|||||>Fixed review comments :-) Thank you for your kind reviews. BRs @HanNayeoniee @sim-so @wonhyeongseo <|||||>May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
transformers
22,669
closed
New LlamaTokenizer "fast" version takes 90s to load on 5900x with nvme
### System Info Transformer [head] Cuda 11.8 Pytorch nightly 2.1 Ubuntu 22.04 ### Reproduction Is it normal for the new default "faster" LlamaTokenizer to load so slowly on a fairly new cpu? Imagine the load time on a 2018 Intel xeon. Model is a llama 7b converted to HF using latest script within transformer/model/llama head. Each cold load takes ~90s. ``` # this will take 90s to load on 5900x + nvme # load llama 7b model converted from facebook to hf tokenizer = AutoTokenizer.from_pretrained(base_model, use_fast=True) ``` ### Expected behavior Load in seconds.
04-08-2023 12:31:48
04-08-2023 12:31:48
This is to convert the slow tokenizer to the fast format (which the conversion script should do once and for all @ArthurZucker ). You should do `tokenizer.save_pretrained(some_path)` and then copy the fast tokenizer file in the folder where you have your converted LLaMA model to avoid having the slowdown more than once as a workaround @diegomontoya until we fix the conversion script.<|||||>Yep, the llama conversion script refactor happened before the llama fast tokenizer was around. Will open a PR to save the fast tokenizer.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
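A minimal sketch of the workaround described above; the model directory is a placeholder for wherever the converted checkpoint lives.

```python
from transformers import AutoTokenizer

model_dir = "path/to/converted-llama-7b"  # placeholder path to the converted checkpoint

# The first load pays the slow-to-fast conversion cost (the ~90 s reported above)...
tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=True)
# ...then serialize the fast tokenizer (tokenizer.json) next to the model weights.
tokenizer.save_pretrained(model_dir)

# Later loads pick up tokenizer.json directly and skip the conversion step.
tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=True)
```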
transformers
22,668
closed
Why does run_t5_mlm_flax.py not produce model weight files etc.?
### System Info - `transformers` version: 4.27.4 - Platform: Linux-5.15.0-1020-aws-x86_64-with-glibc2.10 - Python version: 3.8.16 - Huggingface_hub version: 0.13.1 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.3 (gpu) - Jax version: 0.3.25 - JaxLib version: 0.3.25 - Using GPU in script?: YES - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger @patrickvonplaten @sanchit-gandhi ### Information - [X] The official example scripts - [x] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I was trying to reproduce this tutorial on [**T5-like span masked-language-modeling**](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#t5-like-span-masked-language-modeling). I have the following code `tokenizing_and_configing.py`: ``` import datasets from t5_tokenizer_model import SentencePieceUnigramTokenizer from transformers import T5Config vocab_size = 32_000 input_sentence_size = None # Calculate the total number of samples in the dataset total_samples = datasets.load_dataset( "nthngdy/oscar-mini", name="unshuffled_deduplicated_no", split="train" ).num_rows # Calculate one thirtieth of the total samples subset_samples = total_samples // 30 # Load one thirtieth of the dataset dataset = datasets.load_dataset( "nthngdy/oscar-mini", name="unshuffled_deduplicated_no", split=f"train[:{subset_samples}]", ) tokenizer = SentencePieceUnigramTokenizer( unk_token="<unk>", eos_token="</s>", pad_token="<pad>" ) # Build an iterator over this dataset def batch_iterator(input_sentence_size=None): if input_sentence_size is None: input_sentence_size = len(dataset) batch_length = 100 for i in range(0, input_sentence_size, batch_length): yield dataset[i : i + batch_length]["text"] print("Train Tokenizer") # Train tokenizer tokenizer.train_from_iterator( iterator=batch_iterator(input_sentence_size=input_sentence_size), vocab_size=vocab_size, show_progress=True, ) # Save files to disk tokenizer.save("./models/norwegian-t5-base/tokenizer.json") print("DONE TOKENIZING ") # CONFIG config = T5Config.from_pretrained( "google/t5-v1_1-small", vocab_size=tokenizer.get_vocab_size() # "google/t5-v1_1-base", vocab_size=tokenizer.get_vocab_size() ) config.save_pretrained("./models/norwegian-t5-base") print("DONE SAVING TOKENIZER ") ``` The dependency can be found here: - πŸ“— [`t5_tokenizer_model.py`](https://raw.githubusercontent.com/huggingface/transformers/main/examples/flax/language-modeling/t5_tokenizer_model.py) After `tokenizing_and_configing.py` is completed. I run this code: ``` python run_t5_mlm_flax.py \ --output_dir="./models/norwegian-t5-base" \ --model_type="t5" \ --config_name="./models/norwegian-t5-base" \ --tokenizer_name="./models/norwegian-t5-base" \ --dataset_name="nthngdy/oscar-mini" \ --dataset_config_name="unshuffled_deduplicated_no" \ --max_seq_length="512" \ --per_device_train_batch_size="32" \ --per_device_eval_batch_size="32" \ --adafactor \ --learning_rate="0.005" \ --weight_decay="0.001" \ --warmup_steps="2000" \ --overwrite_output_dir \ --logging_steps="500" \ --save_steps="10000" \ --eval_steps="2500" \ --do_train \ --do_eval ``` The full code for `run_t5_mlm_flax.py` can be found [here](https://raw.githubusercontent.com/huggingface/transformers/main/examples/flax/language-modeling/run_t5_mlm_flax.py). 
But after `run_t5_mlm_flax.py` is completed, I can only find these files in `./models/norwegian-t5-base`: ``` . └── norwegian-t5-base β”œβ”€β”€ config.json β”œβ”€β”€ events.out.tfevents.1680920382.ip-172-31-30-81.71782.0.v2 └── tokenizer.json └── eval_results.json ``` What's wrong with my process? I expect it to produce more files (see Expected Behavior section). Additional note: I don't experience any error messages AT ALL. Everything completes smoothly without interruption. I'm using Amazon AWS p3.2xlarge; cuda_11.2.r11.2/compiler.29618528_0
04-08-2023 09:52:08
04-08-2023 09:52:08
Hey @gundalav, it looks like your number of training steps is less than your number of save steps (10000). We save the model and tokenizer every `save_steps` training steps: https://github.com/huggingface/transformers/blob/aec10d162f59d809ead3990ef78c51918b622f38/examples/flax/language-modeling/run_t5_mlm_flax.py#L949 Since `save_steps` is greater than your total number of training steps, we never hit this threshold. If you reduce your number of `save_steps` to 10, you'll see that the weights file, config and tokenizer are saved every 10 steps. You can then change your `save_steps` based on your total number of training steps for an appropriate value (e.g. set save steps to ~10% of your total train steps, so that you save 10 checkpoints during training)<|||||>@sanchit-gandhi Thanks so much! It works like charm!<|||||>Glad to hear that @gundalav!
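As a rough illustration of the rule of thumb above, this arithmetic sketch picks a `--save_steps` value; the sample count and epoch count are assumptions, not numbers from the actual run.

```python
# Choose --save_steps so that roughly 10 checkpoints are written during training.
num_train_samples = 30_000           # assumed size of the oscar-mini subset after preprocessing
per_device_train_batch_size = 32
num_train_epochs = 1

total_train_steps = (num_train_samples // per_device_train_batch_size) * num_train_epochs
save_steps = max(total_train_steps // 10, 1)
print(total_train_steps, save_steps)  # pass save_steps to run_t5_mlm_flax.py via --save_steps
```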
transformers
22,667
closed
Update some `MarkupLM` tests' expected values
# What does this PR do? Need to update some expected values in test files after #22302.
04-08-2023 09:44:41
04-08-2023 09:44:41
_The documentation is not available anymore as the PR was closed or merged._<|||||>Merge now. Feel free to leave comments if any :-)
transformers
22,666
closed
Fix quantization docs typo
null
04-08-2023 05:58:00
04-08-2023 05:58:00
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger not quite sure where to report. HF blog's RSS feed links are broken. Also, running it through https://validator.w3.org/feed/ shows a duplicated article.
transformers
22,665
closed
add xformers dep, xformers attn for gpt2
# What does this PR do? Add xformers as a dependency and implement xformers attention for gpt2. I am a bit of a novice to this, but would like to contribute by helping all models in the transformers library to have xformers support. It is likely the case that this PR is not ready to merge, but I was hoping I could get some feedback on what I would be able to provide. Fixes # (issue) Reduces VRAM and increases speed.
04-08-2023 03:44:27
04-08-2023 03:44:27
cc @younesbelkada I don't know if this is redundant with the Better Transformer integration.<|||||>Related: https://github.com/huggingface/transformers/pull/22386 I don't think it's an issue to collide - if it is just better in most cases, having it default to users makes sense, in transformers natively (with some refactoring). However, for now, PyTorch's SDPA has some limitations: * no scale argument (some archs do not scale query/key) * no speedup/memory savings for custom attention mask (flash and mem-efficient not supported) * no support for mixed fp16/fp32, like in some models where softmax is in fp32 while the rest is in fp16 * C++ implementation is good for all hardware, mem-efficient and flash are Nvidia-only<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
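For context on the comparison above, here is a tiny, self-contained sketch of the PyTorch 2.0 `scaled_dot_product_attention` entry point; note that it exposes no `scale` argument at this point, which is one of the limitations listed.

```python
import torch
import torch.nn.functional as F

# Toy tensors with shape (batch, num_heads, seq_len, head_dim).
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 8, 16, 64)
v = torch.randn(1, 8, 16, 64)

# PyTorch dispatches to flash / memory-efficient / C++ math kernels automatically.
out = F.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=True)
print(out.shape)  # torch.Size([1, 8, 16, 64])
```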
transformers
22,664
closed
Generate: add CJK support to TextStreamer
# What does this PR do? This pull request adds support for streaming CJK (Chinese, Japanese, Korean) characters to the TextStreamer class. It now flushes the token cache if the last token is a CJK character, in addition to flushing it if the text ends with `"\n"` or `" "`. This prevents CJK characters from being stuck in `token_cache`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante
04-08-2023 02:14:34
04-08-2023 02:14:34
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @gante <|||||>Yes of course, I use this tiny script for screen recording. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer class TextCJKStreamer(TextStreamer): def put(self, value): """ Recives tokens, decodes them, and prints them to stdout as soon as they form entire words. """ if len(value.shape) > 1 and value.shape[0] > 1: raise ValueError("TextStreamer only supports batch size 1") elif len(value.shape) > 1: value = value[0] if self.skip_prompt and self.next_tokens_are_prompt: self.next_tokens_are_prompt = False return # Add the new token to the cache and decodes the entire thing. self.token_cache.extend(value.tolist()) text = self.tokenizer.decode(self.token_cache, **self.decode_kwargs) # After the symbol for a new line, we flush the cache. if text.endswith("\n"): printable_text = text[self.print_len :] self.token_cache = [] self.print_len = 0 # If the last token is a CJK character, we print the characters. elif len(text) > 0 and self._is_chinese_char(ord(text[-1])): printable_text = text[self.print_len :] self.print_len += len(printable_text) # Otherwise, prints until the last space char (simple heuristic to avoid printing incomplete words, # which may change with the subsequent token -- there are probably smarter ways to do this!) else: printable_text = text[self.print_len : text.rfind(" ") + 1] self.print_len += len(printable_text) self.on_finalized_text(printable_text) def _is_chinese_char(self, cp): """Checks whether CP is the codepoint of a CJK character.""" # This defines a "chinese character" as anything in the CJK Unicode block: # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block) # # Note that the CJK Unicode block is NOT all Japanese and Korean characters, # despite its name. The modern Korean Hangul alphabet is a different block, # as is Japanese Hiragana and Katakana. Those alphabets are used to write # space-separated words, so they are not treated specially and handled # like the all of the other languages. if ( (cp >= 0x4E00 and cp <= 0x9FFF) or (cp >= 0x3400 and cp <= 0x4DBF) # or (cp >= 0x20000 and cp <= 0x2A6DF) # or (cp >= 0x2A700 and cp <= 0x2B73F) # or (cp >= 0x2B740 and cp <= 0x2B81F) # or (cp >= 0x2B820 and cp <= 0x2CEAF) # or (cp >= 0xF900 and cp <= 0xFAFF) or (cp >= 0x2F800 and cp <= 0x2FA1F) # ): # return True return False tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m") # Use CPU to make generation slow model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m") streamer = TextStreamer(tokenizer, skip_prompt=True) cjk_streamer = TextCJKStreamer(tokenizer, skip_prompt=True) prompt = "δΈ€δΈͺδΌ ε₯‡ηš„εΌ€η«―οΌŒδΈ€δΈͺδΈη­ηš„η₯žθ―οΌŒθΏ™δΈδ»…δ»…ζ˜―δΈ€ιƒ¨η”΅ε½±οΌŒ" tokenized_inputs = tokenizer([prompt], return_tensors="pt") print("Origin TextStreamer:") tokenized_inputs = tokenized_inputs.to(model.device) _ = model.generate( **tokenized_inputs, do_sample=False, streamer=streamer, min_new_tokens=64, max_new_tokens=128, ) print("CJK TextStreamer") _ = model.generate( **tokenized_inputs, do_sample=False, streamer=cjk_streamer, min_new_tokens=64, max_new_tokens=128, ) prompt = "Suggest at least five related search terms to 'MαΊ‘ng neural nhΓ’n tαΊ‘o'." 
tokenized_inputs = tokenizer([prompt], return_tensors="pt") print("Origin TextStreamer:") tokenized_inputs = tokenized_inputs.to(model.device) _ = model.generate( **tokenized_inputs, do_sample=False, streamer=streamer, min_new_tokens=64, max_new_tokens=128, ) print("CJK TextStreamer") _ = model.generate( **tokenized_inputs, do_sample=False, streamer=cjk_streamer, min_new_tokens=64, max_new_tokens=128, ) ``` The model and prompts are from https://huggingface.co/bigscience/bloomz-560m. And here is the comparison. As you can see, before the Chinese text only prints when it meets "。". https://user-images.githubusercontent.com/12250696/232178058-fdf2a7f7-5db0-4b3a-833c-09f097fd0ed6.mov https://user-images.githubusercontent.com/12250696/232178065-b0afd810-34dc-4aae-a370-5b35cb4ca9ed.mov <|||||>@bcol23 thank you for adding the screen recordings for future reference πŸ™ And thank you for making `transformers` a little bit more inclusive πŸ€—
transformers
22,663
closed
moved labels to the same device as logits for BLOOM, GPT Neo, GPT NeoX, RoBERTa and VIT models
# What does this PR do? As suggested in the https://github.com/huggingface/transformers/issues/22561, moved labels to the same device as logits for `BLOOM`, `GPT Neo`, `GPT NeoX`, `RoBERTa` and `VIT` models. @sgugger Could you review this once?
04-07-2023 20:29:38
04-07-2023 20:29:38
_The documentation is not available anymore as the PR was closed or merged._
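A minimal sketch of the device-placement pattern these PRs apply inside the models' loss computation; this is an illustration of the idea, not the exact diff.

```python
import torch
from torch.nn import CrossEntropyLoss

def compute_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # With model parallelism the head (and therefore the logits) may live on a
    # different device than the labels, so move the labels over before the loss.
    labels = labels.to(logits.device)
    loss_fct = CrossEntropyLoss()
    return loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))

logits = torch.randn(2, 5, 100)          # (batch, seq_len, num_classes)
labels = torch.randint(0, 100, (2, 5))   # stays wherever the inputs were prepared
print(compute_loss(logits, labels))
```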
transformers
22,662
closed
run_text_classification.py: error: the following arguments are required: --model_name_or_path
Hi! I'm running the 'transformers/examples/tensorflow/text-classification/run_text_classification.py' and got the following "Error: the following arguments are required: --model_name_or_path" and "--model_name_or_path=gpt2: command not found" at the same time. Could you please help?
04-07-2023 19:51:53
04-07-2023 19:51:53
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,661
closed
Make dynamic code work with offline mode
# What does this PR do? Using dynamic code on the Hub won't work in offline mode even if the model is cached. This is because of an old way of getting the commit hash I put there before we had the commit hash returned in the e-tag. Now it's very easy to get it, so this PR changes the line of code and adds a test to make sure we don't regress. cc @VictorSanh and @leot13 since you reported the bug.
04-07-2023 19:41:56
04-07-2023 19:41:56
Thank you!!!<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
22,660
closed
Remove 2 failing ONNX conversion tests
# What does this PR do? After #22212, two tests started to fail.
04-07-2023 18:07:37
04-07-2023 18:07:37
OK. But should we do something like removing CI regarding this? Currently failing tests pop up.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I think you can remove the tests as well.<|||||>@sgugger Just to confirm, we want/could remove all things like - `ConvertCommand` in `src/transformers/commands/transformers_cli.py` - `export_with_transformers` in `src/transformers/onnx/__main__.py` - the file `src/transformers/onnx/convert.py` and any test using this file also cc @michaelbenayoun @fxmarty <|||||>No we're not removing code, just the tests if they start failing.<|||||>OK, glad I asked!<|||||>Ping @sgugger again to draw a bit of his attention.
transformers
22,659
closed
Generate: add API warning to streamers
# What does this PR do? The API for the streamers is still being worked on, and will not be stable in time for the next release. This PR adds a warning regarding potential future changes in the API.
04-07-2023 17:53:27
04-07-2023 17:53:27
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,658
closed
Revert migration of setup to pyproject.toml
# What does this PR do? As mentioned in #22599, the migration of the setup to pyproject is causing some issues for editable installs on some setups. This PR reverts that migration and adds the setup.py to the formatted files. (Note that I could not directly revert the original PR due to some of its changes being already reverted in #22587 ) Fixes #22599
04-07-2023 17:51:27
04-07-2023 17:51:27
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,657
closed
[tokenization] do not push special file
# What does this PR do? Prevent pushing the path of the special tokens map file
04-07-2023 16:18:10
04-07-2023 16:18:10
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,656
closed
Reverting Deta cloning mechanism.
# What does this PR do? This one is quite odd. With the revert the slow test will work (I guess what we care most about): ```python from transformers import AutoImageProcessor, DetaForObjectDetection from PIL import Image import requests import torch url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) image_processor = AutoImageProcessor.from_pretrained("jozhang97/deta-swin-large") model = DetaForObjectDetection.from_pretrained("jozhang97/deta-swin-large") inputs = image_processor(images=image, return_tensors="pt") outputs = model(**inputs) target_sizes = torch.tensor([image.size[::-1]]) results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0] print(results) ``` However if I incorporate this: ``` model = DetaForObjectDetection.from_pretrained("jozhang97/deta-swin-large") model.save_pretrained("./tmp") model = DetaForObjectDetection.from_pretrained("./tmp") ``` ~Then, the output is garbage again (this isn't using safetensors and is not linked to the original change). I even tried to revert the PR that introduced the bug.~ The change of output **is** due to safetensors. I need to thoroughly check this. This revert will fix the slow PR anyway. I think something is not properly set up in this model, because the uploaded model seems to have those layers NOT linked (hence the copy.deepcopy) but the rest of the configuration seems to assume they are, hence the issue maybe? Fixes https://github.com/huggingface/transformers/pull/22437#issuecomment-1500356727 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
04-07-2023 16:00:49
04-07-2023 16:00:49
_The documentation is not available anymore as the PR was closed or merged._<|||||>There is however `test_can_use_safetensors` failing after this PR. Is this test still relevant (at least while we keep the changes in this PR)<|||||>> There is however test_can_use_safetensors failing after this PR. Is this test still relevant (at least while we keep the changes in this PR) The new code should fix everything. @sgugger for a new review since the change has evolved quite a bit and is not a simple revert anymore. Added inline comments in the PR to explain what's going on. <|||||>> So we tried it your way and it doesn't work. Can we try to use Accelerate to detect the tied weights instead as suggested initially? Because `find_tied_weights` looks at the model, where as here we look at the state_dict, which can be passed directly to the function. In both functions the `state_dict` is the source of truth, not the model, isn't it ? We could definitely use `find_tied_weights` and it would most likely pass the tests, but it wouldn't be exactly looking at the same thing. State dict is what is coming in, find_tied_weights is looking where it's being put on. (in from_pretrained, opposite in save_pretrained). In general they should be the same. But not necessarily always. For instance, I wonder what happens for buffers. > This will ignore the whole state dict as soon as device_map="auto" or low_cpu_mem_usage=True. Why ? It seems you're using the hash (via `is`) in accelerate, I will switch to that since we want entirely shared tensors like in accelerate.<|||||>> Why ? It seems you're using the hash (via is) in accelerate, I will switch to that since we want entirely shared tensors like in accelerate. So actually `hash` doesn't seem to work either, you can have shared buffer and still different hashes. I'll try to exhibit a simple example, but deta `model_decoder.class_embed.n.bias` and `class_embed.n.bias` do share the buffer, and yet don't have the same hash. This exhibits the different between find_tied_weights and the state_dict. Here the tensors from the state_dict don't share the hash, while the parameters do on the model, yet the tensors on the state dict do share memory. In this particular case, using find_tied_weights would work, but that also means the opposite is possible.<|||||>In both situations, you have access to the model, and `find_tied_weights` will give you a list of names that are compatible with the `state_dict` of the model. > In this particular case, using find_tied_weights would work, but that also means the opposite is possible. If this situation (the opposite) does not appear in Transformers, let's just use `find_tied_weights`. I also would like to drive the point home that `safetensors` not dealing with shared weights makes it unusable in practice in other libs: see what we have to do here... and we really want to use `safetensors`. How are we going to convince other users?<|||||>> makes it unusable in practice Why are we even caring about `_keys_to_ignore` and `tie_weights` if it's so inconvenient ? Why are we trying to even find tied weights in accelerate ? How do we expect to use safetensors for the TF models, since sharing doesn't exist over there ? <|||||>In order to help with ease of use of `safetensors` by itself I created this PR: https://github.com/huggingface/safetensors/pull/236 which sorts of mimics what is done here. 
However I still think this PR and the mechanism in transformer should be kept, since `_keys_to_ignore` are very good at hinting which keys we should keep, and which to drop, information which is not available in `safetensors` directly. Also modification are shallower here since it doesn't touch `state_dict` and `load_state_dict` which the proposed methods to have to change.<|||||>> Thanks for considering shared weights in `safetensors` directly. I agree it would still be cleaner to have the same kind of mechanism in Transformers. Could you please explain to me once again why the hash check does not work for the first changes in the PR (dropping weights in the checkpoint before passing it to safetensors). I don't think we ever tie weights in Transformers other than just setting the same tensors. Mostly this: https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L2146 ```python state_dict = kwargs.pop("state_dict", None) ``` Users can send a state_dict, not linked to `self` to this PRs tried to look only at the `state_dict`, instead of `self`. This is indeed a bit of an edge case. Then there are even further edge cases: ```python class Model(torch.nn.Module): def __init__(self): super().__init__() self.a = torch.nn.Linear(100, 100) self.b = self.a model = Model() assert model.a is model.b # OK ! ``` ```python A = torch.zeros((1000, 100)) a = A[:100] model.a.weight = nn.Parameter(a) model.b.weight = model.a.weight assert model.a is model.b # Well indeed it's the same parameter, but both are shared with respect to a larger tensor ``` ```python class NoSharedModel(torch.nn.Module): def __init__(self): super().__init__() self.a = torch.nn.Linear(100, 100) self.b = torch.nn.Linear(100, 100) model = NoSharedmodel() A = torch.zeros((100, 100)) model.a.weight = nn.Parameter(A) model.b.weight = nn.Parameter(A[:10]) assert model.a.weight is not model.b .weight # A is not B in parameters, however, the underlying tensors are indeed shared ``` I haven't looked at that deeply when fintune occurs to see if the autograd starts to copy the tensors During `state_dict()` will give back `a` and `b` as shared tensors, yet the params don't have the same hash. If you want I could take a look at `accelerate` shared params function and see if this applies. There's a lot of weird things when playing super deeply with this. I discovered a lot of behavior with Deta from this PR. But the biggest reason, really is the optional `state_dict` whereas `accelerate` looks directly at the model. Within `from_pretrained` looking at the model is better in this case since what matters is the users' model rather than the state_dict coming from file (be it pytorch or safetensors) > > Apart from that, just rebasing on main should be necessary here. > > Note that I will rework the constants in future work to have one distinct key for the tied weights (as sometimes they are not tied and we are currently not warning the user if they are missing), but it's orthogonal to this PR. Great ! <|||||>Seeing the rebase, `hash` doesn't work on tensors unfortunately: ```python import torch A = torch.zeros((10, 10)) B = A[1] A.untyped_storage().data_ptr() == B.untyped_storage().data_ptr() hash(A) != hash(B) ```<|||||>> (which will become the default utlimately) Hurray !!! <|||||>Failing tests seem to be linked to newly release huggingface_hub==0.14.0 @sgugger Merge if you think it's OK, I'm going to not merge given this PR affects core modeling.
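As a minimal sketch of the idea discussed above, one way to detect truly shared tensors is to group `state_dict` entries by the pointer of their underlying storage rather than by `hash()` or parameter identity. The helper name and the example key names in the final comment are made up for illustration; this is not the code that was merged.

```python
from collections import defaultdict

import torch


def find_shared_state_dict_tensors(state_dict):
    # Group keys by (device, storage pointer): tensors that view the same
    # underlying buffer share a storage data_ptr even when `hash()` or `is`
    # would report them as different objects.
    groups = defaultdict(set)
    for name, tensor in state_dict.items():
        if isinstance(tensor, torch.Tensor) and tensor.device.type != "meta":
            groups[(tensor.device, tensor.untyped_storage().data_ptr())].add(name)
    return [names for names in groups.values() if len(names) > 1]


# For a model with tied input/output embeddings this would return something
# like [{"lm_head.weight", "transformer.wte.weight"}] (illustrative names only).
```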
transformers
22,655
closed
🌐 [i18n-KO] Translated `sequence_classification.mdx` to Korean
# What does this PR do? Translated the `tasks/sequence_classification.mdx` file of the documentation to Korean. - The file name is `sequence_classification.mdx`, but the document name is `text classification`. - Currently, it is being revised to consistent vocabulary. Thank you in advance for your review:) Part of https://github.com/huggingface/transformers/issues/20179 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <!-- 제좜 μ „ 체크리슀트둜, κ°€μ§œμ—°κ΅¬μ†Œλ§Œμ˜ μ²΄ν¬λ¦¬μŠ€νŠΈλ„ <details>둜 κ°μ‹Έμ„œ λ§Œλ“€μ–΄λ‘λ©΄ 더 쒋을 것 κ°™μ•„μš”. --> ## Who can review? <!-- κ°€μ§œμ—°κ΅¬μ†Œ νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
04-07-2023 15:09:25
04-07-2023 15:09:25
_The documentation is not available anymore as the PR was closed or merged._<|||||>May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
transformers
22,654
closed
Add Segment Anything Model (SAM)
# What does this PR do? Original repo: https://github.com/facebookresearch/segment-anything Segment Anything Model (SAM) is a recent model from Meta AI that makes it possible to predict image segmentation masks given an image and various inputs such as bounding boxes, 2D points or previous masks. It is also mentioned in the original paper that the model can take textual input, but this feature has not been released yet in the original repository. The release came with 3 weights, namely: - `sam_vit_b` - `sam_vit_h` - `sam_vit_l` Their main difference is the size of the vision encoder; the prompt encoder and mask decoder should stay the same. According to the paper, for each input, the model predicts 3 binary masks, corresponding to the region where the "object of interest" lives in the image. cc @sgugger @amyeroberts
04-07-2023 14:52:40
04-07-2023 14:52:40
_The documentation is not available anymore as the PR was closed or merged._<|||||>I have a few comments for the reviewers before starting the review. IMO we should not expose `SamPromptEncoder` and `SamMaskDecoder` inside the main init (contrary to other models such as Blip where we used to expose the text module and the vision module), mainly because these modules cannot be used as standalone modules. The MaskDecoder needs the image embeddings and the points/bounding box/masks embeddings to predict the masks, and the Prompt Encoder is a super small module (just two embedding layers). But both the image encoder and prompt encoder can be called through the `get_xxxx_embeddings` method of `SamForMaskGeneration`. One last point regarding the PromptEmbedding module: in the paper they mention that this module should also accept textual inputs. However, according to the authors, this has not been released yet. cc @sgugger @amyeroberts just FYI<|||||>There might be a few things here and there, but new pairs of eyes will help us fix them fast. Pinging @amyeroberts and @sgugger for a review!<|||||>Merging as I need it to update the pipeline based on reviews. Will address remaining comments in a follow-up PR
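As a rough illustration of the prompted-mask workflow described in this PR (an image plus 2D point prompts in, three candidate masks out), here is a hypothetical usage sketch. The `SamModel`/`SamProcessor` class names, the checkpoint id and the post-processing call are assumptions about how the final API could be exposed and may differ from what actually ships.

```python
# Hypothetical sketch: predict masks for a single 2D point prompt.
import requests
import torch
from PIL import Image
from transformers import SamModel, SamProcessor  # assumed final class names

processor = SamProcessor.from_pretrained("facebook/sam-vit-base")  # assumed checkpoint id
model = SamModel.from_pretrained("facebook/sam-vit-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
input_points = [[[450, 600]]]  # one (x, y) prompt for the single input image

inputs = processor(image, input_points=input_points, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# three candidate binary masks per prompt, each with an IoU score
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks, inputs["original_sizes"], inputs["reshaped_input_sizes"]
)
scores = outputs.iou_scores
```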
transformers
22,653
closed
Small nit,
# What does this PR do? Fixes #21986
04-07-2023 14:04:04
04-07-2023 14:04:04
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,652
closed
Fix `MegaModel` CI
# What does this PR do? Fix `MegaModel` CI (some tests are skipped too). See comments.
04-07-2023 13:30:46
04-07-2023 13:30:46
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,651
closed
May I ask when will release 4.28.0
### Feature request May I ask when 4.28.0 will be released? ### Motivation May I ask when 4.28.0 will be released? ### Your contribution May I ask when 4.28.0 will be released?
04-07-2023 13:27:10
04-07-2023 13:27:10
Next week Next week Next week Next week<|||||>I needed to use TF-BLIP offline, so I created whl. It may help for this week :D https://www.kaggle.com/datasets/ipythonx/tenp-transformer-4280<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,650
closed
Fix typo
# What does this PR do? Typo on trainer.py This should be modified from "forword" to "forward" Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
04-07-2023 09:23:26
04-07-2023 09:23:26
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for the fix!
transformers
22,649
closed
[OPT] Fix default attention mask size
# What does this PR do? Fixes #21685, should also help in adding the ONNX configuration in #17771
04-07-2023 09:02:52
04-07-2023 09:02:52
_The documentation is not available anymore as the PR was closed or merged._<|||||>I'm gonna add a test before merging
transformers
22,648
closed
🚨🚨🚨 [`Blip`] Refactor the Blip modeling file + test file 🚨🚨🚨
# What does this PR do? Removes the file `test_modeling_blip_text` as its content is totally duplicated inside `test_modeling_blip`, so that we avoid running these tests twice. This PR also refactors the modeling file of `blip` to have a single file for the whole architecture. I also realized that there is no need to have a `BlipTextPretrainedModel`, so I decided to remove that class for a cleaner implementation. Hence, this PR might introduce breaking changes for blip. cc @ydshieh
04-07-2023 08:41:01
04-07-2023 08:41:01
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22648). All of your documentation changes will be reflected on that endpoint.<|||||>Let's wait for our great @sgugger to express his opinion on whether we are allowed to change this. I think it's fine, however.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,647
open
Open AI GPT Model Implementation in Flax
### Model description https://huggingface.co/openai-gpt today supports TF and PyTorch but not Flax. I'd like to implement that support to enhance the current GPT offering by Hugging Face. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Given that the model is already implemented in the other two frameworks, I'll try to infer the model from there. Please feel free to provide additional resources that can help me wrap this up better and faster.
04-07-2023 05:04:23
04-07-2023 05:04:23
@sanchit-gandhi <|||||>@sanchit-gandhi @sgugger Are there any reservations around this? I have gone through GPT architecture and flax code of GPT2. I'm fairly certain this is implementable for exhaustiveness. OpenAI GPT model still sees almost a million downloads a month Please let me know. Would like to start with a draft PR than just rushing in<|||||>Hey @mayankagarwals! Super sorry for not getting back to you earlier here. Let me give you my two cents: the OpenAI GPT model is definitely still super popular amongst PyTorch users (as you say, ~1 mil downloads per month). What we tend to see with Flax users though is a preference for newer, larger models (e.g. OPT, Flan-T5). This is primarily because of how easy it is to run super large models in JAX with data and model parallelism. So whilst I think this PR would be cool for completeness, I think porting a newer, more flashy model might get the JAX/Flax community more excited! How does this sound?<|||||>No worries :) @sanchit-gandhi Yes, I had not gone ahead because of the same skepticism. Would you mind pointing me to what in your opinion might be a model worth digging into and think will benefit hugging face and the community? I have a good hold on text generation architecture so something aligned there would be better!<|||||>LLaMA could be cool! What I would suggest doing is starting from the Flax GPT-Neo model (since this is the Flax model most similar to LLaMa) and then adding the new bits in<|||||>@sanchit-gandhi I was also thinking of adding a Flax version of LLama (and also GPT-NeoX, maybe others) as some Flax practice. I couldn't find a guide on adding a new framework to an existing model, and I asked on the discord without much avail (but was directed to this issue). I'm familiar with the architectures having already ported them to other frameworks where I work. If you could point me in the right direction, I would be happy to port this for you! I wasn't sure if it is as simple as adding a new `modeling_flax_*` file or if there are more parts / some best practices to be aware of. Thanks πŸ€— <|||||>Hey @vvvm23! In this case, since we already have the PT model, the best thing to do would be to add a new modelling file for flax (`modeling_flax_llama.py`) which is initially copied from the Flax GPT Neo modelling code. You can then start making changes to the Flax code to adapt it to LLama. The reason that we copy from Flax GPT Neo is that it contains optimised code for the attention layer which we should try and re-use for Flax LLama. You'll then need to make sure that the weight names match and that you have equivalence between PyTorch LLama and Flax LLama. To do this, I would recommend creating a 'dummy' version of the PyTorch LLama model: ```python from transformers import LlamaConfig, LlamaForCausalLM config = LlamaConfig(hidden_size=16, intermediate_size=24, max_position_embeddings=128, num_attention_heads=2, num_hidden_layers=2) model = LlamaForCausalLM(config) model.save_pretrained("./path/to/save") ``` And then for your test script, load this same model in PyTorch, then Flax (pass `from_pt=True` in the `from_pretrained` call), and verify with random inputs that you get the same logits out when you do a forward pass (example here https://github.com/huggingface/transformers/issues/15476#issue-1121800731) You can then focus on the tests and converting the actual model weights as required. Feel free to open a PR and tag me - more than happy to help with the integration here! 
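To make the equivalence check described above concrete, a small parity-test sketch could look like the following; it assumes the port exposes a `FlaxLlamaForCausalLM` class mirroring the PyTorch one (that class does not exist until the PR is written).

```python
# Hypothetical parity check between PyTorch LLaMA and the to-be-written Flax port.
import numpy as np
import torch
from transformers import FlaxLlamaForCausalLM, LlamaForCausalLM  # Flax class assumed

pt_model = LlamaForCausalLM.from_pretrained("./path/to/save")
fx_model = FlaxLlamaForCausalLM.from_pretrained("./path/to/save", from_pt=True)

rng = np.random.default_rng(0)
input_ids = rng.integers(0, pt_model.config.vocab_size, size=(2, 16), dtype=np.int64)

with torch.no_grad():
    pt_logits = pt_model(torch.from_numpy(input_ids)).logits.numpy()
fx_logits = np.asarray(fx_model(input_ids).logits)

# the two implementations should agree up to small numerical noise
np.testing.assert_allclose(pt_logits, fx_logits, atol=4e-2)
```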
<|||||>Thanks @sanchit-gandhi that was very comprehensive! I'll let you know how I get on. :hugs: <|||||>Got a bit caught up with real life stuff, but I will be working on this more intensively from Monday, aiming to finish something by end of week.<|||||>@sanchit-gandhi I made a draft PR of my current progress, see #24587. Sorry, I haven't made the full model, been very busy πŸ˜“
transformers
22,646
closed
T5Tokenizer, TFT5ForConditionalGeneration Graph execution error using tfa.metrics.CohenKappa
### System Info - `transformers` version: 4.24.0 - Platform: macOS-12.5-arm64-arm-64bit - Python version: 3.10.9 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1 (False) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> Hello Huggingface Team, I am currently using the T5 modules: T5Tokenizer, TFT5ForConditionalGeneration with the metric tfa.metrics.CohenKappa from Tensorflow Add-ons library. I have fine-tuned a model to achieve multi-class classification which works when I use a different metric, for example, 'accuracy'. The issue is when I exchange the metric to use CohenKappa I get the error found below. If more information is required please let me know. Thank you in advance! Error: > InvalidArgumentError Traceback (most recent call last) > Cell In[80], line 15 > 6 #model.fit(tokenized_train_data, validation_data=val_dataset, epochs=num_epochs) > 8 model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint( > 9 filepath='/', > 10 save_weights_only=True, > 11 monitor='val_accuracy', > 12 mode='max', > 13 save_best_only=True) > ---> 15 history = t5_model.fit(tokenized_train_data, > 16 validation_data=tokenized_test_data, > 17 callbacks=None,#[model_checkpoint_callback], > 18 batch_size=batch_size, > 19 epochs=num_epochs) > > File ~/anaconda3/lib/python3.10/site-packages/keras/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs) > 67 filtered_tb = _process_traceback_frames(e.__traceback__) > 68 # To get the full stack trace, call: > 69 # `tf.debugging.disable_traceback_filtering()` > ---> 70 raise e.with_traceback(filtered_tb) from None > 71 finally: > 72 del filtered_tb > > File ~/anaconda3/lib/python3.10/site-packages/tensorflow/python/eager/execute.py:52, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name) > 50 try: > 51 ctx.ensure_initialized() > ---> 52 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, > 53 inputs, attrs, num_outputs) > 54 except core._NotOkStatusException as e: > 55 if name is not None: > > InvalidArgumentError: Graph execution error: > > Detected at node 'confusion_matrix/assert_less/Assert/AssertGuard/Assert' defined at (most recent call last): > File "/Users/pj/anaconda3/lib/python3.10/runpy.py", line 196, in _run_module_as_main > return _run_code(code, main_globals, None, > File "/Users/pj/anaconda3/lib/python3.10/runpy.py", line 86, in _run_code > exec(code, run_globals) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel_launcher.py", line 17, in <module> > app.launch_new_instance() > File "/Users/pj/anaconda3/lib/python3.10/site-packages/traitlets/config/application.py", line 992, in launch_instance > app.start() > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelapp.py", line 711, in start > self.io_loop.start() > File "/Users/pj/anaconda3/lib/python3.10/site-packages/tornado/platform/asyncio.py", line 199, in start > self.asyncio_loop.run_forever() > File "/Users/pj/anaconda3/lib/python3.10/asyncio/base_events.py", line 603, in run_forever > self._run_once() > File "/Users/pj/anaconda3/lib/python3.10/asyncio/base_events.py", line 1906, in _run_once > handle._run() > File "/Users/pj/anaconda3/lib/python3.10/asyncio/events.py", line 80, in _run > self._context.run(self._callback, *self._args) > File 
"/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 510, in dispatch_queue > await self.process_one() > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 499, in process_one > await dispatch(*args) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 406, in dispatch_shell > await result > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 729, in execute_request > reply_content = await reply_content > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/ipkernel.py", line 411, in do_execute > res = shell.run_cell( > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/zmqshell.py", line 531, in run_cell > return super().run_cell(*args, **kwargs) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 2961, in run_cell > result = self._run_cell( > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3016, in _run_cell > result = runner(coro) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/async_helpers.py", line 129, in _pseudo_sync_runner > coro.send(None) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3221, in run_cell_async > has_raised = await self.run_ast_nodes(code_ast.body, cell_name, > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3400, in run_ast_nodes > if await self.run_code(code, result, async_=asy): > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3460, in run_code > exec(code_obj, self.user_global_ns, self.user_ns) > File "/var/folders/wv/10kjqk217c5039dg4pbqggh00000gn/T/ipykernel_39051/3646968355.py", line 15, in <module> > history = t5_model.fit(tokenized_train_data, > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler > return fn(*args, **kwargs) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/training.py", line 1685, in fit > tmp_logs = self.train_function(iterator) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/training.py", line 1284, in train_function > return step_function(self, iterator) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/training.py", line 1268, in step_function > outputs = model.distribute_strategy.run(run_step, args=(data,)) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/training.py", line 1249, in run_step > outputs = model.train_step(data) > File "/var/folders/wv/10kjqk217c5039dg4pbqggh00000gn/T/ipykernel_39051/2458810848.py", line 36, in train_step > self.compiled_metrics.update_state(labels, tf.argmax(outputs.logits, -1)) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/compile_utils.py", line 605, in update_state > metric_obj.update_state(y_t, y_p, sample_weight=mask) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/utils/metrics_utils.py", line 77, in decorated > update_op = update_state_fn(*args, **kwargs) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/metrics/base_metric.py", line 140, in update_state_fn > return ag_update_state(*args, **kwargs) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/tensorflow_addons/metrics/cohens_kappa.py", line 150, in update_state > return self._update(y_true, y_pred, sample_weight) > File 
"/Users/pj/anaconda3/lib/python3.10/site-packages/tensorflow_addons/metrics/cohens_kappa.py", line 165, in _update_multi_class_model > return self._update_confusion_matrix(y_true, y_pred, sample_weight) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/tensorflow_addons/metrics/cohens_kappa.py", line 193, in _update_confusion_matrix > new_conf_mtx = tf.math.confusion_matrix( > Node: 'confusion_matrix/assert_less/Assert/AssertGuard/Assert' > Detected at node 'confusion_matrix/assert_less/Assert/AssertGuard/Assert' defined at (most recent call last): > File "/Users/pj/anaconda3/lib/python3.10/runpy.py", line 196, in _run_module_as_main > return _run_code(code, main_globals, None, > File "/Users/pj/anaconda3/lib/python3.10/runpy.py", line 86, in _run_code > exec(code, run_globals) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel_launcher.py", line 17, in <module> > app.launch_new_instance() > File "/Users/pj/anaconda3/lib/python3.10/site-packages/traitlets/config/application.py", line 992, in launch_instance > app.start() > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelapp.py", line 711, in start > self.io_loop.start() > File "/Users/pj/anaconda3/lib/python3.10/site-packages/tornado/platform/asyncio.py", line 199, in start > self.asyncio_loop.run_forever() > File "/Users/pj/anaconda3/lib/python3.10/asyncio/base_events.py", line 603, in run_forever > self._run_once() > File "/Users/pj/anaconda3/lib/python3.10/asyncio/base_events.py", line 1906, in _run_once > handle._run() > File "/Users/pj/anaconda3/lib/python3.10/asyncio/events.py", line 80, in _run > self._context.run(self._callback, *self._args) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 510, in dispatch_queue > await self.process_one() > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 499, in process_one > await dispatch(*args) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 406, in dispatch_shell > await result > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 729, in execute_request > reply_content = await reply_content > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/ipkernel.py", line 411, in do_execute > res = shell.run_cell( > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/zmqshell.py", line 531, in run_cell > return super().run_cell(*args, **kwargs) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 2961, in run_cell > result = self._run_cell( > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3016, in _run_cell > result = runner(coro) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/async_helpers.py", line 129, in _pseudo_sync_runner > coro.send(None) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3221, in run_cell_async > has_raised = await self.run_ast_nodes(code_ast.body, cell_name, > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3400, in run_ast_nodes > if await self.run_code(code, result, async_=asy): > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3460, in run_code > exec(code_obj, self.user_global_ns, self.user_ns) > File "/var/folders/wv/10kjqk217c5039dg4pbqggh00000gn/T/ipykernel_39051/3646968355.py", line 15, in <module> > 
history = t5_model.fit(tokenized_train_data, > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler > return fn(*args, **kwargs) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/training.py", line 1685, in fit > tmp_logs = self.train_function(iterator) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/training.py", line 1284, in train_function > return step_function(self, iterator) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/training.py", line 1268, in step_function > outputs = model.distribute_strategy.run(run_step, args=(data,)) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/training.py", line 1249, in run_step > outputs = model.train_step(data) > File "/var/folders/wv/10kjqk217c5039dg4pbqggh00000gn/T/ipykernel_39051/2458810848.py", line 36, in train_step > self.compiled_metrics.update_state(labels, tf.argmax(outputs.logits, -1)) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/compile_utils.py", line 605, in update_state > metric_obj.update_state(y_t, y_p, sample_weight=mask) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/utils/metrics_utils.py", line 77, in decorated > update_op = update_state_fn(*args, **kwargs) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/metrics/base_metric.py", line 140, in update_state_fn > return ag_update_state(*args, **kwargs) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/tensorflow_addons/metrics/cohens_kappa.py", line 150, in update_state > return self._update(y_true, y_pred, sample_weight) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/tensorflow_addons/metrics/cohens_kappa.py", line 165, in _update_multi_class_model > return self._update_confusion_matrix(y_true, y_pred, sample_weight) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/tensorflow_addons/metrics/cohens_kappa.py", line 193, in _update_confusion_matrix > new_conf_mtx = tf.math.confusion_matrix( > Node: 'confusion_matrix/assert_less/Assert/AssertGuard/Assert' > 2 root error(s) found. > (0) INVALID_ARGUMENT: assertion failed: [`labels` out of bound] [Condition x < y did not hold element-wise:] [x (confusion_matrix/control_dependency:0) = ] [[220 1][209...]...] [y (confusion_matrix/Cast:0) = ] [4] > [[{{node confusion_matrix/assert_less/Assert/AssertGuard/Assert}}]] > [[gradient_tape/tft5_for_conditional_generation/decoder/block_._5/layer_._0/SelfAttention/mul_1/_596]] > (1) INVALID_ARGUMENT: assertion failed: [`labels` out of bound] [Condition x < y did not hold element-wise:] [x (confusion_matrix/control_dependency:0) = ] [[220 1][209...]...] [y (confusion_matrix/Cast:0) = ] [4] > [[{{node confusion_matrix/assert_less/Assert/AssertGuard/Assert}}]] > 0 successful operations. > 0 derived errors ignored. [Op:__inference_train_function_393119] ### Who can help? @Rocketknight1 @gante @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [X] My own task or dataset (give details below) ### Reproduction ``` def train_step(self, inputs): input_ids = inputs['input_ids'] attention_mask = inputs['attention_mask'] labels = inputs['labels'] labels_mask = inputs['labels_mask'] with tf.GradientTape() as tape: outputs = self(input_ids=input_ids, attention_mask=attention_mask, labels=labels, decoder_attention_mask=labels_mask, training=True ) loss = self.compiled_loss(labels, outputs.logits, regularization_losses=self.losses) self.optimizer.minimize(loss, self.trainable_variables, tape=tape) self.compiled_metrics.update_state(labels, outputs.logits) ## error happens here return_metrics = {} for metric in self.metrics: result = metric.result() if isinstance(result, dict): return_metrics.update(result) else: return_metrics[metric.name] = result if "loss" in return_metrics and "loss_loss" in return_metrics: del return_metrics["loss_loss"] return return_metrics def test_step(self, inputs): input_ids = inputs['input_ids'] attention_mask = inputs['attention_mask'] labels = inputs['labels'] labels_mask = inputs['labels_mask'] outputs = self(input_ids=input_ids, attention_mask=attention_mask, labels=labels, decoder_attention_mask=labels_mask, training=False ) if not self.loss: self.loss_tracker.update_state(y_pred.loss) return_metrics = {"loss": self.loss_tracker.result()} else: return_metrics = {} self.compiled_loss(labels, outputs.logits, regularization_losses=self.losses) self.compiled_metrics.update_state(labels, outputs.logits) for metric in self.metrics: result = metric.result() if isinstance(result, dict): return_metrics.update(result) else: return_metrics[metric.name] = result if "loss" in return_metrics and "loss_loss" in return_metrics: del return_metrics["loss_loss"] return return_metrics import functools t5_model.train_step = functools.partial(train_step, t5_model) t5_model.test_step = functools.partial(test_step, t5_model) learning_rate = 0.00005 batch_size = 8 num_epochs = 2 optimizer = tf.keras.optimizers.legacy.Adam(learning_rate=learning_rate) loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) t5_model.compile(optimizer=optimizer, loss=loss_fn, metrics=[tfa.metrics.CohenKappa(num_classes=4, weightage='quadratic', sparse_labels=True)]) history = t5_model.fit(tokenized_train_data, validation_data=tokenized_test_data, callbacks=None, batch_size=batch_size, epochs=num_epochs) ``` ### Expected behavior The model should return results instead of producing a crash.
04-07-2023 04:44:59
04-07-2023 04:44:59
Looks like the metric does not like your labels. Are you sure it can be used in that case? cc @Rocketknight1 <|||||>Thank you @sgugger for your response. I was hoping it would work, I ran this same metric through a bert model and using its own tokenizers and I had no issues with it. Is there a way to tweak the fit function to insert the labels as is instead of a tensor?<|||||>I believe the issue is caused by your combination of model and metric. `TFT5ForConditionalGeneration` is a model that outputs text, where the distribution over output tokens is conditioned on some input text. Tasks that are suitable for conditional generation models include summarization and translation. When using any model that generates text, the number of output classes is equal to the vocabulary size of the model - the model produces a distribution over all possible tokens at each position. However, your metric uses `num_classes=4`. This results in an error because the vocabulary for T5 is thousands of tokens, and so label values can be much higher than 4. If this code worked with a BERT model, this is because BERT models mostly do not generate text. If you used e.g. `TFBertForSequenceClassification` or `TFBertForTokenClassification`, then the number of classes would be much lower. That is because these models predict categories for each token or for the entire sequence, and the number of categories is set by the `num_labels` argument to the model's `from_pretrained` method.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
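To illustrate the explanation above with a small, untested sketch: `CohenKappa(num_classes=4)` only makes sense when the model's label space really has four classes, for example a sequence-classification head, rather than a text-generating T5 whose "labels" are token ids spanning the whole vocabulary.

```python
# Untested sketch: pair the metric with a 4-class classification head instead.
import tensorflow as tf
import tensorflow_addons as tfa
from transformers import TFBertForSequenceClassification

model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=4)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tfa.metrics.CohenKappa(num_classes=4, weightage="quadratic", sparse_labels=True)],
)
# Labels fed to model.fit(...) are now integers in [0, 3], so the metric's
# confusion-matrix bounds check ("`labels` out of bound") no longer fires.
```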
transformers
22,645
closed
Implement QFormer for pretrain
### Feature request In [BLIP-2](https://arxiv.org/pdf/2301.12597.pdf), there is a pretraining stage (or stage 1) of the QFormer. ![image](https://user-images.githubusercontent.com/38489776/230536840-5b466474-0e29-4029-976b-68c966b2b499.png) Implementation of the QFormer in this stage is requested. ### Motivation In [HuggingFace's source code of BLIP-2](https://github.com/huggingface/transformers/blob/v4.27.2/src/transformers/models/blip_2/modeling_blip_2.py#L1019), I see no implementation of the text inputs, the image-text contrastive loss, the image-grounded text generation loss, or the image-text matching loss for pretraining. Currently, the source code only provides for vision-language generative learning (stage 2). Therefore, an implementation would be very helpful for people who are interested in stage 1 of the QFormer (like me). ### Your contribution Unfortunately, I don't think there is a way that I could help.
04-07-2023 03:51:37
04-07-2023 03:51:37
@NielsRogge Gentle ping because I saw your name in the docs<|||||>cc @younesbelkada <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>As the issue is reopened, is there any plan to implement the loss for the QFormer?<|||||>Hi @jianantian, I didn't have time to have a look unfortunately. If you want to try your hand at it, feel free to open a PR and we'll guide you from there!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
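For anyone who wants to attempt the stage-1 PR suggested above, here is a rough, hypothetical sketch of just the image-text contrastive (ITC) objective, following the description in the BLIP-2 paper (best-matching query token per image, symmetric cross-entropy over in-batch negatives). It is not taken from the repository and leaves out the ITM and ITG heads.

```python
# Hypothetical ITC loss sketch for Q-Former stage-1 pretraining (not library code).
import torch
import torch.nn.functional as F


def image_text_contrastive_loss(query_embeds, text_embed, temperature=0.07):
    # query_embeds: (batch, num_query_tokens, dim) projected Q-Former query outputs
    # text_embed:   (batch, dim) projected text [CLS] embedding
    query_embeds = F.normalize(query_embeds, dim=-1)
    text_embed = F.normalize(text_embed, dim=-1)

    # similarity of every text against every image's query tokens; keep the
    # best-matching query token, as described in the paper
    sim = torch.einsum("id,jqd->ijq", text_embed, query_embeds)
    sim_t2i = sim.max(dim=-1).values  # (batch_text, batch_image)
    sim_i2t = sim_t2i.t()

    targets = torch.arange(text_embed.size(0), device=text_embed.device)
    loss_t2i = F.cross_entropy(sim_t2i / temperature, targets)
    loss_i2t = F.cross_entropy(sim_i2t / temperature, targets)
    return (loss_t2i + loss_i2t) / 2
```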
transformers
22,644
closed
Add support for Ascend NPU
# What does this PR do? This PR enables users to leverage the Ascend NPU for training and inference of πŸ€— Transformers models. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. #22600 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
04-07-2023 03:02:36
04-07-2023 03:02:36
For example, you can run the official question answering task using Ascend NPU with below command: ``` python examples/pytorch/question-answering/run_qa.py --model_name_or_path bert-base-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --device_id 5 \ // The specific device to be used for single card training on Ascend NPUs. --per_device_train_batch_size 24 \ --num_train_epochs 2 \ --learning_rate 3e-5 \ --max_seq_length 384 \ --doc_stride 128 \ --save_steps 5000 \ --fp16_opt_level O2 \ --half_precision_backend apex \ --dataloader_drop_last \ --overwrite_output_dir \ --output_dir ./output \ ``` Below are the output logs: ``` 04/07/2023 01:53:22 - WARNING - __main__ - Process rank: -1, device: npu:5, n_gpu: 1distributed training: False, 16-bits training: True 04/07/2023 01:53:22 - INFO - __main__ - Training/evaluation parameters TrainingArguments( _n_gpu=1, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=True, dataloader_num_workers=0, dataloader_pin_memory=True, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, device_id=5, disable_tqdm=False, do_eval=True, do_predict=False, do_train=True, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O2, fsdp=[], fsdp_config={'fsdp_min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=False, greater_is_better=None, group_by_length=False, half_precision_backend=apex, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=<HUB_TOKEN>, ignore_data_skip=False, include_inputs_for_metrics=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=3e-05, length_column_name=length, load_best_model_at_end=False, local_rank=-1, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=./output/runs/Apr07_01-53-21_localhost, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=500, logging_strategy=steps, lr_scheduler_type=linear, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, no_cuda=False, num_train_epochs=2.0, optim=adamw_hf, optim_args=None, output_dir=./output, overwrite_output_dir=True, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=24, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=<PUSH_TO_HUB_TOKEN>, ray_scope=last, remove_unused_columns=True, report_to=[], resume_from_checkpoint=None, run_name=./output, save_on_each_node=False, save_safetensors=False, save_steps=5000, save_strategy=steps, save_total_limit=None, seed=42, sharded_ddp=[], skip_memory_metrics=True, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=0, weight_decay=0.0, xpu_backend=None, ) 04/07/2023 01:53:24 - INFO - datasets.builder - No config specified, defaulting to the single config: squad/plain_text 04/07/2023 01:53:24 - INFO - datasets.info - Loading Dataset Infos from 
/root/.cache/huggingface/modules/datasets_modules/datasets/squad/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453 04/07/2023 01:53:24 - INFO - datasets.builder - Overwrite dataset info from restored data version. 04/07/2023 01:53:24 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453 04/07/2023 01:53:24 - WARNING - datasets.builder - Found cached dataset squad (/root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453) 04/07/2023 01:53:24 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453 0%| | 0/2 [00:00<?, ?it/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 196.27it/s] [INFO|configuration_utils.py:668] 2023-04-07 01:53:24,715 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--bert-base-uncased/snapshots/0a6aa9128b6194f4f3c4db429b6cb4891cdb421b/config.json [INFO|configuration_utils.py:720] 2023-04-07 01:53:24,723 >> Model config BertConfig { "_name_or_path": "bert-base-uncased", "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 0, "position_embedding_type": "absolute", "transformers_version": "4.28.0.dev0", "type_vocab_size": 2, "use_cache": true, "vocab_size": 30522 } [INFO|configuration_utils.py:668] 2023-04-07 01:53:25,029 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--bert-base-uncased/snapshots/0a6aa9128b6194f4f3c4db429b6cb4891cdb421b/config.json [INFO|configuration_utils.py:720] 2023-04-07 01:53:25,033 >> Model config BertConfig { "_name_or_path": "bert-base-uncased", "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 0, "position_embedding_type": "absolute", "transformers_version": "4.28.0.dev0", "type_vocab_size": 2, "use_cache": true, "vocab_size": 30522 } [INFO|tokenization_utils_base.py:1809] 2023-04-07 01:53:25,035 >> loading file vocab.txt from cache at /root/.cache/huggingface/hub/models--bert-base-uncased/snapshots/0a6aa9128b6194f4f3c4db429b6cb4891cdb421b/vocab.txt [INFO|tokenization_utils_base.py:1809] 2023-04-07 01:53:25,035 >> loading file tokenizer.json from cache at /root/.cache/huggingface/hub/models--bert-base-uncased/snapshots/0a6aa9128b6194f4f3c4db429b6cb4891cdb421b/tokenizer.json [INFO|tokenization_utils_base.py:1809] 2023-04-07 01:53:25,035 >> loading file added_tokens.json from cache at None [INFO|tokenization_utils_base.py:1809] 2023-04-07 01:53:25,035 >> loading file special_tokens_map.json from cache at None [INFO|tokenization_utils_base.py:1809] 2023-04-07 01:53:25,035 >> loading file tokenizer_config.json from cache at 
/root/.cache/huggingface/hub/models--bert-base-uncased/snapshots/0a6aa9128b6194f4f3c4db429b6cb4891cdb421b/tokenizer_config.json [INFO|configuration_utils.py:668] 2023-04-07 01:53:25,036 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--bert-base-uncased/snapshots/0a6aa9128b6194f4f3c4db429b6cb4891cdb421b/config.json [INFO|configuration_utils.py:720] 2023-04-07 01:53:25,037 >> Model config BertConfig { "_name_or_path": "bert-base-uncased", "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 0, "position_embedding_type": "absolute", "transformers_version": "4.28.0.dev0", "type_vocab_size": 2, "use_cache": true, "vocab_size": 30522 } [INFO|modeling_utils.py:2478] 2023-04-07 01:53:25,108 >> loading weights file pytorch_model.bin from cache at /root/.cache/huggingface/hub/models--bert-base-uncased/snapshots/0a6aa9128b6194f4f3c4db429b6cb4891cdb421b/pytorch_model.bin [WARNING|modeling_utils.py:3118] 2023-04-07 01:53:27,033 >> Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.decoder.weight', 'cls.seq_relationship.bias'] - This IS expected if you are initializing BertForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BertForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). [WARNING|modeling_utils.py:3130] 2023-04-07 01:53:27,034 >> Some weights of BertForQuestionAnswering were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['qa_outputs.bias', 'qa_outputs.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 04/07/2023 01:53:27 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453/cache-d1f3bae3544867f1.arrow 04/07/2023 01:53:27 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453/cache-7bcc744f960ea416.arrow [INFO|trainer.py:621] 2023-04-07 01:53:28,908 >> Using apex half precision backend [INFO|trainer.py:1766] 2023-04-07 01:53:30,199 >> ***** Running training ***** [INFO|trainer.py:1767] 2023-04-07 01:53:30,199 >> Num examples = 88,524 [INFO|trainer.py:1768] 2023-04-07 01:53:30,199 >> Num Epochs = 2 [INFO|trainer.py:1769] 2023-04-07 01:53:30,200 >> Instantaneous batch size per device = 24 [INFO|trainer.py:1770] 2023-04-07 01:53:30,200 >> Total train batch size (w. 
parallel, distributed & accumulation) = 24 [INFO|trainer.py:1771] 2023-04-07 01:53:30,200 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1772] 2023-04-07 01:53:30,200 >> Total optimization steps = 7,376 [INFO|trainer.py:1773] 2023-04-07 01:53:30,203 >> Number of trainable parameters = 108,893,186 Selected optimization level O2: FP16 training with FP32 batchnorm and FP32 master weights. Defaults for this optimization level are: enabled : True opt_level : O2 cast_model_type : torch.float16 patch_torch_functions : False keep_batchnorm_fp32 : True master_weights : True loss_scale : dynamic combine_grad : None combine_ddp : None ddp_replica_count : 4 check_combined_tensors : None user_cast_preferred : None Processing user overrides (additional kwargs that are not None)... After processing overrides, optimization options are: enabled : True opt_level : O2 cast_model_type : torch.float16 patch_torch_functions : False keep_batchnorm_fp32 : True master_weights : True loss_scale : dynamic combine_grad : None combine_ddp : None ddp_replica_count : 4 check_combined_tensors : None user_cast_preferred : None 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 7376/7376 [37:49<00:00, 3.25it/s] [INFO|trainer.py:2865] 2023-04-07 02:31:19,790 >> Saving model checkpoint to ./output [INFO|configuration_utils.py:457] 2023-04-07 02:31:19,793 >> Configuration saved in ./output/config.json [INFO|modeling_utils.py:1839] 2023-04-07 02:31:21,077 >> Model weights saved in ./output/pytorch_model.bin [INFO|tokenization_utils_base.py:2170] 2023-04-07 02:31:21,079 >> tokenizer config file saved in ./output/tokenizer_config.json [INFO|tokenization_utils_base.py:2177] 2023-04-07 02:31:21,080 >> Special tokens file saved in ./output/special_tokens_map.json Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 32768.0 Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0 {'loss': 2.2345, 'learning_rate': 2.7966377440347073e-05, 'epoch': 0.14} Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0 {'loss': 1.3912, 'learning_rate': 2.5932754880694143e-05, 'epoch': 0.27} Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4096.0 {'loss': 1.2321, 'learning_rate': 2.3899132321041215e-05, 'epoch': 0.41} {'loss': 1.1749, 'learning_rate': 2.1865509761388288e-05, 'epoch': 0.54} {'loss': 1.0975, 'learning_rate': 1.983188720173536e-05, 'epoch': 0.68} {'loss': 1.0988, 'learning_rate': 1.779826464208243e-05, 'epoch': 0.81} {'loss': 1.0514, 'learning_rate': 1.5764642082429502e-05, 'epoch': 0.95} {'loss': 0.8971, 'learning_rate': 1.3731019522776571e-05, 'epoch': 1.08} {'loss': 0.7757, 'learning_rate': 1.1697396963123646e-05, 'epoch': 1.22} {'loss': 0.7823, 'learning_rate': 9.663774403470717e-06, 'epoch': 1.36} {'loss': 0.7851, 'learning_rate': 7.630151843817788e-06, 'epoch': 1.49} {'loss': 0.7617, 'learning_rate': 5.5965292841648585e-06, 'epoch': 1.63} Gradient overflow. 
Skipping step, loss scaler 0 reducing loss scale to 8192.0 {'loss': 0.7459, 'learning_rate': 3.5629067245119307e-06, 'epoch': 1.76} {'loss': 0.7529, 'learning_rate': 1.529284164859002e-06, 'epoch': 1.9} {'train_runtime': 2269.5834, 'train_samples_per_second': 78.009, 'train_steps_per_second': 3.25, 'train_loss': 1.0397178831948635, 'epoch': 2.0} ***** train metrics ***** epoch = 2.0 train_loss = 1.0397 train_runtime = 0:37:49.58 train_samples = 88524 train_samples_per_second = 78.009 train_steps_per_second = 3.25 04/07/2023 02:31:21 - INFO - __main__ - *** Evaluate *** [INFO|trainer.py:763] 2023-04-07 02:31:21,152 >> The following columns in the evaluation set don't have a corresponding argument in `BertForQuestionAnswering.forward` and have been ignored: example_id, offset_mapping. If example_id, offset_mapping are not expected by `BertForQuestionAnswering.forward`, you can safely ignore this message. [INFO|trainer.py:3126] 2023-04-07 02:31:21,156 >> ***** Running Evaluation ***** [INFO|trainer.py:3128] 2023-04-07 02:31:21,157 >> Num examples = 10784 [INFO|trainer.py:3131] 2023-04-07 02:31:21,157 >> Batch size = 8 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1348/1348 [01:07<00:00, 26.69it/s]04/07/2023 02:32:40 - INFO - utils_qa - Post-processing 10570 example predictions split into 10784 features. 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10570/10570 [01:09<00:00, 152.04it/s]04/07/2023 02:33:50 - INFO - utils_qa - Saving predictions to ./output/eval_predictions.json. 04/07/2023 02:33:50 - INFO - utils_qa - Saving nbest_preds to ./output/eval_nbest_predictions.json. 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1348/1348 [02:40<00:00, 8.37it/s] [INFO|modelcard.py:451] 2023-04-07 02:34:02,898 >> Dropping the following result as it does not have all the necessary fields: {'task': {'name': 'Question Answering', 'type': 'question-answering'}, 'dataset': {'name': 'squad', 'type': 'squad', 'config': 'plain_text', 'split': 'validation', 'args': 'plain_text'}} ***** eval metrics ***** epoch = 2.0 eval_exact_match = 80.0946 eval_f1 = 87.853 eval_runtime = 0:00:51.51 eval_samples = 10784 eval_samples_per_second = 209.317 eval_steps_per_second = 26.165 ```<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22644). All of your documentation changes will be reflected on that endpoint.<|||||>The test case needs to be executed on the Ascend NPU, and the results are shown below: ![image](https://user-images.githubusercontent.com/28150734/230536813-d8457a5d-c73c-4165-9344-9433bc154811.png) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@sgugger now if i want to use Ascend NPU and accelerate to complete distributed training. how can i start and is there any examples for reference<|||||>The PR hasn't been moved there to add support for NPUs, so for now it's not possible.<|||||>@sgugger Currently, I can use transformers on npu based on this PR, but I find that accelerate cannot be used. What should I do if I want to use accelerate on npu? Is there any reference?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,643
closed
use __func__ to check can_generate
# What does this PR do? Fixes # (the issue description is as follows) When using `model.generate()` it calls `self._validate_model_class()` to check if model can do generate. If we use `str(self.prepare_inputs_for_generation)` it will dump the entire model architecture which consumes resource and is not necessary. Using `str(self.prepare_inputs_for_generation.__func__)` is a better equivalent replacement. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
04-07-2023 02:43:58
04-07-2023 02:43:58
_The documentation is not available anymore as the PR was closed or merged._<|||||>Could you share a script that shows any improvement? The model is a reference type, so what will be shared is a pointer to that class. cc @gante <|||||>```python
import transformers
import time

model_name = 'facebook/bart-large-cnn'
model = transformers.AutoModelForSeq2SeqLM.from_pretrained(model_name)

start = time.time()
#print(model.can_generate())
print(model._validate_model_class())
print('time1:', time.time() - start)

start = time.time()
if "GenerationMixin" in str(model.prepare_inputs_for_generation.__func__):
    pass
print('time2:', time.time() - start)

start = time.time()
if "GenerationMixin" in str(model.prepare_inputs_for_generation):
    pass
print('time3:', time.time() - start)

"""
Result from my side:
True
time1: 0.001619100570678711
time2: 5.245208740234375e-06
time3: 0.0012629032135009766
"""

from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSeq2SeqLM

model_name = 'Intel/bart-large-cnn-int8-dynamic'
model = IncQuantizedModelForSeq2SeqLM.from_pretrained(model_name)

start = time.time()
print(model._validate_model_class())
print('time1:', time.time() - start)

start = time.time()
if "GenerationMixin" in str(model.prepare_inputs_for_generation.__func__):
    pass
print('time2:', time.time() - start)

start = time.time()
if "GenerationMixin" in str(model.prepare_inputs_for_generation):
    pass
print('time3:', time.time() - start)

"""
Result from my side:
True
time1: 0.5971765518188477
time2: 7.3909759521484375e-06
time3: 0.5961868762969971
"""
```
transformers
22,642
closed
New LlamaTokenizer compat issues
### System Info Cuda 11.8 Latest git transformers tokenizers 0.13.3 pytorch 2.0 python 3.9 ### Reproduction Run trained HF llama models based on https://huggingface.co/decapoda-research/llama-7b-hf without error. Using the latest head transformers+tokenizers I am seeing: 1. Very long loading time for LlamaTokenizer, with the CPU pegged at 100% 2. Incorrect tokenization of trained Llama (7B tested) models that have the Lora adapter applied under old transformer/tokenizer code (from last week). I see that there are several commits regarding LlamaTokenizer, but what is the correct usage now, and is this compatible with old models trained on the "old" LlamaTokenizer? For example, the ```https://huggingface.co/decapoda-research/llama-7b-hf``` tokenizer supposedly has bad default values vs. the original, and the new commits are supposed to resolve this. Does this mean we need to regenerate a HF-compatible 7B from the original META llama using the latest transformers and retrain all over again? I just need clarification on how I can move forward using the latest transformer/tokenizer code.
04-07-2023 02:24:29
04-07-2023 02:24:29
You shouldn't use this checkpoint. They are ignoring all PRs to update the weights/configs/tokenizers to the latest fixes in Transformers (the repo was generated in the middle of the PR adding Llama so was never compatible with Hugging Face). You should indeed re-run the conversion script to be up to date, or use other checkpoints. For instance, I've found [this one](https://huggingface.co/huggyllama/llama-7b) to be fully compatible with Transformers. Since that repo has never worked with Transformers, I cannot speak for models fine-tuned using it sadly. The tokenizer implementation has been fixed to match the original tokenizer of the researchers.<|||||>@sgugger The new LlamaTokenizerFast (the current default) is taking a tremendous amount of time to load versus the old tokenizer. The CPU is pegged at 100% and even on a 5900X it takes around 90s to load. Is this normal?<|||||>It won't happen if you have the fast tokenizer file. The repo I linked in my comment above has it. It's because the conversion from slow to fast tokenizer is very slow for LLaMA. As a workaround, you can also use `LlamaTokenizer` instead of `AutoTokenizer` (which will force using the slow tokenizer).<|||||>Sorry to comment on an old closed issue, but this pops out as the first result when ddg'ing for "LlamaTokenizer extremely slow", so there are probably many people who will get here. If I understand correctly, this happens when the tokenizer is in an old format, and it happens because the tokenizer is converted to the new format each time we load it, is that correct? If so, is there any way to persist the converted tokenizer, to use that the next time, instead of converting it again and again?<|||||>You can save it, then reload it from the save you did. But note that if you follow the doc and use the conversion script on the weights obtained by Llama, you will get the fast tokenizer file created for you automatically.<|||||>Thanks. For anyone else reaching this page and wondering how to do it, the method to use is `save_pretrained`, like this:
```python
tokenizer.save_pretrained("./models/tokenizer/")
```
I can confirm loading this new version of the tokenizer fixes the slow load for me.
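For readers hitting the same slow first load, here is a slightly fuller sketch of that convert-once-and-reuse workflow; the checkpoint name and output directory are placeholders, not from the original thread:

```python
from transformers import AutoTokenizer

# First (slow) load: the slow sentencepiece tokenizer is converted to a fast one.
# Replace the checkpoint with whichever LLaMA repo or local path you actually use.
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

# Persist the converted fast tokenizer (this writes tokenizer.json next to the other files).
tokenizer.save_pretrained("./llama-tokenizer")

# Later loads pick up the saved tokenizer.json directly and skip the slow conversion.
tokenizer = AutoTokenizer.from_pretrained("./llama-tokenizer")
```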
transformers
22,641
closed
Compute Accuracy in clip-roberta
### Feature request

Is it possible to include an accuracy metric when training clip-roberta?

### Motivation

We would like to have something other than loss to track during training.

### Your contribution

I tried creating a dummy `compute_metrics` function to pass to `GaudiTrainer` like this:

```python
metric = evaluate.load("accuracy")

def compute_metrics(p):
    return 1
```

and I get the following error:

```
Traceback (most recent call last):
  File "run_clip.py", line 553, in <module>
    main()
  File "run_clip.py", line 532, in main
    metrics = trainer.evaluate()
  File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2932, in evaluate
    output = eval_loop(
  File "/usr/local/lib/python3.8/dist-packages/optimum/habana/transformers/trainer.py", line 1074, in evaluation_loop
    logits_dtype = get_dtype(logits)
  File "/usr/local/lib/python3.8/dist-packages/optimum/habana/transformers/trainer_utils.py", line 43, in get_dtype
    return [get_dtype(logits_tensor) for logits_tensor in logits]
  File "/usr/local/lib/python3.8/dist-packages/optimum/habana/transformers/trainer_utils.py", line 43, in <listcomp>
    return [get_dtype(logits_tensor) for logits_tensor in logits]
  File "/usr/local/lib/python3.8/dist-packages/optimum/habana/transformers/trainer_utils.py", line 45, in get_dtype
    raise TypeError(f"logits should be of type torch.Tensor or tuple, got {type(logits)} which is not supported")
TypeError: logits should be of type torch.Tensor or tuple, got <class 'transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions'> which is not supported
```
04-07-2023 00:47:11
04-07-2023 00:47:11
Hi @skaulintel! According to what I see in the traceback, this issue is specific to Optimum Habana. Could you move this issue [there](https://github.com/huggingface/optimum-habana/issues) please? And then I'll follow up.
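Independently of the Optimum Habana redirect above, note that a `compute_metrics` callable passed to a `Trainer` is expected to return a dict of named scalar values rather than a bare number. A minimal sketch of that contract for a standard classification-style setup is below; what the predictions actually contain for CLIP-style contrastive training is model-specific, so the accuracy computation itself is an illustrative assumption:

```python
import numpy as np
import evaluate

metric = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is an EvalPrediction with .predictions (logits) and .label_ids
    logits, labels = eval_pred.predictions, eval_pred.label_ids
    preds = np.argmax(logits, axis=-1)
    # Must return a dict mapping metric names to scalar values
    return metric.compute(predictions=preds, references=labels)
```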
transformers
22,640
closed
Seq2Seq Trainer for QA: No useful metric returned for evaluation
### System Info Latest Transformers version from main. ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi, unfortunately, the `run_seq2seq_qa.py` in PyTorch Example folder does not output a useful evaluation metric: ```bash python run_seq2seq_qa.py \ --model_name_or_path google/mt5-small \ --dataset_name mlqa \ --dataset_config mlqa-translate-train.de \ --context_column context \ --question_column question \ --answer_column answers \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 1 \ --max_seq_length 384 \ --doc_stride 128 \ --save_steps 5000 \ --max_steps 50 \ --output_dir ./mt5-small ``` I'm running this command and the final output looks like: ``` ***** eval metrics ***** epoch = 0.01 eval_loss = 15.7694 eval_runtime = 0:00:48.60 eval_samples = 10584 eval_samples_per_second = 217.741 eval_steps_per_second = 27.218 ``` So I'm missing EM and F1-Score: why is it no longer there? This also happens when fine-tuning SquAD 2.0. ### Expected behavior Useful evaluation metric: EM and F1-Score should be returned.
04-06-2023 23:01:42
04-06-2023 23:01:42
You need to pass along `--predict_with_generate` to use generate in the evaluation, and then get the metrics.<|||||>Thanks @sgugger ! I used that option, but after evaluation it outputs: ```bash Traceback (most recent call last): File "run_seq2seq_qa.py", line 724, in <module> main() File "run_seq2seq_qa.py", line 683, in main metrics = trainer.evaluate(max_length=max_length, num_beams=num_beams, metric_key_prefix="eval") File "/home/ubuntu/transformers/examples/pytorch/question-answering/trainer_seq2seq_qa.py", line 92, in evaluate eval_preds = self.post_process_function(eval_examples, eval_dataset, output) File "run_seq2seq_qa.py", line 617, in post_processing_function decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) File "/home/ubuntu/transformers/src/transformers/tokenization_utils_base.py", line 3445, in batch_decode return [ File "/home/ubuntu/transformers/src/transformers/tokenization_utils_base.py", line 3446, in <listcomp> self.decode( File "/home/ubuntu/transformers/src/transformers/tokenization_utils_base.py", line 3485, in decode return self._decode( File "/home/ubuntu/transformers/src/transformers/tokenization_utils_fast.py", line 549, in _decode text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens) OverflowError: out of range integral type conversion attempted ```<|||||>Ah, I've just found this issue for that problem: https://github.com/huggingface/transformers/issues/22634 So I'll close here!
transformers
22,639
closed
Have a beam search sub batch size to limit memory use
### Feature request Currently, beam search effectively multiplies the batch size memory-wise and compute-wise by the beam size. If you have a batch size of 1 and a beam size of 8, `model.forward` sees 8 samples at once. This becomes an unnecessary problem for situations where one needs that beam size but is only able to fit a smaller quantity of samples into memory. That's why I suggest that a "beam search sub-batch" size be added to limit the number of samples (including beams) that are seen at once by the model. E.g., if the beam search sub-batch size is 4, even if the main batch size is 1 and the beam size is 8, `model.forward` would only be called on 4 samples at once. The same goes if the main batch size is 2 and the beam size is 8. ### Motivation With the size of LLMs & their popularity, people are often stretching the capacities of their hardware, & in some cases (like majority voting for chain-of-thought), need beam search with some beam size. If that beam size can't all fit in memory at once, they are currently doomed, which is unnecessary, as the beams could be computed in smaller chunks.
04-06-2023 21:37:40
04-06-2023 21:37:40
@sgugger <|||||>cc @gante <|||||>Hey @JulesGM! Thank you for suggesting sub-batch size beam search. It is somewhat related to #22340 (add some option to stop spending resources on beams that have already been finished). Currently, we are unable to fulfill all requests, and I haven't seen demand for this particular feature. As such, I'll offer my standard pact: if/when this comment reaches 10 reactions, I'll put the feature on my todo list :) (and whoever does the 10th react, plz ping me) Alternatively, if you'd like to implement this feature yourself, I'd be happy to guide you 🙌 <|||||>I was thinking something like taking https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L2833

```python
outputs = self(
    **model_inputs,
    return_dict=True,
    output_attentions=output_attentions,
    output_hidden_states=output_hidden_states,
)
```

into, fairly naively, splitting into sub-batches and then concatenating, which should be fine:

```python
# Make it more explicit that all kwargs are shared by the two forward modes
# & make the code shorter
forward_kwargs = dict(
    return_dict=True,
    output_attentions=output_attentions,
    output_hidden_states=output_hidden_states,
)

if beam_search_batch_size:
    bsbs = beam_search_batch_size  # Repeated a bunch of times, makes code too long
    outputs_per_sub_batch = []
    # Loop over the expanded batch dimension (batch_size * num_beams), slicing
    # `bsbs` rows at a time; `i` already advances by `bsbs`.
    total_batch_size = input_ids.shape[0]
    for i in range(0, total_batch_size, bsbs):
        model_inputs_sub_batch = {
            k: v[i : i + bsbs]
            for k, v in model_inputs.items()
        }
        sub_batch_outputs = self(
            **model_inputs_sub_batch,
            **forward_kwargs,
        )
        outputs_per_sub_batch.append(sub_batch_outputs)

    # Concatenate the per-sub-batch outputs back together, key by key
    keys = outputs_per_sub_batch[0].keys()
    outputs = {
        k: torch.cat([sub_batch[k] for sub_batch in outputs_per_sub_batch], dim=0)
        for k in keys
    }
else:
    # Unchanged original behavior
    outputs = self(
        **model_inputs,
        **forward_kwargs,
    )
```

This could likely be moved to a new function called "forward on beams" that could be called on `model_inputs` in all beam search variants.<|||||>I guess the one thing I'm not confident about is how to add the "beam_search_sub_batch" parameter to the `generate` call in a way that respects Hugging Face's interface objectives.<|||||>Ideally, no parameterization would be needed (e.g. split the beam search batch size in two every time it hits a memory exception). But a try/except on the main code path would be ugly 🤔 I am working on a plan to increase the flexibility of `.generate()`, hopefully this sort of feature will become trivial to integrate 🙏 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,638
closed
When using transformers.DataCollatorWithPadding normally, always get annoying warning
### Explanation

So, systematically, people who use `transformers.DataCollatorWithPadding` get

```
You're using a T5TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
```

when the collator is called, because it's called on pre-tokenized samples. Now, I feel like it's still faster to pre-tokenize once & then dynamically pad batches (because samples can be shuffled) than to only tokenize on the fly, & tokenizing to a fixed `max_length` is just very sub-optimal from a compute and memory standpoint. So, I feel like that warning should be disabled, at least when using the collator & using it as intended. @ArthurZucker @sgugger
04-06-2023 19:38:13
04-06-2023 19:38:13
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry this slipped through the cracks. If you have a way to disable the warning for the data collator, I'm happy to look at a PR!<|||||>The simplest would be to add an argument to `tokenizer.pad` that defaults to `False`, something like `suppress_warning_slow: bool = False`, & when it's passed as `True`, the warning is not emitted. <|||||>This warning is generated from [here](https://github.com/huggingface/transformers/blob/003a0cf8cc4d78e47ef9debfb1e93a5c1197ca9a/src/transformers/tokenization_utils_base.py#L2949), and an easy way to turn it off is
```python
tokenizer = AutoTokenizer.from_pretrained(...)
tokenizer.deprecation_warnings["Asking-to-pad-a-fast-tokenizer"] = True
```<|||||>Oh I didn't know that! Do you want to make a PR to add this in `DataCollatorWithPadding`?<|||||>What is the recommendation? Should we pre-pad together with tokenization or leave that to the collator, when using a fast tokenizer? Has anyone done any performance comparison?<|||||>https://github.com/huggingface/transformers/pull/23742<|||||>> If you need to tokenize the data before training, just turn off this warning; if you can do tokenization during training (for small datasets, or when you only train once), you can use the fast tokenizer in a custom data collator function without calling `tokenizer.pad`.<|||||>It's weird to me that when using the official Transformers data collators, the warnings are still emitted. I feel like they shouldn't be. That is the point of this issue.<|||||>I feel like the fact that one decides to use the collators means that the warnings should indeed be disabled. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
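In the meantime, a minimal sketch of the workaround mentioned in this thread — pre-tokenize without padding, let `DataCollatorWithPadding` pad each batch dynamically, and silence the notice via the `deprecation_warnings` dict (the model name and sample texts are illustrative):

```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("t5-small")

# Silence the "use __call__ instead of pad" notice fired inside tokenizer.pad()
tokenizer.deprecation_warnings["Asking-to-pad-a-fast-tokenizer"] = True

# Pre-tokenize once, without padding; padding happens dynamically per batch
samples = ["a short example", "a somewhat longer example sentence"]
features = [tokenizer(text) for text in samples]

collator = DataCollatorWithPadding(tokenizer=tokenizer)
batch = collator(features)  # pads only to the longest sample in this batch
print(batch["input_ids"].shape)
```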
transformers
22,637
closed
Update tiny model summary file for recent models
# What does this PR do? I just created tiny models for recent models. This PR updates tiny model summary file to use them. This includes: - TFBartForSequenceClassification - TFBlipForConditionalGeneration - ClapModel - MegaModel - NllbMoeModel - TFVisionTextDualEncoderModel - WhisperForAudioClassification A few tiny changes are included too.
04-06-2023 19:14:46
04-06-2023 19:14:46
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,636
closed
Both `max_new_tokens` and `max_length` seem to have been set.
### System Info - `transformers` version: 4.27.4 - Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.35 - Python version: 3.9.16 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I'm trying to generate some text with `text-generation` pipeline. ```python from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, GenerationConfig device = "cuda:0" model_name = "facebook/opt-1.3b" # tokenizer, model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, low_cpu_mem_usage=True, pad_token_id=tokenizer.eos_token_id ).to(device) # pipeline pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=device) # generate text text = "Hello " result = pipe( text, generation_config=GenerationConfig( max_new_tokens=70, return_full_text=False, num_beams=1, do_sample=False ) ) # print result print(result) ``` When I execute the code above, it shows error/warning messages like below. ```text --- Logging error --- Traceback (most recent call last): File "/python-path/python3.9/logging/__init__.py", line 1083, in emit msg = self.format(record) File "/python-path/python3.9/logging/__init__.py", line 927, in format return fmt.format(record) File "/python-path/python3.9/logging/__init__.py", line 663, in format record.message = record.getMessage() File "/python-path/python3.9/logging/__init__.py", line 367, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting Call stack: File "/python-path/python3.9/runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "/python-path/python3.9/runpy.py", line 87, in _run_code exec(code, run_globals) File "/python-path/python3.9/site-packages/ipykernel_launcher.py", line 17, in <module> app.launch_new_instance() File "/python-path/python3.9/site-packages/traitlets/config/application.py", line 1043, in launch_instance app.start() File "/python-pathpython3.9/site-packages/ipykernel/kernelapp.py", line 725, in start self.io_loop.start() File "/python-path/python3.9/site-packages/tornado/platform/asyncio.py", line 215, in start self.asyncio_loop.run_forever() File "/python-path/python3.9/asyncio/base_events.py", line 601, in run_forever self._run_once() File "/python-path/python3.9/asyncio/base_events.py", line 1905, in _run_once handle._run() File "/python-path/python3.9/asyncio/events.py", line 80, in _run self._context.run(self._callback, *self._args) File "/python-path/python3.9/site-packages/ipykernel/kernelbase.py", line 513, in dispatch_queue await self.process_one() File "/python-path/python3.9/site-packages/ipykernel/kernelbase.py", line 502, in process_one await dispatch(*args) File "/python-path/python3.9/site-packages/ipykernel/kernelbase.py", line 409, in dispatch_shell await result File "/python-path/python3.9/site-packages/ipykernel/kernelbase.py", line 729, in execute_request reply_content = await reply_content File 
"/python-path/python3.9/site-packages/ipykernel/ipkernel.py", line 422, in do_execute res = shell.run_cell( File "/python-path/python3.9/site-packages/ipykernel/zmqshell.py", line 540, in run_cell return super().run_cell(*args, **kwargs) File "/python-path/python3.9/site-packages/IPython/core/interactiveshell.py", line 3006, in run_cell result = self._run_cell( File "/python-path/python3.9/site-packages/IPython/core/interactiveshell.py", line 3061, in _run_cell result = runner(coro) File "/python-path/python3.9/site-packages/IPython/core/async_helpers.py", line 129, in _pseudo_sync_runner coro.send(None) File "/python-path/python3.9/site-packages/IPython/core/interactiveshell.py", line 3266, in run_cell_async has_raised = await self.run_ast_nodes(code_ast.body, cell_name, File "/python-path/python3.9/site-packages/IPython/core/interactiveshell.py", line 3445, in run_ast_nodes if await self.run_code(code, result, async_=asy): File "/python-path/python3.9/site-packages/IPython/core/interactiveshell.py", line 3505, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "/tmp/ipykernel_872573/1980627959.py", line 19, in <module> result = pipe( File "/python-path/python3.9/site-packages/transformers/pipelines/text_generation.py", line 209, in __call__ return super().__call__(text_inputs, **kwargs) File "/python-path/python3.9/site-packages/transformers/pipelines/base.py", line 1109, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "/python-path/python3.9/site-packages/transformers/pipelines/base.py", line 1116, in run_single model_outputs = self.forward(model_inputs, **forward_params) File "/python-path/python3.9/site-packages/transformers/pipelines/base.py", line 1015, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/python-path/python3.9/site-packages/transformers/pipelines/text_generation.py", line 251, in _forward generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs) File "/python-path/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "/python-path/python3.9/site-packages/transformers/generation/utils.py", line 1297, in generate logger.warn( Message: 'Both `max_new_tokens` (=70) and `max_length`(=73) seem to have been set. `max_new_tokens` will take precedence. Please refer to the documentation for more information. (https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)' Arguments: (<class 'UserWarning'>,) ``` ### Expected behavior 1. It seems like that `transformers` gives a warning message when both `max_new_tokens` and `max_length` are set. But `max_length` is not set by me, but the downloaded pretrained model(`facebook/opt-1.3b`). So far as I know, almost all generative models set `max_length`, so this warning message is always shown up when the user set `max_new_tokens`, regardless of whether the user actually set `max_length` as well or not. However, to avoid unnecessary warning messages, I think **the warning message should be shown up only when the user *explicitly* set both `max_new_tokens` and `max_length`** - Even `max_length` value on the warning message is wrong, because `generation_config.max_length` is overwrited with `generation_config.max_new_tokens + input_ids_seq_length` if `max_new_tokens` has been set. 2. `logging` module throws an error, because `UserWarning` is passed as a parameter to `logger.warn()` method. 
```python
logger.warn(
    f"Both `max_new_tokens` (={generation_config.max_new_tokens}) and `max_length`(="
    f"{generation_config.max_length}) seem to have been set. `max_new_tokens` will take precedence. "
    "Please refer to the documentation for more information. "
    "(https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)",
    UserWarning,
)
```
- It seems like `transformers` uses `warnings.warn()`, `logger.warn()`, and `logger.warning()`. I think **it should be consolidated to use one method consistently for better coherence.**
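For illustration, a small stdlib-only sketch of why point 2 produces the "--- Logging error ---" above: `warnings.warn` accepts a warning category as its second argument, while `logging` methods treat extra positional arguments as %-format args. This is plain standard-library behavior, independent of the `transformers` logger wrapper:

```python
import logging
import warnings

logging.basicConfig()
logger = logging.getLogger("example")

message = "Both `max_new_tokens` and `max_length` seem to have been set."

# OK: the warnings module takes a warning category as its second argument
warnings.warn(message, UserWarning)

# OK: logging takes only the message (plus optional %-style format arguments)
logger.warning(message)

# Not OK: the extra UserWarning is treated as a %-format argument; since the
# message has no placeholders, formatting fails and logging prints
# "--- Logging error ---" with "not all arguments converted during string formatting"
logger.warning(message, UserWarning)
```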
04-06-2023 18:47:44
04-06-2023 18:47:44
cc @gante <|||||>@HeekangPark yeah, pipelines + new generation arguments have yet to be revisited. Thank you for raising the issue! I took note of your suggestions. However, since the output is not broken, I may take a while to actually fix it :)<|||||>@QuentinAmbard @gante , could you please tell how to fix this bug? I still see "logging error message".<|||||>@IamExperimenting @HeekangPark The warning is no longer present in the text generation pipeline, if you install from `main` :)
transformers
22,635
closed
[doc] Try a few β‰  ways of linking to Papers, users, and org profiles
null
04-06-2023 18:22:34
04-06-2023 18:22:34
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,634
closed
[run_translation.py] out of range integral type conversion attempted
spitting off from https://github.com/huggingface/transformers/issues/22571 as it was a secondary problem reported there: ### Reproduction ``` CUDA_VISIBLE_DEVICES=0 PYTHONPATH=src python examples/pytorch/translation/run_translation.py \ --model_name_or_path t5-base --do_train --do_eval --source_lang en \ --target_lang de --source_prefix 'translate English to German: ' \ --dataset_name stas/wmt14-en-de-pre-processed --output_dir \ /tmp/tst-translation --num_train_epochs 1 --per_device_train_batch_size=1 \ --max_train_samples 10 --overwrite_output_dir --seed 1137 \ --per_device_eval_batch_size 1 --predict_with_generate --fp16 \ --max_eval_samples 10 ``` fails inside eval: ``` [INFO|trainer.py:3126] 2023-04-04 09:28:07,548 >> ***** Running Evaluation ***** [INFO|trainer.py:3128] 2023-04-04 09:28:07,548 >> Num examples = 10 [INFO|trainer.py:3131] 2023-04-04 09:28:07,548 >> Batch size = 1 [INFO|configuration_utils.py:575] 2023-04-04 09:28:07,552 >> Generate config GenerationConfig { "_from_model_config": true, "decoder_start_token_id": 0, "eos_token_id": 1, "pad_token_id": 0, "transformers_version": "4.28.0.dev0" } 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:02<00:00, 3.72it/s]Traceback (most recent call last): File "examples/pytorch/translation/run_translation.py", line 664, in <module> main() File "examples/pytorch/translation/run_translation.py", line 605, in main metrics = trainer.evaluate(max_length=max_length, num_beams=num_beams, metric_key_prefix="eval") File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer_seq2seq.py", line 159, in evaluate return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 2990, in evaluate output = eval_loop( File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 3278, in evaluation_loop metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels)) File "examples/pytorch/translation/run_translation.py", line 546, in compute_metrics decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3445, in batch_decode return [ File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3446, in <listcomp> self.decode( File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3485, in decode return self._decode( File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_fast.py", line 549, in _decode text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens) OverflowError: out of range integral type conversion attempted ``` @sgugger
04-06-2023 16:56:48
04-06-2023 16:56:48
cc @ArthurZucker Seems like the model and tokenizer have mismatched length<|||||>Yeah, but : - the tokenizer has 100 additional special tokens so even if the model predicts something above 32000 (the model's vocab size) you get an extra id (until 32099) - the tokenizer has an `unk_token` so when you go above `32099`, the fast simply outputs `''` while the slow ` '<extra_id_-29>'` (which is a bit strange I'll give you that πŸ˜… snippet: ```python >>> from transformers import T5Tokenizer, T5TokenizerFast >>> tokenizer_slow = T5Tokenizer.from_pretrained("t5-base") >>> tokenizer_slow.decode(32140) # above vocab size '<extra_id_-3167901>' >>> tokenizer_fast = T5TokenizerFast.from_pretrained("t5-base") '' ``` The issue is different. This is a integer overflow in rust: ```python >>> tokenizer_fast.decode(3200000000000) --------------------------------------------------------------------------- OverflowError Traceback (most recent call last) Cell In[29], line 1 ----> 1 tokenizer_fast.decode(3200000000000) File ~/Work/transformers/src/transformers/tokenization_utils_base.py:3485, in PreTrainedTokenizerBase.decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, **kwargs) 3482 # Convert inputs to python lists 3483 token_ids = to_py_obj(token_ids) -> 3485 return self._decode( 3486 token_ids=token_ids, 3487 skip_special_tokens=skip_special_tokens, 3488 clean_up_tokenization_spaces=clean_up_tokenization_spaces, 3489 **kwargs, 3490 ) File ~/Work/transformers/src/transformers/tokenization_utils_fast.py:549, in PreTrainedTokenizerFast._decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, **kwargs) 547 if isinstance(token_ids, int): 548 token_ids = [token_ids] --> 549 text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens) 551 clean_up_tokenization_spaces = ( 552 clean_up_tokenization_spaces 553 if clean_up_tokenization_spaces is not None 554 else self.clean_up_tokenization_spaces 555 ) 556 if clean_up_tokenization_spaces: OverflowError: out of range integral type conversion attempted ``` That means you are juste giving a huge huge number to decode is there a reason ?<|||||>Please note I've only relayed the errors reported on the pytorch Issued by a user trying to use `torch.compile`. <|||||>Hi guys, I have the same problem with the `run_seq2seq_qa.py` script and it turns out, that `preds` are passed to the `decode` function, with the following content: ``` [[ 0 250099 1013 ... -100 -100 -100] [ 0 250099 1013 ... -100 -100 -100] [ 0 250099 1013 ... -100 -100 -100] ... [ 0 250099 260 ... -100 -100 -100] [ 0 250099 442 ... -100 -100 -100] [ 0 250099 3883 ... -100 -100 -100]] ``` So the problematic thing here is `-100` I guess, because I can reproduce the error with: ``` >>> tokenizer.decode(-100) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ubuntu/transformers/src/transformers/tokenization_utils_base.py", line 3485, in decode return self._decode( File "/home/ubuntu/transformers/src/transformers/tokenization_utils_fast.py", line 549, in _decode text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens) OverflowError: out of range integral type conversion attempted ```<|||||>Awsome thanks for providing this! Indeed these should be converted to padding<|||||>Could it be similar to this fix? https://github.com/huggingface/transformers/pull/18592 The hardcoded -100 doesn't seem to always do the right thing.<|||||>I tried with another model arch and it's breaks too but in another way. 
so eval is quite broken in many ways. ``` CUDA_VISIBLE_DEVICES=0 PYTHONPATH=src python examples/pytorch/translation/run_translation.py --model_name_or_path 'facebook/wmt19-en-ru' --do_train --do_eval --source_lang en --target_lang de --source_prefix 'translate English to German: ' --dataset_name stas/wmt14-en-de-pre-processed --output_dir /tmp/tst-translation --num_train_epochs 1 --per_device_train_batch_size=1 --max_train_samples 10 --overwrite_output_dir --seed 1137 --per_device_eval_batch_size 1 --predict_with_generate --fp16 --max_eval_samples 10 Traceback (most recent call last): File "examples/pytorch/translation/run_translation.py", line 664, in <module> main() File "examples/pytorch/translation/run_translation.py", line 605, in main metrics = trainer.evaluate(max_length=max_length, num_beams=num_beams, metric_key_prefix="eval") File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer_seq2seq.py", line 159, in evaluate return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 2993, in evaluate output = eval_loop( File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 3174, in evaluation_loop loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys) File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer_seq2seq.py", line 290, in prediction_step outputs = model(**inputs) File "/home/stas/anaconda3/envs/py38-pt20/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/fsmt/modeling_fsmt.py", line 1251, in forward masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.tgt_vocab_size), labels.view(-1)) File "/home/stas/anaconda3/envs/py38-pt20/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/stas/anaconda3/envs/py38-pt20/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 1174, in forward return F.cross_entropy(input, target, weight=self.weight, File "/home/stas/anaconda3/envs/py38-pt20/lib/python3.8/site-packages/torch/nn/functional.py", line 3029, in cross_entropy return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing) ValueError: Expected input batch_size (56) to match target batch_size (48). 
```<|||||>@stas00 I am facing the same issue while fine-tuning t5-small using `examples/pytorch/summarization/run_summarization.py` And I can see `preds` has `-100` and so decode fails with the below error: ``` Traceback (most recent call last): File "examples/pytorch/summarization/run_summarization.py", line 751, in <module> main() File "examples/pytorch/summarization/run_summarization.py", line 705, in main predict_results = trainer.predict(predict_dataset, metric_key_prefix="predict") File "src/transformers/trainer_seq2seq.py", line 216, in predict return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) File "src/transformers/trainer.py", line 3069, in predict output = eval_loop( File "src/transformers/trainer.py", line 3281, in evaluation_loop metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels)) File "examples/pytorch/summarization/run_summarization.py", line 635, in compute_metrics decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) File "src/transformers/tokenization_utils_base.py", line 3446, in batch_decode return [ File "src//transformers/tokenization_utils_base.py", line 3447, in <listcomp> self.decode( File "src/transformers/tokenization_utils_base.py", line 3486, in decode return self._decode( File "src/transformers/tokenization_utils_fast.py", line 549, in _decode text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens) OverflowError: out of range integral type conversion attempted ``` <|||||>The first issue is addressed in #22693 The second issue with FSMT is due to [this line](https://github.com/huggingface/transformers/blob/151425ddb29d4ad1a121e8cce62000a2ac52d3ba/src/transformers/trainer_seq2seq.py#L270) added by @gante . The `decoder_input_ids` not passed to `generate` result in generations that have the same length as the inputs and not the targets.<|||||>@sgugger thanks for the fix. I can see the same issue in line 718 https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization.py#L718 possible fix: preds= np.where(predict_results.predictions != -100, predict_results.predictions, tokenizer.pad_token_id) predictions = tokenizer.batch_decode(preds, skip_special_tokens=True, clean_up_tokenization_spaces=True) <|||||>Good catch, adding this too in the PR.<|||||>Thinking more, I think this is also a result of the recent changes in generate, which used to be the one padding the result with `tokenizer.pad_token_id`, and it's now the `Trainer` padding them with -100. cc @gante <|||||>Hey everyone -- the last issues should be gone with #22772, but feel free to comment/reopen if any related problem persists!<|||||>Hi! since a couple of weeks I also stumbled on this error. It was working just fine before. I am pretty sure I have transformer installed from source so the PR with the fix is there as well. I am using Bart-large and the Trainer class. 
I first define rouge as training evaluation function: ```python def compute_rouge(pred): predictions, labels = pred #decode the predictions decode_predictions = tokenizer.batch_decode(predictions, skip_special_tokens=True) #decode labels decode_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) #compute results res = rouge.compute(predictions=decode_predictions, references=decode_labels, use_stemmer=True) #get % return res ``` And give it to the trainer ```python trainer = Seq2SeqTrainer( model, args, train_dataset=tokenized_dataset['train'], eval_dataset=tokenized_dataset['valid'], data_collator=collator, tokenizer=tokenizer, compute_metrics=compute_rouge ) ``` Then the script breaks in Trainer.train, while decoding for dev set evaluation: ``` Traceback (most recent call last): File "/home/ghoogerw/kg2Narrative/KGNarrative2/script4trainingLLM/finetunemodel.py", line 226, in <module> main(args) File "/home/ghoogerw/kg2Narrative/KGNarrative2/script4trainingLLM/finetunemodel.py", line 149, in main trainer.train() File "/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/trainer.py", line 1662, in train return inner_training_loop( ^^^^^^^^^^^^^^^^^^^^ File "/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/trainer.py", line 2022, in _inner_training_loop self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/trainer.py", line 2288, in _maybe_log_save_evaluate metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/trainer_seq2seq.py", line 159, in evaluate return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/trainer.py", line 2994, in evaluate output = eval_loop( ^^^^^^^^^^ File "/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/trainer.py", line 3283, in evaluation_loop metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ghoogerw/kg2Narrative/KGNarrative2/script4trainingLLM/finetunemodel.py", line 103, in compute_rouge decode_predictions = tokenizer.batch_decode(predictions, skip_special_tokens=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 3456, in batch_decode return [ ^ File "/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 3457, in <listcomp> self.decode( File "/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 3496, in decode return self._decode( ^^^^^^^^^^^^^ File "/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/tokenization_utils_fast.py", line 549, in _decode text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ OverflowError: out of range integral type conversion 
attempted ``` Interestingly enough, on a similar formatted dataset (but longer text) while using Longformer (led), I get the same error but this time at prediction time, thus the trained is completed successfully: ``` Traceback (most recent call last): File "/home/ghoogerw/kg2Narrative/KGNarrative2/script4trainingLLM/LED_4_DWIE.py", line 236, in <module> main(args) File "/home/ghoogerw/kg2Narrative/KGNarrative2/script4trainingLLM/LED_4_DWIE.py", line 161, in main preds, labels, metrics = trainer.predict(tokenized_dataset['test'], num_beams=5, min_length=50, max_length=max_target, no_repeat_ngram_size=2, early_stopping=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/trainer_seq2seq.py", line 216, in predict return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/trainer.py", line 3070, in predict output = eval_loop( ^^^^^^^^^^ File "/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/trainer.py", line 3283, in evaluation_loop metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ghoogerw/kg2Narrative/KGNarrative2/script4trainingLLM/LED_4_DWIE.py", line 103, in compute_rouge decode_predictions = tokenizer.batch_decode(predictions, skip_special_tokens=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 3456, in batch_decode return [ ^ File "/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 3457, in <listcomp> self.decode( File "/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 3496, in decode return self._decode( ^^^^^^^^^^^^^ File "/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/tokenization_utils_fast.py", line 549, in _decode text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ OverflowError: out of range integral type conversion attempted ``` <|||||>Hey @GabHoo -- could you share with us a short stand-alone script to reproduce the issue? :)<|||||>Thank you for time. Here is a standaone version of the script. 
I hope it is the case, ```python from transformers import AutoTokenizer,AutoModelForSeq2SeqLM,DataCollatorForSeq2Seq,Seq2SeqTrainingArguments,Seq2SeqTrainer import os from datasets import load_dataset import numpy as np from utils import * import torch import evaluate import sys import json import time import argparse def tokenize_for_evaluation(tokenizer,preds,labels): predicted_text = [] golden_labels = [] for pred, label in zip(preds, labels): gen = tokenizer.decode(pred, skip_special_tokens=True) gen = str(gen) predicted_text.append(gen) gold = tokenizer.decode(label, skip_special_tokens=True) gold = str(gold) golden_labels.append(gold) return predicted_text,golden_labels def process_data_BART(data_to_process,tokenizer,max_input,max_target,typeKG ): #get the dialogue text inputs = [graph for graph in data_to_process[f'{typeKG}']] #tokenize text model_inputs = tokenizer(inputs, max_length=max_input, padding='max_length', truncation=True) #tokenize labels #with tokenizer.as_target_tokenizer(): targets = [target for target in data_to_process['story']] model_targets = tokenizer(targets, max_length=max_target, padding='max_length', truncation=True) #reuturns input_ids, attention_masks, labels data_to_process["input_ids"] = model_inputs.input_ids data_to_process["attention_mask"] = model_inputs.attention_mask data_to_process["labels"] = model_targets.input_ids return data_to_process datapath ='/daatapath dataprefix ='pop' typeKG = 'Instances_KG' model_checkpoint="facebook/bart-base" experiment_name = 'exp' learning_rate =1e-4 batch_size = 1 epochs =3 save_model = False max_target = 512 max_input = 512 train_file = datapath +'/' + dataprefix + '_train' + '.json' dev_file = datapath +'/'+ dataprefix + '_dev' + '.json' test_file = datapath +'/' + dataprefix + '_test'+ '.json' print("Loading dataset from ",datapath) dataset = load_dataset('json', data_files={'train': train_file, 'valid': dev_file, 'test': test_file}) todrop=list(set(dataset['test'].column_names)-set([typeKG,'story'])) #This line returns a list of all the columns to drop (all columns minus the ones we need (input typeKG and story)) print("Loading tokenizer") tokenizer = AutoTokenizer.from_pretrained(model_checkpoint,add_eos_token=True) print("\nProcessing Dataset") #the processing of the data is done batches for make it faster,number of processes 4 tokenized_dataset = dataset.map(lambda example: process_data_BART(example, tokenizer,max_input,max_target,typeKG), batched=True, num_proc=4,remove_columns=todrop) print("\nLoading MODEL") model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint) #model.to(device) print("Collator for batches") collator = DataCollatorForSeq2Seq(tokenizer, model=model) #this is necessary for diving in batch for training print('Loading rouge') rouge = evaluate.load('rouge') def compute_rouge(pred): predictions, labels = pred #decode the predictions decode_predictions = tokenizer.batch_decode(predictions, skip_special_tokens=True) #decode labels decode_labels = tokenizer.batch_decode(labels, skip_special_tokens=True,clean_up_tokenization_spaces=True) #compute results res = rouge.compute(predictions=decode_predictions, references=decode_labels, use_stemmer=True) #get % return res print("\nPREPARING FOR TRAINING...") #defining training arogouments args = Seq2SeqTrainingArguments( experiment_name, evaluation_strategy='epoch', learning_rate=learning_rate, per_device_train_batch_size= batch_size, per_device_eval_batch_size= batch_size, gradient_accumulation_steps=3, #compute gradient on n examples KG story 
weight_decay=0.01, #regularization save_total_limit=1, #this is the max amount of checkpoint saved, after which previous checpoints are removed num_train_epochs=epochs, #number of epochs predict_with_generate=True, generation_max_length = 512, #max number of tokens per generation generation_num_beams=5, #decoding strategy! greedy search, beam search eval_accumulation_steps=1, #backprop fp16=True, #memory management disable_tqdm=True) #only CUDA available -> fp16=True ### almost training time trainer = Seq2SeqTrainer( model, args, train_dataset=tokenized_dataset['train'], eval_dataset=tokenized_dataset['valid'], data_collator=collator, tokenizer=tokenizer, compute_metrics=compute_rouge ) trainer.train() if save_model: print("Saving model") trainer.save_model(experiment_name+"/saved_model") print("\nPREDICTING..") preds, labels, metrics = trainer.predict(tokenized_dataset['test'], num_beams=5, min_length=50, max_length=512, no_repeat_ngram_size=2, early_stopping=True) predicted_text,golden_labels=tokenize_for_evaluation(tokenizer,preds,labels) #here is already past the error print("\nRESULT SCORES:") scores = metrics.items() print(f'Results: {scores}') ``` The data looks the following, to substitute folde in data/path ``` { "story": "Baymax is a character from the film Big Hero 6 starring Scott Adsit. He was created by Steven T Seagle and the American, Duncan Rouleau.", "Types_KG": "[CORE] Baymax is a character from the film Big Hero 6 [TRIPLES] Duncan Rouleau - nationality - Americans | Baymax - creators - Duncan Rouleau | Baymax - creator - Steven T. Seagle | Baymax - series - Big Hero 6 (film) | Big Hero 6 (film) - starring - Scott Adsit | Baymax - creator - Duncan Rouleau | Duncan Rouleau - nationality - Americans | Baymax - creators - Steven T. Seagle | Baymax - series - Big Hero 6 (film) | Big Hero 6 (film) - starring - Scott Adsit | Scott Adsit - type - person | Americans - type - ethnic group | Steven T. Seagle - type - person | Duncan Rouleau - type - person | Big Hero 6 (film) - type - person", "Instances_KG": "[CORE] Baymax is a character from the film Big Hero 6 [TRIPLES] Duncan Rouleau - nationality - Americans | Baymax - creators - Duncan Rouleau | Baymax - creator - Steven T. Seagle | Baymax - series - Big Hero 6 (film) | Big Hero 6 (film) - starring - Scott Adsit | Baymax - creator - Duncan Rouleau | Duncan Rouleau - nationality - Americans | Baymax - creators - Steven T. Seagle | Baymax - series - Big Hero 6 (film) | Big Hero 6 (film) - starring - Scott Adsit", " ```<|||||>@GabHoo I'm afraid you'll have you will have to share complete data example or another script, the current instructions fail at data loading time if I create a file as specified. (`ArrowInvalid: JSON parse error: Missing a name for object member. in row 0`)<|||||>@GabHoo Hello, I had same problem and I think problem in DataCollatorForSeq2Seq, more specifically in label_pad_token_id. Collator using label_pad_token_id = -100, but your tokenizer using a different (tokenizer.pad_token_id = 1). Can you try? ` collator = DataCollatorForSeq2Seq(tokenizer, model=model, label_pad_token_id=tokenizer.pad_token_id) `<|||||>Hey @gante, I think [behavior of DataCollatorForSeq2Seq](https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/data/data_collator.py#L576) is really unexpected. 
Why does it require label_pad_token_id, if it can use tokenizer.pad_token_id as with [padding_side](https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/data/data_collator.py#LL574C13-L574C26)?<|||||>Hey @Pavloveuge -- the label padding triggers a different behavior at train time (if my memory does not fail me, the loss is ignored for that token)<|||||>Oh, yeah, you're right, but this behavior still results in an error. And it doesn't matter which version of the tokenizer I use (Fast or not). With use_fast=False: ` TypeError: sequence item 9: expected str instance, NoneType found ` and with use_fast=True: ` OverflowError: out of range integral type conversion attempted. ` <|||||>@Pavloveuge that sounds like a bug indeed :) Would you be able to share a short stand-alone script to reproduce the issue?<|||||>@gante Should I open a new issue or reopen this?<|||||>@Pavloveuge A new issue would be preferable 👍
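As a general takeaway from this thread, a minimal sketch of the sanitization step the fixes above apply before decoding — replacing `-100` (the ignore-index used for labels and, after the Trainer changes, for padded predictions) with the tokenizer's pad token id. The checkpoint and the toy arrays are illustrative:

```python
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")

# Typical post-eval arrays: -100 marks positions ignored by the loss / padding
preds = np.array([[0, 250, 1013, -100, -100], [0, 883, -100, -100, -100]])
labels = np.array([[250, 1013, 1, -100, -100], [883, 1, -100, -100, -100]])

# -100 is not a valid token id, so swap it for the pad token before decoding
preds = np.where(preds != -100, preds, tokenizer.pad_token_id)
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)

decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
print(decoded_preds, decoded_labels)
```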
transformers
22,633
closed
Debugging the doc-builder
null
04-06-2023 16:39:58
04-06-2023 16:39:58
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,632
closed
[`Blip`] Fix slow tests and doctests with correct values
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/22625 In fact, the `num_attention_heads` of `BlipTextModel` should be 12 and not 8. Hence the models that are on the Hub were producing different logits / generations that the original implementation in some cases. I made PRs on the Hub: - https://huggingface.co/Salesforce/blip-itm-large-flickr/discussions/1 - https://huggingface.co/Salesforce/blip-itm-base-coco/discussions/3 - https://huggingface.co/Salesforce/blip-image-captioning-base/discussions/13 - https://huggingface.co/Salesforce/blip-image-captioning-large/discussions/8#642eed4cce2efe48a1aa1497 - https://huggingface.co/Salesforce/blip-vqa-base/discussions/3#642eed68ae8ae35b7a9bbb7f - https://huggingface.co/Salesforce/blip-vqa-capfilt-large/discussions/5 - https://huggingface.co/Salesforce/blip-itm-large-flickr/discussions/3 - https://huggingface.co/Salesforce/blip-itm-large-coco/discussions/3 And tested them on the slow tests and doctests with `.from_pretrained(xxx, revision="refs/pr/xx")`, this PR fixes the slow tests and doctests with the correct values. Let's merge this PR and I'll merge the PRs that are on the Hub cc @sgugger
04-06-2023 16:29:33
04-06-2023 16:29:33
_The documentation is not available anymore as the PR was closed or merged._
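For reference, a small sketch of the revision-pinned loading used to validate Hub PRs like the ones listed above before they are merged; the exact `refs/pr/<n>` number is illustrative:

```python
from transformers import BlipProcessor, BlipForConditionalGeneration

checkpoint = "Salesforce/blip-image-captioning-base"

# "refs/pr/<n>" points at an open discussion/PR on the Hub model repo
processor = BlipProcessor.from_pretrained(checkpoint, revision="refs/pr/13")
model = BlipForConditionalGeneration.from_pretrained(checkpoint, revision="refs/pr/13")
```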
transformers
22,631
closed
Update input values for docstring
# What does this PR do? Updates docstring values for AST model for `input_values` that were incorrectly for pixel values. Fixes #22610 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
04-06-2023 16:16:14
04-06-2023 16:16:14
Asking for a sanity check from you, @sanchit-gandhi to make sure the description of the audio inputs is correct :) <|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
22,630
closed
LlamaTokenizerFast Fix (.., from_slow=True).
# What does this PR do? `AutoTokenizer.from_pretrained(..., from_slow=True)` and ` tokenizer.save_pretrained("./tmp")` wasn't working without those. @ArthurZucker wants to add some tests for those. I'm surprised by a few of these necessities since they seems quite standard defaults. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
04-06-2023 16:07:24
04-06-2023 16:07:24
_The documentation is not available anymore as the PR was closed or merged._<|||||>Will add the tests in a follow-up PR.<|||||>So `test_save_slow_from_fast_and_reload_fast` tests this, but it was skipped. It should never be skipped. We should test with a minimal config, exactly for the reason we just saw.
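For reference, a minimal sketch of the round trip this PR fixes. The checkpoint id below is only an illustration; any LLaMA-style repo with a slow (sentencepiece) tokenizer would do:

```python
from transformers import AutoTokenizer

# build the fast tokenizer by converting from the slow (sentencepiece) one
tok = AutoTokenizer.from_pretrained("huggyllama/llama-7b", from_slow=True)  # example repo id

# round-trip through save_pretrained, which previously failed
tok.save_pretrained("./tmp")
reloaded = AutoTokenizer.from_pretrained("./tmp")
print(type(reloaded).__name__)
```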
transformers
22,629
closed
YaLM Implementation
# What does this PR do? Implementation of the YaLM model (https://github.com/yandex/YaLM-100B). Model weights are available [here](https://huggingface.co/yandex/yalm-100b). Weight conversion will be included. ### Sources The code is based on the [original model code](https://github.com/yandex/YaLM-100B), which is an old and heavily modified [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) fork. I also borrowed some code from the [gpt_neox transformers implementation](https://github.com/huggingface/transformers/tree/main/src/transformers/models/gpt_neox). ### Licences The model weights [were published under the Apache 2.0 license](https://github.com/yandex/YaLM-100B/blob/main/LICENSE). Megatron-LM is licensed under the [Megatron-LM license](https://github.com/yandex/YaLM-100B/blob/main/megatron_lm/LICENSE); I am not sure what the latter entails. ### Correctness The model works, but I can't verify its correctness since I don't have access to 200GB of VRAM. A smaller model of the same architecture should be created for testing purposes. The tokenizer and conversion script are not done yet. ## Who can review? @ArthurZucker @younesbelkada
04-06-2023 16:03:07
04-06-2023 16:03:07
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22629). All of your documentation changes will be reflected on that endpoint.<|||||>Hey! How do you feel about maybe adding this to the hub rather than on transformers? Should be easier to do following this [tutorial](https://huggingface.co/docs/transformers/custom_models). WDYT? <|||||>Wow, didn't know this was an option. I'll definitely look into this since it's unlikely we'll be getting more models based on this architecture anyway.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,628
closed
[`bnb`] 8bit models should not be converted to `DDP`
# What does this PR do? Fixes issues that users can encounter on multi-GPU setups, such as: https://github.com/huggingface/peft/issues/269#issuecomment-1498776567 In fact, 8-bit models should not be wrapped in DDP. cc @sgugger @pacman100
04-06-2023 15:52:45
04-06-2023 15:52:45
_The documentation is not available anymore as the PR was closed or merged._
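A minimal sketch of the guard this PR describes, assuming the `is_loaded_in_8bit` flag that `from_pretrained(load_in_8bit=True)` sets on the model; this is only a sketch, not the exact Trainer/accelerate change:

```python
from torch.nn.parallel import DistributedDataParallel as DDP

def maybe_wrap_ddp(model, device_ids):
    # 8-bit (bitsandbytes) models must not be wrapped in DDP; return them as-is
    if getattr(model, "is_loaded_in_8bit", False):
        return model
    # everything else gets the usual DDP wrapping (assumes the process group is initialized)
    return DDP(model, device_ids=device_ids)
```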
transformers
22,627
closed
Make FlaxPreTrainedModel a Flax Module
# What does this PR do? WIP. Makes `FlaxPreTrainedModel` an `nn.Module` so Flax users can easily integrate it into other Flax networks or systems that expect Flax Modules. See #22499 for some discussion of the approach.
04-06-2023 15:49:37
04-06-2023 15:49:37
@sanchit-gandhi's previous comment: """ This looks super clean @cgarciae - really like this way of making the FlaxPreTrainedModel into an nn.Module through dataclasses. Had some questions about the PR (mainly for my understanding), but think the general design philosophy looks good here. From my testing it all seems to work as expected - think though we should add some very visible warning messages when a user passes _do_init=False to advise them to use the returned model as a Flax nn.Module (rather than falling back on the __call__ method of FlaxPreTrainedModel). This is the only breaking change I see from this PR, but one that is unavoidable (since it's the exact thing we're trying to change). => perhaps as a start we first make a PR that triggers this warning (advising users that the functionality is going to change in N months / releases time), and then have this PR as a follow-up that makes the changes? For Flax BERT though this approach gives equivalence between the FlaxPreTrainedModel and the nn.Module -> for models that do extra pre-processing we can just modify the __call__ to do all the data pre-processing? """<|||||>Thanks for the review @sanchit-gandhi ! > => perhaps as a start we first make a PR that triggers this warning (advising users that the functionality is going to change in N months / releases time), and then have this PR as a follow-up that makes the changes? I was planning on entirely deleting the `params` argument from `__call__` 😅. If we want to make the change a bit more gradual then maybe we could do something like this: ```diff - if self._do_init: + if self.scope is None: ``` This would condition on whether the module is being called inside `apply` or not.<|||||>@sanchit-gandhi cleaned the PR a little, these are the minimal changes required.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22627). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Leaving closed in favour of #22866<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
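A quick way to see what the `self.scope is None` check above distinguishes, based on my understanding of Flax (an unbound module has `scope` set to `None`; a bound copy, e.g. from `bind` or while running inside `init`/`apply`, carries a scope):

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

dense = nn.Dense(features=4)
print(dense.scope is None)   # True: a plain, top-level (unbound) module

params = dense.init(jax.random.PRNGKey(0), jnp.ones((1, 8)))
bound = dense.bind(params)
print(bound.scope is None)   # False: the bound copy carries a scope
```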
transformers
22,626
closed
WIP Umt5
# What does this PR do? It supports umt5 models, which need separate relative attention biases for each layer. The current code will have backward compatibility with previous T5 and MT5 checkpoints. Fixes # (issue) https://github.com/huggingface/transformers/issues/22573 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Models: - text models: @ArthurZucker and @younesbelkada - @stefan-it
04-06-2023 15:30:28
04-06-2023 15:30:28
Many thanks @agemagician ! I could import the flax import for umT5 (and made some suggestions here) and will try to get the conversion script running (I think it only needs adjustments for this relative position embeddings thing).<|||||>cc @ArthurZucker <|||||>So far the Pytorch version correctly uses "relative_attention_bias" across all layers. However, the flax version doesn't. Any idea what could be the issue? You can see this in line 224 on the flax file. I have even removed the if statement to force adding the "relative_attention_bias" layer but it doesn't @stefan-it @sgugger @ArthurZucker<|||||>> So far the Pytorch version correctly uses "relative_attention_bias" across all layers. However, the flax version doesn't. Any idea what could be the issue? > > You can see this in line 224 on the flax file. I have even removed the if statement to force adding the "relative_attention_bias" layer but it doesn't > > @stefan-it @sgugger @ArthurZucker To clarify, if I created a new flax model for UMT5, and printed the keys for the zero layer then: ``` test_flax_model.params["encoder"]["block"]["0"]["layer"]["0"]["SelfAttention"].keys() dict_keys(['q', 'k', 'v', 'o', 'relative_attention_bias']) ``` but if I printed it for any follow layers: ``` test_flax_model.params["encoder"]["block"]["1"]["layer"]["0"]["SelfAttention"].keys() dict_keys(['q', 'k', 'v', 'o']) ``` However, in pytorch it shows that all layers have a separate relative_attention_bias: ``` MT5ForConditionalGeneration( (shared): Embedding(256384, 512) (encoder): UMT5Stack( (embed_tokens): Embedding(256384, 512) (block): ModuleList( (0-7): 8 x UMT5Block( (layer): ModuleList( (0): UMT5LayerSelfAttention( (SelfAttention): UMT5Attention( (q): Linear(in_features=512, out_features=384, bias=False) (k): Linear(in_features=512, out_features=384, bias=False) (v): Linear(in_features=512, out_features=384, bias=False) (o): Linear(in_features=384, out_features=512, bias=False) (relative_attention_bias): Embedding(32, 6) ) (layer_norm): UMT5LayerNorm() (dropout): Dropout(p=0.1, inplace=False) ) (1): UMT5LayerFF( (DenseReluDense): UMT5DenseGatedActDense( (wi_0): Linear(in_features=512, out_features=1024, bias=False) (wi_1): Linear(in_features=512, out_features=1024, bias=False) (wo): Linear(in_features=1024, out_features=512, bias=False) (dropout): Dropout(p=0.1, inplace=False) (act): NewGELUActivation() ) (layer_norm): UMT5LayerNorm() (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) (final_layer_norm): UMT5LayerNorm() (dropout): Dropout(p=0.1, inplace=False) ) (decoder): UMT5Stack( (embed_tokens): Embedding(256384, 512) (block): ModuleList( (0-7): 8 x UMT5Block( (layer): ModuleList( (0): UMT5LayerSelfAttention( (SelfAttention): UMT5Attention( (q): Linear(in_features=512, out_features=384, bias=False) (k): Linear(in_features=512, out_features=384, bias=False) (v): Linear(in_features=512, out_features=384, bias=False) (o): Linear(in_features=384, out_features=512, bias=False) (relative_attention_bias): Embedding(32, 6) ) (layer_norm): UMT5LayerNorm() (dropout): Dropout(p=0.1, inplace=False) ) (1): UMT5LayerCrossAttention( (EncDecAttention): UMT5Attention( (q): Linear(in_features=512, out_features=384, bias=False) (k): Linear(in_features=512, out_features=384, bias=False) (v): Linear(in_features=512, out_features=384, bias=False) (o): Linear(in_features=384, out_features=512, bias=False) (relative_attention_bias): Embedding(32, 6) ) (layer_norm): UMT5LayerNorm() (dropout): Dropout(p=0.1, inplace=False) ) (2): UMT5LayerFF( (DenseReluDense): 
UMT5DenseGatedActDense( (wi_0): Linear(in_features=512, out_features=1024, bias=False) (wi_1): Linear(in_features=512, out_features=1024, bias=False) (wo): Linear(in_features=1024, out_features=512, bias=False) (dropout): Dropout(p=0.1, inplace=False) (act): NewGELUActivation() ) (layer_norm): UMT5LayerNorm() (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) (final_layer_norm): UMT5LayerNorm() (dropout): Dropout(p=0.1, inplace=False) ) (lm_head): Linear(in_features=512, out_features=256384, bias=False) ) ```<|||||>I have added a script to convert the original t5x jax model directly to pytorch. However, the results of the pytorch output are still garbage. Pytoch automatically check for any missing keys + the shape. So, I think now the problem might be the merging of the KV into a single matrix. I have also disabled relative bias for cross attention as it is not needed there.<|||||>@adarob could you please help us with the `joined_kv` "fusing" operation. T5 uses `joined_kv` for `num_heads` and `d_kv`. But scaled T5 has these two variables, so we need to "fuse" them manually. Can we just use a `.reshape(config.d_model, config.num_heads * config.d_kv)`, e.g. shape of (512, 6, 64) will be converted to (512, 384) :thinking: <|||||>Not sure it follow, in transformers, T5 has ```python self.key_value_proj_dim = config.d_kv self.n_heads = config.num_heads self.inner_dim = self.n_heads * self.key_value_proj_dim ``` are you looking for something like this? <|||||>> @adarob could you please help us with the `joined_kv` "fusing" operation. > > T5 uses `joined_kv` for `num_heads` and `d_kv`. But scaled T5 has these two variables, so we need to "fuse" them manually. Can we just use a `.reshape(config.d_model, config.num_heads * config.d_kv)`, e.g. shape of (512, 6, 64) will be converted to (512, 384) πŸ€” @@adarob, we will highly appreciate your feedback as this is the current bottleneck to finalize the model integration.<|||||>> Not sure it follow, in transformers, T5 has > > ```python > self.key_value_proj_dim = config.d_kv > self.n_heads = config.num_heads > self.inner_dim = self.n_heads * self.key_value_proj_dim > ``` > > are you looking for something like this? The current issue is related to the weights, in T5 the attention k,v,q,v are fused into 2d matrix, while the new umt5x version has a 3d matrix. Please check the difference in "t5x_attention_lookup" function in both T5 and UMT5. We need to use reshape to fix it, but it seems it doesn't provide the correct results.<|||||>Also pinging the main author of umT5 paper @hwchung27 for help :hugs: <|||||>Maybe we have more luck with pinging @cpgaffney1 as t5x contributor here :hugs: <|||||>adarob and hwchung27 no longer have this project as their main focus. I am also not a T5X owner. Tagging @gauravmishra <|||||>When are you planning to merge this MR? I would like to test this new model<|||||>> When are you planning to merge this MR? I would like to test this new model Unfortunately, we are stuck in the conversion process and we didn't get help yet from either the authors or the hugging face team.<|||||>@agemagician Could you please point out where you were asking a Hugging Face team member for help? I'm sorry we missed that message, but looking at the conversation I only see pings directed at persons out of the team. cc @younesbelkada since Arthur is on vacation.<|||||>Thanks for the PR ! @agemagician , could you point me on the exact issue you are facing, and on which conversion process? 
Is there something that is reproducible so that I can try to have a look locally?<|||||>> @agemagician Could you please point out where you were asking a Hugging Face team member for help? I'm sorry we missed that message, but looking at the conversation I only see pings directed at persons out of the team. > > cc @younesbelkada since Arthur is on vacation. Hi @sgugger No worries, It was mainly when we created a new issue for integrating the model and then we created this PR ourselves. https://github.com/huggingface/transformers/issues/22573 It would be great if HF team could help us to finalize the integration of this model, since it is almost finished.<|||||>> Thanks for the PR ! @agemagician , could you point me on the exact issue you are facing, and on which conversion process? Is there something that is reproducible so that I can try to have a look locally? Hi @younesbelkada , Thanks for offering your help. Here is a summary of the current state: UMT5 model is almost similar to T5 model except the following: 1. The original T5X checkpoint does not merge kv for k, o, q, and v values. 2. They use separate relative attention for each layer. 3. They use byte fallback in case of OOV for the tokenizer. What we have done: 1. We created the script for converting the original t5x to pytorch and jax. 2. We replicated T5 model and separated the relative attention for each layer. 3. We converted the tokenizer. What does not work: 1. The results of the output model are rubbish. Where do we think the problem: 1. It is how we join the the separated kv values. https://github.com/agemagician/transformers/blob/UMT5/src/transformers/models/umt5/convert_umt5x_checkpoint_to_pytorch.py#L51 How you can replicate the process: 1. You can use the pytorch conversion script here: https://github.com/agemagician/transformers/blob/UMT5/src/transformers/models/umt5/convert_umt5x_checkpoint_to_pytorch.py 2. Then you can use the pytorch model from here: https://huggingface.co/agemagician/umt5-small @stefan-it please, let me know if I missed something here. @younesbelkada Please, let me know if you need any additional information :)<|||||>Thanks for the detailed pointers! Can you point me to the t5x checkpoint for umt5-small so that I can try to convert the weights myself? <|||||>> Thanks for the detailed pointers! Can you point me to the t5x checkpoint for umt5-small so that I can try to convert the weights myself? Sure, here is the link: https://github.com/google-research/t5x/blob/main/docs/models.md#umt5-checkpoints<|||||>I appreciate everyone's hard work in this thread. I really hope that umT5 gets merged to HF, since the last time we got an LM that supports ~100 languages was in 2020 with mT5.<|||||>it seems they moved out the checkpoints to another place, this should be the correct path: ```bash gcloud storage cp -r gs://t5-data/pretrained_models/t5x/umt5_small/checkpoint_1000000 ./ ``` <|||||>Thank you for all the work in this thread. Are you aware of [mLongT5](https://arxiv.org/pdf/2305.11129.pdf)? Do you know when this is going to be merged?<|||||>Any update on the status of integrating umT5 and mLong? <|||||>Hi, I am ramping up on taking over the PR, will update on the progress whenever possible. thanks<|||||>I am currently working on the tokenizer, we'll be linking a new PR for this model addition in the coming days! 
<|||||>Hi @ArthurZucker , The tokenizer in my repo should be working: https://huggingface.co/agemagician/umt5-small/tree/main <|||||>Mmm I meant the fast tokenizer, I am mostly working on adding byte fallback support for fast tokenizers, which doesn’t exist yet πŸ‘Œ
transformers
22,625
closed
BLIP coco base default num_attention_heads should be 12
### System Info I was trying to replicate BLIP results on VG Relation dataset as in https://github.com/mertyg/vision-language-models-are-bows. I was using `BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")` following this example https://huggingface.co/docs/transformers/model_doc/blip#transformers.BlipForImageTextRetrieval The results were much poorer, and after a lot of digging it turned out that this is because the num_attention_head in the current HuggingFace implementation is 8 instead of 12 as in https://github.com/mertyg/vision-language-models-are-bows (and as in common in BLIP). I manually changed it via ``` model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco") processor = AutoProcessor.from_pretrained("Salesforce/blip-itm-base-coco") tmp_config=model.config tmp_config.text_config.num_attention_heads = 12 model = BlipForImageTextRetrieval.from_pretrained( "Salesforce/blip-itm-base-coco", config=tmp_config ) ``` and matched the results on VGR. For example, if you run the doc example (with a slightly changed caption): ``` from PIL import Image import requests from transformers import AutoProcessor, BlipForImageTextRetrieval model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco") processor = AutoProcessor.from_pretrained("Salesforce/blip-itm-base-coco") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) text = "two cats on a couch" inputs = processor(images=image, text=text, return_tensors="pt") outputs = model(**inputs) print(torch.softmax(outputs['itm_score'],dim=-1)) ``` ![image](https://user-images.githubusercontent.com/13796686/230425156-008d7ae2-d947-4f63-bfa1-8688f1e870c4.png) **with 8 heads you get scores of `[[0.9489, 0.0511]]` for the above image i.e. the caption ` "two cats on a couch"` is not relevant, whereas with 12 heads you get scores of `[[0.1727, 0.8273]]` i.e. the caption is very relevant (which is the correct answer)** Proposed fix: **I suggest we switch it to 12 by default.**
04-06-2023 15:27:10
04-06-2023 15:27:10
cc @amyeroberts and @younesbelkada <|||||>This seems to be the right fix, I will check the original repo again and get back to you <|||||>The number of attention heads does indeed seem to be 12 instead of 8: https://github.com/salesforce/BLIP/blob/3a29b7410476bf5f2ba0955827390eb6ea1f4f9d/configs/bert_config.json#L14 / https://github.com/salesforce/BLIP/blob/3a29b7410476bf5f2ba0955827390eb6ea1f4f9d/configs/med_config.json#L14 We'll need to modify the `num_attention_heads` of all BLIP models as they all use the same base text model. Doctests and slow tests might need to be adapted accordingly. <|||||>Thanks a lot @DianeBouchacourt for the great catch! Everything should have been updated correctly, now if you load your model again, you should see the correct results!
transformers
22,624
closed
Decoding with language model issue
### System Info - `transformers` version: 4.26.1 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.10.0 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.11.0+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from datasets import load_dataset, Dataset,Audio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from pyctcdecode import build_ctcdecoder from transformers import Wav2Vec2ProcessorWithLM,AutoProcessor import torch def process_audio(filepath,model_id): print(filepath) model = Wav2Vec2ForCTC.from_pretrained("media/model/transformer/") processor = Wav2Vec2Processor.from_pretrained("media/model/transformer/") audio_file_path = [filepath] audio_data = Dataset.from_dict({"audio": audio_file_path}).cast_column("audio", Audio()) audio_data = audio_data.cast_column("audio", Audio(sampling_rate=16_000)) def prepare_dataset(batch): audio = batch["audio"] batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0] batch["input_length"] = len(batch["input_values"]) return batch test_dataset = audio_data.map(prepare_dataset) input_dict = processor(test_dataset[0]["input_values"], return_tensors="pt", padding=True) logits = model(input_dict.input_values).logits if (int(model_id)==1): pred_ids = torch.argmax(logits, dim=-1)[0] return processor.decode(pred_ids) elif (int(model_id)==2): processorLM = AutoProcessor.from_pretrained("media/model/transformer/") vocab_dict = processorLM.tokenizer.get_vocab() sorted_vocab_dict = {k.lower(): v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])} decoder = build_ctcdecoder(labels=list(sorted_vocab_dict.keys()), kenlm_model_path="media/model/transformer/5gram_correct.arpa",) processor_with_lm = Wav2Vec2ProcessorWithLM(feature_extractor=processor.feature_extractor,tokenizer=processorLM.tokenizer,decoder=decoder) return processor_with_lm.batch_decode(logits.detach().numpy()).text ``` ### Expected behavior I have been using the below code to decode my wav2vac2 Automatic Speech Recognition Model and it works fine on the google Collab but when I shift to Django to deploy in website the build_ctcdecoder give UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 174: character maps to <undefined> error processorLM = AutoProcessor.from_pretrained("media/model/transformer/") vocab_dict = processorLM.tokenizer.get_vocab() sorted_vocab_dict = {k.lower(): v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])} decoder = build_ctcdecoder(labels=list(sorted_vocab_dict.keys()), kenlm_model_path="media/model/transformer/5gram_correct.arpa",) processor_with_lm = Wav2Vec2ProcessorWithLM(feature_extractor=processor.feature_extractor,tokenizer=processorLM.tokenizer,decoder=decoder) return processor_with_lm.batch_decode(logits.detach().numpy()).text Note: I have using Dzongkha Language which is based on utf-8
04-06-2023 14:56:23
04-06-2023 14:56:23
cc @sanchit-gandhi <|||||>Hey @ngawang88, sorry for the late reply, but unfortunately the code snippet is currently un-reproducible (it leverages a local model which I cannot load). Without the full stack trace I can't see precisely where the code is failing. If you're able to share a reproducible code snippet (e.g. one that loads a model from the HF Hub) and share the full stack trace, I'd be able to have a deeper look. Thanks<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
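A small diagnostic sketch for the `'charmap' codec` error above: on Windows, `open()` defaults to the system code page (often cp1252), so checking that the ARPA file decodes as UTF-8 helps narrow down whether the file or the reading side is at fault. This is only a check, not a confirmed fix; the path is taken from the issue and is assumed to exist locally:

```python
# print the first few lines of the ARPA LM, forcing UTF-8 decoding
arpa_path = "media/model/transformer/5gram_correct.arpa"

with open(arpa_path, encoding="utf-8") as f:
    for i, line in enumerate(f):
        if i >= 10:
            break
        print(line.rstrip())
```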
transformers
22,623
closed
docs: Fix broken link to generation strategies
# What does this PR do? Addresses fixing the broken link from clicking [here](https://huggingface.co/docs/transformers/main_classes/text_generation#:~:text=generate%E2%80%99.%20To%20learn%20more%20about%20decoding%20strategies%20refer%20to%20the-,text%20generation%20strategies%20guide,-.) Link should direct to `https://huggingface.co/docs/transformers/generation_strategies` instead of `https://huggingface.co/docs/transformers/main_classes/generation_strategies` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> (no issue filed) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @MKhalusova
04-06-2023 14:50:04
04-06-2023 14:50:04
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,622
closed
No TPU found on Colab
### System Info Transformers installed from source. ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce: 1. Use xla_spawn to run the run_clm.py script. Colab link: https://colab.research.google.com/drive/1U1kcI4gXPhEeAZkichOejw0ceNlX4E6z?usp=share_link ### Expected behavior It should detect and run on the Colab TPU, but no TPU is found. I am using a custom text dataset (created in the Colab). When I test whether torch_xla detects the TPU, it does, and I am able to create tensors on the TPU, yet the training script still reports that no TPU is found.
04-06-2023 14:32:28
04-06-2023 14:32:28
Flax didn't find the TPU, but the actual error is a connection error when downloading the datasets.<|||||>How do I fix it?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
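For reference, the kind of torch_xla check presumably meant by "when I test whether torch_xla detects the TPU, it does" (a sketch; it assumes `torch_xla` is installed in the Colab TPU runtime):

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()            # raises if no XLA device is available
t = torch.ones(2, 2, device=device)  # create a tensor directly on the TPU
print(device, t.device)
```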
transformers
22,621
closed
TypeError: create_repo() got an unexpected keyword argument 'organization'
### System Info My environment: ``` - `transformers` version: 4.27.4 - Platform: Linux-5.10.147+-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 2.0.0+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.8 (gpu) - Jax version: 0.4.7 - JaxLib version: 0.4.7 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python model.save_to_hub("sent_bert", organization="gszabo", train_datasets=["gszabo/sentence-compression"], exist_ok=True, ) ``` ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) [<ipython-input-13-cbde1c79e18f>](https://localhost:8080/#) in <cell line: 1>() ----> 1 model.save_to_hub("sent_bert", 2 organization="gszabo", 3 train_datasets=["gszabo/sentence-compression"], 4 exist_ok=True, 5 ) 1 frames [/usr/local/lib/python3.9/dist-packages/sentence_transformers/SentenceTransformer.py](https://localhost:8080/#) in save_to_hub(self, repo_name, organization, private, commit_message, local_model_path, exist_ok, replace_model_card, train_datasets) 465 466 endpoint = "https://huggingface.co/" --> 467 repo_url = HfApi(endpoint=endpoint).create_repo( 468 token, 469 repo_name, [/usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_validators.py](https://localhost:8080/#) in _inner_fn(*args, **kwargs) 118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs) 119 --> 120 return fn(*args, **kwargs) 121 122 return _inner_fn # type: ignore TypeError: create_repo() got an unexpected keyword argument 'organization' ``` ### Expected behavior I used Google Colab, and I wanted to fine-tuned a sentence-bert (`sentence_transformers`) with my own data and then I wanted to push to the HF-hub. After that, I created a public repo on the hub in my account under the name `sent_bert`, before that I also logged in to Colab with `notebook_login()`. Also I created the model and then I used the `save_to_hub()` function when got the error. I tried to use the `push_to_hub()` function as well, but that doesn't support the sentence transformer models. Has anyone encountered something similar or do you know a solution to this? Anyway, I pretty much followed [these steps](https://huggingface.co/blog/how-to-train-sentence-transformers):
04-06-2023 14:06:34
04-06-2023 14:06:34
You need to update your installation of `huggingface_hub`: `pip install --upgrade huggingface_hub`.<|||||>I tried to update but got the same error message. Anyway, I'm checking now to see if it's a new update, I mean it came out yesterday, but no, it doesn't work either way.<|||||>Oh I'm sorry, that's a deprecated argument. You need to pass the organization with the model name now: `model.save_to_hub("gszabo/sent_bert", ...)`<|||||>Unfortunately, it's not good either... ```python import huggingface_hub huggingface_hub.__version__ ``` And its version: 0.13.4 <|||||>What is the error message you are getting?<|||||>I got the same one ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) [<ipython-input-17-9445b27a9463>](https://localhost:8080/#) in <cell line: 1>() ----> 1 model.save_to_hub("gszabo/sent_bert", 2 organization="gszabo", 3 train_datasets=["gszabo/sentence-compression"], 4 exist_ok=True, 5 ) 1 frames [/usr/local/lib/python3.9/dist-packages/sentence_transformers/SentenceTransformer.py](https://localhost:8080/#) in save_to_hub(self, repo_name, organization, private, commit_message, local_model_path, exist_ok, replace_model_card, train_datasets) 465 466 endpoint = "https://huggingface.co/" --> 467 repo_url = HfApi(endpoint=endpoint).create_repo( 468 token, 469 repo_name, [/usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_validators.py](https://localhost:8080/#) in _inner_fn(*args, **kwargs) 118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs) 119 --> 120 return fn(*args, **kwargs) 121 122 return _inner_fn # type: ignore TypeError: create_repo() got an unexpected keyword argument 'organization' ```<|||||>I am getting the same error for the same environment<|||||>Yes, you need to remove the `organization` argument, it is not accepted anymore. The organization should be part of your repo ID now. ```py model.save_to_hub("gszabo/sent_bert", train_datasets=["gszabo/sentence-compression"], exist_ok=True) ```<|||||>Also please open issues related to sentence-transformers in that repo, there is only so much I can do to help since I don't know it at all and don't maintain it. Closing it here, if you have further issues please go [there](https://github.com/UKPLab/sentence-transformers) :-)<|||||>I think the problem here is a version mismatch between `transformers` and `huggigface_hub`. For eg, a `transformers` version 4.17.0, will not work with a `huggingface_hub` version of 0.14.1. Specifically, [L2954](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/file_utils.py#L2954) has the offending call. So we have to check when this tag was released, for 4.17.0, the date is March 3, 2022. The appropriate `huggingface_hub` version is probably `0.5.0`, but then you would have to downgrade `datasets`, which depending on the version, might have a wrong filter implementation ([this issue](https://github.com/huggingface/datasets/pull/2947)). In summary, if you are not updated to the latest transformers + datasets + huggingface_hub, and using all of them, you might break your code.
transformers
22,620
closed
Add TensorFlow implementation of EfficientFormer
# What does this PR do? * Adding EfficientFormer computer vision model TensorFlow port (not a llm port). * Fixes some minor typos and a couple of differences in the PyTorch model code: 1) the non-dict / tuple return was not returning last hidden state but the state before last stage. The dict and tuple return of the encoder should be equivalent, as seen in other [models](https://github.com/huggingface/transformers/blob/main/src/transformers/models/poolformer/modeling_poolformer.py#L258). 2) Two layernorms were not using the config eps (assuming that the config is the ground truth). Let me know how you think about this. Ran tests (CPU-only, all pass) with: `NVIDIA_TF32_OVERRIDE=1 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 py.test -vv -rA tests/models/efficientformer/test_modeling_tf_efficientformer.py ` Double checked pt and tf architecture codes with the "[EfficientFormer: Vision Transformers at MobileNet Speed](https://proceedings.neurips.cc/paper_files/paper/2022/hash/5452ad8ee6ea6e7dc41db1cbd31ba0b8-Abstract-Conference.html)" paper. Verified on example image shapes and diffs in hidden states: ``` from transformers import EfficientFormerImageProcessor from src.transformers.models.efficientformer.modeling_tf_efficientformer import TFEfficientFormerModel from src.transformers.models.efficientformer.modeling_efficientformer import EfficientFormerModel model_tf = TFEfficientFormerModel.from_pretrained("snap-research/efficientformer-l1-300", from_pt=True) model_pt = EfficientFormerModel.from_pretrained("snap-research/efficientformer-l1-300") image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png") proc = EfficientFormerImageProcessor.from_pretrained("snap-research/efficientformer-l1-300") inputstf = proc(images=image, return_tensors="tf") inputspt = proc(images=image, return_tensors="pt") outtf = model_tf(**inputstf, output_hidden_states=True, training=False) with torch.no_grad(): outpt = model_pt(**inputspt, output_hidden_states=True) max_diff = np.amax(np.abs(outtf[0].numpy() - outpt[0].numpy())) print(f"last hidden diff shape: {outtf[0].shape}, last hidden diff: {max_diff}, last hidden <= 1e-4, {max_diff <= 1e-4}") for i in range(7): max_diff = np.amax(np.abs(outtf[1][i].numpy() - outpt[1][i].numpy())) print(f"hidden state {i} shape: {outtf[1][i].shape}, diff: {max_diff}, max_diff <= 1e-4: {max_diff <= 1e-4}") ``` which gives: ``` last hidden diff shape: (1, 49, 448), last hidden diff: 2.1457672119140625e-05, last hidden <= 1e-4, True hidden state 0 shape: (1, 48, 56, 56), diff: 7.271766662597656e-06, max_diff <= 1e-4: True hidden state 1 shape: (1, 48, 56, 56), diff: 5.054473876953125e-05, max_diff <= 1e-4: True hidden state 2 shape: (1, 96, 28, 28), diff: 2.9087066650390625e-05, max_diff <= 1e-4: True hidden state 3 shape: (1, 96, 28, 28), diff: 2.3603439331054688e-05, max_diff <= 1e-4: True hidden state 4 shape: (1, 224, 14, 14), diff: 1.6689300537109375e-05, max_diff <= 1e-4: True hidden state 5 shape: (1, 224, 14, 14), diff: 4.1961669921875e-05, max_diff <= 1e-4: True hidden state 6 shape: (1, 448, 7, 7), diff: 1.9550323486328125e-05, max_diff <= 1e-4: True ``` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. 
List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
04-06-2023 13:50:48
04-06-2023 13:50:48
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22620). All of your documentation changes will be reflected on that endpoint.<|||||>cc @Rocketknight1 <|||||>Hi @D-Roberts, just letting you know the TF team at Hugging Face is aware of this and definitely interested in the port! Please ping me or @gante whenever it's ready for review, or if you run into any issues while porting.<|||||>@Rocketknight1 @gante This PR is now ready for review.<|||||>cc @amyeroberts for core maintainer review as well<|||||>@Rocketknight1 @amyeroberts I addressed your comments and also submitted two PRs for the l1 and l3 weights (and tagged Rocketknight1). Let me know what's next!<|||||>@D-Roberts - that's great! For the CI - it seems there is an issue with your CircleCI permissions, as the tests won't run. Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)? Once all the tests are green, we'll be ready for final reviews :) <|||||>@amyeroberts Thanks for pointing out the circle ci fix. It appears that one doc test which (rightly) can't find tf weights is failing for now. I added back the `from_pt` in the model tests for the sake of ci tests until the tf weights get merged.<|||||>@D-Roberts Just to let you know, we've reached out to the team at Snap to ask them to merge your PRs on the EfficientFormer checkpoints. Sorry for the delay!<|||||>@D-Roberts the checkpoint PRs should be merged now. Thank you to @alanspike for the quick response!<|||||>@amyeroberts @Rocketknight1 All local tests pass with the new tf weights. The CI gets this documentation tests failing; the pt version also predicts 281 which maps to label_281 in config.<|||||>@D-roberts I think it's fine to swap those tests for just checking the actual argmax index rather than the `id2label` string value. Obviously the repository config doesn't actually have the `id2label` values set, so fixing that would require another PR to the repos.<|||||>@Rocketknight1 @amyeroberts Alright. :)<|||||>LGTM now - I'm happy to merge as soon as you and @amyeroberts are!<|||||>Hi @Rocketknight1 , I've addressed the last comments from @amyeroberts and had all tests pass. I am ready for merge whenever you are. I've just rebased to upstream and there are some unrelated ci tests failing, though last night everything was green.<|||||>@Rocketknight1 All green again. :) <|||||>@sgugger @amyeroberts @Rocketknight1 I was wondering - when do you plan a transformers release that includes this code? <|||||>@D-Roberts We release [roughly once a month](https://github.com/huggingface/transformers/releases) and are planning on releasing 4.30 later this week. If you need it right now, it's possible to [install from source](https://huggingface.co/docs/transformers/installation#install-from-source) to have the `main` version too.
transformers
22,619
closed
Add TimmBackbone model
# What does this PR do? Adds a new model TimmBackbone to use for loading timm weights for use in the AutoBackbone API. Example usage: ``` from transformers import AutoBackbone # Loads a transformers model backbone = AutoBackbone.from_pretrained("microsoft/resnet-18") # Loads a timm checkpoint backbone = AutoBackbone.from_pretrained("resnet18", use_timm_backbone=True) ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
04-06-2023 13:18:55
04-06-2023 13:18:55
_The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh Merging now. I'll address any comments or requests for changes in a follow up PR.
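A short usage sketch following the example above, assuming the standard `BackboneOutput.feature_maps` field on the returned output:

```python
import torch
from transformers import AutoBackbone

backbone = AutoBackbone.from_pretrained("microsoft/resnet-18")

pixel_values = torch.randn(1, 3, 224, 224)  # dummy image batch
with torch.no_grad():
    outputs = backbone(pixel_values)

# one feature map per requested output stage
for feature_map in outputs.feature_maps:
    print(feature_map.shape)
```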
transformers
22,618
closed
Fix docstrings for TF BLIP
Some of the docstrings were still a bit PyTorchy, this is fixed now! (cc @ydshieh)
04-06-2023 13:11:41
04-06-2023 13:11:41
@Rocketknight1 Thanks for adding tf-blip. PS. in case you're interested to contribute https://github.com/keras-team/keras-nlp/issues/941<|||||>@Rocketknight1 I tried to re-run the CI, but it still fails. Could you push an empty commit to trigger it maybe?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh tests are passing now! Can you approve the PR?<|||||>@Rocketknight1 When I run the doctest ```python python3 -m pytest -v --make-reports doc_tests_gpu --doctest-modules src/transformers/models/blip/modeling_tf_blip.py::transformers.models.blip.modeling_tf_blip.TFBlipForConditionalGeneration.generate -sv --doctest-continue-on-failure --doctest-glob="*.mdx" ``` I got ``` Expected: two cats are laying on a couch Got: two cats sleeping on a couch ```<|||||>Also, the changes are not just docstrings. After looking at the changes, I am wondering why we don't have CI failures (i.e. the usual testing). It turns out that we do have some failures https://github.com/huggingface/transformers/actions/runs/4653835201/jobs/8235101265 @Rocketknight1 Do you want to verify/fix those in this same PR? (probably already fixed as CircleCI is green?)<|||||>Oh, that's very odd - I wonder why it's not visible in the CI? I'll take a look!<|||||>Thank you @Rocketknight1 . I am also confused why CircleCI is green but failed on daily CI.<|||||>@ydshieh tests should pass now! The cause was some expected values in the tests being wrong. I copied the right ones from the torch tests and now everything is passing locally, so hopefully the CI will agree.<|||||>The doctests are weird, though - I think some of them were broken in PyTorch too. Working on it!<|||||>Thanks @Rocketknight1 I will double check. But do you figure out (some of) tests pass on CircleCI but fails on daily CI - the pt<->tf equivalence tests also run on CircleCI. We should see they fail on it (as they fail on daily CI).<|||||>For the doctest, we still get ```python Expected: two cats are laying on a couch Got: two cats sleeping on a couch ``` for `TFBlipForConditionalGeneration.generate`. Regarding the modeling tests - all tests pass now πŸ₯³ <|||||>@ydshieh should be resolved now!
transformers
22,617
closed
Error while installing dev dependencies for Apple Silicon
### System Info - `transformers` version: 4.24.0 - Platform: macOS-13.2.1-arm64-arm-64bit - Python version: 3.10.10 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Follow contribution guidelines as outlined [here](https://huggingface.co/docs/transformers/contributing#create-a-pull-request), at the `pip install -e ".[dev]"` step and results in the following output and error: ``` $ pip install -e ".[dev]" Obtaining file:///Users/eipizero/Documents/Code/transformers Installing build dependencies ... done Checking if build backend supports build_editable ... done Getting requirements to build editable ... done Installing backend dependencies ... done Preparing editable metadata (pyproject.toml) ... done Collecting tqdm>=4.27 Using cached tqdm-4.65.0-py3-none-any.whl (77 kB) Collecting packaging>=20.0 Using cached packaging-23.0-py3-none-any.whl (42 kB) Collecting pyyaml>=5.1 Using cached PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl (173 kB) Collecting tokenizers!=0.11.3,<0.14,>=0.11.1 Using cached tokenizers-0.13.3-cp310-cp310-macosx_12_0_arm64.whl (3.9 MB) Collecting huggingface-hub<1.0,>=0.11.0 Using cached huggingface_hub-0.13.3-py3-none-any.whl (199 kB) Collecting requests Using cached requests-2.28.2-py3-none-any.whl (62 kB) Collecting filelock Using cached filelock-3.10.7-py3-none-any.whl (10 kB) Collecting numpy>=1.17 Using cached numpy-1.24.2-cp310-cp310-macosx_11_0_arm64.whl (13.9 MB) Collecting regex!=2019.12.17 Using cached regex-2023.3.23-cp310-cp310-macosx_11_0_arm64.whl (288 kB) Collecting optax>=0.0.8 Using cached optax-0.1.4-py3-none-any.whl (154 kB) Collecting pyctcdecode>=0.4.0 Using cached pyctcdecode-0.5.0-py2.py3-none-any.whl (39 kB) Collecting nltk Using cached nltk-3.8.1-py3-none-any.whl (1.5 MB) Collecting torch!=1.12.0,>=1.9 Using cached torch-2.0.0-cp310-none-macosx_11_0_arm64.whl (55.8 MB) Collecting jax!=0.3.2,<=0.3.6,>=0.2.8 Using cached jax-0.3.6.tar.gz (936 kB) Preparing metadata (setup.py) ... done Collecting pytest Using cached pytest-7.2.2-py3-none-any.whl (317 kB) Collecting kenlm Using cached kenlm-0.1.tar.gz (424 kB) Preparing metadata (setup.py) ... done Collecting GitPython<3.1.19 Using cached GitPython-3.1.18-py3-none-any.whl (170 kB) Collecting sudachipy>=0.6.6 Using cached SudachiPy-0.6.7-cp310-cp310-macosx_10_12_universal2.whl (2.4 MB) ERROR: Could not find a version that satisfies the requirement decord==0.6.0; extra == "dev" (from transformers[dev]) (from versions: none) ERROR: No matching distribution found for decord==0.6.0; extra == "dev" ``` ### Expected behavior Development dependencies should be installed without error.
04-06-2023 12:49:17
04-06-2023 12:49:17
cc @amyeroberts and @nateraw <|||||>Hmm maybe its the Python version? Some issue mentioning that here: https://github.com/dmlc/gluon-cv/issues/1539<|||||>The issue is mainly because of Apple Silicon. `decord` does not provide any built wheels for apple silicon, and hence cannot be found using pip. I had to build it from source and then install the python bindings. Similar issue arises for `tensorflow-text` since it also does not provide any built wheels for apple silicon, and has to be built from scratch. I used a community built wheel from [here](https://github.com/sun1638650145/Libraries-and-Extensions-for-TensorFlow-for-Apple-Silicon/releases). I think the docs should be updated to account for these issues.<|||||>I see. we should add a note then that in some cases you may need to install `decord` from source, and link to any related issues. Or, perhaps we migrate fully to `pyav` at this point, given we started to do that here: #21572 (since decord is no longer being actively maintained and these issues will never go away). WDYT?<|||||>My dev setup on apple silicon failed with ``` ERROR: Ignored the following versions that require a different python version: 1.21.2 Requires-Python >=3.7,<3.11; 1.21.3 Requires-Python >=3.7,<3.11; 1.21.4 Requires-Python >=3.7,<3.11; 1.21.5 Requires-Python >=3.7,<3.11; 1.21.6 Requires-Python >=3.7,<3.11 ERROR: Could not find a version that satisfies the requirement jaxlib<=0.3.6,>=0.1.65; extra == "dev" (from transformers[dev]) (from versions: 0.3.24, 0.3.25, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.4.6, 0.4.7) ERROR: No matching distribution found for jaxlib<=0.3.6,>=0.1.65; extra == "dev" ``` After a bit of hunting found success by installing jaxlib through conda : https://github.com/google/jax/issues/5501#issuecomment-1032891169 Maybe it helps someone <|||||>@nateraw Migrating fully to `pyav` is indeed the correct thing to do since the migration has already begun. There are still other issues with setting up dev env on apple silicon, and setting it up correctly should be part of docs. It took me some time to correctly install the entire `dev` env. Following is the list of issues and solutions that worked for me for `python 3.9`: - `decord` - `Problem`: No prebuilt wheels for apple silicon. - `Solution`: Building locally, and installing python bindings. - `Action`: Complete migration to `pyav`. - `tensorflow`, `tensorflow-*` - `Problem`: `tensorflow`, `tensorflow-*` are not directly installable for macOS. - `Solution`: Need to install `tensorflow-deps` from conda apple channel. This has already been highlighted in a previous issue #18355. Install `tensorflow-macos`( instead of `tensorflow`). - `Action`: - `setup.py`should be able to detect if the dev env is being setup on apple silicon, and install `tensorflow-macos`instead of `tensorflow`. - Docs should account for setting up dev env on apple silicon. - `pip` - `Problem`: `pip._vendor.resolvelib.resolvers.ResolutionTooDeep: 20000` - Not sure if others have faced this issue, but this could also just be my machine. - `pip` was unable to resolve the dev dependencies in about 6 hours, and failed with the above error. - This could be because some package versions are not aligned for apple silicon correctly. - `Solution`: - Follow the steps mentioned in issue #18355 . 
- install and build record - install `tensorflow-text` from [here](https://github.com/sun1638650145/Libraries-and-Extensions-for-TensorFlow-for-Apple-Silicon/releases) - Run `pip install -e ".[dev]" --use-deprecated=legacy-resolver` - Run `pip3 install lxml` - Some package versions were still not correctly resolved, and tests were failing, along with the `make fixup`, etc commands. So, I had to install specific versions of the following packages from pip: `jax==0.4.7`, `numpy==1.23`. - `onnx` had to be installed using instructions [here](https://github.com/onnx/onnx/issues/3621#issuecomment-890351498). - `Action`: - I need some sanity check for the pip errors. Am I the only one who faced this or is this reproducible for other `M2`users. - Possible update of `setup.py` to account for apple silicon, and guides in docs. After these steps, the dev environment is finally working for me along with tests, and other commands. But, this took way too long to get working. I also tried setting up `vscode devcontainer` for dev dependencies, but jaxlib still does not provide wheels for `manylinux aarch64` wheels yet. How can we proceed here? I want to actively contribute towards solving these issues :) <|||||>@mayankagarwals Did you face the issues highlighted in above comment? [Link](https://github.com/huggingface/transformers/issues/22617#issuecomment-1501078224)<|||||>For most contributions, you only need to run `pip install -e .["quality"]`, but we do need TensorFlow and Jax for the complete quality checks (as we have many models in both those frameworks). But if you make contributions that do not require them (e.g. you're not touching a TensorFlow or Flax model) you will be fine.<|||||>@nateraw I could already start working on completing the migration from `decord` to `pyav` . What do you think about the other set of problems I point out?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
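Since the thread keeps coming back to finishing the decord-to-PyAV migration, here is a minimal sketch of reading video frames with PyAV instead of decord; the file path is a placeholder and this is not the exact code used in #21572.

```python
import av
import numpy as np

def read_video_pyav(path: str) -> np.ndarray:
    # Decode every frame of the first video stream into a
    # (num_frames, height, width, 3) uint8 array.
    container = av.open(path)
    frames = [frame.to_ndarray(format="rgb24") for frame in container.decode(video=0)]
    container.close()
    return np.stack(frames)

video = read_video_pyav("sample.mp4")  # hypothetical local file
print(video.shape)
```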
transformers
22,616
closed
transformers Python module "tokenizers" version does not match the FastChat project "tokenizers" version
### System Info Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.28.0.dev0 - Platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.16 - Huggingface_hub version: 0.13.3 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1、When I installed FastChat which need to install the lastest main brach of huggingface/transformers , I found that https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/tokenization_llama_fast.py which tokenizers version ( require_version("tokenizers>=0.13.3") ) is not matching the latest main branch of FastChat tokenizers version: ``` https://github.com/lm-sys/FastChat/blob/main/pyproject.toml dependencies = [ "accelerate", "fastapi", "gradio==3.23", "markdown2[all]", "numpy", "requests", "sentencepiece", **"tokenizers==0.12.1",** "torch", "uvicorn", "wandb", "transformers @ git+https://github.com/huggingface/transformers.git" ] ``` 2、So ,FastChat current version(0.1.4οΌ‰ needs to match which transformers version and tokenizers version? Errors: ``` Loading base model Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 41/41 [00:26<00:00, 1.54it/s] Loading delta Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:19<00:00, 6.47s/it] Traceback (most recent call last): File "/home/jiagy/transformers/src/transformers/utils/import_utils.py", line 1125, in _get_module return importlib.import_module("." 
+ module_name, self.__name__) File "/root/miniconda3/envs/fast-chat/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 850, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/home/jiagy/transformers/src/transformers/models/llama/tokenization_llama_fast.py", line 19, in <module> require_version("tokenizers>=0.13.3") File "/home/jiagy/transformers/src/transformers/utils/versions.py", line 117, in require_version _compare_versions(op, got_ver, want_ver, requirement, pkg, hint) File "/home/jiagy/transformers/src/transformers/utils/versions.py", line 50, in _compare_versions raise ImportError( ImportError: tokenizers>=0.13.3 is required for a normal functioning of this module, but found tokenizers==0.12.1. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/root/miniconda3/envs/fast-chat/lib/python3.9/runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "/root/miniconda3/envs/fast-chat/lib/python3.9/runpy.py", line 87, in _run_code exec(code, run_globals) File "/home/jiagy/FastChat/fastchat/model/apply_delta.py", line 49, in <module> apply_delta(args.base_model_path, args.target_model_path, args.delta_path) File "/home/jiagy/FastChat/fastchat/model/apply_delta.py", line 19, in apply_delta delta_tokenizer = AutoTokenizer.from_pretrained(delta_path) File "/home/jiagy/transformers/src/transformers/models/auto/tokenization_auto.py", line 691, in from_pretrained tokenizer_class = tokenizer_class_from_name(tokenizer_class_candidate) File "/home/jiagy/transformers/src/transformers/models/auto/tokenization_auto.py", line 392, in tokenizer_class_from_name return getattr(module, class_name) File "/home/jiagy/transformers/src/transformers/utils/import_utils.py", line 1115, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/home/jiagy/transformers/src/transformers/utils/import_utils.py", line 1127, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.models.llama.tokenization_llama_fast because of the following error (look up to see its traceback): tokenizers>=0.13.3 is required for a normal functioning of this module, but found tokenizers==0.12.1. ``` ### Expected behavior transformer tokenizers version is matching with FastChat tokenizers version.
04-06-2023 12:28:18
04-06-2023 12:28:18
I'm not sure why you are opening the issue here. It's a problem in the dependencies of FastChat.<|||||>Because this makes me very confused about which dependency configuration I should refer to when the projects are updating so quickly.<|||||>This issue can be resolved by installing a specific version of transformers (from source):
```
pip uninstall transformers
pip install git+https://github.com/huggingface/transformers@cae78c46d
```<|||||>After applying that solution, this issue pops up: Failed to import transformers.models.llama.tokenization_llama_fast because of the following error (look up to see its traceback): No module named 'transformers.models.llama.tokenization_llama_fast' <|||||>pip install tokenizers==0.13.3 -- this should solve your problem
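As a quick sanity check before loading the fast LLaMA tokenizer, the same version helper that raises in the traceback above can be called directly; a minimal sketch, assuming transformers is installed from source as discussed.

```python
# The requirement string matches the one in tokenization_llama_fast.py above.
from transformers.utils.versions import require_version

require_version("tokenizers>=0.13.3")  # raises ImportError if the installed tokenizers is too old
```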
transformers
22,615
closed
Translated title of fast_tokenizer to test PR
# What does this PR do? Firstly, sorry for late PR, and From this week i can handle rest of the task. so Next this kind of thing will not happen again. I did translate the title of fast_tokenizers.mdx. Part of https://github.com/huggingface/transformers/issues/20179 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Team PseudoLab, may you please review this PR? @0525hhgus, @wonhyeongseo , @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
04-06-2023 12:10:23
04-06-2023 12:10:23
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22615). All of your documentation changes will be reflected on that endpoint.<|||||>Dear @KIHOON71, 1. Please remove "Fixes # (issue)" from the description. 2. Please add [WIP] to the title or change the PR to draft status. Then reviewers can see that this PR is still in progress. :-) BRs.
transformers
22,614
closed
Add DistilBERTForCausalLM
# What does this PR do? Similar to the BertLMHeadModel this PR aims to add a DistilBertForCausalLM model in modeling_distilbert.py. Fixes https://github.com/huggingface/transformers/issues/7397 Replaces https://github.com/huggingface/transformers/pull/8387, https://github.com/huggingface/transformers/pull/11085 ## Who can review? @patrickvonplaten @ArthurZucker @younesbelkada
04-06-2023 12:00:44
04-06-2023 12:00:44
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22614). All of your documentation changes will be reflected on that endpoint.<|||||>There is no checkpoint available for this task and DistilBERT, which is an encoder model. What is your usecase for adding this?<|||||>> There is no checkpoint available for this task and DistilBERT, which is an encoder model. What is your usecase for adding this? @sgugger To fine-tune DistilBERT models for text generation with EncoderDecoder class.<|||||>How is it different than using BERT?<|||||>@sgugger It is not possible to create Transformer model with EncoderDecoderModel using DistilBERT checkpoint (e.g. BertConfig is supported, but DistilBertConfig is not). If I try to create Encoder Decoder model with distilbert checkpoints like this: ` model_name = 'distilbert-base-multilingual-cased' model = EncoderDecoderModel.from_encoder_decoder_pretrained(model_name, model_name) ` Error is raised: > ValueError: Unrecognized configuration class for this kind of AutoModel: AutoModelForCausalLM. > Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig...<|||||>@sgugger @patrickvonplaten @ArthurZucker @younesbelkada Anyone looking at this?<|||||>Those are not changes we want everyone to have in the distilBERT: it makes the model code too unreadable just so that you can use it in the EncoderDecoder framework. We can leave the fork open if you want to share it with others, and you can also push it in any repo on the Hub using the dynamic code feature.<|||||>I followed same code structure as in BERT. Its not only for EncoderDecoder, your current version doesnt allow usage of DistilBERT in Text generation, which can be useful.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
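For reference, a minimal sketch of the combination that works today without this PR — BERT is registered for AutoModelForCausalLM, unlike DistilBERT; the checkpoint name mirrors the one in the thread.

```python
from transformers import EncoderDecoderModel

# Works out of the box because BertConfig is in the AutoModelForCausalLM mapping.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-multilingual-cased", "bert-base-multilingual-cased"
)
print(type(model.decoder).__name__)  # the decoder is loaded as a causal LM head model
```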
transformers
22,613
closed
allow separate relative attention bias
# What does this PR do? It supports umt5 models, which need separate relative attention biases for each layer. The current code will have backward compatibility with previous T5 and MT5 checkpoints. Fixes # (issue) https://github.com/huggingface/transformers/issues/22573 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Models: - text models: @ArthurZucker and @younesbelkada - @stefan-it
04-06-2023 11:29:34
04-06-2023 11:29:34
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for your PR. Transformers is not a modular toolbox so we never add functionality to existing models like this. If you need this parameter for a new model, you should create a copy of T5 with the add-new-model-like command and just adapt the modeling code. > > Or you can just host your slightly modified T5 model on the Hub with the [code on the Hub API](https://huggingface.co/docs/transformers/custom_models). Thanks for your feedback. No problem, I will close this PR and add another one that has a separate model.
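For readers who want the gist of the change being proposed, here is a schematic sketch (not the transformers internals): T5 computes the relative attention bias once in the first layer and shares it across layers, while UMT5-style models give every layer its own bias embedding.

```python
import torch.nn as nn

class SelfAttentionWithOwnBias(nn.Module):
    """Schematic only: each layer owns its relative position bias table."""

    def __init__(self, relative_attention_num_buckets: int, num_heads: int):
        super().__init__()
        self.relative_attention_bias = nn.Embedding(relative_attention_num_buckets, num_heads)

num_layers, num_buckets, num_heads = 8, 32, 6
# Every layer gets its own bias, instead of only layer 0 holding it (the T5 behaviour).
layers = nn.ModuleList(
    [SelfAttentionWithOwnBias(num_buckets, num_heads) for _ in range(num_layers)]
)
print(sum(p.numel() for p in layers.parameters()))
```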
transformers
22,612
open
Get output_hidden_state and output_scores from Flax whisper model
I need whisper's output_scores and output_hidden_states as the result of the generate() method. With the PyTorch model, I can easily get output_scores and output_hidden_states by setting these parameters in the generate() method as follows:
```
whisper_output = model.generate(
    inputs=input_features,
    max_new_tokens=180,
    output_scores=True,
    output_hidden_states=True,
    return_dict_in_generate=True,
)
```
and the resulting `whisper_output` has 'scores' and 'output_hidden_states' as keys alongside 'sequences'. Now I want to do the same for the Flax whisper model, but setting these parameters as the static_argnames of the model has no effect on getting output_scores. Is there any solution for getting output_scores or logits from the Flax whisper model?
04-06-2023 11:16:34
04-06-2023 11:16:34
cc @sanchit-gandhi <|||||>I found that Flax model when set to use beam-search calculates the scores value: https://github.com/huggingface/transformers/blob/12d51db243a00726a548a43cc333390ebae731e3/src/transformers/generation/flax_utils.py#L83-L96 and in the _beam_search method it is calculated and returned: https://github.com/huggingface/transformers/blob/12d51db243a00726a548a43cc333390ebae731e3/src/transformers/generation/flax_utils.py#L998-L1004 but it doesn't return scores when greedy-search is done: https://github.com/huggingface/transformers/blob/12d51db243a00726a548a43cc333390ebae731e3/src/transformers/generation/flax_utils.py#L55-L65<|||||>I run the flax whisper model in beam_search model by passing `generation_config.num_beams` to a value larger than 1. It returns `scores` at the output but it is totally different from the `scores` returned from PyTorch model. scores in Flax is just a scalar value but scores output of PyTorch model is a List of n (n = number of output tokens) in which each element of list is a torch.tensor(1, size of vocab). In other words the scores of Pytorch return score of each output token with the probability (score) of every vocab token. So the Flax output scores is something totally different<|||||>I found logits of Flax in flax_utils.py as follows: https://github.com/huggingface/transformers/blob/ed67286465c5e9e3d3005de3e21bc3c679d93072/src/transformers/generation/flax_utils.py#L610-L618 Just need to extract this logits out of greed_search function and return it<|||||>I've added the support of `output_scores` to the flax_utils.py code in the followin fork: https://github.com/hannan72/transformers/commit/116d8f38722359ca5d2dad918975348359cc2ac1 And also add support of the following parameters to the Flax-Whisper model: https://github.com/hannan72/transformers/commit/accdcb2d66496c5ee8547739bf833c95e189344c @sanchit-gandhi Could you review changes and do a PR to support scores value for flax model?<|||||>I have made a PR about this feature: https://github.com/huggingface/transformers/pull/22700<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>https://github.com/huggingface/transformers/pull/22700 is still open and active πŸ€—<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
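Until the linked PR lands, one hedged workaround is to recover per-token scores by re-running a forward pass over the generated ids (teacher forcing) and log-softmaxing the logits; the checkpoint and dummy features below are placeholders, and this is not an official API.

```python
import jax
import jax.numpy as jnp
from transformers import FlaxWhisperForConditionalGeneration

# Add from_pt=True if the checkpoint only ships PyTorch weights.
model = FlaxWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
# Dummy log-mel features of shape (batch, num_mel_bins, frames); use the processor on real audio.
input_features = jnp.zeros((1, 80, 3000), dtype=jnp.float32)

generated = model.generate(input_features, max_length=32)
# Teacher-forced forward pass over the generated ids to get (batch, seq_len, vocab) logits.
logits = model(input_features, decoder_input_ids=generated.sequences).logits
scores = jax.nn.log_softmax(logits, axis=-1)
print(scores.shape)
```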
transformers
22,611
closed
[doc] Try a few ≠ ways of linking to Papers, users, and org profiles
would love to hear others' thoughts.
04-06-2023 10:51:55
04-06-2023 10:51:55
_The documentation is not available anymore as the PR was closed or merged._<|||||>for some reason i can't see the model pages in the PR's generated doc<|||||>I can't preview the doc for some reason (getting an error on the [distilbert preview page](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22611/en/model_doc/distilbert)). Also there seems to be an issue with your CircleCI permissions, the tests won't run. Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)? Otherwise the changes look good to me in preview. Would just love to preview the badge for papers!<|||||>Weird, the doc build ran successfully and uploaded a zip file, but it does not contain the modeling files. Will see what's up in a bit.<|||||>It's correctly displayed on the docs now and on the link you shared @sgugger, there was an error with the backend sync.<|||||>ok i kinda like it, WDYT? <img width="1160" alt="image" src="https://user-images.githubusercontent.com/326577/230628465-f3850483-8338-4956-a969-f4f9ffe6b3ea.png"> Link: https://moon-ci-docs.huggingface.co/docs/transformers/pr_22611/en/model_doc/t5
transformers
22,610
closed
ASTModel Signature doesn't work
### System Info - `transformers` version: 4.27.2 - Platform: macOS-13.2.1-arm64-arm-64bit - Python version: 3.10.7 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 2.0.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @NielsRogge ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When I run the following code, I expect this to do a forward pass successfully. I'm using random numbers to test the model.
```
import torch
import numpy as np
from transformers import ASTModel, ASTConfig
from torch.utils.data import DataLoader

configuration = ASTConfig()
model = ASTModel(configuration)

# (batch_size, channels, height, width)
dataset = torch.tensor(np.random.normal(size=(100, 1, 256, 256)))
dataLoader = DataLoader(dataset, batch_size=25, pin_memory=True)

for data in dataLoader:
    model(torch.tensor(data).float())
```
The shape of the input_values is pulled from the [Official docs](https://huggingface.co/docs/transformers/v4.27.2/en/model_doc/audio-spectrogram-transformer#transformers.ASTModel.forward.input_values). > `input_values (torch.FloatTensor of shape (batch_size, num_channels, height, width))` What I see is the following error raised:
```
Expected 3D (unbatched) or 4D (batched) input to conv2d, but got input of size: [25, 1, 256, 1, 256]
```
It looks like the cause to me is [these lines](https://github.com/huggingface/transformers/blame/1670be4bdec19d5a8893f943bf78a8d9b3dc8911/src/transformers/models/audio_spectrogram_transformer/modeling_audio_spectrogram_transformer.py#L110-L113) in the forward pass of the ASTPatchEmbeddings
```
def forward(self, input_values: torch.Tensor) -> torch.Tensor:
    input_values = input_values.unsqueeze(1)
    input_values = input_values.transpose(2, 3)
    embeddings = self.projection(input_values).flatten(2).transpose(1, 2)
    return embeddings
```
When I step through with the debugger, I see that the unsqueeze and transpose commands are what is affecting the shape of the tensor. ### Expected behavior I expect to see the model silently do a forward pass.
04-06-2023 10:28:37
04-06-2023 10:28:37
cc @amyeroberts <|||||>Hi @conradg, thanks for reporting this. I believe this is an issue with the documentation for `input_values` having the incorrect info. Looking at a [code example](https://huggingface.co/docs/transformers/v4.27.2/en/model_doc/audio-spectrogram-transformer#transformers.ASTModel.forward.example), the shape of the input array to the model is `(batch_size, max_length, num_mel_bins)`. Testing on `main`, the example runs successfully. I'll open a quick PR to update.
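To make the corrected shape concrete, here is a small sketch of a forward pass with the documented `(batch_size, max_length, num_mel_bins)` input; it relies on the default `ASTConfig` values, so treat the exact numbers as assumptions.

```python
import torch
from transformers import ASTConfig, ASTModel

config = ASTConfig()
model = ASTModel(config)

# Spectrogram-shaped input: (batch_size, max_length, num_mel_bins), not an image-like 4D tensor.
dummy_input = torch.randn(2, config.max_length, config.num_mel_bins)
outputs = model(dummy_input)
print(outputs.last_hidden_state.shape)
```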
transformers
22,609
closed
Revert error back into warning for byte fallback conversion.
# What does this PR do? Handles https://github.com/huggingface/transformers/pull/22264#issuecomment-1498681408 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
04-06-2023 10:14:47
04-06-2023 10:14:47
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,608
open
[DO NOT MERGE] Add Crop Transformation
# What does this PR do? Abstracts out cropping logic to be a more generic `crop` function which other, more specific cropping functions e.g. `center_crop` can call. Motivation: * The output of the CLIP feature extractor changed after #17628. This was due to a difference in how the `top` and `left` coordinates were calculated resulting in some values being off by one. * The original CLIP feature extractor matched the original implementation * Having a more generic `crop` method enables each image processor to have its own center_crop logic with minimal code replication. [BEFORE MERGING]: Verify this doesn't have large impact on any popular CLIP dependant pipelines Fixes #22505 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
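For readers following the off-by-one discussion above, the difference usually boils down to how the top/left offsets of the center crop are rounded; the two helpers below are an illustration of the competing conventions, not the library code.

```python
import math

def floor_offsets(height: int, width: int, crop_height: int, crop_width: int):
    # Convention A: floor the half-margin.
    return (height - crop_height) // 2, (width - crop_width) // 2

def ceil_offsets(height: int, width: int, crop_height: int, crop_width: int):
    # Convention B: ceil the half-margin; differs by one pixel when the margin is odd.
    return math.ceil((height - crop_height) / 2), math.ceil((width - crop_width) / 2)

print(floor_offsets(257, 257, 224, 224))  # (16, 16)
print(ceil_offsets(257, 257, 224, 224))   # (17, 17)
```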
04-06-2023 10:08:24
04-06-2023 10:08:24
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22608). All of your documentation changes will be reflected on that endpoint.
transformers
22,607
closed
Revert error back into warning for byte fallback conversion.
# What does this PR do? Handles https://github.com/huggingface/transformers/pull/22264#issuecomment-1498681408 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
04-06-2023 09:43:31
04-06-2023 09:43:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,606
closed
update_pip_test_mapping
# What does this PR do? #22180 added a new script to add/update the attribute `pipeline_model_mapping` (for pipeline testing) in a systematic way. This PR uses that script to update this attribute for new and existing model test files. It turns out that `translation` was missing when I first added this attribute in #21516. Fortunately, I am persistent about continuously improving things, and found and fixed this problem as a consequence 🚀 🛠️.
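For context, `pipeline_model_mapping` is a class attribute on the model test classes that maps pipeline task names to model classes; the snippet below is illustrative only (the real mappings are generated by the script mentioned above).

```python
from transformers import BertForSequenceClassification, BertForTokenClassification, BertModel

class BertModelTestSketch:  # stand-in for the real tester class
    pipeline_model_mapping = {
        "feature-extraction": BertModel,
        "text-classification": BertForSequenceClassification,
        "token-classification": BertForTokenClassification,
    }

print(sorted(BertModelTestSketch.pipeline_model_mapping))
```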
04-06-2023 09:05:43
04-06-2023 09:05:43
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,605
closed
UnboundLocalError: local variable 'params_docstring' referenced before assignment
### System Info https://github.com/huggingface/transformers/blob/v4.27.4/src/transformers/utils/doc.py#L130 A bug involving 'params_docstring' is reported at the line above. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction It occurs when the condition `if i < len(lines):` never matches, so `params_docstring` is never assigned before it is used. ### Expected behavior Fix the bug or add an assertion with a clearer error message.
04-06-2023 08:52:58
04-06-2023 08:52:58
For Reproduction: It occurs when we define a new subclass with no docstring.<|||||>No one will be able to help without a clear reproducer.<|||||>For example, we define a subclass of BaseModelOutputWithPoolingAndCrossAttentions, but with no explanation of its arguments:
```
class NewBaseModelOutputWithPoolingAndCrossAttentions(BaseModelOutputWithPoolingAndCrossAttentions):
    final_text_self_embedding: Optional[torch.FloatTensor] = None
    final_text_visual_embedding: Optional[torch.FloatTensor] = None
    text_visual_states: Optional[Tuple[torch.FloatTensor]] = None
```
`lines = output_docstring.split("\n")` then yields nothing useful to iterate over, so the `if i < len(lines):` branch is never executed and `params_docstring` is never assigned.<|||||>Why would you use our internal tools for the documentation if you are not documenting the class?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
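Following the maintainer's point, the error goes away once the subclass actually documents its fields in the format the doc utilities expect; a hedged sketch, with placeholder descriptions and the field names taken from the snippet above.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

import torch
from transformers.modeling_outputs import BaseModelOutputWithPoolingAndCrossAttentions

@dataclass
class NewBaseModelOutputWithPoolingAndCrossAttentions(BaseModelOutputWithPoolingAndCrossAttentions):
    """
    Args:
        final_text_self_embedding (`torch.FloatTensor`, *optional*):
            Placeholder description of the pooled text-only embedding.
        final_text_visual_embedding (`torch.FloatTensor`, *optional*):
            Placeholder description of the pooled text-visual embedding.
        text_visual_states (`tuple(torch.FloatTensor)`, *optional*):
            Placeholder description of the text-visual hidden states.
    """

    final_text_self_embedding: Optional[torch.FloatTensor] = None
    final_text_visual_embedding: Optional[torch.FloatTensor] = None
    text_visual_states: Optional[Tuple[torch.FloatTensor]] = None
```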
transformers
22,604
closed
[WIP] Add PoNet
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Implementation of PoNet model (https://arxiv.org/abs/2110.02442). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker and @younesbelkada. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada Library: - pipelines: @Narsil - tokenizers: @ArthurZucker Documentation: @sgugger, @stevhliu and @MKhalusova Maintained examples (not research project or legacy): - PyTorch: @sgugger -->
04-06-2023 08:22:22
04-06-2023 08:22:22
cc @ArthurZucker and @younesbelkada <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22604). All of your documentation changes will be reflected on that endpoint.<|||||>Hey! Great work 🔥 Would you be open to putting this model on the Hub following [this tutorial](https://huggingface.co/docs/transformers/custom_models)? This model seems very similar to a BERT model, so it makes more sense! Especially for all the additional resources that you want to add.<|||||>Thanks for your advice! I've followed the tutorial and pushed the code to the Hub.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,603
closed
move preprocess_logits_for_metrics before _nested_gather in trainer.e…
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #22602 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @sgugger
04-06-2023 08:10:25
04-06-2023 08:10:25
_The documentation is not available anymore as the PR was closed or merged._<|||||>Sorry. In my training loop, `preprocess_logits_for_metrics` does not use `labels`, and I overlooked that `labels` are gathered before. In the new commit, the code is
```
# Update containers on host
if loss is not None:
    losses = self._nested_gather(loss.repeat(batch_size))
    losses_host = losses if losses_host is None else torch.cat((losses_host, losses), dim=0)
if labels is not None:
    labels = self._pad_across_processes(labels)
if inputs_decode is not None:
    inputs_decode = self._pad_across_processes(inputs_decode)
    inputs_decode = self._nested_gather(inputs_decode)
    inputs_host = (
        inputs_decode
        if inputs_host is None
        else nested_concat(inputs_host, inputs_decode, padding_index=-100)
    )
if logits is not None:
    logits = self._pad_across_processes(logits)
if self.preprocess_logits_for_metrics is not None and logits is not None:
    logits = self.preprocess_logits_for_metrics(logits, labels)
if labels is not None:
    labels = self._nested_gather(labels)
    labels_host = labels if labels_host is None else nested_concat(labels_host, labels, padding_index=-100)
if logits is not None:
    logits = self._nested_gather(logits)
    preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100)
```
labels and logits should be padded first, then preprocessed before gathering. In my use case, I trained BLOOM with batch size 32; the gathered logits have size (32, 1024, 250000+), which takes 15G+ of GPU memory and causes OOM during evaluation. <|||||>@sgugger Done. I rewrote the code with your suggestion. Thanks.
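For anyone hitting the same OOM, a minimal sketch of the kind of `preprocess_logits_for_metrics` this reordering is meant to enable — shrink the logits to token ids so that, once preprocessing runs before the gather, only `(batch, seq_len)` tensors are collected; the Trainer wiring is indicated in the trailing comment.

```python
import torch

def preprocess_logits_for_metrics(logits, labels):
    # Some models return a tuple (logits, past_key_values, ...); keep only the logits.
    if isinstance(logits, tuple):
        logits = logits[0]
    # Reduce (batch, seq_len, vocab_size) float logits to (batch, seq_len) token ids.
    return logits.argmax(dim=-1)

# trainer = Trainer(..., preprocess_logits_for_metrics=preprocess_logits_for_metrics)
```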
transformers
22,602
closed
Preprocess/transform logits before gathering them for computing metrics.
### Feature request In `trainer.evaluation_loop`, `preprocess_logits_for_metrics` should be executed before `_nested_gather` to avoid GPU OOM. ### Motivation `preprocess_logits_for_metrics` currently processes logits after gathering them in distributed training. When training with a large batch_size, token_length or vocab_size, gathering all logits to one node will cause out-of-GPU-memory errors. This preprocessing should be executed before `_nested_gather`. ### Your contribution The main modification would be this in `trainer.evaluation_loop`:
```
if logits is not None:
    logits = self._pad_across_processes(logits)
    if self.preprocess_logits_for_metrics is not None:
        logits = self.preprocess_logits_for_metrics(logits, labels)
    logits = self._nested_gather(logits)
    preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100)
```
04-06-2023 07:42:47
04-06-2023 07:42:47
As replied on the PR, this is incorrect. You should probably use a custom training loop powered by Accelerate to be able to move the logits when you want.
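A hedged sketch of the Accelerate-based alternative suggested above: reduce the logits locally, then gather only the small tensors for metrics. The tiny model and dataset are stand-ins so the sketch runs end to end; it is not a drop-in replacement for Trainer.evaluation_loop.

```python
import torch
from accelerate import Accelerator
from torch.utils.data import DataLoader, TensorDataset

accelerator = Accelerator()

# Stand-ins for the real model and eval dataloader.
model = torch.nn.Linear(8, 5)
dataset = TensorDataset(torch.randn(32, 8), torch.randint(0, 5, (32,)))
loader = DataLoader(dataset, batch_size=8)

model, loader = accelerator.prepare(model, loader)
model.eval()

all_preds, all_labels = [], []
for inputs, labels in loader:
    with torch.no_grad():
        logits = model(inputs)
    preds = logits.argmax(dim=-1)  # shrink before gathering
    all_preds.append(accelerator.gather_for_metrics(preds))
    all_labels.append(accelerator.gather_for_metrics(labels))

preds, labels = torch.cat(all_preds), torch.cat(all_labels)
print((preds == labels).float().mean())
```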
transformers
22,601
closed
Incorrect question answering initialization
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.15.0-1031-azure-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 1.13.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younesbelkada @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Select deberta-mnli model and perform finetuning for question answering on any dataset 2. Errors out with this message ` File "XXXXXX/lib/python3.8/site-packages/transformers/models/deberta/modeling_deberta.py", line 1416, in forward start_logits, end_logits = logits.split(1, dim=-1) ValueError: too many values to unpack (expected 2) ` Models finetuned on mnli have 3 classes by default in their config file (because mnli dataset has 3 classes). When these models are repurposed for question answering task, the classification head is initialized from the *config file* [here](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/deberta/modeling_deberta.py#L1362 ) But for question answering task, 2 outputs per token are expected [here](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/deberta/modeling_deberta.py#L1416). So there is a mismatch between model head which is initialized with 3 labels. This is likely causing an issue with deberta-mnli when used for question answering. It might potentially cause a similar issue for models trained on datasets other than mnli and having number of labels != 2 ### Expected behavior For question answering task, should the num_labels be hardcoded to 2?
04-06-2023 07:22:12
04-06-2023 07:22:12
This is a bug indeed. Do you want to make a quick PR with your fix (labels should indeed be hardcoded at 2 for question answering)?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
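Until the hardcoded fix lands, a hedged user-side workaround is to override `num_labels` when loading an MNLI-finetuned checkpoint for question answering; the checkpoint id below is just an example of such a model.

```python
from transformers import AutoModelForQuestionAnswering

# The MNLI config carries num_labels=3; overriding it at load time gives the freshly
# initialized QA head the two outputs (start/end) that the forward pass unpacks.
model = AutoModelForQuestionAnswering.from_pretrained(
    "microsoft/deberta-base-mnli", num_labels=2
)
print(model.config.num_labels)
```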
transformers
22,600
closed
Add support for Ascend NPU
### Feature request It would be nice if the Transformers suite could be used directly on the Ascend NPU without modifying the source code. ### Motivation In China, the Ascend NPU is the second choice after Nvidia GPUs and has been adopted by many companies, such as Alibaba, ByteDance, Meituan, etc. Huawei officially released an adapter called [`torch_npu`](https://github.com/Ascend/pytorch/blob/master/README.en.md) to adapt PyTorch to the Ascend NPU. `torch_npu` is developer-friendly, so we can still enjoy the same PyTorch experience that we are accustomed to today. The native Transformers suite requires only minor modifications to run on the Ascend NPU, so it is reasonable for the Ascend NPU to become a supported backend in the Transformers community. ### Your contribution I can assist in adding support if you want, see this PR (#22644 )
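For readers unfamiliar with the adapter, a rough sketch of how `torch_npu` is typically used — import it and address the `npu` device like `cuda`; the helper names follow the Ascend README linked above, so treat them as assumptions rather than verified API.

```python
import torch
import torch_npu  # noqa: F401 -- assumed to register the "npu" device with PyTorch

# Assumed to mirror the torch.cuda API per the torch_npu README.
if torch.npu.is_available():
    device = torch.device("npu:0")
    x = torch.randn(2, 3).to(device)
    print(x.device)
```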
04-06-2023 06:53:59
04-06-2023 06:53:59
@sgugger <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,599
closed
No module named 'transformers' after installing from source
### System Info Ubuntu 22.04 in Windows WSL 2. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I just followed the doc [here](https://huggingface.co/docs/transformers/installation#install-from-source). However, an error occurred as below:
```
wu@DESKTOP-COM:~/llama.cpp/transformers$ python
Python 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import transformers
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'transformers'
```
### Expected behavior No error occurred.
04-06-2023 06:22:55
04-06-2023 06:22:55
same error <|||||>Having the same issue as well.<|||||>Me too. Not sure what is going on, but it looks like in site-packages, the transformers-4.28.0.dev0.dist-info directory is created, but not the transformers directory itself!<|||||>... and confirmed, if I roll back using `git checkout 2194943a3443b924e4cd09f37402230b771008f0` then everything installs fine. Something seems to have broken in the past 3-4 commits.<|||||>same<|||||>Steps to reproduce (after uninstalling any version of transformers that you might have): 1. `git clone https://github.com/huggingface/transformers.git` 2. `cd transformers` 3. `pip install .` 4. `python3 -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))"` Resulting error ``` Traceback (most recent call last): File "<string>", line 1, in <module> ModuleNotFoundError: No module named 'transformers' ``` It looks like the change that broke things is https://github.com/huggingface/transformers/pull/22539. If I roll back to the previous change to setup.py, the install works. git checkout 80d1319e1b9dde71b8af641ad1427113058a0af7 --> pip3 install . --> WORKS git checkout 4169dc84bf0072a26f10096a187907d661dcc383 --> pip3 install . --> FAILS Maybe there is a new installation method? <|||||>Thanks for letting us know. I guess that's what happens when you try to clean up to follow the official PEP rules... We'll revert the PR!<|||||>I cannot reproduce this in a virtual environment. Maybe you are using the system `python` and `pip` on Ubuntu, which are installed in `dist-packages` rather than `site-packages`. There is a similar issue oobabooga/text-generation-webui#753. Upgrade your `pip` and `setuptools`, or use a virtual environment will resolve this. In the documentation [Installation](https://huggingface.co/docs/transformers/installation#installation): > # Install with pip > > You **should** install πŸ€— Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you’re unfamiliar with Python virtual environments, take a look at this [guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). A virtual environment makes it easier to manage different projects, and avoid compatibility issues between dependencies. > > Start by creating a virtual environment in your project directory: > > ```bash > python -m venv .env > > ``` The issue here is the users do not follow the installation guide for using a virtual environment. We may need to add `pip3 install --upgrade pip setuptools` in the [Install from source](https://huggingface.co/docs/transformers/installation#install-from-source) documentation. > # Install from source > > Install πŸ€— Transformers from source with the following command: > > ```bash > pip install git+https://github.com/huggingface/transformers > ``` to ```bash pip install --ugprade pip setuptools # reinstall pip and do not use the apt packages pip install git+https://github.com/huggingface/transformers ``` ------ Solution 1: upgrade (reinstall) `pip` and `setuptools` when using the system apt package. 
```bash docker run -it --rm -h ubuntu --pull always ubuntu:22.04 apt update && apt install git python3-dev python3-pip -y python3 -m pip install --upgrade pip setuptools python3 -m pip install git+https://github.com/huggingface/transformers@4169dc84bf0072a26f10096a187907d661dcc383 ``` <details> <summary>Outputs: upgrade (reinstall) `pip` and `setuptools`</summary> ```console $ docker run -it --rm -h ubuntu --pull always ubuntu:22.04 22.04: Pulling from library/ubuntu Digest: sha256:67211c14fa74f070d27cc59d69a7fa9aeff8e28ea118ef3babc295a0428a6d21 Status: Image is up to date for ubuntu:22.04 root@ubuntu:/# apt update && apt install git python3-dev python3-pip -y root@ubuntu:/# which -a python3 /usr/bin/python3 /bin/python3 root@ubuntu:/# which -a pip3 /usr/bin/pip3 /bin/pip3 root@ubuntu:/# python3 -m pip install --upgrade pip setuptools Requirement already satisfied: pip in /usr/lib/python3/dist-packages (22.0.2) Collecting pip Downloading pip-23.0.1-py3-none-any.whl (2.1 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 817.6 kB/s eta 0:00:00 Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (59.6.0) Collecting setuptools Downloading setuptools-67.6.1-py3-none-any.whl (1.1 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 540.5 kB/s eta 0:00:00 Installing collected packages: setuptools, pip Attempting uninstall: setuptools Found existing installation: setuptools 59.6.0 Not uninstalling setuptools at /usr/lib/python3/dist-packages, outside environment /usr Can't uninstall 'setuptools'. No files were found to uninstall. Attempting uninstall: pip Found existing installation: pip 22.0.2 Not uninstalling pip at /usr/lib/python3/dist-packages, outside environment /usr Can't uninstall 'pip'. No files were found to uninstall. Successfully installed pip-23.0.1 setuptools-67.6.1 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv root@ubuntu:/# which -a pip3 /usr/local/bin/pip3 /usr/bin/pip3 /bin/pip3 root@ubuntu:/# python3 -m pip install git+https://github.com/huggingface/transformers Collecting git+https://github.com/huggingface/transformers@4169dc84bf0072a26f10096a187907d661dcc383 Cloning https://github.com/huggingface/transformers (to revision 4169dc84bf0072a26f10096a187907d661dcc383) to /tmp/pip-req-build-w_o1neea Running command git clone --filter=blob:none --quiet https://github.com/huggingface/transformers /tmp/pip-req-build-w_o1neea Running command git rev-parse -q --verify 'sha^4169dc84bf0072a26f10096a187907d661dcc383' Running command git fetch -q https://github.com/huggingface/transformers 4169dc84bf0072a26f10096a187907d661dcc383 Running command git checkout -q 4169dc84bf0072a26f10096a187907d661dcc383 Resolved https://github.com/huggingface/transformers to commit 4169dc84bf0072a26f10096a187907d661dcc383 Installing build dependencies ... done Getting requirements to build wheel ... done Installing backend dependencies ... done Preparing metadata (pyproject.toml) ... 
done Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from transformers==4.28.0.dev0) (2.28.2) Requirement already satisfied: tokenizers!=0.11.3,<0.14,>=0.11.1 in /usr/local/lib/python3.10/dist-packages (from transformers==4.28.0.dev0) (0.13.3) Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from transformers==4.28.0.dev0) (3.11.0) Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.10/dist-packages (from transformers==4.28.0.dev0) (4.65.0) Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from transformers==4.28.0.dev0) (1.24.2) Requirement already satisfied: huggingface-hub<1.0,>=0.11.0 in /usr/local/lib/python3.10/dist-packages (from transformers==4.28.0.dev0) (0.13.4) Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from transformers==4.28.0.dev0) (23.0) Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.10/dist-packages (from transformers==4.28.0.dev0) (2023.3.23) Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from transformers==4.28.0.dev0) (6.0) Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.11.0->transformers==4.28.0.dev0) (4.5.0) Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.28.0.dev0) (3.1.0) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.28.0.dev0) (2022.12.7) Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.28.0.dev0) (3.4) Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.28.0.dev0) (1.26.15) Building wheels for collected packages: transformers Building wheel for transformers (pyproject.toml) ... done Created wheel for transformers: filename=transformers-4.28.0.dev0-py3-none-any.whl size=6862948 sha256=b8dbe24b1d39a4ae836e24e0b4b7ab27b4e024408b7129a4b1c4aad4a41fc4d7 Stored in directory: /root/.cache/pip/wheels/98/63/05/ec5c37d387d2db776a20dac49e1b830aca7fbc2394956367ad Successfully built transformers Installing collected packages: transformers Successfully installed transformers-4.28.0.dev0 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv root@ubuntu:/# python3 -c 'import transformers; print(transformers.__version__)' None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used. 4.28.0.dev0 ``` </details> ------ Solution 2: use a virtual environment (already there in the documentation). 
```bash docker run -it --rm -h ubuntu --pull always ubuntu:22.04 apt update && apt install git python3-dev python3-venv -y python3 -m venv venv source venv/bin/activate python3 -m pip install git+https://github.com/huggingface/transformers@4169dc84bf0072a26f10096a187907d661dcc383 ``` <details> <summary>Outputs: use virtual environment</summary> ```console $ docker run -it --rm -h ubuntu --pull always ubuntu:22.04 22.04: Pulling from library/ubuntu Digest: sha256:67211c14fa74f070d27cc59d69a7fa9aeff8e28ea118ef3babc295a0428a6d21 Status: Image is up to date for ubuntu:22.04 root@ubuntu:/# apt update && apt install git python3-dev python3-venv -y root@ubuntu:/# which -a python3 /usr/bin/python3 /bin/python3 root@ubuntu:/# which -a pip3 /usr/bin/pip3 /bin/pip3 root@ubuntu:/# python3 -m venv venv root@ubuntu:/# source venv/bin/activate (venv) root@ubuntu:/# which -a python3 /venv/bin/python3 /usr/bin/python3 /bin/python3 (venv) root@ubuntu:/# which -a pip3 /venv/bin/pip3 /usr/bin/pip3 /bin/pip3 (venv) root@ubuntu:/# python3 -m pip install git+https://github.com/huggingface/transformers@4169dc84bf0072a26f10096a187907d661dcc383 Collecting git+https://github.com/huggingface/transformers@4169dc84bf0072a26f10096a187907d661dcc383 Cloning https://github.com/huggingface/transformers (to revision 4169dc84bf0072a26f10096a187907d661dcc383) to /tmp/pip-req-build-u7lmhx_v Running command git clone --filter=blob:none --quiet https://github.com/huggingface/transformers /tmp/pip-req-build-u7lmhx_v Running command git rev-parse -q --verify 'sha^4169dc84bf0072a26f10096a187907d661dcc383' Running command git fetch -q https://github.com/huggingface/transformers 4169dc84bf0072a26f10096a187907d661dcc383 Running command git checkout -q 4169dc84bf0072a26f10096a187907d661dcc383 Resolved https://github.com/huggingface/transformers to commit 4169dc84bf0072a26f10096a187907d661dcc383 Installing build dependencies ... done Getting requirements to build wheel ... done Installing backend dependencies ... done Preparing metadata (pyproject.toml) ... 
done Collecting numpy>=1.17 Using cached numpy-1.24.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB) Collecting regex!=2019.12.17 Using cached regex-2023.3.23-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (769 kB) Collecting tokenizers!=0.11.3,<0.14,>=0.11.1 Using cached tokenizers-0.13.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.8 MB) Collecting requests Using cached requests-2.28.2-py3-none-any.whl (62 kB) Collecting packaging>=20.0 Using cached packaging-23.0-py3-none-any.whl (42 kB) Collecting pyyaml>=5.1 Using cached PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (682 kB) Collecting huggingface-hub<1.0,>=0.11.0 Using cached huggingface_hub-0.13.4-py3-none-any.whl (200 kB) Collecting filelock Using cached filelock-3.11.0-py3-none-any.whl (10.0 kB) Collecting tqdm>=4.27 Using cached tqdm-4.65.0-py3-none-any.whl (77 kB) Collecting typing-extensions>=3.7.4.3 Using cached typing_extensions-4.5.0-py3-none-any.whl (27 kB) Collecting idna<4,>=2.5 Using cached idna-3.4-py3-none-any.whl (61 kB) Collecting certifi>=2017.4.17 Using cached certifi-2022.12.7-py3-none-any.whl (155 kB) Collecting charset-normalizer<4,>=2 Using cached charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (199 kB) Collecting urllib3<1.27,>=1.21.1 Using cached urllib3-1.26.15-py2.py3-none-any.whl (140 kB) Building wheels for collected packages: transformers Building wheel for transformers (pyproject.toml) ... done Created wheel for transformers: filename=transformers-4.28.0.dev0-py3-none-any.whl size=6862948 sha256=24db4f2655cd212b0097dd4fd88f2bcad3e1236a0bf700988eefdaad9583d0e9 Stored in directory: /root/.cache/pip/wheels/98/63/05/ec5c37d387d2db776a20dac49e1b830aca7fbc2394956367ad Successfully built transformers Installing collected packages: tokenizers, urllib3, typing-extensions, tqdm, regex, pyyaml, packaging, numpy, idna, filelock, charset-normalizer, certifi, requests, huggingface-hub, transformers Successfully installed certifi-2022.12.7 charset-normalizer-3.1.0 filelock-3.11.0 huggingface-hub-0.13.4 idna-3.4 numpy-1.24.2 packaging-23.0 pyyaml-6.0 regex-2023.3.23 requests-2.28.2 tokenizers-0.13.3 tqdm-4.65.0 transformers-4.28.0.dev0 typing-extensions-4.5.0 urllib3-1.26.15 (venv) root@ubuntu:/# python3 -c 'import transformers; print(transformers.__version__)' None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used. 4.28.0.dev0 ``` </details>
transformers
22,598
closed
BertTokenizerFast.from_pretrained() reproducibly freezing during download
### System Info - `transformers` version: 4.26.1 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.10.10 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction On Windows systems, the following script hangs forever and never downloads the model to cache: ``` from transformers import BertTokenizerFast model = BertTokenizerFast.from_pretrained('bert-base-uncased') ``` The same script runs to completion on Linux and Macintosh using the same version of transformers. Multiple users of the InvokeAI application are having similar problems. ### Expected behavior I expect the second statement to run to completion, download and cache the BERT model, and return the instantiated model object.
04-06-2023 01:26:47
04-06-2023 01:26:47
Could you upgrade `huggingface_hub` and possibly `transformers` too to the last version? There were some bugs on Windows recently fixed.<|||||>Problem persists with `transformers` 4.27.4 and `huggingface-hub` 0.13.3. The code is freezing in file `huggingface_hub/file_download.py` at line 1296, where it tries to obtain a file lock on the path: ``` .cache\huggingface\hub\models--bert-base-uncased\blobs\W/"fb140275c155a9c7c5a3b3e0e77a9e839594a938.lock ``` The lock file is never created on the file system as far as I can tell. The filelock module is working on my system, but apparently FileLock() does not like filenames that start with the quotation mark. If I try to lock a file that starts with the double quote, I get the same freeze experienced with `from_pretrained()`. By any chance did the format of the blob hashes change recently? Also, at least one other model has the same problem. I confirmed this with CLIPTokenizer.<|||||>same problem in Windows 10 latest version when I use "from_pretrained("openai/whisper-tiny").". The same code worked 2~3 days ago. <|||||>Not sure if its similar. First time ever using it with the help of GPT4. Install Uninstall several times. I tried it in Pycharm and Jupyter NB. Windows 11. from transformers import AutoTokenizer, AutoModelForMaskedLM, TrainingArguments, Trainer from datasets import load_dataset txt_file = "path/to/your/text/file.txt" dataset = load_dataset("text", data_files={"train": txt_file}) model_checkpoint = "distilbert-base-uncased" tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) model = AutoModelForMaskedLM.from_pretrained(model_checkpoint) def tokenize_function(examples): return tokenizer(examples["text"], truncation=True, padding="max_length", max_length=128) tokenized_dataset = dataset.map(tokenize_function, batched=True) training_args = TrainingArguments( output_dir="output", overwrite_output_dir=True, num_train_epochs=3, per_device_train_batch_size=8, save_steps=10_000, save_total_limit=2, ) trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_dataset["train"], ) trainer.train() model.save_pretrained("fine_tuned_model") tokenizer.save_pretrained("fine_tuned_model") It seems the script runs indefinitely and nothing happens. Tried many examples too from the Huggingface page. Hopefully there is a fix to it. Oli <|||||>> same problem in Windows 10 latest version when I use "from_pretrained("openai/whisper-tiny").". The same code worked 2~3 days ago. Now my code is running properly. I haven't do any changes to my code. I think the problem is solved internally.<|||||>Yes, there was an internal change in the Hub that made those downloads stop working. That change was reverted so now it should work again if I understand correctly. cc @Wauplin <|||||>Yes exactly. Sorry for the inconvenience, it should be back to normal now. See related issue in `huggingface_hub`: https://github.com/huggingface/huggingface_hub/issues/1423. @sgugger you can close this one as well now <|||||>Let us know if the problem persist and I'll reopen!<|||||>Fixed. Thanks!<|||||>fixed. Merci
transformers
22,597
closed
[WIP] ONNX Multinomial operator supports only one input. As temporary solut…
…ion commentout multinominal call. # What does this PR do? **This temporarily change is not intended to merge. The purpose of this PR is to share the change with the other teams I am working with.** Fixes # (issue) ONNX [Multinomial](https://github.com/onnx/onnx/blob/main/docs/Operators.md#multinomial) operator only supports one input. The sample_size is only an attribute. It should be known when creating/exporting the ONNX model. torch.onnx.export fails with an error. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
04-05-2023 23:34:04
04-05-2023 23:34:04
_The documentation is not available anymore as the PR was closed or merged._<|||||>You're free to do this on your own to have the model work with ONNX, but this is not the kind of fix we can merge into Transformers as it will hurt every user on other hardware.<|||||>I am not intending to merge this PR. I agree that this is a hack not a fix. The reason to create this PR is only to share that change(s) with the other teams I am working with.
transformers
22,596
closed
Move labels to the same device as logits for LlamaForSequenceClassification and Blip2
# What does this PR do? Fixes issue #22561 by moving the labels to the same device as the logits they are compared to for `LlamaForSequenceClassification` and `Blip2`. @sgugger Could you please review this?
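For context, here is a minimal sketch of the pattern this PR applies (illustrative only, not the exact diff; the helper name is made up). When a model is split across devices with `device_map="auto"`, user-provided labels can sit on a different device than the logits, so they are moved first:

```python
import torch
from torch.nn import CrossEntropyLoss


def sequence_classification_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # the one-line change: bring the labels onto the logits' device before computing the loss
    labels = labels.to(logits.device)
    loss_fct = CrossEntropyLoss()
    return loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
```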
04-05-2023 22:43:34
04-05-2023 22:43:34
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger I have refreshed my permissions, but I still do not see the option of rerunning the pipeline on CircleCI. Is it possible that I can't do that?<|||||>You can just push an empty commit: `git commit -m "Trigger CI" --allow-empty`<|||||>@sgugger Is such a long waiting time for the CircleCI report expected?<|||||>Could you try again? Tests still aren't run.<|||||>@sgugger I also added code for Blip2, and the tests now pass.<|||||>Perfect, thanks!
transformers
22,595
closed
`device_map="auto"` doesn't use all available GPUs when `load_in_8bit=True`
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-4.18.0-305.65.1.el8_4.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.4 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @sgugger @kooshi ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction On a machine with more than 2 GPUs (I have 4*A40s) ```python model = transformers.LlamaForCausalLM.from_pretrained( "path/to/converted/llama-65B", load_in_8bit=True, device_map="auto" ) ``` You'll see that only the first two GPUs are filled up. Possibly related to https://github.com/huggingface/transformers/pull/22377. ### Expected behavior All 4 GPUs should get parameters.
04-05-2023 17:40:46
04-05-2023 17:40:46
If I specify `max_memory`, the parameters do get distributed according to it. ```python model = transformers.LlamaForCausalLM.from_pretrained( "path/to/converted/llama-65B", load_in_8bit=True, device_map="auto", max_memory={0: "10GB", 1: "10GB", 2: "48GB", 3: "48GB"} ) ```<|||||>cc @younesbelkada <|||||>You can also specify `device_map="balanced"` to get the parameters evenly dispatched.<|||||>Hmm maybe this is unrelated to `load_in_8bit`, can you try without that and let us know? on the other hand I second what @Xmaster6y said, you can use `balanced` in this case<|||||>balanced and auto are the same thing FYI.<|||||>Yup, I knew `auto` and `balanced` were the same, but tried `balanced` for good measure. Same behavior :/. I just verified that without `load_in_8bit`, the parameters are distributed evenly among the GPUs as expected.<|||||>So this is specifically for `load_in_8bit`. I think @younesbelkada made a fix after the last patch. Could you try an install from source?<|||||>I already installed from source. ``` $ pip freeze | grep transformers transformers @ git+https://github.com/huggingface/transformers.git@15641892985b1d77acc74c9065c332cd7c3f7d7f ```<|||||>I can't dig too deeply into this until later, and I don't have more than 2 GPUs to test, but I can say that the actual size calculations and dispatch are all done in [accelerate](https://github.com/huggingface/accelerate), and the calculation changed as little as 3 weeks ago, so make sure you have the latest installed. If that doesn't fix it, and you want to dig into it, I'd recommend just sprinkling some `print`s around the relevant `accelerate` and `transformers` functions, just to get some visibility into what it thinks it's calculating. It'll be somewhere in `transformers/modeling_utils.py` which calls `get_balanced_memory` and `infer_auto_device_map` in `accelerate/utils/modeling.py` <|||||>Thanks @kooshi! It seems like something was fixed since accelerate 0.8. Installing accelerate from source resolved this issue.
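As a follow-up illustration (not from the thread): once the model is loaded with `device_map="auto"`, the placement that was actually chosen can be inspected through the `hf_device_map` attribute that `from_pretrained` attaches to the model, which makes it easy to confirm that all GPUs received layers. The path below is the placeholder from the issue:

```python
import transformers

model = transformers.LlamaForCausalLM.from_pretrained(
    "path/to/converted/llama-65B",  # placeholder path from the issue
    load_in_8bit=True,
    device_map="auto",
)
# e.g. {"model.embed_tokens": 0, "model.layers.0": 0, ..., "lm_head": 3}
print(model.hf_device_map)
```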
transformers
22,594
closed
Minimum set of requirements
### Feature request Separate base requirements from development dependencies, as the existing `requirements.txt` file conflates them. For example, `accelerate` already does this via setuptools' "extras" [[link](https://github.com/huggingface/accelerate/blob/3cb9d5fd9c78c1da9fbc3127d6e63679a2475c6a/setup.py)] ### Motivation I am a package maintainer who would like to install `transformers` from source. The current `requirements.txt` lists multiple dependencies, covering various use cases: testing, linting, formatting, MLOps, etc. Not all of these would be required for basic usage of the package, such as loading and inference of pre-trained models. Installation could be streamlined by allowing users to only install the dependencies necessary for their workflow. ### Your contribution This would best be tackled by somebody more familiar with the extent of the functionality available in `transformers` and consequences of any changes here.
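For readers unfamiliar with the setuptools "extras" mechanism referenced above, here is a minimal, generic sketch (illustrative only; this is not the actual `accelerate` or `transformers` `setup.py`, and the package and dependency names are placeholders):

```python
from setuptools import setup

setup(
    name="mypackage",                        # placeholder project
    version="0.1.0",
    install_requires=["numpy", "requests"],  # base runtime requirements only
    extras_require={
        "dev": ["pytest", "black"],          # installed via: pip install "mypackage[dev]"
        "docs": ["sphinx"],
    },
)
```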
04-05-2023 17:10:57
04-05-2023 17:10:57
Transformers also makes use of `extras` and the base requirements are limited to what's strictly necessary (see [here](https://github.com/huggingface/transformers/blob/176ceff91f5e5ff15922715e5a4a4d9f66b92d14/setup.py#L412)).<|||||>Thank you. I shouldn't have assumed that the presence of `requirements.txt` meant that `setup.py` was using it.<|||||>Which requirements.txt are you talking about? We only have some for the examples, but there isn't one at the root of the repo.
transformers
22,593
closed
Use native TF checkpoints for the BLIP TF tests
Stop using `from_pt` now that the checkpoints have native TF weights
04-05-2023 17:04:22
04-05-2023 17:04:22
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,592
closed
[Model request] Meta's SegmentAnything Model (SAM)
### Model description Meta Research recently open-sourced their "SegmentAnything Model" (SAM) for image segmentation. It would be great to have it working with this library's `ImageSegmentationPipeline`. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation GitHub repo: https://github.com/facebookresearch/segment-anything Paper: https://ai.facebook.com/research/publications/segment-anything/ Website: https://segment-anything.com/ Demo: https://segment-anything.com/demo Weights: - **`default` or `vit_h`: [ViT-H SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth)** - `vit_l`: [ViT-L SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth) - `vit_b`: [ViT-B SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth)
04-05-2023 16:11:59
04-05-2023 16:11:59
Hi @xenova @alaradirik I would like to work on adding this model.<|||||>@xenova I just checked the model website and I don't have the hardware resources to perform the model inference. <|||||>> @xenova I just checked the model website and I don't have the hardware resources to perform the model inference. Have you tried running it locally w/ python?<|||||>I think I can run it locally, I can work on it<|||||>> Have you tried running it locally w/ python? No, but I don't have a GPU. Also, I recently worked on adding the [seaformer model](https://github.com/huggingface/transformers/pull/21819), which has 14M params; running it locally on CPU took a few seconds, so this one with 632M params will take more time and RAM. <|||||>> I think I can run it locally, I can work on it Great! How is it going? Let me know if you need any help.<|||||>@xenova I think this week I will finish it<|||||>Hey @Xrenya, I'm pretty across these models and would love to get them into `transformers` so please reach out if I can help you in any way.
Once you are done, you would need to run the following commands to check the PR passes all CI tests: ``` make style make quality make repo-consistency RUN_SLOW=TRUE pytest tests/models/sam/test_modeling_sam.py RUN_SLOW=TRUE pytest tests/models/sam/test_image_processor_sam.py RUN_SLOW=TRUE pytest tests/models/sam/test_processor_sam.py ``` We can do an in-depth review once the PR passes most tests or the configuration, preprocessing and modeling is mostly complete. Hope this helps!<|||||>PR for this model is available here, sorry for not catching this issue : #22654 <|||||>@ArthurZucker I see, okay, next time I should push [WIP]
transformers
22,591
closed
feat(model parallelism): moving the labels to the same device as the logits for gpt2 and bart
# What does this PR do? As suggested in #22561, this moves the labels to the same device as the logits they are compared to for the `bart` and `gpt-2` models, following the approach referenced in #22535:
```
lm_logits = self.lm_head(outputs[0])
lm_logits = lm_logits + self.final_logits_bias.to(lm_logits.device)

masked_lm_loss = None
if labels is not None:
    labels = labels.to(lm_logits.device)
    loss_fct = CrossEntropyLoss()
    masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))
```
cc @sgugger, could you please review this?
04-05-2023 15:55:44
04-05-2023 15:55:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot for your PR! Could you apply `make fix-copies` so that the models copied from BART or GPT-2 are auto-updated?<|||||>> Thanks a lot for your PR! Could you apply `make fix-copies` so that the models copied from BART or GPT-2 are auto-updated? Hi, just did that!<|||||>> Thanks a lot! All good! ✨<|||||>Hi, @kaustubh-s1, does this change will fix model parallel for gpt2? I've just tried but got ``` File "/opt/conda/envs/gpt_neox/lib/python3.9/site-packages/torch/nn/functional.py", line 2515, in layer_norm return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm) ``` P.S. my setup is almost same like [this](https://github.com/huggingface/transformers/issues/22569#issue-1654189111), only the following differences ```python def get_parallel_model(model_name): model = AutoModelForCausalLM.from_pretrained( model_name, device_map='auto', torch_dtype=torch.float16, low_cpu_mem_usage=True ) # # setattr(model, 'model_parallel', True) # setattr(model, 'is_parallelizable', True) setattr(model, 'gradient_checkpointing', True) return model ```<|||||>> Hi, @kaustubh-s1, does this change will fix model parallel for gpt2? I've just tried but got > > ``` > File "/opt/conda/envs/gpt_neox/lib/python3.9/site-packages/torch/nn/functional.py", line 2515, in layer_norm > return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled) > RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm) > ``` > > P.S. my setup is almost same like [this](https://github.com/huggingface/transformers/issues/22569#issue-1654189111), only the following differences > > ```python > def get_parallel_model(model_name): > model = AutoModelForCausalLM.from_pretrained( > model_name, > device_map='auto', > torch_dtype=torch.float16, > low_cpu_mem_usage=True > ) > > # > # setattr(model, 'model_parallel', True) > # setattr(model, 'is_parallelizable', True) > > setattr(model, 'gradient_checkpointing', True) > return model > ``` Hi @innat. It should do that ig. But I do not have a multi gpu setup so can't say for sure. I just followed the steps #22535 to move labels to same device as logits. Theoretically speaking, it should work.
transformers
22,590
closed
Support whisper-timestamped
### Feature request It would be great if [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped) could be added to Transformers. ### Motivation whisper-timestamped is an extension of the [openai-whisper](https://pypi.org/project/whisper-openai/) Python package and is meant to be compatible with any version of openai-whisper. On top of openai-whisper, it provides word timestamps and gives a more accurate estimation of speech segments when transcribing. This is suitable for karaoke-style subtitles, etc. ### Your contribution Probably unable to help with this at the moment.
04-05-2023 14:36:42
04-05-2023 14:36:42
We've been thinking about how to add word-level timestamps to Whisper. I still have to look at whisper-timestamped to see exactly what they're doing, but for now I'll reference https://github.com/huggingface/transformers/issues/21412 as the main issue for tracking this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Probably this one can be closed as work on https://github.com/huggingface/transformers/pull/23205 will deliver the feature<|||||>Closed by https://github.com/huggingface/transformers/pull/23205
transformers
22,589
closed
[WIP] 🌐 [i18n-KO] Translated `sequence_classification.mdx` to Korean
# What does this PR do? Translated the `tasks/sequence_classification.mdx` file of the documentation to Korean. - The file name is `sequence_classification.mdx`, but the document name is `text classification`. Thank you in advance for your review:) Part of https://github.com/huggingface/transformers/issues/20179 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <!-- 제좜 μ „ 체크리슀트둜, κ°€μ§œμ—°κ΅¬μ†Œλ§Œμ˜ μ²΄ν¬λ¦¬μŠ€νŠΈλ„ <details>둜 κ°μ‹Έμ„œ λ§Œλ“€μ–΄λ‘λ©΄ 더 쒋을 것 κ°™μ•„μš”. --> ## Who can review? <!-- κ°€μ§œμ—°κ΅¬μ†Œ νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
04-05-2023 14:31:07
04-05-2023 14:31:07
_The documentation is not available anymore as the PR was closed or merged._<|||||>There is a problem with the storage stream, so I will PR it again.. 😒
transformers
22,588
closed
Fix a typo in one of the BLIP pretrained checkpoint names
null
04-05-2023 13:22:39
04-05-2023 13:22:39
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22588). All of your documentation changes will be reflected on that endpoint.
transformers
22,587
closed
Move back doctest instructions to setup.cfg
# What does this PR do? As a result of #22539, the options we have for the doctests are now all ignored. This PR reverts the change for those and puts them back in `setup.cfg`.
04-05-2023 11:46:21
04-05-2023 11:46:21
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22587). All of your documentation changes will be reflected on that endpoint.
transformers
22,586
closed
Fix PT-TF equivalence test for GPT1
This PR fixes the hidden states output from `TFOpenAIGPTDoubleHeadsModel` to have the same shapes as the PT version, and re-enables the relevant test.
04-05-2023 11:31:45
04-05-2023 11:31:45
_The documentation is not available anymore as the PR was closed or merged._<|||||>I may have tagged you slightly too early, please be slower to get to your notifications<|||||>I will do my best to take 72 hours to get to your PR next time you ask for a review ;-)<|||||>:handshake:
transformers
22,585
closed
Tests: disable `accelerate_tests` mark warnings
# What does this PR do? Adds the `accelerate_tests` mark to `conftest.py`, so we don't get related warnings at test time. Here's a print screen before the fix: <img width="1512" alt="Screenshot 2023-04-05 at 11 27 32" src="https://user-images.githubusercontent.com/12240844/230054803-cc95b93d-aab4-4133-baa8-7115760cd3ee.png"> And after the fix: <img width="1512" alt="Screenshot 2023-04-05 at 11 27 53" src="https://user-images.githubusercontent.com/12240844/230054894-46a9435a-43be-4ef9-b111-92d664017d13.png">
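For context, a custom marker is typically registered in `conftest.py` so that pytest stops emitting `PytestUnknownMarkWarning` for it. A minimal sketch (illustrative; the exact description string used in the repo may differ):

```python
# conftest.py
def pytest_configure(config):
    # register the marker so pytest no longer warns about it being unknown
    config.addinivalue_line(
        "markers", "accelerate_tests: mark test that requires accelerate"
    )
```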
04-05-2023 10:28:29
04-05-2023 10:28:29
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,584
closed
Seq2SeqTrainer: use unwrapped model to retrieve the generation config
# What does this PR do? Addresses one of the issues in #22571 As the title indicates, changes the source of the generation config from `model` (wrapped model) to `self.model` (unwrapped model).
04-05-2023 09:52:10
04-05-2023 09:52:10
@stas00 I do not have quick access to a multi-GPU setup. Would you be so kind as to double-check whether this fix solves the issue? πŸ™ <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I confirm that it fixes the first crash with 2+ gpus, the 2nd crash in eval remains. ``` $ PYTHONPATH=src python examples/pytorch/translation/run_translation.py --model_name_or_path t5-base --do_train --do_eval --source_lang en --target_lang de --source_prefix 'translate English to German: ' --dataset_name stas/wmt14-en-de-pre-processed --output_dir /tmp/tst-translation --num_train_epochs 1 --per_device_train_batch_size=1 --max_train_samples 10 --overwrite_output_dir --seed 1137 --per_device_eval_batch_size 1 --predict_with_generate --fp16 --max_eval_samples 10 [...] 04/05/2023 10:11:48 - INFO - __main__ - *** Evaluate *** [INFO|trainer.py:3126] 2023-04-05 10:11:48,677 >> ***** Running Evaluation ***** [INFO|trainer.py:3128] 2023-04-05 10:11:48,677 >> Num examples = 10 [INFO|trainer.py:3131] 2023-04-05 10:11:48,677 >> Batch size = 2 [INFO|configuration_utils.py:575] 2023-04-05 10:11:48,691 >> Generate config GenerationConfig { "_from_model_config": true, "decoder_start_token_id": 0, "eos_token_id": 1, "pad_token_id": 0, "transformers_version": "4.28.0.dev0" } Traceback (most recent call last): File "examples/pytorch/translation/run_translation.py", line 664, in <module> main() File "examples/pytorch/translation/run_translation.py", line 605, in main metrics = trainer.evaluate(max_length=max_length, num_beams=num_beams, metric_key_prefix="eval") File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer_seq2seq.py", line 159, in evaluate return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 2990, in evaluate output = eval_loop( File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 3278, in evaluation_loop metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels)) File "examples/pytorch/translation/run_translation.py", line 546, in compute_metrics decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3445, in batch_decode return [ File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3446, in <listcomp> self.decode( File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3485, in decode return self._decode( File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_fast.py", line 549, in _decode text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens) OverflowError: out of range integral type conversion attempted ``` This of course can be dealt with in a separate PR since the issue appears to be totally different. In which case please remove `Fixes: ...` so that the original Issue doesn't get closed.<|||||>Thank you for checking @stas00! PR header changed accordingly.
transformers
22,583
closed
Add thousands separator in training summary
# What does this PR do? The logger prints a summary at the beginning of training that displays some info such as number of examples, number of parameters, total number of steps, etc. Those numbers can be quite large and difficult to read. I added a thousands separator to improve readability for the following:
- num_examples
- num_train_epochs
- per_device_train_batch_size
- total_train_batch_size
- max_steps
- num_trainable_params

## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
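For illustration, the change boils down to Python's `,` format specifier; the values below are hypothetical, and the real messages are emitted by the `Trainer` logger at the start of training:

```python
num_examples = 1_281_167
max_steps = 400_360
print(f"  Num examples = {num_examples:,}")           # Num examples = 1,281,167
print(f"  Total optimization steps = {max_steps:,}")  # Total optimization steps = 400,360
```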
04-05-2023 09:11:03
04-05-2023 09:11:03
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,582
closed
Adding support for BPE merge creation from scores instead of ids.
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
04-05-2023 08:54:16
04-05-2023 08:54:16
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,581
closed
docs: ko: complete `_toctree.yml`
# What does this PR do? From our first week in localizing the documentation, we came across issues and git conflicts with `_toctree.yml`. This PR updates the `_toctree.yml` file for translations, making it easier for translators to locate the correct files to translate without worrying about the yaml formatting. Each document title now has `(<in translation phrase>)` added in front of it and the "local" key has been changed to `in_translation`. Translators can now use this scaffold by following these steps: 1. Edit the `local` value by copy & pasting directly from the same line number in `en/_toctree.yml`. 2. Edit the `title` value by replacing the `(<in translation phrase>) <english title>` with the translated title of each document. 3. That's it! By using this updated scaffold, translators will be able to easily identify the correct files to translate, minimizing the time spent on formatting and file location issues. We hope that this will streamline the translation process and make it more accessible to our community. Initial language starters can recreate this scaffold by following these steps: 1. Copy the `_toctree.yml` file from the `en` folder. 2. Paste it into the corresponding language folder (e.g., `ko`, `fr`, `de`). 3. Create a temporary `in_translation.mdx` file in the desired language. 4. Find & Replace `local: .*` with `local: in_translation`. 5. Find & Replace `title:` with `title: (<in translation phrase>)` where the phrase is preferrably in the desired language. For example, in Korean the phrase is "λ²ˆμ—­μ€‘" By taking the initial time to create this scaffold for your language, you will greatly reduce fellow language speakers' yaml formatting issues in the long run. Part of https://github.com/huggingface/transformers/issues/20179 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Team HuggingFace: @sgugger, @ArthurZucker, @eunseojo May you please review this PR? Team PseudoLab: @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd May you please review this PR?
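As a convenience, steps 4 and 5 above can also be scripted. A minimal sketch (illustrative only; the `docs/source/ko/_toctree.yml` path is an assumption, and the Korean phrase comes from this PR — adapt both for other languages):

```python
import re
from pathlib import Path

path = Path("docs/source/ko/_toctree.yml")  # assumed location of the copied file
text = path.read_text(encoding="utf-8")

# Step 4: point every entry at the temporary in_translation page.
text = re.sub(r"local: .*", "local: in_translation", text)
# Step 5: prefix every title with the "in translation" phrase (Korean: λ²ˆμ—­μ€‘).
text = re.sub(r"title: (.*)", r"title: (λ²ˆμ—­μ€‘) \1", text)

path.write_text(text, encoding="utf-8")
```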
04-05-2023 08:10:41
04-05-2023 08:10:41
_The documentation is not available anymore as the PR was closed or merged._<|||||>Here's the preview screenshot: <img src="https://user-images.githubusercontent.com/29195190/230030526-fcdde977-3c46-4606-8700-daecba4bd99c.jpg" width="200px"> Yellow highlighted items are complete. Hence, they do not have `(λ²ˆμ—­μ€‘)` in front of them. I hope this new approach will help my colleagues. May you please merge this, Mr. @sgugger ? Thank you so much for your support πŸ™πŸ’•<|||||>I think this PR is a good idea to avoid wrong depth mistakes or yaml syntax errors
transformers
22,580
closed
resume train
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-4.15.0-175-generic-x86_64-with-glibc2.31 - Python version: 3.10.10 - Huggingface_hub version: 0.13.3 - Safetensors version: not installed - PyTorch version (GPU?): 1.12.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Currently, I am using 8 GPUs to train GPT with the `Trainer` API provided by Hugging Face. Due to the large amount of data, I plan to train for only one epoch. When I was halfway through training, I stopped it and increased the number of GPUs to 24. When I resumed training, the total number of steps shown did not change. However, because the number of GPUs increased, the global batch size also increases, so in theory the total number of training steps should change. ### Expected behavior Is this only a display issue? In other words, will the steps actually required to complete the epoch be fewer than the displayed number of steps? If not, how should I skip the data that has already been trained?
04-05-2023 07:39:27
04-05-2023 07:39:27
Resuming training with a different setup than the one that began it is at your own peril and is definitely not recommended or officially supported.<|||||>Ok, thank you for your reply. I will attempt to manually skip the already-trained data.
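For anyone attempting the manual skip mentioned in the last comment, here is a rough sketch (illustrative only; the dataset and counts are made up, with a shuffled sampler this is at best an approximation, and resuming with a changed GPU count remains unsupported):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": [f"sample {i}" for i in range(100_000)]})  # stand-in data
already_seen = 40_000  # number of samples consumed before the restart (assumed)
remaining = ds.select(range(already_seen, len(ds)))
print(len(remaining))  # 60000
```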
transformers
22,579
closed
Hosted Files Compression
### Feature request I am very new to Hugging Face, so I am not sure this is the right place for this request; if not, please guide me. I was thinking that the hosted files (i.e. models) could use compression such as Brotli. Since they are all static files, this could be done once instead of on every request. For example, [decoder_model_merged.onnx](https://huggingface.co/Xenova/transformers.js/blob/main/quantized/openai/whisper-tiny.en/speech2seq-lm-with-past/decoder_model_merged.onnx) is ~50MB but can be compressed to ~30MB using Brotli: ``` brotli decoder_model_merged.onnx -o decoder_model_merged.onnx.br -Z -f ``` ### Motivation Many sites and online demos use the Hugging Face CDN to fetch large models. There could be a substantial reduction in traffic and waiting times if these files were compressed. As long as the CDN honors the request headers and serves the files with the proper response headers, this would be completely transparent (no code change needed) for end devs / users. ### Your contribution I am sorry, I am no expert in this field, nor do I have knowledge of the CDN architecture used.
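For reference, the same one-off compression can also be done from Python with the `Brotli` bindings; a minimal sketch (illustrative only, using the file name from the example above, which must already be downloaded locally):

```python
import brotli

with open("decoder_model_merged.onnx", "rb") as f:
    raw = f.read()
compressed = brotli.compress(raw, quality=11)  # quality 11 = maximum compression
print(f"{len(raw):,} -> {len(compressed):,} bytes")
```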
04-05-2023 07:13:00
04-05-2023 07:13:00
Answered in https://github.com/huggingface/huggingface_hub/issues/1446#issuecomment-1521893687 on why this will unfortunately not be supported anytime soon :confused: Since we have a generic issue in `huggingface_hub`, I think we can close this one.