Dataset columns:
repo: string (1 distinct value)
number: int64 (1 to 25.3k)
state: string (2 distinct values)
title: string (length 1 to 487)
body: string (length 0 to 234k)
created_at: string (length 19)
closed_at: string (length 19)
comments: string (length 0 to 293k)
transformers
19,766
closed
Add missing information on token_type_ids for roberta model
# What does this PR do? Adds missing information on `token_type_ids` for the RoBERTa model. Fixes #19744 - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? ## Who can review? @sgugger
10-20-2022 07:26:22
10-20-2022 07:26:22
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for the PR. Can you be a bit more explicit, as the configuration will always be initialized with this parameter? Maybe it needs to be 2, for instance? Or any value >= 2? Done
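For context on the exchange above, a minimal sketch of what initializing the configuration with this parameter could look like; the `type_vocab_size=2` value and the use of `ignore_mismatched_sizes` are illustrative assumptions, not something taken from the PR itself.

```python
from transformers import RobertaConfig, RobertaModel

# RoBERTa checkpoints ship with type_vocab_size=1, so only token_type_id 0 is valid.
# To actually use segment ids 0 and 1, create the config with type_vocab_size >= 2;
# the resized token type embedding table is then randomly initialized.
config = RobertaConfig.from_pretrained("roberta-base", type_vocab_size=2)
model = RobertaModel.from_pretrained(
    "roberta-base",
    config=config,
    ignore_mismatched_sizes=True,  # the pretrained table has a single row, so sizes differ
)
print(model.embeddings.token_type_embeddings.weight.shape)  # torch.Size([2, 768])
```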
transformers
19,765
closed
Add functionality to iterate pipelines over sentence pairs when using a dataset
# What does this PR do? This PR adds a new torch dataset class to pt_utils.py that helps with using pipelines and datasets for sentence-pair tasks. Usage can be as simple as:

```python
dataset = Dataset.from_pandas(dataset_df[['sentence1', 'sentence2']])
pipe = pipeline('text-classification', model=args.input_path_model, device=0, num_workers=4)
result = list(tqdm(pipe(KeyPairDataset(dataset, 'sentence1', 'sentence2'), batch_size=32), total=len(dataset)))
```

Fixes #19660 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
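For comparison (not part of this PR), the existing single-column `KeyDataset` helper in `transformers.pipelines.pt_utils` is used the same way; the dataset and model names below are placeholders chosen for illustration.

```python
from datasets import load_dataset
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset

# Stream one text column through a pipeline; KeyPairDataset extends this idea to
# two columns that are passed to the tokenizer together as a sentence pair.
dataset = load_dataset("imdb", split="test[:32]")
pipe = pipeline("text-classification", model="distilbert-base-uncased-finetuned-sst-2-english")
for out in pipe(KeyDataset(dataset, "text"), batch_size=8):
    print(out)
```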
10-20-2022 07:21:43
10-20-2022 07:21:43
_The documentation is not available anymore as the PR was closed or merged._<|||||>@Narsil , please have a look.
transformers
19,764
closed
Using function "generate()" to generate text based on casual language model like GPT2 will repeat the input in the begining.
### System Info This issue has nothing to do with the relevant system information. ### Who can help? @patrickvonplaten, @Narsil, @gante ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Just two kinds of examples, as illustrated in your code. 1. Multinomial sampling:

```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")

>>> prompt = "Today I believe we can finally"
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids

>>> # sample up to 30 tokens
>>> torch.manual_seed(0)  # doctest: +IGNORE_RESULT
>>> outputs = model.generate(input_ids, do_sample=True, max_length=30)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Today I believe we can finally get rid of discrimination," said Rep. Mark Pocan (D-Wis.).\n\n"Just look at the']
```

2. Beam-search decoding:

```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-de")

>>> sentence = "Paris is one of the densest populated areas in Europe."
>>> input_ids = tokenizer(sentence, return_tensors="pt").input_ids

>>> outputs = model.generate(input_ids, num_beams=5)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Paris ist eines der dichtesten besiedelten Gebiete Europas.']
```

### Expected behavior I found that when I used a causal language model and "generate()" to generate a text output based on the given text input, there were two kinds of situations: 1. if the model is good enough and the input is reasonable, the output will fully repeat the input at the beginning; 2. if the model is not good enough or the input is not reasonable, the output will still try to repeat the input but can't reproduce it exactly at the beginning. However, what I want is an output that does not repeat the input, and I don't know what parameter I need to set to achieve this. Although I know that this kind of language model generates output token by token, I wonder if repeating the input in the output can be avoided with your code.
10-20-2022 06:06:32
10-20-2022 06:06:32
The outputs of the causal language model contain the given prefix.<|||||>CODE IT <|||||>> The outputs of the causal language model contain the given prefix. So sometimes, if the prefix is not good enough or the model is not good enough, it possibly can't repeat the prefix exactly the same as the given one? Just like the beam search example mentioned above.<|||||>> CODE IT I've been post-processing the output before then; I just remove the content of the input from the output. Actually, I found that the API of OpenAI (InstructGPT) can generate the output without the prefix, and I wonder if they do the same as me, removing the input after the generation process.<|||||>You might want to use

```python
from transformers import pipeline

pipe = pipeline(model="gpt2", return_full_text=False)
print(pipe("Some text"))
```

`.generate` returns the real tensors, which include the prompt for `decoder-only` models but not for `encoder-decoder` models. This is just how these models operate.<|||||>> You might want to use Many thanks. I've read through the source code of "pipeline" in detail; it seems that it calls "generate()" and does the mentioned processing through the following code in "postprocess()":

```python
if return_type == ReturnType.FULL_TEXT:
    all_text = prompt_text + text[prompt_length:]
else:
    all_text = text[prompt_length:]
```

Actually, I was confused about the difference between "pipeline('text-generation')" and "generate()" before; now I understand it more clearly. I'll close the issue, wish you a good day.<|||||> Just for future readers: - `pipelines`: from raw string to raw string - `generate`: from input_ids tensors to output_ids tensors `generate` doesn't have the option to "cut" the input_ids; it really operates on what the model sees, which are all the ids. `pipeline`, on the other hand, is designed to work as much as possible out of the box for non-ML users, so it will add some magic for you sometimes (like here, cutting the input, which is annoying when writing an autocomplete workflow for instance).
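A brief sketch of the manual slicing approach discussed in this thread (not an official API, just dropping the prompt tokens that `generate` returns for decoder-only models):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Today I believe we can finally"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids, do_sample=True, max_length=30)

# Decoder-only models return prompt + continuation, so keep only the tokens
# generated after the prompt.
new_tokens = outputs[:, input_ids.shape[1]:]
print(tokenizer.batch_decode(new_tokens, skip_special_tokens=True))
```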
transformers
19,763
closed
[Doctest] Add `configuration_opt.py`, `configuration_openai.py`
Based on #19487
10-20-2022 03:31:46
10-20-2022 03:31:46
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,762
closed
Why is this dummy input not consistent with PyTorch?
https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/layoutlmv3/modeling_tf_layoutlmv3.py#L988 The dummy input of the PyTorch LayoutLMv3 is like this, from `transformers.utils`: https://github.com/huggingface/transformers/blob/v4.23.1/src/transformers/modeling_utils.py#L954

```python
>>> from transformers.utils import DUMMY_INPUTS
>>> print(DUMMY_INPUTS)
[[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]]
```
10-20-2022 02:27:03
10-20-2022 02:27:03
Hi there! Please ask questions like this on the [forums](https://discuss.huggingface.co/), as we keep issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,761
closed
bert2gpt_neo
### System Info - `transformers` version: 4.24.0.dev0 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.15 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1+cu113 (True) - Tensorflow version (GPU?): 2.9.2 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? I'm trying to build an encoder-decoder model using `bert as an encoder` and `gpt_neo as a decoder`. I fine-tune it on the `summarization` task. I got this error at the beginning of training: `TypeError: forward() got an unexpected keyword argument 'encoder_hidden_states'` However, when I used GPT-2 it worked well. I'm sharing the [Colab notebook for more details](https://colab.research.google.com/drive/16pg7XRDJ6iu4ih15qZAx3E09cnk_j77u?usp=sharing) @patil-suraj, @sgugger, @patrickvonplaten ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction [Colab notebook for more details](https://colab.research.google.com/drive/16pg7XRDJ6iu4ih15qZAx3E09cnk_j77u?usp=sharing) ### Expected behavior Help to resolve the problem
10-19-2022 23:04:21
10-19-2022 23:04:21
@patil-suraj @sgugger @patrickvonplaten <|||||>Hey @elmadany, In general, could you maybe share a short reproducible code snippet below that is easy to copy-paste? The Google Colab is more than just an error reproduction, and going through it step by step sadly takes us too much time. Thank you :-) To answer your question: GPT-Neo is currently not compatible with being used as a decoder in an encoder-decoder setting. GPT-Neo does not have an `encoder_hidden_states` function argument here: https://github.com/huggingface/transformers/blob/371337a95b5d82cc9376c2595ed2022a5eb2ee6e/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L721 whereas GPT-2 has one: https://github.com/huggingface/transformers/blob/371337a95b5d82cc9376c2595ed2022a5eb2ee6e/src/transformers/models/gpt2/modeling_gpt2.py#L1030 => that's the reason why GPT-2 works but GPT-Neo doesn't. To solve this one would need to adapt GPT-Neo to allow for cross-attention, similar to: https://github.com/huggingface/transformers/pull/6415 . Feel free to give it a try if you would like to add a feature to transformers :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> Hey @elmadany, > > In general, could you maybe share a short reproducible code snippet below that is easy to copy-paste? The Google Colab is more than just an error reproduction, and going through it step by step sadly takes us too much time. Thank you :-) > > To answer your question: GPT-Neo is currently not compatible with being used as a decoder in an encoder-decoder setting. GPT-Neo does not have an `encoder_hidden_states` function argument here: > > https://github.com/huggingface/transformers/blob/371337a95b5d82cc9376c2595ed2022a5eb2ee6e/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L721 > > whereas GPT-2 has one: > > https://github.com/huggingface/transformers/blob/371337a95b5d82cc9376c2595ed2022a5eb2ee6e/src/transformers/models/gpt2/modeling_gpt2.py#L1030 > > => that's the reason why GPT-2 works but GPT-Neo doesn't. > > To solve this one would need to adapt GPT-Neo to allow for cross-attention, similar to: #6415 . Feel free to give it a try if you would like to add a feature to transformers :-) Hello, have you ever tried the `generate()` method using the `encoder_hidden_states` parameter, like:

```python
model = GPT2LMHeadModel.from_pretrained('gpt2', output_hidden_states=True, add_cross_attention=True)
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

memory = ["Oh no", "Oh no", "Excellent !"]
memory = [tokenizer.encode(i, max_length=3) for i in memory]
memory = torch.tensor(memory)
memory = model(memory, past_key_values=None)
memory = memory.hidden_states[0]  # batch_size, 1, 768

inputs = ["This is my", "I want to", "The world is"]
inputs = [tokenizer.encode(i, max_length=3) for i in inputs]
inputs = torch.tensor(inputs)

output = model.generate(encoder_hidden_states=memory, inputs=inputs)
print(tokenizer.batch_decode(output))

output = model.generate(inputs=inputs)
print(tokenizer.batch_decode(output))
```

This is just a demo. I found that the outputs are the same no matter how the `encoder_hidden_states` parameter is set, and in debug mode I found that this parameter is not passed to `GPT2LMHeadModel`.
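For reference, a minimal sketch of the BERT-to-GPT-2 pairing that does work today because GPT-2 supports cross-attention; GPT-Neo would first need the adaptation described above. The checkpoint names and the tiny input are illustrative only, and the untrained combination will not produce useful summaries.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# GPT-2 accepts encoder_hidden_states, so it can act as the decoder here; GPT-Neo
# cannot be dropped in until cross-attention support is added to its forward().
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "gpt2")

enc_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
dec_tok = AutoTokenizer.from_pretrained("gpt2")

model.config.decoder_start_token_id = dec_tok.bos_token_id
model.config.pad_token_id = enc_tok.pad_token_id

inputs = enc_tok("A long article that we would like to summarize.", return_tensors="pt")
summary_ids = model.generate(inputs.input_ids, max_length=20)
print(dec_tok.batch_decode(summary_ids, skip_special_tokens=True))
```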
transformers
19,760
closed
docs: All broken links were fixed in the CONTRIBUTING.md file.
# What does this PR do? ### This PR fixes the broken links in the contributing file. - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). **Yes, this PR does fix broken links in the contributing file.** - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? **Yes, I've read the contributor guideline.** - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. **No, it was not discussed in any GitHub issue or on the forum.** - [x] Did you write any new necessary tests? **Yes, before creating the PR, I checked the broken links in the contributing file.**
10-19-2022 20:30:58
10-19-2022 20:30:58
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for the fixes! You're welcome!
transformers
19,759
closed
Fix docker image build
# What does this PR do? The docker image build fails (for the deepspeed-related docker file) since #19532. It seems `nvcr.io/nvidia` is somehow different, and we can't build 2 images in a single GH job. Current failed run: https://github.com/huggingface/transformers/actions/runs/3278157954
10-19-2022 19:26:40
10-19-2022 19:26:40
_The documentation is not available anymore as the PR was closed or merged._<|||||>Not at all an expert on docker (quite the opposite :sweat_smile: ) so will let @LysandreJik comment on this one!
transformers
19,758
closed
[Doctest] Add `configuration_squeezebert.py`
Add `configuration_squeezebert.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you please take a look at it? Thanks =)
10-19-2022 19:07:20
10-19-2022 19:07:20
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,757
closed
[Doctest] Add `configuration_speech_to_text.py`
Add `configuration_speech_to_text.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you please check it? Thanks :)
10-19-2022 19:06:50
10-19-2022 19:06:50
transformers
19,756
closed
[Doctest] Add `configuration_speech_to_text_2.py`
Add `configuration_speech_to_text_2.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you please take a look at it? Thanks :)
10-19-2022 19:06:03
10-19-2022 19:06:03
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,755
closed
Install tf2onnx dev version
# What does this PR do? A temporary fix for the error on CI: `ValueError: from_keras requires input_signature`. Reference: [this commit](https://github.com/onnx/tensorflow-onnx/commit/ddca3a5eb2d912f20fe7e0568dd1a3013aee9fa3)
10-19-2022 19:00:16
10-19-2022 19:00:16
_The documentation is not available anymore as the PR was closed or merged._<|||||>@LysandreJik I guess ready to merge :-) ?
transformers
19,754
closed
Fixed spacing errors in ISSUES.md
# What does this PR do? Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
10-19-2022 18:28:37
10-19-2022 18:28:37
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,753
closed
Added type hints for `LayoutLMv3`
# What does this PR do? Added type hints in `LayoutLMv3`
10-19-2022 16:40:01
10-19-2022 16:40:01
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,752
closed
[Doctests] Add `configuration_detr.py`
Add `configuration_detr.py` to `utils/documentation_tests.txt` for doctest. Based on issue https://github.com/huggingface/transformers/issues/19487 @ydshieh could you please check it? Thank you :)
10-19-2022 15:55:30
10-19-2022 15:55:30
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,751
closed
[Doctest] Add `configuration_decision_transformer.py`
Add `configuration_decision_transformer.py` to `utils/documentation_tests.txt` for doctest. Based on issue https://github.com/huggingface/transformers/issues/19487 @ydshieh could you please check it? Thank you :)
10-19-2022 15:31:39
10-19-2022 15:31:39
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,750
closed
Fix cache version file creation
# What does this PR do? As reported in #19738, the cache version file is never actually created on a new machine. This is because it was behind a check that was a bit too restrictive (there needed to be cached files). This PR should fix this. Fixes #19738
10-19-2022 14:22:19
10-19-2022 14:22:19
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,749
closed
[Doctest] Add `configuration_wavlm.py`
Add `configuration_wavlm.py` to `utils/documentation_tests.txt` for doctest. Based on issue: [19487](https://github.com/huggingface/transformers/issues/19487) @ydshieh Could you please take a look? Thank you very much for your kind support!
10-19-2022 13:58:53
10-19-2022 13:58:53
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,748
closed
Specify TF framework explicitly in more pipeline tests
# What does this PR do? It seems these 2 are the remaining ones, and there is no test failure due to the change in this PR.
10-19-2022 13:20:33
10-19-2022 13:20:33
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,747
closed
(TF) model.generate with beam search to an exportable tf.function
### Feature request It would be good if we could export the `tf.function` `model.generate` with beam search, so it is servable with TF Serving. ### Motivation When we try to export a `model.generate` with `do_sample=True` to a `tf.function` and save it, we get an error due to Transformers and TensorFlow internals. It would be good if, such as what is done in #18372, we could do for beam search:

```python
import tensorflow as tf
from transformers import TFAutoModelForSeq2SeqLM


class MyOwnModel(tf.Module):
    def __init__(self, model_path="t5-small"):
        super(MyOwnModel, self).__init__()
        self.model = TFAutoModelForSeq2SeqLM.from_pretrained(model_path)

    @tf.function(
        input_signature=(
            tf.TensorSpec((None, 32), tf.int32, name="input_ids"),
            tf.TensorSpec((None, 32), tf.int32, name="attention_mask"),
        ),
        jit_compile=True,
    )
    def serving(self, input_ids, attention_mask):
        outputs = self.model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            max_new_tokens=32,
            return_dict_in_generate=True,
            do_sample=True,
        )
        return {"sequences": outputs["sequences"]}


model = MyOwnModel()
export_dir = "./t5-example"
tf.saved_model.save(
    model,
    export_dir,
    signatures={
        "serving_default": model.serving
    })
```

That way, we would be able to serve TF text-to-text models with beam search in TF Serving, which would be a blast. Otherwise we would have to implement our own servers, which is not ideal. ### Your contribution I can try submitting a PR, but would be glad if I could get some guidance because this one seems harsh. It seems like we would have to remove lots of `shape`-related operations.
10-19-2022 12:22:24
10-19-2022 12:22:24
cc @gante <|||||>Hi @piEsposito 👋 Thanks for raising this issue -- there were indeed two related problems, both fixed in #19773 :)<|||||>@gante you always bring good news when I'm here to bother you folks from Hugging Face. Thanks!
transformers
19,746
closed
Fix accelerate tests
# What does this PR do? Fixes the accelerate tests based on the new accelerate update. Some of the tests are fixed, but not all yet, so WIP. `for i in range(len(max_memory)-2):`
10-19-2022 11:38:09
10-19-2022 11:38:09
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19746). All of your documentation changes will be reflected on that endpoint.
transformers
19,745
closed
ClassificationModel.train_model strange behaviour / errors
### System Info - `transformers` version: 4.23.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.10.6 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: Distributed ### Who can help? @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am trying to do a very simple training of a HuggingFace model from the community in VS Code. This code works in Jupyter Notebook and the model is seemingly training (progress bar appears and moves) but not when running on the command line (Terminal) or in the interactive cells in VS code. Can I get this code to work in VS code? Apologies for the long error traces but this is what is being printed in the console. Appreciate any help with this. train_dataset: ![image](https://user-images.githubusercontent.com/75484578/196668794-5ece86ea-cd44-4c9a-9878-dae73c60221d.png) 1. `conda create -n ENVIRONMENT python=3.10` 2. Install packages using pip: ```Python import subprocess import sys def install(package): subprocess.check_call([sys.executable, '-m', 'pip', 'install', package]) package_list = ['numpy','pandas', 'matplotlib', 'torch', 'transformers', 'simpletransformers'] for i in package_list: install(i) print('Installed', i) ``` 3. Run .py file containing model parameters and training in environment: ```Python import torch import pandas as pd import transformers from simpletransformers.classification import ClassificationModel, ClassificationArgs cuda_available = torch.cuda.is_available() df = pd.read_csv('./data/bindingDB/processed_data/data.csv') df = df[['SMILES', 'label']].rename(columns={'SMILES':'smiles', 'label':'labels'}) model_args = ClassificationArgs() model_args.num_train_epochs = 5 model_args.output_dir = './model_outputs/baBERTa/' model_args.regression = True model = ClassificationModel(model_type= 'roberta', model_name='seyonec/ChemBERTa-zinc-base-v1', use_cuda=cuda_available, num_labels=1, args=model_args) train_size = 0.8 train_dataset=df.sample(frac=train_size,random_state=200).reset_index(drop=True) test_dataset=df.drop(train_dataset.index).reset_index(drop=True) #train model model.train_model(train_dataset) # STRANGE BEHAVIOUR HERE ! ``` 4. Observe model.train_model(train_dataset) produces strange behaviours: ```text Some weights of the model checkpoint at seyonec/ChemBERTa-zinc-base-v1 were not used when initializing RobertaForSequenceClassification: ['lm_head.layer_norm.weight', 'roberta.pooler.dense.bias', 'lm_head.decoder.weight', 'roberta.pooler.dense.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.dense.weight', 'lm_head.decoder.bias', 'lm_head.layer_norm.bias'] - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). 
Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at seyonec/ChemBERTa-zinc-base-v1 and are newly initialized: ['classifier.dense.weight', 'classifier.out_proj.weight', 'classifier.out_proj.bias', 'classifier.dense.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Some weights of the model checkpoint at seyonec/ChemBERTa-zinc-base-v1 were not used when initializing RobertaForSequenceClassification: ['lm_head.layer_norm.weight', 'lm_head.decoder.weight', 'lm_head.decoder.bias', 'lm_head.bias', 'roberta.pooler.dense.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'roberta.pooler.dense.weight'] - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at seyonec/ChemBERTa-zinc-base-v1 and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.bias', 'classifier.out_proj.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Some weights of the model checkpoint at seyonec/ChemBERTa-zinc-base-v1 were not used when initializing RobertaForSequenceClassification: ['lm_head.dense.bias', 'lm_head.decoder.bias', 'lm_head.layer_norm.weight', 'lm_head.bias', 'roberta.pooler.dense.weight', 'lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.decoder.weight', 'roberta.pooler.dense.bias'] - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at seyonec/ChemBERTa-zinc-base-v1 and are newly initialized: ['classifier.out_proj.weight', 'classifier.dense.bias', 'classifier.out_proj.bias', 'classifier.dense.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Some weights of the model checkpoint at seyonec/ChemBERTa-zinc-base-v1 were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'lm_head.bias', 'roberta.pooler.dense.weight', 'lm_head.decoder.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.decoder.bias'] - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). 
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at seyonec/ChemBERTa-zinc-base-v1 and are newly initialized: ['classifier.out_proj.bias', 'classifier.out_proj.weight', 'classifier.dense.bias', 'classifier.dense.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Some weights of the model checkpoint at seyonec/ChemBERTa-zinc-base-v1 were not used when initializing RobertaForSequenceClassification: ['lm_head.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.bias', 'lm_head.decoder.weight', 'roberta.pooler.dense.bias', 'lm_head.decoder.bias', 'lm_head.layer_norm.weight', 'roberta.pooler.dense.weight', 'lm_head.dense.weight'] - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at seyonec/ChemBERTa-zinc-base-v1 and are newly initialized: ['classifier.out_proj.bias', 'classifier.out_proj.weight', 'classifier.dense.weight', 'classifier.dense.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Some weights of the model checkpoint at seyonec/ChemBERTa-zinc-base-v1 were not used when initializing RobertaForSequenceClassification: ['lm_head.layer_norm.weight', 'lm_head.decoder.weight', 'lm_head.dense.weight', 'lm_head.decoder.bias', 'roberta.pooler.dense.bias', 'lm_head.bias', 'roberta.pooler.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.bias'] - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at seyonec/ChemBERTa-zinc-base-v1 and are newly initialized: ['classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.dense.weight', 'classifier.out_proj.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
Some weights of the model checkpoint at seyonec/ChemBERTa-zinc-base-v1 were not used when initializing RobertaForSequenceClassification: ['lm_head.decoder.bias', 'lm_head.layer_norm.bias', 'roberta.pooler.dense.weight', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'roberta.pooler.dense.bias', 'lm_head.decoder.weight', 'lm_head.bias'] - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at seyonec/ChemBERTa-zinc-base-v1 and are newly initialized: ['classifier.dense.weight', 'classifier.out_proj.bias', 'classifier.dense.bias', 'classifier.out_proj.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Some weights of the model checkpoint at seyonec/ChemBERTa-zinc-base-v1 were not used when initializing RobertaForSequenceClassification: ['lm_head.layer_norm.weight', 'lm_head.decoder.weight', 'lm_head.dense.weight', 'lm_head.layer_norm.bias', 'lm_head.bias', 'roberta.pooler.dense.bias', 'lm_head.decoder.bias', 'roberta.pooler.dense.weight', 'lm_head.dense.bias'] - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at seyonec/ChemBERTa-zinc-base-v1 and are newly initialized: ['classifier.out_proj.weight', 'classifier.out_proj.bias', 'classifier.dense.bias', 'classifier.dense.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. /Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_model.py:612: UserWarning: Dataframe headers not specified. Falling back to using column 0 as text and column 1 as labels. warnings.warn( /Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_model.py:612: UserWarning: Dataframe headers not specified. Falling back to using column 0 as text and column 1 as labels. 
warnings.warn( Traceback (most recent call last): Traceback (most recent call last): File "<string>", line 1, in <module> File "<string>", line 1, in <module> File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main exitcode = _main(fd, parent_sentinel) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 125, in _main exitcode = _main(fd, parent_sentinel) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 125, in _main prepare(preparation_data) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 236, in prepare prepare(preparation_data) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 236, in prepare _fixup_main_from_path(data['init_main_from_path']) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 287, in _fixup_main_from_path _fixup_main_from_path(data['init_main_from_path']) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 287, in _fixup_main_from_path main_content = runpy.run_path(main_path, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/runpy.py", line 289, in run_path main_content = runpy.run_path(main_path, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/runpy.py", line 289, in run_path return _run_module_code(code, init_globals, run_name, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/runpy.py", line 96, in _run_module_code return _run_module_code(code, init_globals, run_name, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/runpy.py", line 96, in _run_module_code _run_code(code, mod_globals, init_globals, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/runpy.py", line 86, in _run_code _run_code(code, mod_globals, init_globals, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/Users/<User>/Documents/GitHub/baBERT/RoBERTaBa_simple.py", line 39, in <module> exec(code, run_globals) File "/Users/<User>/Documents/GitHub/baBERT/RoBERTaBa_simple.py", line 39, in <module> model.train_model(train_dataset)model.train_model(train_dataset) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_model.py", line 619, in train_model File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_model.py", line 619, in train_model train_dataset = self.load_and_cache_examples( File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_model.py", line 1827, in load_and_cache_examples train_dataset = self.load_and_cache_examples( File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_model.py", line 1827, in load_and_cache_examples dataset = ClassificationDataset( File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_utils.py", line 282, in __init__ dataset = ClassificationDataset( File 
"/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_utils.py", line 282, in __init__ self.examples, self.labels = build_classification_dataset( File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_utils.py", line 248, in build_classification_dataset self.examples, self.labels = build_classification_dataset( File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_utils.py", line 248, in build_classification_dataset with Pool(args.process_count) as p: File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/context.py", line 119, in Pool with Pool(args.process_count) as p: File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/context.py", line 119, in Pool return Pool(processes, initializer, initargs, maxtasksperchild, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/pool.py", line 215, in __init__ return Pool(processes, initializer, initargs, maxtasksperchild, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/pool.py", line 215, in __init__ self._repopulate_pool() File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/pool.py", line 306, in _repopulate_pool self._repopulate_pool() File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/pool.py", line 306, in _repopulate_pool return self._repopulate_pool_static(self._ctx, self.Process, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/pool.py", line 329, in _repopulate_pool_static return self._repopulate_pool_static(self._ctx, self.Process, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/pool.py", line 329, in _repopulate_pool_static w.start() File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/process.py", line 121, in start w.start() File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self)self._popen = self._Popen(self) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/context.py", line 288, in _Popen File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/context.py", line 288, in _Popen return Popen(process_obj) return Popen(process_obj) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__ File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ super().__init__(process_obj) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 42, in _launch self._launch(process_obj) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 42, in _launch prep_data = spawn.get_preparation_data(process_obj._name) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 154, in get_preparation_data prep_data = 
spawn.get_preparation_data(process_obj._name) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 154, in get_preparation_data _check_not_importing_main() File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 134, in _check_not_importing_main _check_not_importing_main() File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 134, in _check_not_importing_main raise RuntimeError(''' RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. raise RuntimeError(''' RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. /Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_model.py:612: UserWarning: Dataframe headers not specified. Falling back to using column 0 as text and column 1 as labels. warnings.warn( /Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_model.py:612: UserWarning: Dataframe headers not specified. Falling back to using column 0 as text and column 1 as labels. 
warnings.warn( Traceback (most recent call last): File "<string>", line 1, in <module> File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main exitcode = _main(fd, parent_sentinel) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 125, in _main prepare(preparation_data) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 236, in prepare _fixup_main_from_path(data['init_main_from_path']) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 287, in _fixup_main_from_path main_content = runpy.run_path(main_path, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/runpy.py", line 289, in run_path return _run_module_code(code, init_globals, run_name, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/runpy.py", line 96, in _run_module_code _run_code(code, mod_globals, init_globals, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/Users/<User>/Documents/GitHub/baBERT/RoBERTaBa_simple.py", line 39, in <module> model.train_model(train_dataset) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_model.py", line 619, in train_model train_dataset = self.load_and_cache_examples( File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_model.py", line 1827, in load_and_cache_examples Traceback (most recent call last): dataset = ClassificationDataset( File "<string>", line 1, in <module> File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_utils.py", line 282, in __init__ self.examples, self.labels = build_classification_dataset( File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_utils.py", line 248, in build_classification_dataset File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main with Pool(args.process_count) as p: File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/context.py", line 119, in Pool return Pool(processes, initializer, initargs, maxtasksperchild, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/pool.py", line 215, in __init__ self._repopulate_pool() File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/pool.py", line 306, in _repopulate_pool exitcode = _main(fd, parent_sentinel) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 125, in _main return self._repopulate_pool_static(self._ctx, self.Process, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/pool.py", line 329, in _repopulate_pool_static prepare(preparation_data) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 236, in prepare w.start() File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/process.py", line 121, in start _fixup_main_from_path(data['init_main_from_path']) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 287, in _fixup_main_from_path self._popen = self._Popen(self) File 
"/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/context.py", line 288, in _Popen main_content = runpy.run_path(main_path, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/runpy.py", line 289, in run_path return Popen(process_obj) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__ return _run_module_code(code, init_globals, run_name, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/runpy.py", line 96, in _run_module_code super().__init__(process_obj) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ _run_code(code, mod_globals, init_globals, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/runpy.py", line 86, in _run_code self._launch(process_obj) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 42, in _launch prep_data = spawn.get_preparation_data(process_obj._name) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 154, in get_preparation_data exec(code, run_globals) File "/Users/<User>/Documents/GitHub/baBERT/RoBERTaBa_simple.py", line 39, in <module> model.train_model(train_dataset) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_model.py", line 619, in train_model _check_not_importing_main() File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 134, in _check_not_importing_main raise RuntimeError(''' RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. train_dataset = self.load_and_cache_examples( File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_model.py", line 1827, in load_and_cache_examples dataset = ClassificationDataset( File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_utils.py", line 282, in __init__ self.examples, self.labels = build_classification_dataset( File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_utils.py", line 248, in build_classification_dataset with Pool(args.process_count) as p: File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/context.py", line 119, in Pool /Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_model.py:612: UserWarning: Dataframe headers not specified. Falling back to using column 0 as text and column 1 as labels. warnings.warn( return Pool(processes, initializer, initargs, maxtasksperchild, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/pool.py", line 215, in __init__ /Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/simpletransformers/classification/classification_model.py:612: UserWarning: Dataframe headers not specified. Falling back to using column 0 as text and column 1 as labels. 
warnings.warn( self._repopulate_pool() File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/pool.py", line 306, in _repopulate_pool return self._repopulate_pool_static(self._ctx, self.Process, File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/pool.py", line 329, in _repopulate_pool_static w.start() File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/context.py", line 288, in _Popen return Popen(process_obj) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 42, in _launch prep_data = spawn.get_preparation_data(process_obj._name) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 154, in get_preparation_data _check_not_importing_main() File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/multiprocessing/spawn.py", line 134, in _check_not_importing_main raise RuntimeError(''' RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. ``` This keeps being printed until I interrupt the process. 
What's printed is seemingly random, running the file again I produced this for example: ``` File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/scipy/stats/_discrete_distns.py", line 1443, in <module> File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/scipy/stats/_levy_stable/__init__.py", line 17, in <module> from ._levy_stable import levy_stable from ._levy_stable import levy_stable File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/scipy/stats/_levy_stable/__init__.py", line 17, in <module> File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/scipy/stats/_levy_stable/__init__.py", line 17, in <module> from ._levy_stable import levy_stable File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/scipy/stats/_levy_stable/__init__.py", line 17, in <module> from ._levy_stable import levy_stable File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/scipy/stats/_levy_stable/__init__.py", line 17, in <module> from .levyst import Nolan File "<frozen importlib._bootstrap>", line 404, in parent from .levyst import Nolan File "<frozen importlib._bootstrap>", line 404, in parent from .levyst import Nolan File "<frozen importlib._bootstrap>", line 404, in parent from .levyst import Nolan File "<frozen importlib._bootstrap>", line 404, in parent from .levyst import Nolan File "<frozen importlib._bootstrap>", line 404, in parent from .levyst import Nolan File "<frozen importlib._bootstrap>", line 404, in parent KeyboardInterrupt KeyboardInterruptKeyboardInterrupt KeyboardInterrupt KeyboardInterruptKeyboardInterrupt yulesimon = yulesimon_gen(name='yulesimon', a=1) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/scipy/stats/_distn_infrastructure.py", line 3190, in __init__ skellam = skellam_gen(a=-np.inf, name="skellam", longname='A Skellam') File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/scipy/stats/_distn_infrastructure.py", line 3186, in __init__ self._attach_methods() File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/scipy/stats/_distn_infrastructure.py", line 3207, in _attach_methods self._construct_argparser(meths_to_inspect=[self._pmf, self._cdf], File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/scipy/stats/_distn_infrastructure.py", line 741, in _construct_argparser self._attach_argparser_methods() File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/scipy/stats/_distn_infrastructure.py", line 699, in _attach_argparser_methods shapes_args = _getfullargspec(meth) # NB does not contain self File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/site-packages/scipy/_lib/_util.py", line 364, in getfullargspec_no_self exec(self._parse_arg_template, ns) File "<string>", line 1, in <module> KeyboardInterrupt sig = inspect.signature(func) File "/Users/<User>/opt/anaconda3/envs/baBERTa/lib/python3.10/inspect.py", line 3247, in signature ``` ### Expected behavior Progress bar to appear and model to train for 5 epochs.
10-19-2022 10:45:10
10-19-2022 10:45:10
This looks like an issue for the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) repo.
transformers
19,744
closed
Documentation related to token_type_ids for RoBERTa needs to be updated
### System Info This is about the online documentation at https://huggingface.co/docs/transformers/v4.23.1/en/model_doc/roberta#transformers.RobertaModel.forward.token_type_ids ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction RoBERTa has no token type embeddings by default, while the documentation indicates: ``` token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: * 0 corresponds to a sentence A token, * 1 corresponds to a sentence B token. ``` ### Expected behavior The documentation should reflect that the default RoBERTa model has no token type embeddings.
10-19-2022 10:04:25
10-19-2022 10:04:25
The argument is accepted and treated by the model, though.<|||||>Hi @sgugger, although the argument is accepted, when providing a `token_type_ids` with 0 and 1 values, as described in the documentation, an index out of range exception occurs. This is because by default RoBERTa does not have two token_type_id embeddings. I checked, it only has one which contains all zeros (since by default it doesn't use token_type_ids). In my opinion, the documentation should state that this parameter can only be used when token types have been added. `RobertaEmbeddings` uses `config.type_vocab_size` for this. <|||||>Would you like to open a PR with your suggested changes?<|||||>Yes, sure. Not able to give a timeline though.
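A minimal sketch of the behaviour discussed above (the checkpoint name is illustrative): the pretrained RoBERTa configs ship with `type_vocab_size=1`, so only token type 0 exists in the embedding table, and a config with `type_vocab_size >= 2` is needed before 0/1 segment ids can be used.

```python
from transformers import RobertaConfig, RobertaModel

# The default pretrained config has a single token type embedding, so passing
# token_type_ids containing 1 leads to an index-out-of-range error.
config = RobertaConfig.from_pretrained("roberta-base")
print(config.type_vocab_size)  # 1

# A model built with type_vocab_size >= 2 (e.g. when training from scratch)
# can accept the 0/1 segment ids described in the docstring.
config_with_segments = RobertaConfig(type_vocab_size=2)
model = RobertaModel(config_with_segments)
```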
transformers
19,743
closed
Couldn't process request when using "automatic-speech-recognition" pipeline on SageMaker
### System Info transformers - 4.17.1 torch - 1.10.1 sagemaker - 2.112.2 ### Who can help? @Narsil @patrickvonplaten @anton-l ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run below code on SageMaker. ```python from sagemaker.huggingface import HuggingFaceModel import sagemaker import numpy as np role = sagemaker.get_execution_role() # Hub Model configuration. https://huggingface.co/models hub = { # 'HF_MODEL_ID':'openai/whisper-base', 'HF_MODEL_ID': 'facebook/wav2vec2-base-960h', 'HF_TASK':'automatic-speech-recognition' } # create Hugging Face Model Class huggingface_model = HuggingFaceModel( transformers_version='4.17.0', pytorch_version='1.10.2', py_version='py38', env=hub, role=role, ) # deploy model to SageMaker Inference predictor = huggingface_model.deploy( initial_instance_count=1, # number of instances instance_type='ml.m5.xlarge' # ec2 instance type ) input_array = np.random.randn(1, 10000) predictor.predict({ 'inputs': input_array }) ``` Returned InternalServerError, ``` 2022-10-19T09:42:32,618 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Prediction error -- 2022-10-19T09:42:32,619 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Traceback (most recent call last): 2022-10-19T09:42:32,619 [INFO ] W-9000-facebook__wav2vec2-base-9 com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 9 2022-10-19T09:42:32,619 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 234, in handle 2022-10-19T09:42:32,621 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - response = self.transform_fn(self.model, input_data, content_type, accept) 2022-10-19T09:42:32,621 [INFO ] W-9000-facebook__wav2vec2-base-9 ACCESS_LOG - /169.254.178.2:42092 "POST /invocations HTTP/1.1" 400 16 2022-10-19T09:42:32,622 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 190, in transform_fn 2022-10-19T09:42:32,623 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - predictions = self.predict(processed_data, model) 2022-10-19T09:42:32,623 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 158, in predict 2022-10-19T09:42:32,624 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - prediction = model(inputs) 2022-10-19T09:42:32,624 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 168, in __call__ 2022-10-19T09:42:32,624 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - return super().__call__(inputs, **kwargs) 2022-10-19T09:42:32,625 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1016, in 
__call__ 2022-10-19T09:42:32,625 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - outputs = [output for output in final_iterator] 2022-10-19T09:42:32,626 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1016, in <listcomp> 2022-10-19T09:42:32,626 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - outputs = [output for output in final_iterator] 2022-10-19T09:42:32,626 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/transformers/pipelines/pt_utils.py", line 111, in __next__ 2022-10-19T09:42:32,627 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - item = next(self.iterator) 2022-10-19T09:42:32,627 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/transformers/pipelines/pt_utils.py", line 253, in __next__ 2022-10-19T09:42:32,627 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - processed = self.infer(next(self.iterator), **self.params) 2022-10-19T09:42:32,628 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__ 2022-10-19T09:42:32,628 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - data = self._next_data() 2022-10-19T09:42:32,631 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 561, in _next_data 2022-10-19T09:42:32,631 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - data = self._dataset_fetcher.fetch(index) # may raise StopIteration 2022-10-19T09:42:32,631 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch 2022-10-19T09:42:32,631 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - data.append(next(self.dataset_iter)) 2022-10-19T09:42:32,631 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/transformers/pipelines/pt_utils.py", line 170, in __next__ 2022-10-19T09:42:32,632 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - processed = next(self.subiterator) 2022-10-19T09:42:32,632 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 222, in preprocess 2022-10-19T09:42:32,632 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - raise ValueError(f"We expect a numpy ndarray as input, got `{type(inputs)}`") 2022-10-19T09:42:32,632 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - ValueError: We expect a numpy ndarray as input, got `<class 'list'>` 2022-10-19T09:42:32,633 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - 2022-10-19T09:42:32,633 [INFO ] W-facebook__wav2vec2-base-9-3-stdout 
com.amazonaws.ml.mms.wlm.WorkerLifeCycle - During handling of the above exception, another exception occurred: 2022-10-19T09:42:32,633 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - 2022-10-19T09:42:32,633 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Traceback (most recent call last): 2022-10-19T09:42:32,633 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/mms/service.py", line 108, in predict 2022-10-19T09:42:32,633 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - ret = self._entry_point(input_batch, self.context) 2022-10-19T09:42:32,633 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.8/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 243, in handle 2022-10-19T09:42:32,633 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - raise PredictionException(str(e), 400) 2022-10-19T09:42:32,634 [INFO ] W-facebook__wav2vec2-base-9-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - mms.service.PredictionException: We expect a numpy ndarray as input, got `<class 'list'>` : 400 ``` ### Expected behavior When I use Transformers on SageMaker, I noticed that Automatic Speech Recognition Pipeline doesn't consider receiving requests when deployed on SageMaker. When we use [SageMaker HuggingFace Inference Toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit), [pipelines](https://huggingface.co/docs/transformers/main_classes/pipelines) will be used for inference. [AutomaticSpeechRecognitionPipeline](https://huggingface.co/docs/transformers/v4.23.1/en/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) doesn't accept list as `inputs` parameter for `__call__` method and via API the request body is supposed to be like, ```python { "inputs": np.array([1., 2., 3., 4.])) } ``` but I cannot pass ndarray via JSON Serializer I can only pass list. To solve that problem, pipeline should accept list as inputs. return like ```json {'text': 'UROUND ME ON YOU E'} ```
10-19-2022 09:59:25
10-19-2022 09:59:25
Hi @wildgeece96 . The `np.array` is supposed to be the raw audio waveform in the correct sampling rate, right ? If so, then it seems the bug comes from somewhere around sagemaker where the numpy array gets converted to a list. I am tentatively against adding support for lists instead of numpy arrays: - We already have issues when dealing with lists or list of lists or lists of lists of lists (I am not kidding), because a list can mean you are sending several items to be inferred upon, OR the item can consist itself of a list of things (like here numbers), but also a list of list of things (like multi channel audio). `np.array` makes the distinction clearer, and avoid a big pitfall when the said lists would be misaligned. A `np.array` is a regular tensors, so it comes with more guarantees. - In your particular example, someone is casting a `np.array` to a regular list, and that is costly and will unecessarily add overhead to the inference. That being said there could be workaround probably: Would using a `wav` file work for you ? https://stackoverflow.com/questions/51300281/how-to-convert-numpy-array-to-bytes-object-without-save-audio-file-on-disk Couldn't find better code fast with my google fu, but it's probably doable to create a Wav like buffer with minimal reallocations. Does the sagemaker allow sending raw bytes ? Would that approach work ?<|||||>I confirmed inference code like below works ```python from transformers import pipeline from transformers.pipelines import AutomaticSpeechRecognitionPipeline import numpy as np def model_fn(model_dir) -> AutomaticSpeechRecognitionPipeline: return pipeline(model="facebook/wav2vec2-base-960h") def predict_fn(data, pipeline): inputs = data.pop("inputs", data) parameters = data.pop("parameters", None) if type(inputs) == list: inputs = np.array(inputs, dtype=np.float) print("inputs are: ", inputs) # pass inputs with all kwargs in data if parameters is not None: prediction = pipeline(inputs, **parameters) else: prediction = pipeline(inputs) return prediction ```<|||||>Thanks @Narsil . Actually, in my use case, I deployed wav2vec model on SageMaker, and when I send request via SageMaker SDK seriealizer of SageMaker (like JSONSerializer, NumPySerializer) serialize the input to throw request to the endpoint. I should use JSONSerialization to use SageMaker HuggingFace Inference Toolkit and JSONSeiralizer cannot pass ndarray as it is but convert to list. After reading your comment, the converting logic should be implemented on SageMaker HuggingFace Inference Toolkit because it's specific for SageMaker use case.<|||||>Hello @wildgeece96 the `automatic-speech-recognition` pipeline is supported. instead of sending numpy data you need to send the audio. Check out this example: https://github.com/huggingface/notebooks/blob/main/sagemaker/20_automatic_speech_recognition_inference/sagemaker-notebook.ipynb <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
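A minimal sketch of the "send a WAV buffer" idea, assuming `soundfile` is available and using placeholder audio; how the resulting bytes are attached to the SageMaker request (serializer, content type) is not shown here.

```python
import io

import numpy as np
import soundfile as sf

# Encode the raw waveform into in-memory WAV bytes instead of serializing the numpy
# array as a JSON list. The waveform and sampling rate below are placeholders.
waveform = np.random.randn(16000).astype(np.float32)
sampling_rate = 16000

buffer = io.BytesIO()
sf.write(buffer, waveform, sampling_rate, format="WAV")
wav_bytes = buffer.getvalue()  # raw bytes the ASR pipeline can decode via ffmpeg
```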
transformers
19,742
closed
Tokenizers should not append `[EOS]` for truncated sentences
### System Info - `transformers` version: 4.21.3 - Platform: Linux-5.8.0-1035-gcp-x86_64-with-glibc2.31 - Python version: 3.10.7 - Huggingface_hub version: 0.9.1 - PyTorch version (GPU?): 1.12.1+cu102 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.6.1 (cpu) - Jax version: 0.3.23 - JaxLib version: 0.3.22 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @SaulLu @sgugger ### Information - My own modified scripts ### Tasks - My own task or dataset (give details below) ### Reproduction ```python from transformers import BartTokenizer tokenizer = BartTokenizer.from_pretrained('facebook/bart-base') inputs = tokenizer(['go go go', 'hi hi hi hi'], return_tensors='np', max_length=3, padding='max_length', truncation=True, verbose=True, add_prefix_space=True) print(inputs.input_ids) ``` Output: ``` [[ 0 213 2] [ 0 20280 2]] ``` ### Expected behavior Expected: ``` [[ 0 213 213] [ 0 20280 20280]] ``` Explanation: Token ID 2 is the `[EOS]` token. We know that for generation tasks, the generation process is ended if the model outputs the `[EOS]` token. While during model training, to avoid occupying too much computational resource, we often truncate the training data to a certain length (e.g. 256/512). Therefore, intuitively, if the `[EOS]` token is appended, it is possible for the model to learn to stop the generation after a specific length, preventing the model from generating long sentences. Moreover, the model may assume that the truncated sentence is also grammatically correct and prematurely stop the generation process, thus affecting the quality of the generation results.
10-19-2022 07:58:36
10-19-2022 07:58:36
I would say that this is a matter of opinion and depends on the actual task (classification vs generation for instance). In any case, we won't be able to change this behavior without causing a breaking change of something other users may be relying on, so we probably won't fix this inside the lib.<|||||>@sgugger Thank you! I have implemented a class `BartTokenizerWithoutOverflowEOS` for my own use case. ```python import jax; jax.config.update('jax_platforms', 'cpu') import numpy as np from transformers import BartTokenizer class BartTokenizerWithoutOverflowEOS(BartTokenizer): def __call__(self, sentences, max_length): inputs = super().__call__(sentences, max_length=max_length-1, truncation=True, verbose=True, add_prefix_space=True, add_special_tokens=False) input_ids_ = [] attention_masks_ = [] for input_id, attention_mask in zip(inputs.input_ids, inputs.attention_mask): token_len = len(input_id) if token_len == max_length - 1: # exceed `max_length - 1`, will not add `[EOS]` input_id = [self.bos_token_id, *input_id] attention_mask = [1, *attention_mask] else: # will add `[EOS]` input_id = [self.bos_token_id, *input_id, self.eos_token_id, *(self.pad_token_id,) * (max_length - token_len - 2)] attention_mask = [1, *attention_mask, 1, *(0,) * (max_length - token_len - 2)] input_ids_.append(input_id) attention_masks_.append(attention_mask) input_ids = np.array(input_ids_, dtype=np.uint16) attention_masks = np.array(attention_masks_, dtype=np.uint8) return input_ids, attention_masks tokenizer = BartTokenizerWithoutOverflowEOS.from_pretrained('facebook/bart-base') sentences = ['a a', 'go go go', 'hi hi hi hi', 'ox ox ox ox ox'] max_length = 6 tokenizer(sentences, max_length) # Result: # (array([[ 0, 10, 10, 2, 1, 1], # [ 0, 213, 213, 213, 2, 1], # [ 0, 20280, 20280, 20280, 20280, 2], # [ 0, 33665, 33665, 33665, 33665, 33665]], dtype=uint16), # array([[1, 1, 1, 1, 0, 0], # [1, 1, 1, 1, 1, 0], # [1, 1, 1, 1, 1, 1], # [1, 1, 1, 1, 1, 1]], dtype=uint8)) ```
transformers
19,741
closed
[Doctest] Add `configuration_gpt_neo.py` , `configuration_gpt_neox.py`, `configuration_gpt_neox_japanese.py`
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Based on #19487 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-19-2022 05:30:09
10-19-2022 05:30:09
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,740
closed
How to make gpt2 accurately tokenize lowercase entities?
Many datasets pre-lowercase their text content, so some entities are lowercase in these datasets, like **"paris"**. The GPT2Tokenizer tokenizes "Paris" to "Paris", but it tokenizes **"paris" to "par" and "is".** How can I fix this? Is there an uncased GPT2? Thanks!
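The split can be checked directly with the tokenizer (a minimal sketch; the exact subword pieces depend on the checkpoint's BPE vocabulary):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# GPT-2 uses a cased BPE vocabulary, so casing changes the subword split.
print(tokenizer.tokenize("Paris"))  # typically a single token
print(tokenizer.tokenize("paris"))  # typically split into smaller pieces, e.g. "par" + "is"
```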
10-19-2022 03:38:56
10-19-2022 03:38:56
Please use the [forums](https://discuss.huggingface.co/) for such questions, as we keep the issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,739
closed
Fix exception thrown using MishActivation
# What does this PR do? Fix an exception when calling MishActivation module when using pytorch 1.9.0 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes [#42](https://github.com/microsoft/GLIP/issues/51) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-19-2022 03:24:08
10-19-2022 03:24:08
_The documentation is not available anymore as the PR was closed or merged._<|||||>What exception are we fixing? The [PyTorch documentation](https://pytorch.org/docs/1.9.0/generated/torch.nn.functional.mish.html?highlight=mish#torch.nn.functional.mish) indicates the mish function was in 1.9.0 and I just tried on a fresh env, the function can be imported without any problem on 1.9.0.<|||||>> What exception are we fixing? The [PyTorch documentation](https://pytorch.org/docs/1.9.0/generated/torch.nn.functional.mish.html?highlight=mish#torch.nn.functional.mish) indicates the mish function was in 1.9.0 and I just tried on a fresh env, the function can be imported without any problem on 1.9.0. I am using pytorch version 1.9.0a0+c3d40fd, and using nn.functional.mish will throw AttributeError exception.
transformers
19,738
closed
Newly created cache directories does not include the version.txt file
### System Info - `transformers` version: 4.22.2 - Platform: macOS-11.7-x86_64-i386-64bit - Python version: 3.9.13 - Huggingface_hub version: 0.10.0 - PyTorch version (GPU?): 1.12.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce this behaviour: 1. Start with a clean machine with no cache directories at all 2. Download a pre-trained model (in our case, a CLIP model) to create the cache directory and download the model files ``` from transformers import CLIPModel from transformers import CLIPProcessor model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") ``` 3. Notice there is no `version.txt` file under the `hub` directory, causing the transformers library to assume it's an older version of the cache ### Expected behavior Upon creation of the cache directory, a `version.txt` file should be created to indicate it is the correct version of the cache system. For context, we are taking a copy of the cache folder and saving that for later use in production under a read-only system for faster startup times, as we don't want the box to start up and download a pretrained model every time. However, this issue has caused the transformers library to always want to write/migrate the cache, as it thinks it is an older cache version.
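A possible stop-gap sketch until this is handled in the library: create the version file by hand after populating the cache. The file location follows the issue description; the `TRANSFORMERS_CACHE` constant and the expected contents (`"1"`) are assumptions that may differ between transformers releases.

```python
import os

from transformers.utils import TRANSFORMERS_CACHE

# Assumed workaround: mark the freshly populated cache as already migrated so the
# library does not try to rewrite it on a read-only filesystem. The "1" written below
# is an assumed cache-format version and may not match every transformers release.
version_file = os.path.join(TRANSFORMERS_CACHE, "version.txt")
if not os.path.isfile(version_file):
    with open(version_file, "w") as f:
        f.write("1")
```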
10-19-2022 00:11:12
10-19-2022 00:11:12
Thanks for reporting! I can reproduce and the PR above fixes the problem normally.
transformers
19,737
closed
How can I install transformers v2.11.0?
### System Info Python 3.5.4 (v3.5.4:3f56838, Aug 8 2017, 02:17:05) [MSC v.1900 64 bit (AMD64)] on win32 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hello. I want to use https://github.com/xashru/punctuation-restoration, which requires a specific version of transformers, v2.11.0. First of all, which version of Python is best for this version of transformers? Secondly, I have installed Python 3.5.4 and tried installing like this, but it fails. How can I install it? Python 3.5.4 (v3.5.4:3f56838, Aug 8 2017, 02:17:05) [MSC v.1900 64 bit (AMD64)] on win32 ``` C:\python355\Scripts>pip.exe install transformers==v2.11.0 DEPRECATION: Python 3.5 reached the end of its life on September 13th, 2020. Please upgrade your Python as Python 3.5 is no longer maintained. pip 21.0 will drop support for Python 3.5 in January 2021. pip 21.0 will remove support for this functionality. ERROR: Could not find a version that satisfies the requirement transformers==v2.11.0 ERROR: No matching distribution found for transformers==v2.11.0 C:\python355\Scripts> ``` ![image](https://user-images.githubusercontent.com/19240467/196563990-75a8193b-e5bc-4810-9ad4-e9b20b08e188.png)
10-18-2022 23:32:09
10-18-2022 23:32:09
Hey @FurkanGozukara -- try running `pip.exe install transformers==2.11.0` (i.e. removing the `v`) Also, please note that this is a very old version. We encourage you to use a more recent version in your project if possible!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,736
closed
fixed typo in fp16 training section for perf_train_gpu_one
# What does this PR do? Fixed typo in mdx file for mixed precision in perf_train_gpu_one.mdx Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
10-18-2022 21:28:15
10-18-2022 21:28:15
_The documentation is not available anymore as the PR was closed or merged._<|||||>fixed permissions. I'm new to CircleCI. Is there a way to run the CI pipeline again?
transformers
19,735
closed
Handle texts longer than 512 tokens in BERT token classification pipeline
# What does this PR do? Implementation of a BERT-based token classification pipeline which can truncate texts longer than the max token length (512 or otherwise as specified by the user). Long texts are truncated to the max length, with overflowing tokens shifted to subsequent batch items. These are reconstituted as a post-processing step to return an output of the same shape as the inputs. Entity objects can be reconstituted based on a number of different strategies, which can be selected at pipeline creation time. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #15177 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a [link to it](https://github.com/huggingface/transformers/issues/15177) if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @Narsil <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
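The chunking idea itself can be illustrated with the fast tokenizers' overflow support — a minimal sketch of the idea only, not this PR's implementation (checkpoint name and lengths are placeholders):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

long_text = "word " * 1000  # longer than the 512-token limit

# Truncate to max_length and push the overflow into extra items, keeping offsets so
# entities can later be mapped back to positions in the original text.
encoded = tokenizer(
    long_text,
    truncation=True,
    max_length=512,
    stride=128,
    return_overflowing_tokens=True,
    return_offsets_mapping=True,
)
print(len(encoded["input_ids"]))              # number of chunks produced
print(encoded["overflow_to_sample_mapping"])  # which input each chunk came from
```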
10-18-2022 19:21:52
10-18-2022 19:21:52
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19735). All of your documentation changes will be reflected on that endpoint.<|||||>@Narsil I've updated the test cases as discussed on #15177. These and the docstrings should show how the new pipeline class is intended to work. Not sure why the CircleCI tasks are failing to checkout the PR branch, but the tests pass on my machine? Let me know if this is something I need to change<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@Narsil could this PR be reopened?<|||||>Is this PR just blocked for review, or something else is required? I worked on a similar issue recently and wanted to contribute. <|||||>It was ready for review but the checks were failing. Looks like I need to update a few things now to get the tests to pass, but otherwise it's good for review<|||||>Hey sorry for keeping your in the dark. We've discussed this internally and I forgot to keep you up-to-date. Several things came out: - This is a very large proposal, unlikely to be merged in the current state (Adding a new pipeline is a big burden for maintainers, and this one in the current state feels particularly complex). - Adding a parameter option to handle long text is something desirable, but not something to be added at all costs. There are several ways forward we can recommend: - Trimming down a lot the code to be more manageable within `TokenClassificationPipeline`. Emphasis on not too much code and readable/understandable code is important. We could also mark the parameter as experimental for a while, so that if maintenance becomes too big to bear, we can put the code elsewhere at a later stage. And we would keep it if it wasn't. Since this route is going moving from Pipeline to ChunkPipeline, I would advise into having 2 PRs at least (1 for the `Pipeline -> ChunkPipeline` which cannot change any existing test code, and then we could add the new parameter. I don't want to discourage, but this is likely to take a bit of time and energy (trying to manage expectations). Ultimately it would be available to all though ! - Using a remote pipeline https://huggingface.co/docs/transformers/v4.25.1/en/add_new_pipeline#how-to-create-a-custom-pipeline That's an easy was to share your work as a starter and do not require any review process so you can get started really fast. There probably a few changes too that we should recommend either way. Do not use `parameterized`. Put the values of the test, within the tests ,not outside (same for expected values). In general remove every indirection possible within tests. The only benefit of a test is to be as readable as possible, so humans can infer if the test is valid or not just by reading it. Do not use `@overload`. Do not use `self.` for pipeline arguments (we can't use that since pipelines are stateless, we need to use `_sanitize_parameters(..)` . Despite being a bit clunky it's the only way to accept every arg BOTH in `pipeline(,, myarg=1)` and `pipe = pipeline(..); pipe(..., myarg=1)` Thanks for working on this though, this is definitely a nice thing to have, and a difficult one to add, so thanks for tackling it.<|||||>Thanks @Narsil for taking a look. 
That sounds sensible regarding including this feature as a parameter within `TokenClassificationPipeline`. I agree that that makes more sense than a separate `Pipeline`, and smaller PRs make sense towards implementing it. Not sure how much time I'll have to dedicate to the changes, but I'll try to get something together when I can. As for the suggested changes, happy to remove `parameterized` and the use of `self` in those stateless functions. What's the issue with using `@overload`? It helps with type checking and hints in an IDE when a function can take and receive multiple types for an argument. Is there an alternative place this type hint should go, like a `.pyi` file?<|||||>> What's the issue with using `@overload`? It makes code reading much harder since you now don't know where a function starts and where it finishes. Also functions should only have 1 signature, this makes reasoning and calling it much easier. Making functions too dynamic is a real issue and should ideally be contained as much as possible. IDE support is a nice touch, but in my experience it's impossible to satisfy all of IDEs/repl correctly on actual complicated (not complex) behavior. (And since complicated behavior is also hard to read, my personal take is that we should just avoid it) `pipeline` for instance is a magic function and making all sort of crazy runtime inference to infer what to do is kind of the purpose to make user's lives easy. However, the number of such functions should be as limited as possible. Pipelines in the past have been way too tolerant which makes modifying them sometimes painful (everything was done with no regression for now, but there's still room for cleaning up inputs and outputs to make the experience cleaner). > Is there an alternative place this type hint should go, like a `.pyi` file? Type hints go in the signature directly, and if they are not enough on their own, documentation should provide the answer IMHO.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@jammie19 any updates on this PR? I am interested in contributing, and would be happy to help and discuss further.<|||||>@vaibhavk97 Hey, sorry I haven't had chance to work on it since the last discussion. I agree that the next step is to change the main token classification pipeline to support chunking, so I think the pieces are there in my PR but it'll need restarting. Happy for you to work on it/help if you're able to<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@Narsil it looks like you and others have now implemented the main part of this in #21771. I can see that the stride is used to reconstruct full sentences, with a nicer implementation than mine 👍 <|||||>You probably inspired the others ! :D
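To make the `_sanitize_parameters` point concrete, a toy sketch of the parameter plumbing (class and argument names are illustrative, not the actual pipeline code):

```python
from transformers import Pipeline


class ToyChunkingPipeline(Pipeline):
    # Illustrative only: every user-facing argument is routed through
    # _sanitize_parameters instead of being stored on self.
    def _sanitize_parameters(self, stride=None, **kwargs):
        preprocess_kwargs = {}
        if stride is not None:
            preprocess_kwargs["stride"] = stride
        return preprocess_kwargs, {}, {}

    def preprocess(self, inputs, stride=None):
        return self.tokenizer(inputs, return_tensors=self.framework)

    def _forward(self, model_inputs):
        return self.model(**model_inputs)

    def postprocess(self, model_outputs):
        return model_outputs
```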
transformers
19,734
closed
[Doctest] Add `wav2vec2_conformer.py`
Add `configuration_wav2vec2_conformer.py` to `utils/documentation_tests.txt` for doctest. Based on issue: [19487](https://github.com/huggingface/transformers/issues/19487) @ydshieh could you please check it? If it is ok for you I can keep working on the "wav2vec2" ones! Thank you very much for your support :)
10-18-2022 19:01:12
10-18-2022 19:01:12
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @juancopi81 Wav2Vec2 is already done. See [documentation_tests.txt](https://github.com/huggingface/transformers/blob/main/utils/documentation_tests.txt) on the main branch. You can work on other model configs if you would like to 🤗 <|||||>Great! Thanks a lot @ydshieh 🤗
transformers
19,733
closed
`test_run_squad_no_trainer` is flaky
### System Info Environment: Circle CI image running `examples_torch` ### Who can help? @muellerzr ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Due to the nature of the issue, it unfortunately cannot be reliably replicated. Examples of this occurring can be found here: * https://app.circleci.com/pipelines/github/huggingface/transformers/49621/workflows/14c25312-58a5-4b0b-8b41-6c5bec668043/jobs/593213 * https://app.circleci.com/pipelines/gh/huggingface/transformers/49224/workflows/fbae76ab-9259-4695-bb06-475357172587/jobs/589262 ### Expected behavior Occasionally, `test_run_squad_no_trainer` fails on CI runs, even when the PR does not touch code related to the test. I would expect the [output of the tested run](https://github.com/huggingface/transformers/blob/a23819ed6ab852df6d8f04815306440531418260/examples/pytorch/test_accelerate_examples.py#L199), `result`, to be either deterministic or to pass reliably when there are otherwise no changes to the code.
10-18-2022 17:50:12
10-18-2022 17:50:12
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>ping @muellerzr ;-)
transformers
19,732
closed
PT <-> TF for composite models
# What does this PR do? Make `from_pretrained` (cross-loading PT/TF) available for composite models. Ensure the `enc_to_dec_proj` layer is correctly loaded. With this PR, we get the same PT/TF output for the test in #19719. I will apply the same change to the Text and Speech composite models if the current version is approved.
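A minimal sketch of the intended usage once cross-loading works for composite models (the checkpoint name is only an illustrative PyTorch vision-encoder-decoder model):

```python
from transformers import TFVisionEncoderDecoderModel

# Cross-load PyTorch weights of a composite (encoder-decoder) checkpoint into TF.
model = TFVisionEncoderDecoderModel.from_pretrained(
    "nlpconnect/vit-gpt2-image-captioning", from_pt=True
)
```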
10-18-2022 17:23:38
10-18-2022 17:23:38
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,731
closed
Optimizer being initialized twice when using Nvidia APEX mixed precision backend
### System Info - `transformers` version: 4.21.1 - Platform: Linux-5.4.0-128-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.1+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: True - Using distributed or parallel set-up in script?: False - GPU : Nvidia A4000 ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am trying to fine tune models on Jupyter notebooks. When training for the first time using APEX by calling `trainer.train()` it works fine. But if I start another run or try to train another model without restarting the kernel and calling the same function I get an error `RuntimeError: A given optimizer should only be passed through amp.initialize once` . I thought it was because I had used the `adamw_apex_fused` setting for the optim in the`TrainingArguments`, but it happens regardless of whether or not I specify a parameter for the optim. I believe this is a possible bug due to `transformers` calling the `amp.initialize` function every time the trainer is started and does not check if `apex.amp` has been instantiated. An example of my training code is provided below, I would greatly appreciate any help. Thanks! ```python training_args = TrainingArguments( f"gpt2-wikitext", evaluation_strategy = "epoch", learning_rate=2e-5, weight_decay=0.01, half_precision_backend="apex", tf32=True, fp16=True, auto_find_batch_size=True, optim='adamw_apex_fused' ) trainer = Trainer( model=model, args=training_args, train_dataset=lm_datasets["train"], eval_dataset=lm_datasets["validation"], ) trainer.train() ``` ### Expected behavior Training should proceed normally without errors if `trainer.train()` is called multiple times in the same session.
10-18-2022 17:02:03
10-18-2022 17:02:03
I can confirm this is done at every call to train, though I don't see an easy way of undoing it and we stopped actively maintaining the integration with Apex now that mixed precision is upstreamed in PyTorch.<|||||>Ah I see, so I should consider Apex support to be deprecated in `transformers` and move all future code to use native PyTorch MP backend?<|||||>Yes, you'll get better support this way!<|||||>That's great, thanks a lot for clearing things up!
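A sketch of the suggested migration from the issue's setup, keeping the same hyperparameters but relying on PyTorch-native mixed precision (the optimizer choice is an assumption):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    "gpt2-wikitext",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    weight_decay=0.01,
    fp16=True,            # native torch.cuda.amp is used when no apex backend is requested
    tf32=True,
    auto_find_batch_size=True,
    optim="adamw_torch",  # replaces adamw_apex_fused
)
```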
transformers
19,730
open
Lite Transformer with Long-Short Range Attention
### Model description Abstract : > Transformer has become ubiquitous in natural language processing (e.g., machine translation, question answering); however, it requires enormous amount of computations to achieve high performance, which makes it not suitable for mobile applications that are tightly constrained by the hardware resources and battery. In this paper, we present an efficient mobile NLP architecture, Lite Transformer to facilitate deploying mobile NLP applications on edge devices. The key primitive is the Long-Short Range Attention (LSRA), where one group of heads specializes in the local context modeling (by convolution) while another group specializes in the long-distance relationship modeling (by attention). Such specialization brings consistent improvement over the vanilla transformer on three well-established language tasks: machine translation, abstractive summarization, and language modeling. Under constrained resources (500M/100M MACs), Lite Transformer outperforms transformer on WMT’14 English-French by 1.2/1.7 BLEU, respectively. Lite Transformer reduces the computation of transformer base model by 2.5× with 0.3 BLEU score degradation. Combining with pruning and quantization, we further compressed the model size of Lite Transformer by 18.2×. For language modeling, Lite Transformer achieves 1.8 lower perplexity than the transformer at around 500M MACs. Notably, Lite Transformer outperforms the AutoML-based Evolved Transformer by 0.5 higher BLEU for the mobile NLP setting without the costly architecture search that requires more than 250 GPU years. ### Open source status - [x] The model implementation is available - [x] The model weights are available ### Provide useful links for the implementation [Paper](https://arxiv.org/pdf/2004.11886.pdf) [Code](https://github.com/mit-han-lab/lite-transformer) [Old version of the paper, when the model was called `Mobile Transformer (MBT)`](https://openreview.net/attachment?id=ByeMPlHKPH&name=original_pdf)
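A rough sketch of the LSRA block described in the abstract, assuming a plain depthwise convolution as a stand-in for the paper's specialized lightweight convolution — an illustrative re-implementation, not the authors' code:

```python
import torch
import torch.nn as nn


class LongShortRangeAttention(nn.Module):
    """Sketch only: half the channels model long-range context via self-attention,
    the other half model local context via a depthwise convolution."""

    def __init__(self, dim: int, num_heads: int = 4, kernel_size: int = 3):
        super().__init__()
        assert dim % 2 == 0
        self.attn = nn.MultiheadAttention(dim // 2, num_heads, batch_first=True)
        self.conv = nn.Conv1d(
            dim // 2, dim // 2, kernel_size, padding=kernel_size // 2, groups=dim // 2
        )
        self.out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq_len, dim)
        long_branch, short_branch = x.chunk(2, dim=-1)
        attn_out, _ = self.attn(long_branch, long_branch, long_branch)
        conv_out = self.conv(short_branch.transpose(1, 2)).transpose(1, 2)
        return self.out(torch.cat([attn_out, conv_out], dim=-1))


print(LongShortRangeAttention(64)(torch.randn(2, 16, 64)).shape)  # torch.Size([2, 16, 64])
```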
10-18-2022 16:32:18
10-18-2022 16:32:18
Maybe of interest to @hollance :)<|||||>Yeah looks interesting!<|||||>Related to https://github.com/mit-han-lab/lite-transformer https://arxiv.org/pdf/2004.11886.pdf<|||||>It looks like the paper I linked is just a previous, unpublished version of Lite Transformer paper (linked by @atturaioe) I'll edit the issue accordingly. Thanks @atturaioen !<|||||>@hollance @LysandreJik Can I pick up this if it is it is not in WIP ?
transformers
19,729
closed
[Table Transformer, LiLT] Minor docs fixes
# What does this PR do? This PR: - fixes some URLs in LiLT's docs - adds a figure to the Table Transformer docs
10-18-2022 15:17:06
10-18-2022 15:17:06
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,728
closed
Fix activations being all the same module
# What does this PR do? Since #15616, all activations of the same type are the same instance of the same object. This seems harmless at first glance, but it causes issues when hooks are automatically added to models, for instance in big model inference. In this case all objects share the same hook (since they are all the same object), leading to subtle bugs (kudos to @ArthurZucker and @younesbelkada for finding the root cause!). This PR changes the `ACT2FN` dictionary to return a new instance of the activation class each time you access a key.
10-18-2022 14:11:40
10-18-2022 14:11:40
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,727
closed
Factored out some code in the `image-segmentation` pipeline.
# What does this PR do? Modifies the pipeline's internal a little. - Hopefully less code, so easier to read. - Removed the "blank mask" when nothing gets detected. Sending `[]` instead is more obvious IMO (and used to be the behavior, so making it non breaking change). - Changed the default of `subtask` to `None` instead of `semantic`. `panoptic` used to be the default, and I merely adopted the same behavior. If a user specifies `subtask=x` we definitely try to use it's task and error out when it's impossible. But when nothing gets passed (should be the most common case), then we try **in order** `"panoptic", "instance", , and "semantic"`. Since roughly they are in decreasing order of expressiveness. - Added `Detr` as an exception to sending empty lists in the randomly generated models (Like maskformer, it *can* return nothing because it has an empty slot it seems) - Renamed `task` to `subtask` since `pipeline(task="image-segmentation")` already exists and would be impossible to use from `pipeline` directly. I feel `subtask` is appropriate but welcome any suggestions there. TODO : figure out the real bug that makes `panoptic` on the tiny detr: Figured it out. Not sure what the original code was doing but it was outputting two masks for LABEL_215. What happens is that `pred_masks` is a tensor of shape `(2, H, W)` where `2` is the number of class queries. Now we only have one mask because we have for all pixels `pred_masks[0, :, :] == pred_masks[1, :, :]`. So when we do `pred_masks.argmax(dim=0)` to get the class they are associated with, everything gets attributed to class 0, and no pixel to class 1. Since mask 1 is empty, we exclude it (which is good). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
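For illustration, a minimal sketch of the fallback order described above; this is not the actual pipeline code, and the method names are only assumed to follow the `post_process_*_segmentation` convention:

```python
def pick_postprocessor(feature_extractor, subtask=None):
    # Honor an explicit subtask, otherwise fall back from most to least expressive.
    candidates = [subtask] if subtask is not None else ["panoptic", "instance", "semantic"]
    for name in candidates:
        fn = getattr(feature_extractor, f"post_process_{name}_segmentation", None)
        if fn is not None:
            return name, fn
    raise ValueError(f"Feature extractor supports none of: {candidates}")
```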
10-18-2022 14:09:32
10-18-2022 14:09:32
_The documentation is not available anymore as the PR was closed or merged._<|||||>Okay, I've pinpointed out the filter that filters OUT the prediction for panoptic: (https://huggingface.slack.com/archives/D03VB3U5CVC/p1666108719919289) 36f52e9593 (https://huggingface.slack.com/archives/D03VB3U5CVC/p1666108795507369) It's this line specifically : https://github.com/huggingface/transformers/pull/19262/files#diff-826e0e0303aa4a12e527884e4dc548fe8debb137905260e42ec8b4331c9c2c63R199 (https://huggingface.slack.com/archives/D03VB3U5CVC/p1666108939394469) For the random model, the predictions are rather low (score == 0.004) and so >0.5 filter them out, basically loosing the predictions right there.I'm not sure who is correct, but as mentionned earlier, if semantic is saying something, I feel like panoptic without any thresholds should ALWAYS output at least as much. I think the >0.5 comes from the expectations that sigmoid would have put the range of probs into something normalized and make sure something was in the >0.5 range.I don't really have the time to figure out where the logic fails (if it does) ? Without this line I recover a `LABEL_215` label in the `small_model_pt` test (not two though) but with the same score, which is encouraging IMO.<|||||>@alaradirik @amyeroberts This PR is ready for review. Don't hesitate to voice any concern over the changes. This mostly code cleanup. Except for the empty mask being removed (which wasn't the behavior before, and is not tested against).
transformers
19,726
closed
Parameterize hidden layers for feature-extraction pipeline
### Feature request In the literature, token embeddings for downstream tasks are generated from various combinations of hidden layers, e.g. by summing or concatenating the last four layers. [This image](http://jalammar.github.io/images/bert-feature-extraction-contextualized-embeddings.png) from [this blog entry](http://jalammar.github.io/illustrated-bert/) gives a few examples. Currently, the `feature-extraction` pipeline only extracts the last hidden state. An additional parameter that specifies the layers to be used would be nice, e.g. for summing up the last four layers: pipeline('feature-extraction', n_layers=4) I suppose there are a few options to discuss, e.g. is a single `n` to imply a range like `[-n:]` sufficient, or should it also be possible to specify e.g. `[-4:-2]` and/or `[2:]`? Furthermore, alternative aggregation methods could be defined, e.g.: pipeline('feature-extraction', n_layers=4, aggr=('sum'|'concat'|...)) ### Motivation I understand from issue https://github.com/huggingface/transformers/issues/4613 that the output of the `feature-extraction` pipeline comes from the last hidden layer. I think this is a reasonable default, but it would be good to have an easy way to compare results when using other layers too. ### Your contribution I have never worked with the `transformers` source code, but I suppose I could find my way if I get a pointer to the relevant place(s).
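In the meantime, a minimal sketch of the manual approach (the checkpoint name is just an example) could look like this:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("An example sentence.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states is a tuple of (num_layers + 1) tensors of shape (batch, seq_len, hidden_size);
# summing the last four layers gives one embedding per token.
token_embeddings = torch.stack(outputs.hidden_states[-4:]).sum(dim=0)
```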
10-18-2022 14:03:49
10-18-2022 14:03:49
In general pipelines are kept simple for demo purposes. You can easily instantiate the processing class and model yourself for more complex behavior.<|||||> > In general pipelines are kept simple for demo purposes. You can easily instantiate the processing class and model yourself for more complex behavior. Ok, I suppose my point of view would be that you might want to be able to demonstrate the same feature with summing several layers. But I understand it is a design decision that is a bit subjective.
transformers
19,725
closed
[Doctest] - Fixing doctest `configuration_pegasus_x.py`
# What does this PR do? Fixes #19487 Added (with random weights) in `configuration_pegasus_x.py`. Added `configuration_pegasus_x.py` in `documentation_tests.txt`. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ydshieh
10-18-2022 13:52:34
10-18-2022 13:52:34
Hi @mukesh663 There seems to be some CI issue. Could you push again with an empty commit? You can do `git commit --allow-empty` and then push :pray:<|||||>Hi @mukesh663, I haven't merged your PR yet... but you already closed the PR and deleted the branch 😢 <|||||>Are you able to restore the branch?<|||||>> Are you able to restore the branch? @ydshieh Sorry for the mishap. I have restored the branch. <|||||>Merged, thank you again!<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19725). All of your documentation changes will be reflected on that endpoint.
transformers
19,724
closed
[Doctest] Add `configuration_flava.py`, `configuration_fnet.py`
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/19487 Added (with random weights) in `configuration_flava.py` , `configuration_fnet.py`. Added `configuration_fnet.py, configuration_flava.py` in documentation_tests.txt. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-18-2022 13:44:38
10-18-2022 13:44:38
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,723
closed
Added type_hints for `modeling_markuplm`
# What does this PR do? Added type_hints for `modeling_markuplm`
10-18-2022 13:41:43
10-18-2022 13:41:43
cc @Rocketknight1 <|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
19,722
closed
[Doctest] - Fixing doctest `configuration_pegasus.py`
# What does this PR do? Fixes #19487 Added (with random weights) in `configuration_pegasus.py`. Added `configuration_pegasus.py` in `documentation_tests.txt` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ydshieh
10-18-2022 13:24:43
10-18-2022 13:24:43
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19722). All of your documentation changes will be reflected on that endpoint.
transformers
19,721
closed
fix seq2seqtrainer predict without labels
# What does this PR do? This PR makes it possible to use Seq2SeqTrainer for prediction with generation on a dataset without target sequences. In that case, the returned loss and metrics are None. Technically, model.forward is only called when we have 'labels'. Fixes #19714 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten, please check this PR.
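The snippet below is only a rough sketch of the idea as a hypothetical subclass, not the actual diff of this PR: generation still runs for every batch, while the forward pass for the loss is guarded by the presence of `labels`:

```python
import torch
from transformers import Seq2SeqTrainer


class GenerateOnlySeq2SeqTrainer(Seq2SeqTrainer):
    def prediction_step(self, model, inputs, prediction_loss_only, ignore_keys=None):
        has_labels = "labels" in inputs
        inputs = self._prepare_inputs(inputs)
        # Generation happens regardless of whether labels are present.
        generated_tokens = self.model.generate(
            inputs["input_ids"], attention_mask=inputs.get("attention_mask")
        )
        loss = None
        if has_labels:
            with torch.no_grad():
                loss = self.compute_loss(model, inputs).detach()
        return loss, generated_tokens, inputs.get("labels")
```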
10-18-2022 13:24:23
10-18-2022 13:24:23
> This means the model will not be run if there are no labels. I don't think this fixes the issue as you wanted. Thanks for your reply. In fact, in `prediction_step()` `model.generate` is still called, so trainer.predict would return generated_tokens but would not compute the loss. See [here](https://github.com/huggingface/transformers/blob/dd523da577f3d1471d570f0bc388af55d026ce95/src/transformers/trainer_seq2seq.py#L198). I have tried the new trainer myself and the code snippet from the issue works properly. <|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
19,720
closed
Decode input ids back to string in LayoutLMV3 Processor
Hi, I am using LayoutLMv3 and have used the LayoutLMv3 processor. At train time I have labels along with the images, text and boxes, but at inference time I do not have labels. So the code for inference goes like below. ``` processor = AutoProcessor.from_pretrained("microsfotlmv3_repo", apply_ocr=False) encoding = processor(images = resize_image, text = tokens, boxes= boxes, return_offsets_mapping=True, return_tensors="pt", padding = "max_length", truncation = True, max_length = 512 ) offset_mapping = encoding.pop('offset_mapping') outputs = test_model1(**encoding) predictions = outputs.logits.argmax(-1).squeeze().tolist() is_subword = np.array(offset_mapping.squeeze().tolist())[:,0] != 0 true_predictions = [id2label[pred] for idx, pred in enumerate(predictions) if not is_subword[idx]] ``` Currently, I am decoding the text like: ``` cleaned_input_ids = encoding['input_ids'][encoding['attention_mask']>0] text = processor.tokenizer.decode(cleaned_input_ids.squeeze().tolist()) text = text[4:-4] tokens = text.split(" ") ``` But the count of tokens/text and the count of true_predictions/labels do not match. I am expecting the result to be: “sun”:label, “rises”:label, “in”:label, “the”:label, “east”:label. Currently I am not able to map them, as their counts/lengths do not match. How can I resolve this? Tagging @NielsRogge and others
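One common way to get a word-level mapping, continuing the snippet above and assuming the processor uses a fast tokenizer so that `word_ids()` is available, is to align each token back to the word it came from instead of decoding the input ids:

```python
# word_ids() maps every token to the index of the input word it belongs to
# (None for special tokens and padding).
word_ids = encoding.word_ids(batch_index=0)

word_level = {}
for token_idx, word_idx in enumerate(word_ids):
    if word_idx is not None and word_idx not in word_level:
        # Keep the prediction of the first sub-token of each word.
        word_level[word_idx] = id2label[predictions[token_idx]]

# `tokens` here are the words originally passed to the processor.
result = {word: word_level.get(i) for i, word in enumerate(tokens)}
```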
10-18-2022 13:23:43
10-18-2022 13:23:43
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!<|||||>HI @LysandreJik, I have raised this in the forum as well. I added it here for more support. Thanks<|||||>Hi, I've answered this question on the [forum](https://discuss.huggingface.co/t/how-to-decode-inputids-back-to-string-in-layoutlmv3/24609/2), let's continue the discussion there.
transformers
19,719
closed
Use TF framework
# What does this PR do? This PR specifies `framework="tf"` for `ImageToTextPipelineTests.test_small_model_tf`. -------- The pipeline tests are run in 3 docker environment: only PT, only TF, both PT/TF installation. In the last case (`PT/TF installed`), the loaded models in tests like `test_small_model_tf` are in fact **PT models**, as we didn't specify the framework when creating the pipelines. This is particularly confusing during debugging - especially when a model will produce diff. outputs between PT/TF models. In such case, the same test will fail either in TF-only env. or in PT/TF env. (One example is PR #19565, where I got the expected values from PT/TF env., and it failed on CI for TF-only env.) **TODO**: - Same change to other TF pipeline tests - Investigate why we get diff. outputs between PT/TF models for `ImageToTextPipelineTests.test_small_model_tf`.
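For reference, specifying the framework explicitly in a test could look like the sketch below (the checkpoint name is only a placeholder):

```python
from transformers import pipeline

# framework="tf" guarantees the TF weights are loaded even when PyTorch is also installed.
pipe = pipeline(
    "image-to-text",
    model="hf-internal-testing/tiny-random-vision-encoder-decoder",  # placeholder checkpoint
    framework="tf",
)
```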
10-18-2022 12:50:50
10-18-2022 12:50:50
> Investigate why we get diff. outputs between PT/TF models for ImageToTextPipelineTests.test_small_model_tf. Yup this is weird indeed, but glad we're catching such bugs !<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@Narsil Yes, that's why I don't want to change other places in this PR 😃 <|||||>I know what's wrong with PT v.s. TF 👀 !
transformers
19,718
closed
Image transforms add center crop
# What does this PR do? Adds `center_crop` to the image transforms library, to be used in the future image processors. Performs equivalent processing as done in the `ImageFeatureExtractionMixin` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
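As a rough idea of the operation (not the library implementation, which also handles channel-first inputs and padding when the crop is larger than the image), center cropping an `(H, W, C)` numpy array boils down to:

```python
import numpy as np


def center_crop(image: np.ndarray, size: tuple) -> np.ndarray:
    # Take a crop of `size` centered on the image.
    crop_h, crop_w = size
    img_h, img_w = image.shape[:2]
    top = (img_h - crop_h) // 2
    left = (img_w - crop_w) // 2
    return image[top : top + crop_h, left : left + crop_w, ...]
```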
10-18-2022 12:42:32
10-18-2022 12:42:32
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,717
closed
Adding config files for convnext
# What does this PR do? Adding config files for convnext Based on issue: https://github.com/huggingface/transformers/issues/19487 @ydshieh could you please check it? Thanks :)
10-18-2022 12:35:02
10-18-2022 12:35:02
Hi @soma2000-lang Could you run `make style` to see what's wrong with the CI and fix the potential issues 🙏 Thanks<|||||>Ok<|||||>![image](https://user-images.githubusercontent.com/56045049/196441022-4172d54e-bb7b-465a-b38a-64763d486f03.png) @ydshieh After running make style I am getting the above. I am unable to understand what could have been the problem.<|||||>OK. First thing to resolve: your PR also changes the following file (and in a strange way) ``` src/transformers/models/clip/configuration_clip.py ``` The goal of this PR is to work on convnext. We should not change the above file. Could you try to make this PR clean by not making changes to the CLIP config file 🙏 ?<|||||>@ydshieh My last PR was about adding config files for src/transformers/models/clip/configuration_clip.py, which is why changes were also reflected in this PR. That's why I deleted the src/transformers/models/clip/configuration_clip.py file.<|||||>You can't just delete that file, as it means the file will be deleted when the PR is merged into the library. You should instead revert that file to its original state.<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
19,716
closed
`image-segmentation` pipeline: re-enable `small_model_pt` test.
10-18-2022 12:32:39
10-18-2022 12:32:39
_The documentation is not available anymore as the PR was closed or merged._<|||||>I re-enabled these tests, as they should work. The error @alaradirik was seeing seems to not happen on main, and the results are actually deterministic. However, `semantic` returns 3 different masks while `panoptic` returns 0 masks. The old tests used to use `panoptic` and would return `2` masks (probably because of thresholding) of `Label_215`. I kept the old values since I think something is wrong currently, but I wanted to re-enable the test before doing any modification to the actual pipeline code. https://github.com/huggingface/transformers/pull/19727/files @ydshieh I pinged you to see if you had better ideas to make the tests less sensitive to the `Pillow` version (there was a difference between CI and my computer: a few different pixels, and a totally different hash). We could use `white_pixel_proportion` instead and let the precision handle the slight differences in `Pillow` version, but I fear we're going to miss some really bad changes (when the actual shape of the mask is totally modified); that's why we added the `hash` in the first place. (But it seems to be quite sensitive, not flaky, and it still seemed it could be painful to keep track of this.) <|||||>I will check this pipeline to get a better idea of it, and see if I have any ideas 👀 👀 👀 👀 !
transformers
19,715
closed
Adding config files for configuration_clip.py
# What does this PR do? Add config files for configuration_clip.py to utils/documentation_tests.txt for doctest. Based on issue: https://github.com/huggingface/transformers/issues/19487 @ydshieh could you please check it? Thanks :)
10-18-2022 12:27:58
10-18-2022 12:27:58
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @soma2000-lang #19647 already works on this model config. Could you find another one if you would like to contribute 🙏 . Thank you btw!
transformers
19,714
closed
Seq2SeqTrainer predict with generate fails to run without labels
### System Info - `transformers` version: 4.21.2 - Platform: macOS-12.6-arm64-arm-64bit - Python version: 3.9.12 - Huggingface_hub version: 0.9.1 - PyTorch version (GPU?): 1.12.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @patrickvonplaten, please check. ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction This code fails. ```python3 import transformers from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments, AutoModelForSeq2SeqLM, AutoTokenizer from datasets import Dataset model_name = "google/t5-efficient-tiny" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) ds = Dataset.from_dict({"text": ["Sample text to continue 1:", "Sample text to continue 2:"]}) ds = ds.map(lambda x: tokenizer(x["text"], truncation=True, padding="max_length", max_length=512), batched=True) trainer_args = Seq2SeqTrainingArguments( "tmp_output", predict_with_generate=True, ) trainer = Seq2SeqTrainer( model, args=trainer_args, ) predict_output = trainer.predict(ds) # FAILS ``` ``` ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds ``` ### Expected behavior It seems to me that Seq2SeqTrainer should output generation results. Therefore this class may be used for model inference. Of course, in this case it is not possible to calculate metrics or loss, thus they should be None. I know how to fix it with tiny change in Seq2SeqTrainer class, so I will create a PR later today.
10-18-2022 12:05:06
10-18-2022 12:05:06
transformers
19,713
closed
first pull request
# What does this PR do?
10-18-2022 11:54:03
10-18-2022 11:54:03
Not too sure what the goal here is? Please open a pull request when you have a real contribution to make.
transformers
19,712
closed
Fix redundant normalization of OWL-ViT text embeddings
# What does this PR do? - Fixes double normalization of OWL-ViT text embeddings, tests are not affected as double normalization yields the same results. - Replaces deprecated torch.norm() with torch.linalg.norm() Fixes # 19467 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
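For context, L2-normalizing an already unit-norm vector is a no-op, which is why the duplicated normalization did not change any outputs; the single normalization kept here looks roughly like the sketch below:

```python
import torch

text_embeds = torch.randn(2, 512)
# Normalize once; torch.linalg.norm replaces the deprecated torch.norm.
text_embeds = text_embeds / torch.linalg.norm(text_embeds, ord=2, dim=-1, keepdim=True)
```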
10-18-2022 10:53:45
10-18-2022 10:53:45
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,711
closed
RuntimeError when using Label smoother
### System Info - `transformers` version: 4.23.1 - Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31 - Python version: 3.9.5 - Huggingface_hub version: 0.10.0 - PyTorch version (GPU?): 1.12.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Initially, I encountered ``` File "/root/searcher-ranker/.venv/lib/python3.9/site-packages/transformers/trainer_pt_utils.py", line 481, in __call__ nll_loss = log_probs.gather(dim=-1, index=labels) RuntimeError: index 1 is out of bounds for dimension 1 with size 1 ``` when setting label_smoothing_factor for TrainingArguments, however I managed to simplify reproduction to just calling LabelSmoother instance. Minimal example can be seen here (be aware that the data provided there is pre-tokenized using the same tokenizer as the model): https://colab.research.google.com/drive/1GYI4rV7br7RCXlIO1xjkdk8HSzHxbhma?usp=sharing ### Expected behavior I guess there should be no problem doing label smoothing as this is just trivial binary classification task. However, I do have a feeling that I might be doing something wrong, although I seem to have double-checked everything. Thanks!
10-18-2022 10:42:31
10-18-2022 10:42:31
Label smoothing is not for regression tasks, but when you have several labels.<|||||>@sgugger thanks for answering! My bad, I spotted my error. I am doing not regression, but binary classification, so I had to provide `num_labels=2` instead of 1.
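For completeness, a minimal sketch of the setup that avoids the error for binary classification (the checkpoint name is just a stand-in):

```python
from transformers import AutoModelForSequenceClassification, TrainingArguments

# Two output logits so the label smoother can gather index 1.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
args = TrainingArguments(output_dir="out", label_smoothing_factor=0.1)
```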
transformers
19,710
closed
Improving `image-segmentation` pipeline tests.
# What does this PR do? This PR (https://github.com/huggingface/transformers/pull/19367) introduced a few breaking changes: - Removed an argument `mask_threshold`. - Broke the default behavior (instance vs panoptic in the function call) https://github.com/huggingface/transformers/pull/19367/files#diff-60f846b86fb6a21d4caf60f5b3d593a04accb8f248de3029cccae2ff898c5bc3R119-R120 - Broke the actual masks: https://github.com/huggingface/transformers/pull/1961 This PR is the start of a handful that will aim at bringing back the old behavior(s): - tests should not have to specify `task` by default (unless we want to modify the behavior and have a lower form of segmentation running) - `test_small_model_pt` should be working. This specific PR starts by adding more information to the mask hash, because a missing mask was actually easy to miss (the hashes do change, but it was easy to miss that one code path wasn't properly updated). So we go from a simple `hash` to ``` {"hash": #smaller hash, "shape": (h, w), "white_pixels": n} ``` The `shape` should help make sure the interpolation of the mask works correctly, and the `white_pixels` count hopefully helps detect big regressions in the amount of white pixels when the hash gets modified.
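A sketch of what such a mask summary could look like (the actual test helper may hash and count differently):

```python
import hashlib

import numpy as np
from PIL import Image


def mask_summary(mask: Image.Image, digest_size: int = 10) -> dict:
    arr = np.array(mask)
    return {
        "hash": hashlib.sha256(arr.tobytes()).hexdigest()[:digest_size],
        "shape": arr.shape,  # (h, w)
        "white_pixels": int((arr == 255).sum()),
    }
```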
10-18-2022 10:42:01
10-18-2022 10:42:01
_The documentation is not available anymore as the PR was closed or merged._<|||||>> as it seems this PR is adding supplementary information to existing tests. Do we still need to add a test for this missing codepath? I meant that missing `astype(np.uint8)` was easy to miss, and since mask inspection was pretty manual, it was easy to check a few different setups, think everything was correct, miss the one mask that was incorrect, and update the tests without realizing the test results were actually not making any sense. This addition merely tries to reduce the risk of that happening, but if the tests don't use that path we won't know either.
transformers
19,709
closed
[Conditional, Deformable DETR] Add postprocessing methods
# What does this PR do? This PR adds `post_process_object_detection` to Conditional DETR and Deformable DETR's feature extractors. They both use the same postprocessing (and it's different from original DETR + YOLOS). It also removes methods which aren't necessary: YOLOS and Deformable DETR don't have a segmentation head model, so no `post_process_segmentation` methods are needed for those models. To do: - [x] add corresponding tests - [x] remove legacy segmentation postprocessing methods of conditional DETR, add new ones
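Usage could then look like the sketch below; the checkpoint name and image path are assumptions, and the keyword arguments are assumed to follow the DETR postprocessing convention:

```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, DeformableDetrForObjectDetection

checkpoint = "SenseTime/deformable-detr"  # assumed public checkpoint
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = DeformableDetrForObjectDetection.from_pretrained(checkpoint)

image = Image.open("example.jpg")  # any local image
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Rescale the predicted boxes back to the original image size.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = feature_extractor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
print(results["scores"], results["labels"], results["boxes"])
```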
10-18-2022 10:06:14
10-18-2022 10:06:14
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Could you add Conditional DETR to MODEL_FOR_XXX_SEGMENTATION_MAPPING_NAMES? Do you mean I need to add the model to all 3 segmentation mapping names? Cause that isn't the case with DETR either <|||||>> > Could you add Conditional DETR to MODEL_FOR_XXX_SEGMENTATION_MAPPING_NAMES? > > Do you mean I need to add the model to all 3 segmentation mapping names? Cause that isn't the case with DETR either `MODEL_FOR_IMAGE_SEGMENTATION_MAPPING_NAMES` is marked for deprecation and there is no `MODEL_FOR_PANOPTIC_SEGMENTATION_MAPPING_NAMES`, I meant we can create one and add DETR and MaskFormer to panoptic mapping, along with Conditional DETR. <|||||>Ok, let's do that in a separate PR.
transformers
19,708
closed
trying fine-tuning Reformer on Squad2 dataset from pre-trained model "google/crime-and-punishment".
I'm trying to fine-tune Reformer on the SQuAD2 dataset from the pre-trained model "google/crime-and-punishment". Using tok.cls_token = tok.pad_token, I get the following error: ![image](https://user-images.githubusercontent.com/75449189/196391419-7387c255-cc07-4f81-975d-da544fef95c4.png) So I added tok.pad_token = tok.eos_token, but I get a new error: 2 is not in list. Can someone help me? Thank you. _Originally posted by @FrancescoTroiano in https://github.com/huggingface/transformers/issues/5436#issuecomment-1282092684_
10-18-2022 09:26:43
10-18-2022 09:26:43
Please follow the template to file your issue or ask your question on the [forums](https://discuss.huggingface.co/).<|||||>Ok, thank you. I open a discussion in the forum in the Transformers category about that<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,707
closed
add return_tensors parameter for feature_extraction 2
# What does this PR do? Fixes #10016 add return_tensor parameter for feature extraction Revert "Merge branch 'feature-extraction-return-tensor' of https://github.com/ajsanjoaquin/transformers into feature-extraction-return-tensor" This reverts commit d559da743b87914e111a84a98ba6dbb70d08ad88, reversing changes made to bbef89278650c04c090beb65637a8e9572dba222. call parameter directly Fixup. Update src/transformers/pipelines/feature_extraction.py
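With this change, usage should look roughly like the sketch below (the model name is just an example):

```python
from transformers import pipeline

extractor = pipeline("feature-extraction", model="distilbert-base-uncased")
# return_tensors=True returns framework tensors instead of nested Python lists.
features = extractor("Hello world", return_tensors=True)
```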
10-18-2022 07:56:01
10-18-2022 07:56:01
@ajsanjoaquin FYI.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@Narsil Thanks for fixing!
transformers
19,706
closed
[WHISPER] Update documentation of processor and nits in forward example
# What does this PR do? Fixes #19672 With respect to #19672, the `__call__` method of the `WhisperProcessor` was updated. The doctests were also not passing with the new `max_length`; see #19670 and #19668. @ydshieh
10-18-2022 07:21:11
10-18-2022 07:21:11
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,705
closed
[TYPO] Update perf_train_gpu_one.mdx
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @osanseviero <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-18-2022 07:03:50
10-18-2022 07:03:50
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,704
closed
🔥[Community Event] Doc Tests Sprint - Configuration files🔥 #19487
I want to work on this project. Thank you
10-18-2022 06:59:30
10-18-2022 06:59:30
Please follow the [contributing guide](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) and make sure you open a pull request that does some modifications on one of the configurations.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19704). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,703
closed
Added config for configuration_clip.py
# What does this PR do? Based on issue: https://github.com/huggingface/transformers/issues/19487 @ydshieh could you please check it?
10-18-2022 02:24:33
10-18-2022 02:24:33
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19703). All of your documentation changes will be reflected on that endpoint.
transformers
19,702
closed
Running tokenizer on dataset -- Hangs
### System Info transformers 4.20.0.dev0 dev_0 python 3.7.11 h12debd9_0 linux os. ### Who can help? @SaulLu when I use the wikitext-103 dataset the tokenizer hangs with `Running tokenizer on dataset` and shows no progress. This was not always an issue but as of today has become one. It will either hang at the end of tokenizing or at the very beginning. Any idea why this would be hanging here? I am not out of memory and am using 16 or 8 CPUs with the `--preprocessing_num_workers` and `run_clm.py` interface with a GPT-2 model. Using an A100. Thanks, Trenton ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` python run_clm.py --train_name testing --fp16 True --dataloader_num_workers 4 --dataloader_pin_memory True --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --optim adamw_hf --seed 42 --model_type gpt2 --dataset_name wikitext --dataset_config_name wikitext-103-v1 --tokenizer_name gpt2 --preprocessing_num_workers 16 --do_train --do_eval --evaluation_strategy steps --eval_steps 10000 --save_steps 10000 --save_total_limit 1 --num_train_epochs 175 ``` ### Expected behavior No hanging.
10-18-2022 01:28:13
10-18-2022 01:28:13
Setting `--preprocessing_num_workers 1` removes this issue...<|||||>This looks like an issue that should be reported to the [Datasets repo](https://github.com/huggingface/datasets/).<|||||>Ok thanks I've just cross linked it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,701
closed
AttributeError: 'DistributedDataParallel' object has no attribute 'push_to_hub'
### System Info - `transformers` version: 4.23.1 - Platform: Linux-4.15.0-193-generic-x86_64-with-glibc2.27 - Python version: 3.10.6 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes. 4 GPUs. ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python import transformers import argparse from tqdm import tqdm import copy from transformers import GPT2Tokenizer, OPTForCausalLM, get_scheduler, default_data_collator from datasets import load_dataset from itertools import chain import torch from torch.optim import AdamW from torch.utils.data import DataLoader from accelerate import Accelerator accelerator = Accelerator() parser = argparse.ArgumentParser() parser.add_argument('--seq_len', default = 2048, type = int) parser.add_argument('--batch_size', default = 4, type = int) parser.add_argument('--num_proc', default = 16, type = int) parser.add_argument('--gradient_accumulation_steps', default = 1, type = int) parser.add_argument('--epochs', default = 1, type = int) args = parser.parse_args() # Constants EPOCHS = args.epochs SEQ_LEN = args.seq_len gradient_accumulation_steps = args.gradient_accumulation_steps BATCH_SIZE = args.batch_size NUM_PROC = args.num_proc model = OPTForCausalLM.from_pretrained("facebook/opt-350m") optimizer = AdamW(model.parameters(), lr=3e-5) # Load tokenizer tokenizer = GPT2Tokenizer.from_pretrained("facebook/opt-350m") # Load dataset load_train_dataset = load_dataset('conceptofmind/code-train-dedup') # Tokenizer def tokenize(examples): seq_length = SEQ_LEN examples = tokenizer(examples['content']) concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()} total_length = len(concatenated_examples[list(examples.keys())[0]]) if total_length >= seq_length: total_length = (total_length // seq_length) * seq_length result = { k: [t[i : i + seq_length] for i in range(0, total_length, seq_length)] for k, t in concatenated_examples.items() } result["labels"] = copy.deepcopy(result["input_ids"]) return result with accelerator.main_process_first(): tokenized_train_dataset = load_train_dataset.map( tokenize, batched = True, num_proc = NUM_PROC, remove_columns = 'content' ) pytorch_train_dataset = tokenized_train_dataset.with_format('torch') # Create dataloader train_dataloader = DataLoader( pytorch_train_dataset['train'], shuffle = True, drop_last = True, collate_fn = default_data_collator, batch_size = BATCH_SIZE ) lr_scheduler = get_scheduler( "linear", optimizer=optimizer, num_warmup_steps=1000, num_training_steps = (len(train_dataloader) * EPOCHS) // gradient_accumulation_steps ) model, optimizer, train_dataloader = accelerator.prepare( model, optimizer, train_dataloader ) progress_bar = tqdm(range(EPOCHS * len(train_dataloader)), disable=not accelerator.is_main_process) model.train() for epoch in range(EPOCHS): for step, batch in enumerate(train_dataloader, start=1): # Do training loss = model(**batch).loss loss = loss / gradient_accumulation_steps accelerator.backward(loss) accelerator.clip_grad_norm_(model.parameters(), 1.0) if step % gradient_accumulation_steps == 0: 
optimizer.step() lr_scheduler.step() optimizer.zero_grad() progress_bar.update(1) if step % 5 == 0: if accelerator.is_main_process: model.push_to_hub("code-model") if accelerator.is_main_process: model.push_to_hub("code-model-final") ``` ### Expected behavior Push model checkpoint to hub during multi-GPU training with accelerate. Using a total of 4 GPUs. Pushes to hub when using a singular GPU. I am unfamiliar with accelerate. Thank you for any help. Error thrown: ``` Traceback (most recent call last): File "/home/dl/demo/train.py", line 152, in <module> model.push_to_hub("code-model") File "/opt/conda/envs/demo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1207, in __getattr__ raise AttributeError("'{}' object has no attribute '{}'".format( AttributeError: 'DistributedDataParallel' object has no attribute 'push_to_hub' WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 3987 closing signal SIGTERM WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 3988 closing signal SIGTERM WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 3989 closing signal SIGTERM ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 3986) of binary: /opt/conda/envs/demo/bin/python ```
10-18-2022 01:21:50
10-18-2022 01:21:50
Does the model need to be unwrapped before pushing to hub? ```python unwrapped_model = accelerator.unwrap_model(model) unwrapped_model.push_to_hub("code-350-model") ```<|||||>Yes :)<|||||>> Yes :) Ok. Thank you for the response!
transformers
19,700
closed
Update contribution guide
This PR adds some updates to the contribution guide such as including a brief section for adding new documentation and being able to preview the docs once you successfully open a pull request.
10-18-2022 00:08:12
10-18-2022 00:08:12
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,699
closed
`model.generate()` enforces `max_length` to longest sequence in the batch, leading to inconsistent many batch generation for OPT models
### System Info - `transformers` version: 4.22.0.dev0 - Platform: Linux-5.4.0-1089-azure-x86_64-with-glibc2.27 - Python version: 3.10.4 - Huggingface_hub version: 0.9.0 - PyTorch version (GPU?): 1.12.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python import unittest from transformers import GPT2Tokenizer from transformers import AutoModelForCausalLM class TestManyBatchOptGeneration(unittest.TestCase): def test_batch_generation(self): model_id = 'facebook/opt-350m' tokenizer = GPT2Tokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) tokenizer.padding_side = 'left' # use different length sentences to test batching sentences = ['Hello, my dog is a little', 'Today, I',] inputs = tokenizer(sentences, return_tensors='pt', padding=True) input_ids = inputs['input_ids'] max_length = 20 outputs = model.generate(input_ids=input_ids, attention_mask=inputs['attention_mask'], max_length=max_length) inputs_non_padded = tokenizer(sentences[0], return_tensors='pt').input_ids output_non_padded = model.generate(input_ids=inputs_non_padded, max_length=max_length) inputs_padded = tokenizer(sentences[1], return_tensors='pt').input_ids output_padded = model.generate(input_ids=inputs_padded, max_length=max_length) batch_out_sentence = tokenizer.batch_decode(outputs, skip_special_tokens=True) non_padded_sentence = tokenizer.decode(output_non_padded[0], skip_special_tokens=True) padded_sentence = tokenizer.decode(output_padded[0], skip_special_tokens=True) expected_output_sentence = [ "Hello, my dog is a little bit of a dork.\nI'm a little bit", "Today, I was in the middle of a conversation with a friend about the state of the world", ] self.assertListEqual(expected_output_sentence, [non_padded_sentence, padded_sentence]) print(batch_out_sentence, [non_padded_sentence, padded_sentence]) # THIS NOW FAILS!! self.assertListEqual(expected_output_sentence, batch_out_sentence) if __name__ == '__main__': unittest.main() ``` ### Expected behavior In many batch generation for OPT models, `model.generate()` stops the generation once the longest sequence in the batch reaches `max_length`, even if shorter sequences in the same batch haven't reached the `max_length` yet. This leads to inconsistent behavior for generation when using batches of different sizes. In the example above, when generating with a batch size of one, the output of the shorter sequence is: `Today, I was in the middle of a conversation with a friend about the state of the world`, however, when we use a batch size of two, the output is truncated, and changes to: `Today, I was in the middle of a conversation with a friend about the`. I modified this test case from a similar test found here: https://github.com/huggingface/transformers/issues/17514 notice how the test case in issue 17514 modified the maximum generation length to get around this behavior.
10-17-2022 23:31:43
10-17-2022 23:31:43
cc @gante <|||||>Hi @amrsharaf 👋 `max_length` is relative to the length of the padded sequence -- actually, all sequences have the same length during generate, no exceptions. Nevertheless, there is a way to control the length of the generation that is independent of the input length: with `max_new_tokens`. Check the example below ```python import unittest from transformers import GPT2Tokenizer from transformers import AutoModelForCausalLM class TestManyBatchOptGeneration(unittest.TestCase): def test_batch_generation(self): model_id = 'facebook/opt-350m' tokenizer = GPT2Tokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) tokenizer.padding_side = 'left' # use different length sentences to test batching sentences = ['Hello, my dog is a little', 'Today, I',] inputs = tokenizer(sentences, return_tensors='pt', padding=True) input_ids = inputs['input_ids'] max_new_tokens = 10 outputs = model.generate(input_ids=input_ids, attention_mask=inputs['attention_mask'], max_new_tokens=max_new_tokens) inputs_non_padded = tokenizer(sentences[0], return_tensors='pt').input_ids output_non_padded = model.generate(input_ids=inputs_non_padded, max_new_tokens=max_new_tokens) inputs_padded = tokenizer(sentences[1], return_tensors='pt').input_ids output_padded = model.generate(input_ids=inputs_padded, max_new_tokens=max_new_tokens) batch_out_sentence = tokenizer.batch_decode(outputs, skip_special_tokens=True) non_padded_sentence = tokenizer.decode(output_non_padded[0], skip_special_tokens=True) padded_sentence = tokenizer.decode(output_padded[0], skip_special_tokens=True) expected_output_sentence = [ "Hello, my dog is a little bit of a dork.\nI'm a", "Today, I was in the middle of a conversation with a friend", ] self.assertListEqual(expected_output_sentence, [non_padded_sentence, padded_sentence]) print(batch_out_sentence, [non_padded_sentence, padded_sentence]) self.assertListEqual(expected_output_sentence, batch_out_sentence) if __name__ == '__main__': unittest.main() ``` It is not possible to control BATCHED generation such that all sequences have the same UNPADDED maximum length. __________________________________________ I'm assuming this solves your issue, and thus I'm closing it. Feel free to reopen it with further questions :)
transformers
19,698
closed
Add configuration_wav2vec2.py
Add configuration_wav2vec2.py to utils/documentation_tests.txt for doctest. Based on issue: [19487](https://github.com/huggingface/transformers/issues/19487) @ydshieh could you please check it? Thanks :)
10-17-2022 21:37:53
10-17-2022 21:37:53
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you very much @ydshieh!!
transformers
19,697
closed
Add `accelerate` support for `Whisper`
# What does this PR do? This PR adds `accelerate` support for `Whisper`, this way `Whisper` can be loaded in 8-bit as follows: ``` from transformers import WhisperModel model = WhisperModel.from_pretrained("openai/whisper-tiny", device_map="auto", load_in_8bit=True) ``` However some slow tests are not passing when running the whole `Whisper` testing suite but they pass if I run them independently one by one - I still need to understand why this is happening ``` FAILED tests/models/whisper/test_modeling_whisper.py::WhisperModelTest::test_multi_gpu_data_parallel_forward FAILED tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_batched_generation FAILED tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_generation FAILED tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_generation_multilingual FAILED tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_logits_librispeech FAILED tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_small_en_logits_librispeech FAILED tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_en_batched_generation FAILED tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_en_generation FAILED tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_generation FAILED tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_logits_librispeech ``` cc @sgugger @ArthurZucker @ydshieh Thanks!
10-17-2022 20:29:45
10-17-2022 20:29:45
For the failing tests, wait a little bit: a PR related to these tests will be merged ☺️<|||||>Great! I'll try again once the PR you mentioned is merged <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>We now know the root cause of the initial issue explained above after an intense debugging session with @ArthurZucker! The `test_disk_offload` test first overrides the `ACT2FN[config.activation_function]` variable, which is initialized only once by the testing suite when importing the model. Therefore, when running the slow tests that do not use `device_map=...`, all submodules are initialized without the `_hf_hook` variable except for the [`activation_fn`](https://github.com/huggingface/transformers/blob/072dfdaee4deee65328071e92ac48f471a78490c/src/transformers/models/whisper/modeling_whisper.py#L269) attribute, since it is not re-initialized. A workaround could be to design a proper `tearDown` function for those tests, or to change the slow tests to use `device_map="auto"` ;) cc @sgugger <|||||>The fix will be to change `ACT2FN`. It's not healthy that all models share the same instance of activation modules ;-) Will fix this morning.<|||||>I can confirm all tests are green after applying the fix proposed in #19728! Thanks a lot for adding the fix 💪 ``` ======================= 76 passed, 3 skipped, 32 warnings in 146.28s (0:02:26) ======================== ```<|||||>Since https://github.com/huggingface/transformers/pull/19728 has been merged, I will merge this PR once the tests below are green! 🟢
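To make the shared-`ACT2FN` point concrete, here is a small standalone illustration (plain PyTorch, not the actual transformers code): state attached to a module instance that two models share leaks from one model into the other, which is analogous to what happened with the `_hf_hook` set during the offload test.

```python
import torch
from torch import nn

shared_act = nn.GELU()                               # one instance reused by both models
model_a = nn.Sequential(nn.Linear(4, 4), shared_act)
model_b = nn.Sequential(nn.Linear(4, 4), shared_act)

# A hook meant only for model_a's activation...
shared_act.register_forward_hook(lambda mod, inp, out: print("hook fired"))

# ...also fires when model_b runs, because both models point at the same object.
model_b(torch.randn(1, 4))
```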
transformers
19,696
closed
Repo utils test
# What does this PR do? This PR adds a new job to test our repo utils, which is only run when there is a change in said utils. This is to make sure community contributions to those don't break anything there. As an example, it also add a couple of tests for the test fetcher.
10-17-2022 19:54:01
10-17-2022 19:54:01
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,695
closed
[Doctest] Add configuration_cvt.py
Add configuration_cvt.py to utils/documentation_tests.txt for doctest. Based on #19487 @ydshieh could you please check it? Thanks :)
10-17-2022 19:41:28
10-17-2022 19:41:28
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,694
closed
Update CONTRIBUTING.md
adverb missing # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-17-2022 19:31:45
10-17-2022 19:31:45
Thanks for the PR! In this case, adding *that* doesn't make it an adverb, and I don't think it's necessary to use it as a determiner since we're talking about the bug in a general and generic sense.
transformers
19,693
closed
Update CONTRIBUTING.md
punctuation missing # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-17-2022 19:29:26
10-17-2022 19:29:26
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19693). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @shreem-123, thanks for your contribution, but this is not a problem. This is the intended punctuation. Could you please bundle your PRs in a single one? You're opening a lot of different PRs for single-word changes, we'd much rather you do these in a single PR. Thank you.<|||||>for sure....sorry for the inconvenience.
transformers
19,692
closed
Fix checkpoint used in `VisualBertConfig` doc example
# What does this PR do? The checkpoint was wrong, and not cached during the review in the config doctest PR (as it was wrong all the time).
10-17-2022 19:18:47
10-17-2022 19:18:47
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19692). All of your documentation changes will be reflected on that endpoint.
transformers
19,691
closed
Not able to run evaluate on whisper.tflite that got generated from TFWhisper model
### Model description @gante Generated a whisper.tflite model from the HF TFWhisper model. However, I'm not sure how to evaluate the created whisper tflite model. https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/tflite_from_huggingface_whisper.ipynb I would appreciate your assistance in evaluating whisper.tflite. The notebook mentioned above produces a whisper.tflite file. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No response_
10-17-2022 19:00:23
10-17-2022 19:00:23
Hi @nyadla-sys 👋 That is a great question! The problem here is that generation is much more than a forward pass of the model. Fortunately, our generation code is compatible with TF Graph mode, which means you can compile the entire generation procedure into a graph, which you can directly compare to our examples. Here is a continuation of your notebook, which creates a TF Lite model for generation with Whisper: https://colab.research.google.com/drive/1tGL73xRs9mFUY5R03im0R6NNcvJriHun?usp=sharing<|||||>@gante is it possible to add representative_dataset and generate tflite(int8) model. converter.representative_dataset = representative_dataset https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/tinynn_pytorch_to_tflite_int8.ipynb <|||||>> Hi @nyadla-sys wave > > That is a great question! The problem here is that generation is much more than a forward pass of the model. Fortunately, our generation code is compatible with TF Graph mode, which means you can compile the entire generation procedure into a graph, which you can directly compare to our examples. > > Here is a continuation of your notebook, which creates a TF Lite model for generation with Whisper: https://colab.research.google.com/drive/1tGL73xRs9mFUY5R03im0R6NNcvJriHun?usp=sharing @gante Great work and appreciate for your efforts to make it open <|||||>@nyadla-sys I don't know how to answer your latest question. Gently pinging @hollance, who might have better pointers for Whisper + TF Lite + int8<|||||>@gante Is it feasible to include Conv2d and avoid getting FlexConv2D as part of the model? TFLite interpreter needs to link Flex delegate in order to run the model since it contains the following Select TFop(s): Flex ops: FlexConv2D Details: tf.Conv2D(tensor<1x1x?x?xf32>, tensor<1x3x80x384xf32>) -> (tensor<1x1x?x384xf32>) : {data_format = "NHWC", device = "", dilations = [1, 1, 1, 1], explicit_paddings = [], padding = "VALID", strides = [1, 1, 1, 1], use_cudnn_on_gpu = true} <|||||>@gante When I run generated tflite file with the minimal example from tensorflow/lite/example and it fails with below error msg Execution plan as the list of 568 nodes invoked in-order: [0-567] --------------Subgraph-8 dump has completed-------------- --------------Memory Arena Status Start-------------- Total memory usage: 396 bytes (0.000 MB) - Total arena memory usage: 396 bytes (0.000 MB) - Total dynamic memory usage: 0 bytes (0.000 MB) Subgraph#0 Arena (Normal) 268 (67.68%) Subgraph#0 Arena (Persistent) 128 (32.32%) --------------Memory Arena Status End-------------- 2022-10-20 16:55:50.791845: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at conv_ops.cc:688 : INVALID_ARGUMENT: input depth must be evenly divisible by filter depth: 1 vs 80 ERROR: input depth must be evenly divisible by filter depth: 1 vs 80 ERROR: Node number 696 (TfLiteFlexDelegate) failed to invoke. Error at /home/niranjanyadla/useful_sensors/download_tools/openai-work/tflite_linux/tflite_build/tensorflow/tensorflow/lite/examples/minimal/minimal.cc:71<|||||>@gante I modified generation code as below and it works fine @tf.function( # shouldn't need static batch size, but throws exception without it (needs to be fixed) input_signature=[ tf.TensorSpec((1, 80, 3000), tf.float32, name="input_features"), ], )<|||||>@gante I found that my 30-second audio has more generated ids than the 21 produced by the whisper TFlite model. Is there anything from the tflite model that I am missing? 
https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/notebooks/tflite_from_huggingface_whisper.ipynb and also it does not produce english transcript for total of 30 seconds audio <|||||>increased the max_tokens to 200 and now I could generate whole audio text <|||||>@nyadla-sys two questions to help pinpoint the problem: 1. Does the standard TF model (i.e. non-TFLite) work correctly for that audio file? 2. If the answer to 1 is yes: can you share a code example of the problem? (the link above doesn't work for me)<|||||>@gante now I modified the [colab](https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/notebooks/tflite_from_huggingface_whisper.ipynb) notebook to generate more tokens as per below line from HF colab predicted_ids = model.generate(inputs, max_length=480_000) Referred this snippet from HF colab https://colab.research.google.com/drive/191WGH59ZZ-xyu8d6GWbuqZHa_MQJmQpA?usp=sharing#scrollTo=yENhy_7Qq5nU <|||||>@gante @hollance Have added something like below and it is giving segmentation fault. Could you please help me on this ? "converter.representative_dataset = representative_dataset" and def representative_dataset(): for x in range(1): inputs = feature_extractor( ds[x]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="tf") input_features = inputs.input_features # print(input_features) yield [input_features] Please see the below code for detailed information: class GenerateModel(tf.Module): def __init__(self, model): super(GenerateModel, self).__init__() self.model = model @tf.function( # shouldn't need static batch size, but throws exception without it (needs to be fixed) input_signature=[ tf.TensorSpec((1, 80, 3000), tf.float32, name="input_features"), ], ) def serving(self, input_features): outputs = self.model.generate( input_features, max_new_tokens=223, #change as needed return_dict_in_generate=True, ) return {"sequences": outputs["sequences"]} def representative_dataset(): for x in range(1): inputs = feature_extractor( ds[x]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="tf") input_features = inputs.input_features # print(input_features) yield [input_features] import tensorflow as tf saved_model_dir = '/content/tf_whisper_saved' tflite_model_path = 'whisper.tflite' generate_model = GenerateModel(model=model) tf.saved_model.save(generate_model, saved_model_dir, signatures={"serving_default": generate_model.serving}) # Convert the model converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir) converter.target_spec.supported_ops = [ tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops. tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops. 
] converter.representative_dataset = representative_dataset #converter.inference_input_type = tf.int8 # or tf.uint8 #converter.inference_output_type = tf.int8 # or tf.uint8 converter.optimizations = [tf.lite.Optimize.DEFAULT] tflite_model = converter.convert() # Save the model with open(tflite_model_path, 'wb') as f: f.write(tflite_model)<|||||>@hollance @gante I was able to convert from Hugging face whisper onnx to tflite(int8) model,however am not sure how to run the inference on this model Could you please review and let me know if there is anything i am nissing in onnx to tflite conversion https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/notebooks/whisper_to_onnx_tflite_int8.ipynb <|||||>Hey @nyadla-sys -- model quantization with TFLite is beyond what we support at the moment here in `transformers`, I am afraid I won't dig into your issue at the moment. You can, however, try asking that question in our [forum](https://discuss.huggingface.co/) 🤗, you might find support from other users there.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>keep it open .!<|||||>@gante Is it possible to modify the input audio spectrograms from 30s to 10 seconds in order to use them as input for a Hugging Face Whisper TensorFlow model? on other note if you have any clue to generate int8 model ,please share your thoughts?<|||||>@nyadla-sys > Is it possible to modify the input audio spectrograms from 30s to 10 seconds in order to use them as input for a Hugging Face Whisper TensorFlow model? Not directly -- the model expects a fixed size input, corresponding to 30s. > if you have any clue to generate int8 model ,please share your thoughts? I'm not an int8 expert, so I have minimal pointers: see our [Optimum](https://github.com/huggingface/optimum) library, which has support for int8 quantization<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@gante I separated encoder and decoder tflite models, however while running inference of decoder I only get single output . Could you please review the notebook and let me know if you have any input for me. <|||||>Hi @nyadla-sys 👋 TF Lite is not (yet) a priority for us, as we don't have enough bandwidth to support it. I won't look at your notebook.<|||||> I was able to successfully separate the encoder and decoder whisper tflite models in the following notebook and working correctly . 
https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/notebooks/whisper_encoder_decoder_tflite.ipynb Posting here to help some of HF users who are interested in whisper tflite models <|||||>@sanchit-gandhi how do i get transcript from the below script ``` import torch from transformers import AutoFeatureExtractor, WhisperModel from datasets import load_dataset model = WhisperModel.from_pretrained("openai/whisper-base") feature_extractor = AutoFeatureExtractor.from_pretrained("openai/whisper-base") ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt") input_features = inputs.input_features decoder_input_ids = torch.tensor([[1, 1]]) * model.config.decoder_start_token_id last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state list(last_hidden_state.shape) ```<|||||>@gante I am attempting to divide the TFWhisperModel into an encoder and a decoder, but the code I have is producing an error. Can you assist me in resolving this issue? ``` import tensorflow as tf from transformers import TFWhisperModel class WhisperEncoder(TFWhisperModel): def call(self, inputs, **kwargs): return self.encoder(inputs, **kwargs) class WhisperDecoder(TFWhisperModel): def call(self, inputs, **kwargs): return self.decoder(inputs, **kwargs) model = TFWhisperModel.from_pretrained("openai/whisper-tiny") encoder_model = WhisperEncoder.from_pretrained("openai/whisper-tiny") decoder_model = WhisperDecoder.from_pretrained("openai/whisper-tiny") tf.saved_model.save(encoder_model, "whisper_encoder_model_dir") tf.saved_model.save(decoder_model, "whisper_decoder_model_dir") ``` here is the error message TypeError: Exception encountered when calling layer "whisper_encoder" (type WhisperEncoder). encoder() got an unexpected keyword argument 'training' Call arguments received by layer "whisper_encoder" (type WhisperEncoder): • inputs={'input_features': 'tf.Tensor(shape=(2, 80, 2999), dtype=float32)', 'decoder_input_ids': 'tf.Tensor(shape=(1, 2), dtype=int32)'} • kwargs={'training': 'None'}<|||||>Hey @nyadla-sys 👋 The encoder and decoder components of Whisper, when isolated, are not compatible with `from_pretrained`. However, you can still serialize them separately, from different sources: ```py import tensorflow as tf from transformers import TFWhisperModel model_1 = TFWhisperModel.from_pretrained("openai/whisper-tiny") model_2 = TFWhisperModel.from_pretrained("openai/whisper-tiny") tf.saved_model.save(model_1.get_encoder(), "/tmp/whisper/encoder") tf.saved_model.save(model_2.get_decoder(), "/tmp/whisper/decoder") ```<|||||>Hey @nyadla-sys! For inference, we can use the `.generate()` method to auto-regressively generate using the Whisper model: ```python import torch from transformers import AutoProcessor, WhisperForConditionalGeneration from datasets import load_dataset model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base") processor = AutoProcessor.from_pretrained("openai/whisper-base") ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") inputs = processor(ds[0]["audio"]["array"], return_tensors="pt") input_features = inputs.input_features with torch.no_grad(): predicted_ids = model.generate(input_features) transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) print(transcription) ``` **Print Output:** ``` [' Mr. 
Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.'] ```<|||||>@sanchit-gandhi is it possible to generate transcript using TFWhisperModel ? instead of WhisperForConditionGeneragion<|||||>Hey @nyadla-sys! TFWhisperModel is just the base encoder-decoder model that outputs decoder hidden-states: https://huggingface.co/docs/transformers/model_doc/whisper#transformers.TFWhisperModel TFWhisperForConditionalGeneration adds a language modelling head on top of TFWhisperModel, mapping the decoder hidden-states to logits over the vocabulary: https://huggingface.co/docs/transformers/model_doc/whisper#transformers.TFWhisperForConditionalGeneration So you'll need TFWhisperForConditionalGeneration in order to get logits over the vocab (and hence generate text) Hope that makes sense!<|||||>@sanchit-gandhi Is it possible to directly map the decoder hidden states to logits without using the language modeling head? I am focusing on using only TFWhisperModel because it can be fully converted into an int8 model. I'm curious if there is any way to generate text using the decoder hidden states without adding the language modeling head.<|||||>Hey @nyadla-sys, it's precisely the job of the language modelling head to **directly** map the decoder hidden-states to logits. The language modelling head is a single linear layer that maps from $\mathcal{R}^{d} -> \mathcal{R}^{v}$, where $d$ is the dimensionality of the hidden-states, and $v$ is the dimensionality of the vocabulary, so for Whisper small this is a mapping from 768 -> 52000 So if you need to map to the vocabulary, you're best off using `TFWhisperForConditionalGeneration`!<|||||>@sanchit-gandhi Thanks for your detailed information<|||||>@gante @younesbelkada @sanchit-gandhi, I encountered an issue while generating a tflite file from the TFWhisperForConditionalGeneration model for transcribing Spanish. The output is only producing zeros. For further reference, please refer to the attached [notebook](https://colab.research.google.com/drive/1cS3jorRn1kcVbakCcd_VF_zcZOcpQ6LC?usp=sharing). Can you please let me know if I made any mistakes in the conversion process?<|||||>@nyadla-sys have you confirmed that the model works as expected before converting to TFLite? :)<|||||>> @nyadla-sys have you confirmed that the model works as expected before converting to TFLite? :) Yes, it worked as expected before converting <|||||>@gante any update on this ? <|||||>@sayakpaul do you have any tips for debugging TF to TFLite mismatches?<|||||>@nyadla-sys, this is better off in the TensorFlow repository in the TFLite category. Here are some things that we can try out: * I would first see if removing the dynamic-range quantization from conversion (TFLite) helps with the predictive performance of the model. If the outputs are as expected with dynamic-range quantization, then probably that's the bug. * I noticed you're not using the processor that comes with the TF Whisper model from `transformers`. I would use that to prepare the inputs for the TFLite model. Just like you did for the `tf.Module`. I am afraid, beyond this point, we won't be able to help you out with anything related to TFLite. <|||||>@sayakpaul thanks for your response I have already raised a couple of issues in the TFLite category on the TF repository, but I have found the support from Google to be lacking when it comes to resolving these kinds of issues. 
https://github.com/tensorflow/tensorflow/issues/58451 https://github.com/tensorflow/tensorflow/issues/59506 I also removed the dynamic quantization and issue still persists.. <|||||>If I simply convert the Whisper PyTorch model to an ONNX model and then use a TFLite encoder and decoder, it performs well. You can refer to the notebook available at [Link](https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/notebooks/whisper_encoder_decoder_tflite.ipynb) for more details. However, I suspect that the HF TF Whisper model may be the root cause of the error I am encountering. <|||||>Hey @nyadla-sys! I had a quick look at the code snippet. Could you double check that you're setting the correct prompt tokens? E.g. that we force the model to **translate** (not transcribe). Since our source audio (English) is different from our target text (Spanish), we need to force the model to do speech translation (rather than speech transcription). This is most easily done by using the `processor`'s `get_decoder_prompt_ids` method: https://github.com/huggingface/transformers/blob/a8eb4f79f946c5785f0e91b356ce328248916a05/src/transformers/models/whisper/processing_whisper.py#L44 Which enables you to specify the target language and task: ```python forced_ids = processor. get_decoder_prompt_ids(language="Spanish", task="translate") ``` And then pass them to the generate config as required. The first two tokens generated by the model look off, which suggests that the forced decoder ids are not being set correctly. More details here: https://huggingface.co/openai/whisper-tiny#usage<|||||>@sanchit-gandhi Created simple notbook to reproduce the issue https://colab.research.google.com/drive/1jvXy4SVqVocOYgAMGDKGcBxMN5u97oet?usp=sharing Please review and let me know your comments <|||||>Hey @nyadla-sys I reviewed the Colab up until the second code snippet (where you run `TFWhisperForConditionalGeneration`). I don't see any abnormalities in how the TF model performs - it correctly transcribes the audio in French: ``` [' Un vrai travail intéressant va enfin être mené sur ce sujet.'] ``` So we can verify that the HF TF Whisper implementation is indeed correct ✅<|||||>I converted HF TF to TFLite model then i see the issue. I am unsure why the TFLite model is causing this issue. <|||||>Hey @nyadla-sys, since the problem is with the TFLite model, the issue is better suited for the TensorFlow repository (see https://github.com/huggingface/transformers/issues/19691#issuecomment-1432418643). Feel free to open an issue there for dedicated TFLite support.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
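To make the linear-head explanation above concrete, a minimal sketch (PyTorch; it assumes the `input_features` and `decoder_input_ids` from the earlier `WhisperModel` example are still in scope):

```python
import torch
from transformers import WhisperForConditionalGeneration

lm_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
lm_head = lm_model.get_output_embeddings()   # the single linear layer: d_model -> vocab size

with torch.no_grad():
    decoder_hidden = lm_model.model(
        input_features, decoder_input_ids=decoder_input_ids
    ).last_hidden_state                      # (batch, seq_len, d_model)
    logits = lm_head(decoder_hidden)         # (batch, seq_len, vocab_size)

print(decoder_hidden.shape, logits.shape)
```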
transformers
19,690
closed
Fix dtype in randomly initialized head
# What does this PR do? This PR fixes the following [issue](https://github.com/huggingface/accelerate/issues/757) reported in Accelerate: when a model is loaded with `device_map="auto"` and some `torch_dtype`, the randomly initialized head does not respect said dtype.
10-17-2022 18:54:24
10-17-2022 18:54:24
_The documentation is not available anymore as the PR was closed or merged._
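As a quick way to see what the PR fixes, a hedged sketch (the checkpoint is just an example and needs an architecture with accelerate/`device_map` support; the classification head is randomly initialized because the checkpoint does not contain one):

```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,
    device_map="auto",
    torch_dtype=torch.float16,
)
# Before the fix, the randomly initialized head could stay in float32;
# after it, this should print an empty dict.
print({name: p.dtype for name, p in model.named_parameters() if p.dtype != torch.float16})
```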
transformers
19,689
closed
Update CONTRIBUTING.md
punctuation missing # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-17-2022 18:45:36
10-17-2022 18:45:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,688
closed
Update CONTRIBUTING.md
words with different meaning used together. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-17-2022 18:44:27
10-17-2022 18:44:27
No, those two words complement each other. No need for a slash here.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19688). All of your documentation changes will be reflected on that endpoint.
transformers
19,687
closed
[YOLOS] Minor fixes
# What does this PR do? This PR updates a Copied from statement for YolosFeatureExtractor, and removes 2 methods which aren't necessary for the model (as there is no segmentation head model).
10-17-2022 18:34:18
10-17-2022 18:34:18
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,686
closed
Make use of KenLM's pypi version
### Feature request Use PyPi version of KenLM instead of installing archieve file from Github. ### Motivation Now that kenlm is on pypi: https://github.com/kpu/kenlm/issues/363#issuecomment-1280050903 let's use it :-) ### Your contribution @sanchit-gandhi would you give this a try as an exercise to see how we deal with soft dependencies in Transformers maybe? :-) I think we can now update the `setup.py` file to include the pypi version of `kenlm` and remove all those `pip install https://github.com/kpu/kenlm/archive/master.zip` statements e.g. in https://github.com/huggingface/transformers/blob/c7edde1a692012eda23bc2b837588557b97ad729/docker/transformers-all-latest-gpu/Dockerfile#L44 I'd also advocate to add a `is_kenlm_available` flag to give a nice error message and update warnings / errors here: https://github.com/huggingface/transformers/blob/c7edde1a692012eda23bc2b837588557b97ad729/src/transformers/pipelines/__init__.py#L839
10-17-2022 17:48:00
10-17-2022 17:48:00
Also cc @ydshieh and @sgugger here<|||||>And let's not forget the [circleCI config](https://github.com/huggingface/transformers/blob/c7edde1a692012eda23bc2b837588557b97ad729/.circleci/create_circleci_config.py#L130) as well ;-)<|||||>@patrickvonplaten I would like to pick up this.
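For reference, the availability check could look roughly like the sketch below; the names and the error message are illustrative, not the final transformers implementation:

```python
import importlib.util


def is_kenlm_available() -> bool:
    # True if the kenlm package can be imported from the current environment
    return importlib.util.find_spec("kenlm") is not None


# e.g. raise a readable error in the ASR pipeline instead of a bare ImportError:
if not is_kenlm_available():
    raise ImportError(
        "Decoding with a language model requires `kenlm`. Install it with `pip install kenlm`."
    )
```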
transformers
19,685
closed
[Doctest] Add configuration_xlm.py
Add configuration_xlm.py to utils/documentation_tests.txt for doctest. Based on #19487 @ydshieh could you please check it? Thanks :)
10-17-2022 17:41:20
10-17-2022 17:41:20
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,684
closed
[Examples] make default preprocessing_num_workers=1
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #19630 ## Who can review? @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-17-2022 17:40:15
10-17-2022 17:40:15
Ok I removed it.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>The failing check was fixed on main, so no need to worry about it. Thanks for your work on this!
transformers
19,683
closed
Small fixes for TF-ESM1b and ESM-1b weight conversions
Layernorm weights from ESM-1b weren't being converted properly for TF, this is fixed now!
10-17-2022 16:30:14
10-17-2022 16:30:14
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,682
closed
add return_tensor parameter for feature extraction
# What does this PR do? Fixes #10016 Addresses stale issue #10016. Please review @LysandreJik and @Narsil. Thanks. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-17-2022 16:22:19
10-17-2022 16:22:19
No- op ???<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19682). All of your documentation changes will be reflected on that endpoint.
transformers
19,681
closed
fix some device issues for pt 1.13
# What does this PR do? `PyTorch 1.13` is coming. We have 2 models with a lot of test failures due to tensor indexing where the indexed tensor and the indices live on different devices. (It works with torch <= 1.12.1 though.) This PR fixes this device issue, so we are better prepared for torch 1.13.
10-17-2022 16:00:58
10-17-2022 16:00:58
_The documentation is not available anymore as the PR was closed or merged._
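For readers wondering what such a failure looks like, a small illustrative sketch (not the actual model code) of the kind of mismatch and its fix:

```python
import torch

if torch.cuda.is_available():
    values = torch.arange(10, device="cuda")
    indices = torch.tensor([1, 4, 7])              # index tensor lives on the CPU
    # `values[indices]` worked on older torch versions but can raise a
    # device-mismatch error on newer ones; moving the indices fixes it:
    selected = values[indices.to(values.device)]
    print(selected)
```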
transformers
19,680
closed
Revert "add return_tensor parameter for feature extraction"
Reverts huggingface/transformers#19257
10-17-2022 15:55:58
10-17-2022 15:55:58
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19680). All of your documentation changes will be reflected on that endpoint.
transformers
19,679
closed
Fix imports in pipeline tests
# What does this PR do? #19257 broke the pipeline tests on main, this PR fixes them.
10-17-2022 15:36:13
10-17-2022 15:36:13
transformers
19,678
closed
:rotating_light: :rotating_light: :rotating_light: [Breaking change] Deformable DETR intermediate representations
# What does this PR do? - Fixes naturally the `object-detection` pipeline. - Moves from `[n_decoders, batch_size, ...]` to `[batch_size, n_decoders, ...]` instead. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-17-2022 14:23:40
10-17-2022 14:23:40
_The documentation is not available anymore as the PR was closed or merged._<|||||>``` ==================================================================================================== 73 passed, 8 skipped, 49 warnings in 98.91s (0:01:38) ===================================================================================================== ``` Is that OK (I think the skipped are just unsupported features like resize_tokens)<|||||>Ok!<|||||>@sgugger Since this is breaking, I'll let you merge when it's more convenient.<|||||>Let's go :-)
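A minimal migration sketch (a generic illustration with made-up shapes, not the PR's actual diff): code written against the old `[n_decoders, batch_size, ...]` layout of the intermediate decoder outputs can adapt to the new `[batch_size, n_decoders, ...]` layout with a single transpose.

```
# Illustrative shapes only; the real tensors come from the Deformable DETR outputs.
import torch

batch_size, n_decoders, num_queries, hidden = 2, 6, 300, 256
new_layout = torch.randn(batch_size, n_decoders, num_queries, hidden)

# Recover the old decoder-first layout if downstream code still expects it.
old_layout = new_layout.transpose(0, 1).contiguous()
print(old_layout.shape)  # torch.Size([6, 2, 300, 256])
```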
transformers
19,677
closed
object-detection instead of object_detection
The object detection sample in the README.md does not work out of the box, as it tries to initialize an `object_detection` pipeline; however, the task is called `object-detection` (a corrected snippet is shown below). Not sure if this is intentional, so that a new user has to overcome a minimal sanity check?! 🤣 @sgugger
10-17-2022 14:07:31
10-17-2022 14:07:31
_The documentation is not available anymore as the PR was closed or merged._
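A corrected version of the snippet the issue points at, as a minimal sketch (the use of the task's default checkpoint and the image URL are illustrative assumptions, not taken from the README):

```
from transformers import pipeline

detector = pipeline(task="object-detection")  # note the hyphen, not an underscore
predictions = detector("http://images.cocodataset.org/val2017/000000039769.jpg")
print(predictions[:2])  # list of dicts with label, score, and box
```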
transformers
19,676
closed
[TYPO] Update perf_train_gpu_one.mdx
# What does this PR do? Fixes typo. negligable -> negligible ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @osanseviero
10-17-2022 13:58:24
10-17-2022 13:58:24
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,675
closed
Update ESM checkpoints to point to `facebook/`
ESM checkpoints were initially uploaded to my account, but have now been moved to the correct location. This PR updates all references to point to their new home under `facebook/` (an illustrative loading snippet is shown below).
10-17-2022 13:55:08
10-17-2022 13:55:08
_The documentation is not available anymore as the PR was closed or merged._
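An illustrative loading snippet (the checkpoint name below is an assumption chosen for illustration, not one listed in this PR; the point is only that ESM checkpoints now live under the `facebook/` namespace):

```
from transformers import AutoModel, AutoTokenizer

checkpoint = "facebook/esm2_t6_8M_UR50D"  # example ESM checkpoint under facebook/
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)
print(model.config.model_type)  # "esm"
```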
transformers
19,674
closed
Add decorator to flaky accelerate test
# What does this PR do? Adds the `is_flaky` decorator to the `test_run_squad_no_trainer` test, which occasionally fails on CI runs for reasons independent of the changes in the PR, e.g.: * https://app.circleci.com/pipelines/github/huggingface/transformers/49621/workflows/14c25312-58a5-4b0b-8b41-6c5bec668043/jobs/593213 * https://app.circleci.com/pipelines/gh/huggingface/transformers/49224/workflows/fbae76ab-9259-4695-bb06-475357172587/jobs/589262 (A usage sketch of the decorator is shown below.) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
10-17-2022 13:49:07
10-17-2022 13:49:07
_The documentation is not available anymore as the PR was closed or merged._<|||||>Great! Feel free to merge and open an issue to track it :)<|||||>I've opened an issue here: https://github.com/huggingface/transformers/issues/19733
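A usage sketch of the decorator (assuming it is exposed as `transformers.testing_utils.is_flaky`; the test body here is illustrative, not the actual accelerate example test):

```
import random
import unittest

from transformers.testing_utils import is_flaky


class ExampleTests(unittest.TestCase):
    @is_flaky()  # re-runs the test a few times before reporting a failure
    def test_occasionally_fails(self):
        self.assertGreater(random.random(), 0.05)
```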
transformers
19,673
closed
[WIP] Adding logprobs to `text-generation` pipeline.
# What does this PR do? Given the discussions about adding this parameter, here is a first (non-working) draft. - `text-generation` only; we would also need to support `text2text-generation`, which unfortunately is a different pipeline. - This gets quite complex in the case of `num_return_sequences` + `num_beams`, where we need to keep track of all the dimensions. There also seems to be something wrong with `output_scores` + `num_beams`. - This is IMO quite misaligned with the purpose of pipelines. Pipelines are supposed to be used by non-ML practitioners, so understanding what tokens are is beyond the purpose of the pipelines. I'm creating this PR just because multiple discussions have been started asking for this feature. - The current code becomes very bloated IMO (and it does not yet handle all the different params). `generate` scores have already generated a lot of discussion: - https://github.com/huggingface/transformers/issues/17424 - https://github.com/huggingface/transformers/issues/18942 To be clear, it seems everyone involved in these discussions wants parity with OpenAI's GPT-3 and Cohere. Since the API is powered by the pipelines, we should implement it here to have the functionality within the API. https://huggingface.co/bigscience/bloom/discussions/89#6322b153a418a789a23f3380 is the discussion for Bloom (currently custom code, so not concerned, but still to be considered). Other discussions are internal/email but follow roughly the same pattern as far as I could read. The purpose of this PR is to gauge interest for this feature and/or align with other APIs. Please star this PR if you are interested in this feature, and please do comment if other/more important features are interesting to you. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
@spolu @sgugger @gante (Maybe we have some other comments on how to handle TF too.)
10-17-2022 13:18:21
10-17-2022 13:18:21
This looks great. You may want to cap the logprob argument to a small-ish number (maybe 12?) to avoid giving people the ability to generate massive amounts of data from short queries. I think 12 is very fair and covers most use cases.<|||||>Thanks for the PR, but I am very much not in favor of this change. We have developed [tools](https://huggingface.co/docs/transformers/add_new_pipeline#share-your-pipeline-on-the-hub) so that users can put their preferred pipeline code in the repo of their model specifically for use cases like this one. I think those tools should be leveraged instead of changing the base pipeline.<|||||>To provide some color, the lack of logprobs for the inference API text generation makes it hardly usable for any non-trivial use case. I also tried deploying a model on inference endpoints, and it was not obvious to me where to add code to extract the logprobs.<|||||>From a TF standpoint, whatever works for PT should also work there -- their interface should be the same. Regarding the root issue, it seems that we have to decide whether we want to optimize the default pipelines for a simple interface or for the inference API. I'm in favor of the first, as we have a myriad of solutions for non-trivial use cases: 1. As @sgugger mentioned, custom pipelines can be defined. 2. If the user is writing code, replacing the pipeline with a `.generate()` call is also simple (and unlocks several other outputs -- a minimal sketch is shown below). 3. If the issue is the GUI for non-trivial use cases, a custom Space can be built (we could create a template where only the model ID needs to be changed, if that would make things easier).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
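A minimal sketch of option 2 above (not this PR's implementation; the checkpoint and generation settings are illustrative): recovering per-token log-probabilities directly from `generate` instead of the pipeline.

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=5,
    do_sample=False,
    return_dict_in_generate=True,
    output_scores=True,
)

# `scores` holds one logits tensor per generated step; log-softmax turns them into log-probs.
generated = outputs.sequences[0, inputs["input_ids"].shape[1]:]
for step, token_id in enumerate(generated.tolist()):
    log_probs = torch.log_softmax(outputs.scores[step][0], dim=-1)
    print(tokenizer.decode([token_id]), log_probs[token_id].item())
```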
transformers
19,672
closed
'WhisperProcessor' object has no attribute 'as_target_processor'
### System Info - `transformers` version: 4.23.1 - Platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.11.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm trying to train a Whisper model using the WhisperProcessor. As written in the doc of the `__call__` method of the WhisperProcessor, I should be able to use the context manager `processor.as_target_processor()`, but it seems it doesn't exist: "If used in the context ~WhisperProcessor.as_target_processor this method forwards all its arguments to WhisperTokenizer’s [call()]" Steps to reproduce the bug: ``` target_text = 'this is a test text' processor = WhisperProcessor.from_pretrained('openai/whisper-large') with processor.as_target_processor(): targets = processor(target_text).input_ids ``` ### Expected behavior The `__call__` method of a WhisperProcessor instance, used in the `as_target_processor` context, should give the result of the WhisperTokenizer `__call__`.
10-17-2022 13:10:33
10-17-2022 13:10:33
Hi, We'll update that code snippet as we've recently deprecated the use of `as_target_processor`, and new processors like `WhisperProcessor` don't implement it anymore. See #18325 for details. You can replace ``` text = "hello world" with processor.as_target_processor(): encoded_labels = processor(text, padding=True) ``` with ``` encoded_labels = processor(text=text).input_ids ``` (a small end-to-end sketch is shown below).<|||||>Also cc'ing @ArthurZucker for updating the docs.<|||||>Hello, thank you for the clarification!<|||||>Another thing: it seems that the `pad` method does not exist for `WhisperProcessor`?<|||||>I think we can add it, similar to [this method](https://github.com/huggingface/transformers/blob/3b3024da70a7ada6599390c5b3e1a721c9a4aa4c/src/transformers/models/wav2vec2/processing_wav2vec2.py#L104-L132). For now you can do `processor.tokenizer.pad(...)`.<|||||>Nice catch! Will update the doc soon.
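A small end-to-end sketch of the suggestions above (the example sentences are illustrative; it assumes the processor forwards `text=` to the tokenizer, as described in the reply): encode label text without `as_target_processor` and pad through the tokenizer.

```
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")

texts = ["this is a test text", "a second, longer test sentence"]
label_features = [{"input_ids": processor(text=t).input_ids} for t in texts]

# The processor itself has no `pad` method yet, so pad the label ids through the tokenizer.
labels_batch = processor.tokenizer.pad(label_features, return_tensors="pt")
print(labels_batch["input_ids"].shape)
```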
transformers
19,671
closed
check decoder_inputs_embeds is None before shifting labels
# What does this PR do? This is related to #19157, which pointed out that a few models do not check whether `decoder_inputs_embeds` is None, which is inconsistent with other models. Since this is not the first time this has been brought up, let's solve it all at once, unless there is a particular reason not to? (A sketch of the intended pattern is shown below.)
10-17-2022 12:46:25
10-17-2022 12:46:25
_The documentation is not available anymore as the PR was closed or merged._
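A simplified sketch of the guard this PR is about (illustrative pseudocode made runnable, not a copy of any particular model file): `decoder_input_ids` should only be derived from the labels when neither `decoder_input_ids` nor `decoder_inputs_embeds` was provided by the caller.

```
# `shift_tokens_right` is passed in to keep the sketch self-contained; in the models it is a
# module-level helper that prepends the decoder start token and shifts the labels by one.
def prepare_decoder_inputs(labels, decoder_input_ids, decoder_inputs_embeds, shift_tokens_right):
    if labels is not None:
        if decoder_input_ids is None and decoder_inputs_embeds is None:
            decoder_input_ids = shift_tokens_right(labels)
    return decoder_input_ids, decoder_inputs_embeds
```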
transformers
19,670
closed
[WHISPER] fix tests after updating the max_length
# What does this PR do? Fixes the `max_length` used in the generate function in Whisper. The default `max_length` argument was changed in the `config.json`, as it is more convenient for people who either don't know that the argument exists or don't know the value to set.
10-17-2022 12:44:48
10-17-2022 12:44:48
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @amyeroberts for running the tests locally; both TF and PT are all green.<|||||>@ArthurZucker It would be nice to explain the reason for the PR a bit, not just what it did 🙏. It will help others to understand, and it also helps with tracking (if we need to check this PR for some reason in the future).<|||||>@sgugger This is the Hub PR @ArthurZucker mentioned (changing `max_length` to 448 in order to fix 2 Whisper pipeline tests): https://huggingface.co/openai/whisper-large/commit/baca495426386f789702e7f10edccf761c5f5592 After that change, this PR is required to pass some TF Whisper tests (a sketch of the equivalent local setting is shown below).
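A small sketch of the equivalent local setting (the checkpoint name is illustrative; per the Hub change referenced above, `max_length` is now set to 448 in the checkpoint's `config.json`):

```
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
print(model.config.max_length)   # picked up from the checkpoint's config.json

model.config.max_length = 448    # or pass max_length=448 directly to model.generate(...)
```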
transformers
19,669
closed
Fix code examples of DETR and YOLOS
# What does this PR do? This is a follow-up PR to #19205. YOLOS and DETR share the same postprocessing, hence I've added `post_process_object_detection` to YOLOS, leveraging `# Copied from` statements. It also improves the code example of DETR and adds a better one for YOLOS. (An illustrative usage sketch of the shared postprocessing is shown below.)
10-17-2022 12:23:40
10-17-2022 12:23:40
_The documentation is not available anymore as the PR was closed or merged._
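An illustrative usage sketch of the shared postprocessing API (the checkpoint, threshold, and image URL are example choices, not necessarily the exact code example this PR adds to the docs):

```
import requests
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50")

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = feature_extractor.post_process_object_detection(
    outputs, threshold=0.9, target_sizes=target_sizes
)[0]
for score, label in zip(results["scores"], results["labels"]):
    print(model.config.id2label[label.item()], round(score.item(), 3))
```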
transformers
19,668
closed
fix test whisper with new max length
# What does this PR do? Fixes pipeline test after `max_length` update @ydshieh
10-17-2022 12:01:07
10-17-2022 12:01:07
Could you link the Hub PR that is related? <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>It seems the checkpoint involved is `openai/whisper-tiny.en`, which is also used in other test methods. Could you confirm the other tests also pass after that Hub PR?<|||||>The `pytorch` tests all pass; 3 `TF` tests fail for now. Only `config.max_length` was changed, so it should not have much impact.
transformers
19,667
closed
Swin2sr
# What does this PR do? Fixes #19568 ## Who can review? @NielsRogge
10-17-2022 10:50:48
10-17-2022 10:50:48
WIP. Found the equivalents for PatchEmb, PatchMerging, SwinV2Stage.<|||||>Hi @venkat-natchi, I actually played around with the Swin2SR model this weekend and got a [working implementation](https://github.com/NielsRogge/transformers/tree/add_swin2sr/src/transformers/models/swin2sr) already. I see you're still at the start of the process, so would you be interested in working on another model [from this list](https://github.com/huggingface/transformers/issues?q=is%3Aissue+is%3Aopen+label%3A%22New+model%22)? <|||||>Sure, makes sense. I was actually going through the papers over the weekend to get a high-level understanding of it. I thought of picking [this](https://github.com/huggingface/transformers/issues/19631) one up now, but it seems to be closed. Do you know why? If possible, could you point me to a model that has not been taken up so far? Thanks. <|||||>Do you have an email address? I'll set up a Slack channel so we can discuss :)<|||||>Sure. Mine is [email protected]<|||||>Hey, @venkat-natchi, would you like to collaborate on Adding EDSR to HuggingFace? I closed that issue because I felt it was irrelevant.<|||||>> Hey, @venkat-natchi, would you like to collaborate on Adding EDSR to HuggingFace? I closed that issue because I felt it was irrelevant. Sure, I am interested. <|||||>That's great! Then I will reopen that issue and tag you there.<|||||>I'll close this PR as you'll work on a new model.