repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 20,669 | closed | Progress Bar for large model loading | ### Feature request
Add progress bars for large model loading from cache files.
### Motivation
Most of the time, model loading time will be dominated by download speed. However, for very large models we will often first download the checkpoints, then during runtime simply load them from cache. For models like Bloom however, it can take upwards of 100 minutes to load the model into RAM. During this time, there is no feedback to the user, even with verbosity set to debug. This can be frustrating as the only way to check progress is by checking system utilisation through `top`.
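A hypothetical sketch of the kind of feedback being requested here, wrapping whatever loop loads the cached checkpoint shards with `tqdm` (this is an illustration, not the actual `transformers` implementation):
```python
# Illustrative only: show a progress bar while shards are loaded from the local cache.
import torch
from tqdm.auto import tqdm

def load_sharded_state_dict(model, shard_paths):
    for shard_path in tqdm(shard_paths, desc="Loading checkpoint shards"):
        state_dict = torch.load(shard_path, map_location="cpu")
        model.load_state_dict(state_dict, strict=False)
    return model
```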
### Your contribution
Happy to help if I am pointed to the relevant file or files! I don't think the progress bar would need to be extremely accurate, just some indication that something is happening. | 12-08-2022 10:36:50 | 12-08-2022 10:36:50 | Sounds like a reasonable request. This should be done by the PR linked above if you want to try it!<|||||>I might be using this wrong, but I've taken the following steps and don't see any changes 🤔 :
```
from transformers import BloomForCausalLM
import torch
model = BloomForCausalLM.from_pretrained('bigscience/bloom-7b1', cache_dir='bloom-7b1-ckpt', torch_dtype=torch.float16)
```
after cloning `transformers`, running checkout on `large_model_progress`, then `pip install -e .`
Is there some edge case with the use of `cache_dir`?<|||||>The PR has not been merged into the main branch yet, so you need to checkout the branch of the PR before trying.<|||||>I had built the correct branch, the issue was me not being patient enough, as the progress bar did eventually appear.
Seems there is some processing going on between downloading and actually loading the shards. I am not sure what is being done, but the PR works ~
LGTM 🤗 |
transformers | 20,668 | closed | Add AltCLIP | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-08-2022 08:50:07 | 12-08-2022 08:50:07 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20668). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,667 | closed | Albert resource | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # 20055
## Before submitting
- This PR adds resources on ALBERT model based on the materials outlined in #20055.
## Who can review?
@stevhliu
Co-authored by: @Adia Wu
| 12-08-2022 06:55:04 | 12-08-2022 06:55:04 | Closing this issue and reopening the issue at [Issue 20697](https://github.com/huggingface/transformers/pull/20697). |
transformers | 20,666 | closed | Generating with Flax fails when using padding | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.10.133+-x86_64-with-glibc2.27
- Python version: 3.8.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu116 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.2 (cpu)
- Jax version: 0.3.25
- JaxLib version: 0.3.25
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@patil-suraj, @patrickvonplaten, @LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Generation with GPT-2 and OPT doesn't work when using padding token.
Specifically, the issues are
1. In GPT-2, using the built-in `<|endoftext|>` token as the pad token works with padding, but using a customized token `<|pad|>` doesn't work: the generation only repeats `!` (see the inline sketch below), e.g., `<|pad|> My cat is cute!!!!!!!!!!!!!!!`.
2. In GPT-2, although pad_token_id is optional in the generate() function, one has to provide pad_token_id=tokenizer.pad_token_id, otherwise an error is raised.
3. In OPT, even using builtin <pad> token as padding token doesn't work, the generation only repeats `<s>`, e.g., `<pad></s>My cat is cute<s><s><s><s><s><s><s><s><s><s><s><s><s><s>`.
The colab to reproduce the problems 1-3 is [here](https://colab.research.google.com/drive/1pcwTRU3snLjz8wTJx_Z5t4Z7IXnxHnDI?usp=sharing).
A related issue to problem 2 is #18884.
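For reference, a rough inline version of problem 1 (condensed and adapted from the Colab; the lengths and decoding options here are illustrative):
```python
# Sketch of problem 1: a customized <|pad|> token (without resizing the embeddings)
# produces degenerate output even though an attention mask is passed.
from transformers import AutoTokenizer, FlaxGPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"pad_token": "<|pad|>"})
model = FlaxGPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer(["My cat is cute"], padding="max_length", max_length=8, return_tensors="np")
out = model.generate(**inputs, max_length=20, pad_token_id=tokenizer.pad_token_id)
print(tokenizer.batch_decode(out.sequences, skip_special_tokens=False))
```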
### Expected behavior
Expect the generation works with padding. E.g., `<pad></s>My cat is super cute and I love her so much. I love her so much. I love`. | 12-08-2022 05:49:57 | 12-08-2022 05:49:57 | cc @sanchit-gandhi and @gante <|||||>Hey @lhao499!
> using the built-in `<|endoftext|>` token as pad token works with padding, but using a customized token `<|pad|>` doesn't work
Note that GPT2 was not pre-trained with padding tokens. We can use the `<|endoftext|>` token as a substitute, but really we should specify an [attention mask](https://huggingface.co/transformers/glossary.html#attention-mask) to the model so that it doesn't attend to padded indices, therefore ignoring the value of the token. It's good to see that you've done this in your Colab for the GPT2 examples! (`model.generate(**inputs, ...)`)
In using a customised token (such as `<|pad|>`), you are providing the model with a format entirely different to that seen during pre-training. We cannot expect our model to understand this different format given that it has never seen it before. This is likely the reason for the unexpected behaviour when using `<|pad|>` as the padding token.
As for the OPT generation, it looks like this is fixed by setting the padding side: https://colab.research.google.com/drive/1pcwTRU3snLjz8wTJx_Z5t4Z7IXnxHnDI?usp=sharing<|||||>Hi Sanchit, when attention mask is provided, it's expected that customized token won't be attended at all, so effectively there is no differences between pretraining and inference formats?
For the OPT, it looks like you linked my Colab, where the OPT generation does not work. <|||||>Hey @lhao499! Sorry for the delay in getting back to you. I checked against PyTorch GPT-2, and we see the same phenomenon here, so we can exclude it as being a Flax specific issue: https://colab.research.google.com/drive/1qK2t8YNKLnX-oednqiVRDSFkFs5DPnY4#scrollTo=eGKOy0NbzCAW
cc'ing @gante here who might be able to provide some more insight!
Context: when we pass an attention mask for auto-regressive generation, it's expected that any padded tokens won't be attended to. Meaning we should be able to specify any arbitrary token as our pad token?
For GPT-2, we see that generation works when the pad token is set to the default pad token. But it breaks when we set it to some arbitrary pad token.<|||||>Hey @lhao499 @sanchit-gandhi 👋
In practice, adding new tokens and using the model straight away has very unpredictable results that depend on the framework used. For instance, the current version of JAX has the problem flagged above. However, if you use an older version of JAX, everything seems to work fine 🤷 If you try to do the same in TF CPU it might work, whereas on TF GPU it will crash unless you explicitly expand the vocabulary. PT also works if you expand the vocabulary, although with bad results.
Adding a new token corresponds to initializing a random entry in the embedding matrix, which has unforeseeable consequences. I highly advise against it unless you fine-tune the model afterward.
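If one really does need a brand-new pad token, the usual recipe is to also resize the embedding matrix and then fine-tune so the new row becomes meaningful. Roughly (PyTorch shown for brevity; illustrative only):
```python
from transformers import AutoTokenizer, GPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"pad_token": "<|pad|>"})
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.resize_token_embeddings(len(tokenizer))  # the new embedding row is randomly initialized
# ... fine-tune here so the model actually learns the new token ...
```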
See [this colab](https://colab.research.google.com/drive/1Qly3125Q2happG1dGqdOpl1Q8MVNP_Nq?usp=sharing) for examples -- if you really want to go down this path, you might get away with an older Jax version ;)
P.S.: this Jax finding was a happy coincidence, my local desktop env had an older version Jax version which happened to work 👀 That's how unreliable this strategy is.<|||||>Thanks @sanchit-gandhi @gante for looking into the issues.
Hi @gante,
It is a surprising finding that adding new tokens has such unpredictable results depending on the framework and hardware. Thanks for trying different frameworks.
Resizing the embedding matrix and fine-tuning the model after adding new padding tokens sounds like a good strategy. It would be great if the transformers library could avoid these strange results and/or raise an error.
The attention mask still doesn't work as expected though. For instance, in OPT, even using builtin <pad> padding token only leads to repeating`<s>`, e.g., `<pad></s>My cat is cute<s><s><s><s><s><s><s><s><s><s><s><s><s><s>`, as shown in the last block of https://colab.research.google.com/drive/1pcwTRU3snLjz8wTJx_Z5t4Z7IXnxHnDI?usp=sharing<|||||>👍 will have a look at OPT<|||||>@lhao499 there was indeed a problem with the attention masking in OPT, causing all but the longest input to fail. #21150 fixes it :) |
transformers | 20,665 | closed | Problem with how beam_indices are recorded in beam_search | ### System Info
some beam_indices become 0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
if return_dict_in_generate and output_scores:
    beam_indices = tuple((beam_indices[beam_idx[i]] + (beam_idx[i],) for i in range(len(beam_indices))))
```
### Expected behavior
```python
if return_dict_in_generate and output_scores:
    beam_indices = tuple((beam_indices[i] + (beam_idx[i],) for i in range(len(beam_indices))))
```
| 12-08-2022 03:29:43 | 12-08-2022 03:29:43 | Please follow the template of the issue. No one can help you without a description of the problem and a clear reproducer.
transformers | 20,664 | closed | [TF] Save finetuned-model without huggingface-hub login | ### Feature request
[TF] Save finetuned-model in local without huggingface-hub login
### Motivation
In TF, we need to log in to the Hub in order to save a fine-tuned model:
```
from transformers.keras_callbacks import PushToHubCallback
push_to_hub_callback = PushToHubCallback(
output_dir="my_awesome_model",
tokenizer=tokenizer,
)
```
But I don't want to sync to the Hub yet. First, I want to save my models locally and test them.
I checked that this works in PyTorch, but it doesn't in TensorFlow.
### Your contribution
I think we need to add an argument that controls whether to log in or not:
https://github.com/huggingface/transformers/blob/0526a075c567d7508205fe6054310f3d132b3227/src/transformers/keras_callbacks.py#L267 | 12-08-2022 02:16:19 | 12-08-2022 02:16:19 | cc @Rocketknight1 <|||||>I also find the problem that finetuned-model doesn't sync with hub when give only 1 epoch or at the last epoch<|||||>Hi @goreng2, you're correct that right now the callback expects a HF login. This is because that callback is designed for uploading models to the hub. If you just want to save the model locally, you can try either:
1) The [ModelCheckpoint callback](https://keras.io/api/callbacks/model_checkpoint/) in Keras to save the weights every epoch if you just want to save/resume training.
2) The `model.save_pretrained()` method if you want to save the model locally after training and reload it with `from_pretrained` afterwards.
Are you specifically interested in saving the model like `save_pretrained()` every epoch? We could add that, but let us know if you think that'd be useful for you first, or the solutions above are enough!<|||||>Hi @Rocketknight1 ! Thanks for your comment.
I want to use Hugging Face's `pipeline` API for inference. I think `pipeline` can perhaps only accept an `.h5` model.
When I tried the `ModelCheckpoint` callback, it returned `ckpt` files, which can't be used with `pipeline`.
To convert `ckpt` to `.h5`, I would need to write out the model architecture (in my case `ELECTRA`), but that's quite difficult and complex for me 😥
I tried to convert `ckpt` to `pth` (PyTorch), but it doesn't work... Maybe [this code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/electra/convert_electra_original_tf_checkpoint_to_pytorch.py) only works for converting TF1 checkpoints to PyTorch.
When I tried `model.save('my_model.h5')`, an error was raised. Maybe the format doesn't match.
I don't test `model.save_pretrained()` yet, It returns `.h5`?<|||||>Ah, yes. The `.ckpt` files from `ModelCheckpoint` are only useful for saving/resuming training, and you won't be able to use them in pipelines.
The way TF models on HuggingFace work is that they're built on top of Keras models. `model.save()` and `ModelCheckpoint` are both part of Keras. However, if you want to save the model to load with other HuggingFace tools, you should use `save_pretrained()`. This is our method and doesn't exist in base Keras models. It saves the model as `.h5`, but also adds a `config.json` that will allow the `pipeline` API and other methods like `from_pretrained` to initialize the model correctly.
Try just doing this:
```
model.save_pretrained("my_model")
pipe = pipeline("text-classification", model="my_model")
```
Though of course, make sure to change `text-classification` to the task you want to do!<|||||>@Rocketknight1
Hi, Thanks for your answer 😀
I tested it and it worked!
I got `tf_model.h5` and `config.json`, and successfully ran `pipeline`!
But it's not perfect for inference, because `model.save_pretrained("output_folder")` returns only the 2 files I mentioned above.
`tokenizer.json`, `tokenizer_config.json` and so on are also needed for inference.
So, How about make `model.save_pretrained("output_folder")` return other files about tokenizer?<|||||>@goreng2 Sorry for the delay! Yes, you will need to save the tokenizer to the same directory with `tokenizer.save_pretrained()` in order to load the whole directory with a pipeline. |
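Putting the whole thread together, a local save-and-reload round trip might look roughly like this (assuming `model` and `tokenizer` are the fine-tuned TF model and its tokenizer; the directory name and task are placeholders):
```python
from transformers import pipeline

model.save_pretrained("my_awesome_model")       # writes tf_model.h5 + config.json
tokenizer.save_pretrained("my_awesome_model")   # writes the tokenizer files next to them
pipe = pipeline("text-classification", model="my_awesome_model")
print(pipe("this works locally, without a Hub login"))
```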
transformers | 20,663 | closed | Adding Pix2Struct to transformers | ### Model description
Image-to-Text encoder decoder presented in: https://arxiv.org/abs/2210.03347
It is in spirit similar to the [Donut](https://huggingface.co/docs/transformers/model_doc/donut) model that has been added to transformers not so long ago, but it has outperformed it on pretty much all benchmarks (thanks to different pre-training and slightly different encoders and decoders, more weights).
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Github repo: https://github.com/google-research/pix2struct
Pretrained checkpoints are available on the repo, along with the code to fine-tune the model. The only thing is that everything is in Jax, so might take a while to convert | 12-07-2022 22:47:03 | 12-07-2022 22:47:03 | Really cool!
Related (sorry first read Pix2Struct as Pix2Seq): I have a working implementation of Pix2Seq: https://github.com/NielsRogge/transformers/tree/add_pix2seq. It even works with the generate method to autoregressively generate bounding boxes. However I didn't add it yet as training was quite cumbersome with a lot of hacks. Another reason why I didn't add it yet is because it's slow, you generate one token at a time, whereas object detection often has a real-time requirement<|||||>Yeah, interested to help out if there's someone working on the integration on Pix2Struct!
Re. Pix2Seq, sorry to hear it's hard to integrate. I actually think that Pix2Seq is about to become a lot more relevant for Document Processing because of those image-to-text models (like Donut and Pix2Struct): without a bounding box for the prediction (since you only get text as output), it becomes a lot harder to QA the model results since it takes a lot more time to map a string to its original position on the document than it is to check the position of a bounding box. Exciting times ahead :) <|||||>I'll cc @younesbelkada and @ArthurZucker here as they have extensive experience with the T5x code base, on which Pix2Struct is based.
Original checkpoints can be found here: https://console.cloud.google.com/storage/browser/pix2struct-data?pageState=(%22StorageObjectListTable%22:(%22f%22:%22%255B%255D%22))&prefix=&forceOnObjectsSortingFiltering=false<|||||>
> Original checkpoints can be found here: https://console.cloud.google.com/storage/browser/pix2struct-data?pageState=(%22StorageObjectListTable%22:(%22f%22:%22%255B%255D%22))&prefix=&forceOnObjectsSortingFiltering=false
@NielsRogge These are checkpoints for the ai2d task , correct? I do not see checkpoints for refexp or other tasks in the shared GS folder.
I am working on reproducing the test results in the original repo and hitting a string of build errors. Working thought them one at a time. Will share results if/when I get through all. Opened a few issues in the repo and hoping to hear back from the authors as well.
<|||||>> Yeah, interested to help out if there's someone working on the integration on Pix2Struct!
>
> Re. Pix2Seq, sorry to hear it's hard to integrate. I actually think that Pix2Seq is about to become a lot more relevant for Document Processing because of those image-to-text models (like Donut and Pix2Struct): without a bounding box for the prediction (since you only get text as output), it becomes a lot harder to QA the model results since it takes a lot more time to map a string to its original position on the document than it is to check the position of a bounding box. Exciting times ahead :)
My understanding is that some of the pix2struct tasks use bounding boxes. For example refexp uses the rico dataset (uibert extension), which includes bounding boxes for UI objects.
One potential way to automate QA for UI tasks is to take bounding boxes from a test set, feed to the `Widget Captioning` task and then use the captions as input to the `refexp` task. Ideally we will end up with the original bounding boxes.
A{test set ground truth bounding boxes} -> Widget Captioning -> refexp -> B{predicted bounding boxes)
A=B ?
<|||||>Not sure if this is the right place for this question, but let me try. Is anyone working on fine tuning multi-modal transformers that are already in the hugging face hub on UI tasks? Based on [this paper](https://paperswithcode.com/paper/grounding-natural-language-instructions-can), it seems like LayoutML might be a good candidate to train on Rico, RicoSCA or UIBert.<|||||>Quick update. I am testing the idea of using Document Understanding models like Donut on UI tasks that pix2struct targets. Space, model, and datasets are now on HF Hub. Colab notebook and other links can be found in the space page below:
https://huggingface.co/spaces/ivelin/ui-refexp
<|||||>Hi there, I have a working implementation in https://github.com/huggingface/transformers/pull/21400 will keep you posted.<|||||>Amazing! Can't wait to try it out<|||||>> Hi there, I have a working implementation in #21400 will keep you posted.
Awesome. Can't wait to compare performance to Donut.<|||||>The model has been merged! 🎉
You can find all the weights here: https://huggingface.co/models?search=pix2struct
and a fine-tuning notebook here: https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_pix2struct.ipynb
Let us know if you run into any issues! <|||||>Amazing work! Thank you for sharing, @younesbelkada .
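For anyone trying it out, a minimal inference sketch with one of the released checkpoints might look like this (the checkpoint name, image URL and exact preprocessing here are assumptions based on the weights and notebook linked above):
```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

processor = Pix2StructProcessor.from_pretrained("google/pix2struct-docvqa-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-docvqa-base")

image = Image.open(requests.get("https://example.com/invoice.png", stream=True).raw)  # placeholder URL
inputs = processor(images=image, text="What is the total amount?", return_tensors="pt")
prediction = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(prediction[0], skip_special_tokens=True))
```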
In the list of pre-trained models, I did not see one for the RefExp task. Do you know if someone is working on that already?
<|||||>Ah indeed I forgot to add them, will add them and post a message here once it's done<|||||>hey @younesbelkada , thank you for great work, i'm not sure if this is a proper place to ask, but i haven't find a better one.
Is it possible to use pix2struct for widget captioning task, with bounding box as an additional input?<|||||>@Misterion777 [the forums](https://discuss.huggingface.co/) would be the right place for this question :) <|||||>> @Misterion777 [the forums](https://discuss.huggingface.co/) would be the right place for this question :)
yeah, I just wanted to contact directly the author of the HF implementation, but I'll post there, thanks! :)<|||||>> Ah indeed I forgot to add them, will add them and post a message here once it's done
@younesbelkada I am not seeing the RefExp, is there a ticket where I can see the progress?<|||||>It is probably not being worked on, feel free to open a PR 🤗 <|||||>Hi @igortoliveira ,
I still didn't had time to look at it, for adding the support you just need to add the bounding box support for Pix2Struct and the conversion script should stay the same |
transformers | 20,662 | closed | Clarify return_tensor and return_text parameters | This PR fixes #20615 by clarifying that setting `return_tensors=True` will not return the decoded text, and you can't get a combination of `generated_text` and `generated_token_ids`. | 12-07-2022 22:08:36 | 12-07-2022 22:08:36 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> LGTM thanks! @Narsil maybe a clear `ValueError` could be raised when sanitizing parameters when both `return_text` and `return_tensors` are `True`?
https://github.com/huggingface/transformers/pull/20729 |
transformers | 20,661 | closed | Fix load from PT-formatted checkpoint in composite TF models | # What does this PR do?
This PR fixes the slow test `TFViT2GPT2EncoderDecoderModelTest::test_real_model_save_load_from_pretrained` which was broken by the new `safetensors` integration. The main problem was that this model loads a GPT-2 as its decoder, which has a safetensors checkpoint formatted in a PyTorch-like format, and that model was loaded with wrong weight names.
Moving the variable scope code before we try to load PyTorch-like checkpoints fixes the issue. | 12-07-2022 19:28:05 | 12-07-2022 19:28:05 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 20,660 | closed | Add BackboneMixin | # What does this PR do?
Add `BackboneMixin` with a method `forward_with_filtered_kwargs`. | 12-07-2022 18:48:00 | 12-07-2022 18:48:00 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Oh by the way, this is definition of a mixin in Python and the base class should be called `BackboneMixin` :-) |
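A rough illustration of the idea behind `forward_with_filtered_kwargs` (a sketch, not the actual implementation): the mixin inspects the concrete backbone's `forward` signature and drops any keyword arguments it does not accept.
```python
import inspect

class BackboneMixin:
    def forward_with_filtered_kwargs(self, *args, **kwargs):
        # Keep only the kwargs that the concrete backbone's forward() actually declares.
        signature = inspect.signature(self.forward).parameters
        filtered_kwargs = {k: v for k, v in kwargs.items() if k in signature}
        return self.forward(*args, **filtered_kwargs)
```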
transformers | 20,659 | closed | Replicating SQuAD results on T5 | ### System Info
Hi, I'm trying to replicate the SQuAD experiment in the [T5 paper](https://arxiv.org/abs/1910.10683). I'm following the paper's recommended hyperparameters for finetuning:
* AdaFactor optimizer
* Batch size 128 (I'm doing 16 per GPU on 8xRTX 3090 GPUs)
* 2^18 steps for fine-tuning (which is around 300 epochs)
* Max sequence length 512
* Learning rate 0.001
I'm running the following:
```run_seq2seq_qa.py --model_name_or_path t5-base --dataset_name squad --context_column context --question_column question --answer_column answers --do_train --do_eval --per_device_train_batch_size 16 --optim adafactor --learning_rate 0.001 --num_train_epochs 300 --evaluation_strategy epoch --max_seq_length 512 --predict_with_generate --output_dir /tmp/t5_squad/ --overwrite_output_dir```
After 4 epochs, the validation Exact Match score is 79.054 and F1 is 86.895. After 4 epochs, the model starts to overfit and the performance decreases. However, the paper reports 85.44 EM and 92.08 F1 score on T5-base (Table 14).
Has anyone been able to reproduce the official paper results or am I missing anything?
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
See above
### Expected behavior
Should get around 85.44 EM and 92.08 F1 score on this task. | 12-07-2022 18:03:22 | 12-07-2022 18:03:22 | Please use the [forums](https://discuss.huggingface.co/) for questions like this as we keep issues for bugs and feature requests only.
Note that we did not try to replicate the result of this paper with this script :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,658 | closed | LayoutLM Cuda Memory Error | ### System Info
torch==1.13.0+cu116
transformers==4.24.0
### Who can help?
@philschmid
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to train LayoutLMv1 following this guide https://www.philschmid.de/fine-tuning-layoutlm
However, when I execute `trainer.train()` I get this error:
```python
RuntimeError Traceback (most recent call last)
<command-1206590250403017> in <module>
----> 1 trainer.train()
/databricks/python/lib/python3.8/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1499 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1500 )
-> 1501 return inner_training_loop(
1502 args=args,
1503 resume_from_checkpoint=resume_from_checkpoint,
/databricks/python/lib/python3.8/site-packages/transformers/trainer.py in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1747 tr_loss_step = self.training_step(model, inputs)
1748 else:
-> 1749 tr_loss_step = self.training_step(model, inputs)
1750
1751 if (
/databricks/python/lib/python3.8/site-packages/transformers/trainer.py in training_step(self, model, inputs)
2506
2507 with self.compute_loss_context_manager():
-> 2508 loss = self.compute_loss(model, inputs)
2509
2510 if self.args.n_gpu > 1:
/databricks/python/lib/python3.8/site-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
2538 else:
2539 labels = None
-> 2540 outputs = model(**inputs)
2541 # Save past state if it exists
2542 # TODO: this needs to be fixed and made cleaner later.
/databricks/python/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
/databricks/python/lib/python3.8/site-packages/transformers/models/layoutlm/modeling_layoutlm.py in forward(self, input_ids, bbox, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict)
1190 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1191
-> 1192 outputs = self.layoutlm(
1193 input_ids=input_ids,
1194 bbox=bbox,
/databricks/python/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
/databricks/python/lib/python3.8/site-packages/transformers/models/layoutlm/modeling_layoutlm.py in forward(self, input_ids, bbox, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states, return_dict)
825 inputs_embeds=inputs_embeds,
826 )
--> 827 encoder_outputs = self.encoder(
828 embedding_output,
829 extended_attention_mask,
/databricks/python/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
/databricks/python/lib/python3.8/site-packages/transformers/models/layoutlm/modeling_layoutlm.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
494 )
495 else:
--> 496 layer_outputs = layer_module(
497 hidden_states,
498 attention_mask,
/databricks/python/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
/databricks/python/lib/python3.8/site-packages/transformers/models/layoutlm/modeling_layoutlm.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions)
379 # decoder uni-directional self-attention cached key/values tuple is at positions 1,2
380 self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None
--> 381 self_attention_outputs = self.attention(
382 hidden_states,
383 attention_mask,
/databricks/python/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
/databricks/python/lib/python3.8/site-packages/transformers/models/layoutlm/modeling_layoutlm.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions)
306 output_attentions: Optional[bool] = False,
307 ) -> Tuple[torch.Tensor]:
--> 308 self_outputs = self.self(
309 hidden_states,
310 attention_mask,
/databricks/python/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
/databricks/python/lib/python3.8/site-packages/transformers/models/layoutlm/modeling_layoutlm.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions)
170 output_attentions: Optional[bool] = False,
171 ) -> Tuple[torch.Tensor]:
--> 172 mixed_query_layer = self.query(hidden_states)
173
174 # If this is instantiated as a cross-attention module, the keys
/databricks/python/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
/databricks/python/lib/python3.8/site-packages/torch/nn/modules/linear.py in forward(self, input)
112
113 def forward(self, input: Tensor) -> Tensor:
--> 114 return F.linear(input, self.weight, self.bias)
115
116 def extra_repr(self) -> str:
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
```
I can train LayoutLMv3 without any memory issue.
### Expected behavior
I shouldn't get any memory issue as this model is smaller. | 12-07-2022 17:57:28 | 12-07-2022 17:57:28 | It's advised to run the same code on CPU to get a more understandable error message<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
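Following the suggestion above to re-run on CPU, one possible way to surface a readable error (illustrative, not from the thread):
```python
# Hide the GPUs (or make CUDA errors synchronous) before building the Trainer, so the
# underlying error shows up with an understandable stack trace instead of a cuBLAS failure.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""       # force CPU execution
# alternatively: os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # synchronous CUDA errors
```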
transformers | 20,657 | closed | [`BiT`] Small patch fix | # What does this PR do?
This PR fixes a tiny issue that you can encounter if you load `BiT` in fp16.
`diffusers` uses this model under the hood for Depth Estimation inpainting and users get this error:
```
593
594 layer_dropouts = [
--> 595 x.tolist() for x in torch.linspace(0, config.drop_path_rate, sum(config.depths), dtype=torch.float32).split(config.depths)
596 ]
597
RuntimeError: "linspace_cpu" not implemented for 'Half'
```
However, on the `diffusers` side this can be worked around by installing `accelerate` and loading the pipeline with `low_cpu_mem_usage=True`. But it is better to fix it here to avoid any misleading issues.
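The underlying patch is presumably along these lines (illustrative values; the point is to keep the drop-path schedule computation in float32, since `torch.linspace` has no CPU half-precision kernel):
```python
import torch

depths, drop_path_rate = [3, 4, 6, 3], 0.1   # illustrative config values
dpr = torch.linspace(0, drop_path_rate, sum(depths), dtype=torch.float32)
layer_dropouts = [x.tolist() for x in dpr.split(depths)]
```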
cc @sgugger @patil-suraj
Otherwise to reproduce:
```
import torch
from transformers import BitModel
model = BitModel.from_pretrained("google/bit-50", torch_dtype=torch.float16)
``` | 12-07-2022 16:16:05 | 12-07-2022 16:16:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,656 | closed | Fix gpt2 fp16 training when tracing is enabled | # What does this PR do?
With the PR #20061, the tracing will fail during mixed-precision training, as the dtype for the inputs of a where node are not the same, which is invalid while reusing the ONNX model for inference.
The node:
https://github.com/huggingface/transformers/blob/3ac040bca1efbf5cfe9604a5b2a10a5392917c20/src/transformers/models/gpt2/modeling_gpt2.py#L201
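Roughly, the masking at that spot follows the pattern sketched below; under fp16 the two branches of the `where` can end up with different dtypes unless the fill value is built with the weights' dtype (a sketch, not the exact source line):
```python
import torch

def mask_attn_weights(attn_weights: torch.Tensor, causal_mask: torch.Tensor) -> torch.Tensor:
    # Build the fill value with the SAME dtype as attn_weights; if it were created as a
    # plain float32 tensor while attn_weights is float16, the traced Where node would mix
    # float and float16 inputs, which is exactly what ONNX rejects in the error below.
    mask_value = torch.full([], torch.finfo(attn_weights.dtype).min,
                            dtype=attn_weights.dtype, device=attn_weights.device)
    return torch.where(causal_mask, attn_weights, mask_value)
```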
Error message:
```
======================================================================
ERROR: test_ort_trainer (__main__.TestORTTrainer) (model_name='gpt2', dataset_name='sst2', inference_with_ort=False)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_onnxruntime_train.py", line 131, in test_ort_trainer
train_result = trainer.train()
File "/workspace/optimum/onnxruntime/trainer.py", line 349, in train
return inner_training_loop(
File "/workspace/optimum/onnxruntime/trainer.py", line 615, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2523, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2555, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_utils.py", line 371, in _forward
return ortmodule._torch_module.forward(*inputs, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_utils.py", line 351, in _forward
return torch_module_ort._execution_manager(torch_module_ort.is_training()).forward(*inputs, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_training_manager.py", line 273, in forward
self._fallback_manager.handle_exception(
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_fallback.py", line 162, in handle_exception
raise exception
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_training_manager.py", line 210, in forward
self._initialize_graph_builder()
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_graph_execution_manager.py", line 478, in _initialize_graph_builder
self._graph_builder.initialize(self._onnx_models.exported_model.SerializeToString(), grad_builder_config)
RuntimeError: /onnxruntime_src/orttraining/orttraining/python/orttraining_pybind_state.cc:731 onnxruntime::python::addObjectMethodsForTraining(pybind11::module&, onnxruntime::python::ExecutionProviderRegistrationFn)::<lambda(onnxruntime::training::OrtModuleGraphBuilder*, const pybind11::bytes&, const onnxruntime::training::OrtModuleGraphBuilderConfiguration&)> [ONNXRuntimeError] : 1 : FAIL : Type Error: Type parameter (T) of Optype (Where) bound to different types (tensor(float) and tensor(float16) in node (Where_223).
```
| 12-07-2022 15:51:51 | 12-07-2022 15:51:51 | A little bit more context on the issue: I previously fixed the tracing issue in #18017, but it harms performance due to host<->device synchronization, which was addressed in #20061, but that caused the tracing to fail once again.
It seems that we can't guarantee the tracing correctness and inference performance with the same line of code while using PyTorch at the same time, that's why in the PR, I distinguish two cases to solve it:
* Case 1: Tracing
* Case 2: Inference with PyTorch<|||||>Also @michaelbenayoun I saw this: https://github.com/huggingface/transformers/pull/18017#issuecomment-1197597894, does the current modeling won't have an issue while doing mixed-precision training for torch.fx?
<|||||>Feel the same, If/else removed!<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,655 | closed | [Trainer] Corrects typing of Trainer __init__ args | # What does this PR do?
Corrects typing for Trainer class. Updates typing to match change made in https://github.com/huggingface/transformers/pull/19158/ and fixes a few other typing issues while I'm there
Unless I'm missing something these changes take the typing to parity with both class docstring and implementation
Who can review: @sgugger
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
| 12-07-2022 14:17:16 | 12-07-2022 14:17:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,654 | closed | [NAT, DiNAT] Add backbone class | # What does this PR do?
This PR adds the `NatBackbone` and `DinatBackbone` classes, to be used for #20577. | 12-07-2022 14:12:51 | 12-07-2022 14:12:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,653 | closed | Add Whisper large V2 model | ### Model description
It seems openAI has just released a V2 of its large Whisper model.
- The "large-v2" model is trained for more epochs with regularization and shows improved performance compared to the previous large.
- It has the same architecture as the original large model.
- When `load_model("large")` is called, the "large-v2" model will be loaded.
More, here: https://github.com/openai/whisper/commit/4179ed2475cc84cba66868b516232ef1b74dacdf
I can upload it to the hub. Cc: @younesbelkada @patrickv
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | 12-07-2022 14:09:50 | 12-07-2022 14:09:50 | cc @ArthurZucker <|||||>Hey! The `large-v2` is already converted, we are just waiting for OpenAI's approval for the release 😉 <|||||>Cool!<|||||>FYI : https://huggingface.co/openai/whisper-large-v2 <|||||>Not really sure if we want to change the `large` for `large-v2`, not really backward compatible |
transformers | 20,652 | closed | [Whisper] Fix forced decoder ids | # What does this PR do?
The Whisper tokenizer has a property `self.prefix_tokens` that returns the token ids appended to the start of label sequence:
```
<|startoftranscript|> <|lang_id|> <|task|> <|notimestamps|> ...
```
In the PR https://github.com/huggingface/transformers/pull/20589, the method `get_decoder_prompt_ids` was copied from the Whisper processor to the Whisper tokenizer, where it then made use of the tokenizer property `self.prefix_tokens`. The method `get_decoder_prompt_ids` is used to set the tokens that are forced at the beginning of the generation process.
However, the forced decoder ids **should not** contain the `<|startoftranscript|>` token: this is the `decoder_start_token_id` that we use as token 0 when we start generation. If we include `<|startoftranscript|>` in our forced decoder ids, we'll get a double generation of `<|startoftranscript|>`. Thus, we only want to set the following tokens in the `forced_decoder_ids`:
```
<|lang_id|> <|task|> <|notimestamps|> ...
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-07-2022 13:59:13 | 12-07-2022 13:59:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>fyi @sgugger, the final fix we hope 🤞<|||||>Yes! Let me clarify!
When training, we need to encode a sentence to a sequence of label ids. Here, we need to append the 'special' beginning of sentence tokens to the label ids. This is so that the model learns to predict the correct 'special' tokens for the generation process. For a full list of the tokens added, see this PR: https://github.com/huggingface/transformers/pull/19921
One of these tokens is the `<|startoftranscript|>` token. This is consistent with other tokenisers in the library, such as the BART tokeniser:
```python
from transformers import BartTokenizer
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
input_str = "the cat"
tokens = tokenizer(input_str).input_ids
print(tokenizer.decode(tokens))
```
**Print Output:**
```
<s>the cat</s>
```
Now, it doesn't matter for training whether or not we append the decoder start token id to the start of our label sequence, because we cut it in our data collator:
https://github.com/huggingface/transformers/blob/3ac040bca1efbf5cfe9604a5b2a10a5392917c20/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py#L249
So, adding the decoder start token id is more for making the tokeniser user friendly and consistent with other tokenisers in the library.
<|||||>@sanchit-gandhi Thanks. Just want to point out: For `bart`, yes, we have bos `<s>` (id `0`). But it is not the **decoder** start token (which is `</s>` for bart, with id `2`) - it is just the start of the sentence (not ready for generation). The `labels` has `bos` but not `decoder_start_token`. The labels will be shifted and prepended with `</s>` to become decoder input ids.
In Whisper, I understand we want to be user-friendly. And as you have cut it in data collator, it is fine. But IMO, this is something a bit different from our NLP models (i.e. Bart here). Hopefully I understand it correctly.
|
transformers | 20,651 | closed | [Trainer] add error when passing `8bit`models | # What does this PR do?
Before this PR, any user could load an 8bit model and pass it to a Trainer, which is wrong. In fact, it is not possible to train an 8bit model (yet). Therefore we should raise an error until this will be supported in the future
Related: https://github.com/huggingface/transformers/issues/20348#issuecomment-1335106257
cc @ydshieh @sgugger
| 12-07-2022 13:54:45 | 12-07-2022 13:54:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,650 | open | [New Model] UDOP: Unifying Vision, Text, and Layout for Universal Document Processing | ### Model description
We propose Universal Document Processing (UDOP), a foundation Document AI model which unifies text, image, and layout modalities together with varied task formats, including document understanding and generation. UDOP leverages the spatial correlation between textual content and document image to model image, text, and layout modalities with one uniform representation. With a novel Vision-Text-Layout Transformer, UDOP unifies pretraining and multi-domain downstream tasks into a prompt-based sequence generation scheme. UDOP is pretrained on both large-scale unlabeled document corpora using innovative self-supervised objectives and diverse labeled data. UDOP also learns to generate document images from text and layout modalities via masked image reconstruction. To the best of our knowledge, this is the first time in the field of document AI that one model simultaneously achieves high-quality neural document editing and content customization. Our method sets the state-of-the-art on 9 Document AI tasks, e.g., document understanding and QA, across diverse data domains like finance reports, academic papers, and websites. UDOP ranks first on the leaderboard of the Document Understanding Benchmark (DUE).
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
UDOP Paper: https://arxiv.org/abs/2212.02623
UDOP Repo: https://github.com/microsoft/UDOP
UDOP Model Weights: https://huggingface.co/ZinengTang/Udop/tree/main | 12-07-2022 13:48:22 | 12-07-2022 13:48:22 | @NielsRogge as you implemented Donut, you might be interested :)<|||||>Let's hope they open-source :)<|||||>@NielsRogge they added the code here https://github.com/microsoft/i-Code/tree/main/i-Code-Doc<|||||>Hi @NielsRogge, can I help in this implementation?<|||||>@NielsRogge here you have the weights: https://huggingface.co/ZinengTang/Udop/tree/main<|||||>@WaterKnight1998 Is the model accessible now?<|||||>> @WaterKnight1998 Is the model accessible now?
No, the PR from @raghavanone was closed. @NielsRogge is working on opening a PR with a refactor of UDop code as it was not very good.
I saw he has a branch for this: https://github.com/NielsRogge/transformers/tree/add_udop |
transformers | 20,649 | closed | [`ViTHybrid`] + [`BiT`] cleaner `__init__` | # What does this PR do?
a function in `BiT` is not needed, thus this PR makes the codebase less error-prone.
Before that, to get the feature maps size from the backbone model, `ViTHybrid` assumed that the backbone has the method `_get_feature_map` which is not the case for all backbones.
Related #20645 | 12-07-2022 13:08:40 | 12-07-2022 13:08:40 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Yes, this can be addressed in a future PR |
transformers | 20,648 | closed | Add UperNet | # What does this PR do?
This PR adds the classic [UperNet](https://arxiv.org/abs/1807.10221) framework to Transformers.
Many papers that introduce a new vision backbone, such as BEiT, ConvNeXt, Swin,... benchmark their model on downstream tasks such as semantic segmentation and object detection. All of these papers use the UperNet framework (introduced in 2018) when evaluating their backbone on semantic segmentation.
Hence, this PR implements this framework, making use of the new [AutoBackbone API](#20229) to make the following possible:
```
from transformers import SwinConfig, UperNetConfig, UperNetForSemanticSegmentation
backbone_config = SwinConfig(out_features=["stage1", "stage2", "stage3", "stage4"])
config = UperNetConfig(backbone_config=backbone_config)
model = UperNetForSemanticSegmentation(config)
```
In the code above, we're instantiating the UperNet framework with Swin Transformer as backbone. The code looks equivalent for another backbone, like ConvNeXt:
```
from transformers import ConvNextConfig, UperNetConfig, UperNetForSemanticSegmentation
backbone_config = ConvNextConfig(out_features=["stage1", "stage2", "stage3", "stage4"])
config = UperNetConfig(backbone_config=backbone_config)
model = UperNetForSemanticSegmentation(config)
```
To do:
- [ ] looking into supporting `from_pretrained` of backbones => will be done in a follow-up PR
- [x] make sure UperNetImageProcessor does exact same preprocessing
- [x] make UperNetImageProcessor also take `segmentation_maps` as optional input
- [x] add image processor tests
- [x] convert all checkpoints + update organization
- [x] fix integration tests | 12-07-2022 12:38:04 | 12-07-2022 12:38:04 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the review, I'm waiting for the authors to respond regarding the creation of an organization on the hub. |
transformers | 20,647 | closed | Add batch of resources | # What does this PR do?
This PR adds a batch of resources, primilarly for all image classifiers. | 12-07-2022 11:30:46 | 12-07-2022 11:30:46 | Thanks for your review! It's unclear to me why the "build PR documentation" check is failing, thought it had to do with the pipeline tags, but it's still failing. Any insight would be greatly appreciated<|||||>You will have to isolate which file triggers the issue and then which line inside that file by trial and error I'm afraid. That's one of the reason smaller PRs are easier to deal with :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,646 | closed | [pipeline] fix Whisper test | # What does this PR do?
Fixes Whisper pipeline test.
Previously, we suppressed the hyphen and apostrophe tokens from Whisper generation, meaning they were always assigned zero probability and could never be predicted. With the Hub PR https://huggingface.co/openai/whisper-large/discussions/12, these tokens were removed from the set of suppressed tokens, meaning they can now (correctly) be predicted with non-zero probability.
We get the correct contraction now in the Italian prediction: "allo universo" -> "all'universo"
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @ydshieh @sgugger
| 12-07-2022 10:57:45 | 12-07-2022 10:57:45 | Test already fixed in https://github.com/huggingface/transformers/pull/20588 |
transformers | 20,645 | closed | Add `dpt-hybrid` support | # What does this PR do?
Adds `DPT-hybrid` support in `transformers`
Currently only DPT is supported. This PR leverages `AutoBackbone` from @NielsRogge to replace the embedding layer from `DPT` to support `DPT-hybrid`
Fixes #20435
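Once merged, usage should mirror the existing DPT depth-estimation flow; a rough sketch (only the checkpoint name below is taken from this PR, the rest is illustrative):

```python
from PIL import Image
from transformers import DPTFeatureExtractor, DPTForDepthEstimation

feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-hybrid-midas")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas")

image = Image.open("example.jpg")  # any local image
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
predicted_depth = outputs.predicted_depth
```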
Model weights: https://huggingface.co/Intel/dpt-hybrid-midas | 12-07-2022 10:13:22 | 12-07-2022 10:13:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a bunch! Fortunately it was already on the config file :D https://huggingface.co/Intel/dpt-hybrid-midas/blob/main/config.json#L277 but will open a PR to remove the `embedding_type` as it is not needed anymore<|||||>The config file has been modified, merging! |
transformers | 20,644 | closed | ONNX encoder decoder exchange invoke issue | ### System Info
TR-OCR Model
Encoder - BeiT -encoder.onnx
Decoder - Roberta large- decoder.onnx
System config:
Intel i9 11 gen
Nvidia Quadro RTX 4000 Max Q design - 16GB
Dependencies version:
onnx == 1.12.0
onnx-runtime == 1.13.1
torch == 1.13.0
transformers == 4.24.0
torchvision ==0.14.0
Issue:
Unable to start ONNX inference sessions with the TrOCR ONNX conversions (encoder.onnx & decoder.onnx) through **ORTModelForVision2Seq**; `model.generate()` raises this error:
```
model.generate(pixel_values.to('cpu'))
```
Traceback (most recent call last):
```
File "<string>", line 1, in <module> File "C:\Users\110769\Anaconda3\envs\ocr2\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "C:\Users\110769\Anaconda3\envs\ocr2\lib\site-packages\transformers\generation_utils.py", line 1339, in generate model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation( File "C:\Users\110769\Anaconda3\envs\ocr2\lib\site-packages\transformers\generation_utils.py", line 583, in _prepare_encoder_decoder_kwargs_for_generation model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs) File "C:\Users\110769\Anaconda3\envs\ocr2\lib\site-packages\torch\nn\modules\module.py", line 1188, in _call_impl if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks File "C:\Users\110769\Anaconda3\envs\ocr2\lib\site-packages\torch\nn\modules\module.py", line 1265, in __getattr__ raise AttributeError("'{}' object has no attribute '{}'".format( AttributeError: 'ORTEncoder' object has no attribute '_backward_hooks'
```
Whole Code Snippet :
```python
# Missing imports (and the undefined `device`) added for completeness; the rest of the snippet is unchanged.
import time
from typing import Optional, Tuple

import torch
import torch.nn as nn
import onnxruntime as onnxrt
from PIL import Image

from transformers import AutoConfig, TrOCRProcessor, VisionEncoderDecoderModel
from transformers.modeling_outputs import Seq2SeqLMOutput

device = "cpu"  # `device` is used below but was never defined in the original snippet
class ORTEncoder(nn.Module):
"""
Encoder model for ONNX Runtime inference.
Arguments:
session (`onnxruntime.InferenceSession`):
The ONNX Runtime inference session associated to the encoder.
"""
def __init__(
self, session: onnxrt.InferenceSession, device: torch.device, main_input_name: str = "input_ids"
):
self.session = session
self._device = device
self.main_input_name = main_input_name
self.input_names = {input_key.name: idx for idx, input_key in enumerate(self.session.get_inputs())}
self.output_names = {output_key.name: idx for idx, output_key in enumerate(self.session.get_outputs())}
class ORTDecoder(nn.Module):
"""
Decoder model for ONNX Runtime inference.
Arguments:
session (`onnxruntime.InferenceSession`):
The ONNX Runtime inference session associated to the decoder.
"""
def __init__(
self, session: onnxrt.InferenceSession, device: torch.device, main_input_name: str = "input_ids"
):
self.session = session
self._device = device
self.main_input_name = main_input_name
self.input_names = {input_key.name: idx for idx, input_key in enumerate(self.session.get_inputs())}
self.output_names = {output_key.name: idx for idx, output_key in enumerate(self.session.get_outputs())}
class ORTModelForVision2Seq(VisionEncoderDecoderModel):
def __init__(self, *args, **kwargs):
config = AutoConfig.from_pretrained('microsoft/trocr-base-printed')
super().__init__(config)
self._device = "cpu"
self.encoder = ORTEncoder(onnxrt.InferenceSession(encoder_path,providers=["CPUExecutionProvider"]),device='cpu')
self.decoder = ORTDecoder(onnxrt.InferenceSession(decoder_path,providers=["CPUExecutionProvider"]),device='cpu')
def forward(
self,
pixel_values: Optional[torch.FloatTensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
encoder_outputs: Optional[Tuple[Tuple[torch.Tensor]]] = None,
**kwargs,
) -> Seq2SeqLMOutput:
# Encode if needed : first prediction pass
if encoder_outputs is None:
encoder_outputs = self.encoder(pixel_values=pixel_values)
# Decode
decoder_attention_mask = decoder_input_ids.new_ones(decoder_input_ids.shape)
decoder_outputs = self.decoder(
input_ids=decoder_input_ids,
attention_mask=decoder_attention_mask,
encoder_hidden_states=encoder_outputs.last_hidden_state,
)
return Seq2SeqLMOutput(
logits=decoder_outputs.logits,
)
def prepare_inputs_for_generation(self, input_ids, attention_mask=None, encoder_outputs=None, **kwargs):
return {
"decoder_input_ids": input_ids,
"decoder_atttention_mask": input_ids,
"encoder_outputs": encoder_outputs,
}
model = ORTModelForVision2Seq()
start = time.time()
img = Image.open(r'PATH').convert("RGB")
processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-printed')
pixel_values = processor(images=img, return_tensors="pt").pixel_values
model.config.decoder_start_token_id = 2
model.config.pad_token_id = processor.tokenizer.pad_token_id
model.config.eos_token_id = processor.tokenizer.sep_token_id
model.config.vocab_size = model.config.decoder.vocab_size
generated_ids = model.generate(pixel_values.to(device))
end = time.time()
```
How can I run the encoder/decoder with wrapped ORT sessions instead of invoking two concurrent sessions for the encoder and decoder in a loop?
@mht-sharma @NielsRogge
encoder_path -> takes the encoder ONNX file
decoder_path -> takes the decoder ONNX file
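For what it's worth, the `AttributeError: 'ORTEncoder' object has no attribute '_backward_hooks'` in the traceback is the usual symptom of an `nn.Module` subclass whose `__init__` never calls `super().__init__()`. A minimal sketch of that particular fix (independent of the broader ONNX Runtime question; the same applies to `ORTDecoder`):

```python
class ORTEncoder(nn.Module):
    def __init__(self, session, device, main_input_name="input_ids"):
        super().__init__()  # required so nn.Module can set up its hook/parameter registries
        self.session = session
        self._device = device
        self.main_input_name = main_input_name
```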
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1. Run the above code snippet; `model.generate()` raises the error
### Expected behavior
ONNX inference session should be invoked using ORTVisionseq2seq ,when given encoder and decoder. Also Model.generate() not a valid function for generating IDS | 12-07-2022 09:32:29 | 12-07-2022 09:32:29 | @NielsRogge @mht-sharma
New issue raised here: (https://github.com/huggingface/transformers/issues/20644)<|||||>Added a draft PR in optimum for easing inference. https://github.com/huggingface/optimum/pull/588<|||||>What about this :
**Adds ORTModelForVision2Seq for inference (In progress...)**
Like I said I TrOCR model in encoder_onnx and decoder_onnx , how can I invoke these two models together on ONNX runtime for fast inference.
@mht-sharma
<|||||>Hi @umanniyaz could you try the following code for inference for testing.
https://gist.github.com/mht-sharma/f38c670930ac7df413c07327e692ee39<|||||>> What about this : **Adds ORTModelForVision2Seq for inference (In progress...)**
>
> Like I said I TrOCR model in encoder_onnx and decoder_onnx , how can I invoke these two models together on ONNX runtime for fast inference.
>
> @mht-sharma
I am waiting for a few PRs to merge before this and would work on the inference. Should be available by next week.<|||||>> Hi @umanniyaz could you try the following code for inference for testing.
>
> https://gist.github.com/mht-sharma/f38c670930ac7df413c07327e692ee39
@mht-sharma Inference using these helper classes is still bad, I don't see any decrease in latency, plus performance in terms of text recognition decreases<|||||>Hi @umanniyaz could you share which device are you using for inference. If you are using GPU, you need to add the iobinding to observe the speedup.
For CPU inference, it may differ on the kind of CPU you are using. Currently the ORT inference uses the torch for the generation, hence, there can be a resource crunch between the torch and ORT which may lead to a slowdown. You may need to set the appropriate `intra-op-threads` and `torch threads` to observe the speedup. https://github.com/microsoft/onnxruntime/issues/13808<|||||>> Hi @umanniyaz could you share which device are you using for inference. If you are using GPU, you need to add the iobinding to observe the speedup.
>
> For CPU inference, it may differ on the kind of CPU you are using. Currently the ORT inference uses the torch for the generation, hence, there can be a resource crunch between the torch and ORT which may lead to a slowdown. You may need to set the appropriate `intra-op-threads` and `torch threads` to observe the speedup. [microsoft/onnxruntime#13808](https://github.com/microsoft/onnxruntime/issues/13808)
@mht-sharma I am using an Intel i9 10th generation CPU with this class in a Django REST API (not using the GPU); my GPU is an Nvidia Quadro RTX-4000 Max-Q design.
Can you provide both CPU and GPU implementations? I just need to see where it speeds up.
Is onnx runtime for task image-to-text in Optimum pipelines for coming anytime soon<|||||>@mht-sharma @NielsRogge I just tried the above things in you inference_testing code for GPU with iobinding and on CPU by adding intra_op_threads for parrallel execution and noticed change in inference , but the accuracy of TR-OCR on changing to respective Encoder_model.onnx and Decoder_model.onnx suffers,it gives bad results than Original,like whitespaces between text are ignored and CER in text recognition increases in case of using above ONNX models

<|||||>Hi @umanniyaz ,
`Is ONNX runtime for task image-to-text in Optimum pipelines for coming anytime soon` - Things got little delayed due to the NYE. I would work on the ONNXRuntime pipeline in optimum in the coming week.
The decrease in accuracy may not be because of adding `iobinding` or `intra_op_threads`. Let me know if it is otherwise. The drop in accuracy is on both CPU and GPU (CUDAExecutionProvider)?
Could you share which `atol` you have used for the ONNX export.<|||||>@mht-sharma There is a consistent decrease in accuracy irrespective of CPU, GPU intra op threads or iobinding, For onnx export I have utilised your latest PR on Vision EncoderDecoder Model conversion as mentioned,you can send the TrOCR conversion again,further i used this:
python -m transformers.onnx --model=microsoft/trocr-base-printed --feature=vision2seq-lm models_trocr_base --atol 1e-3
Note: Need to use Tr-OCR-Base Printed <|||||>> Hi @umanniyaz ,
>
> `Is ONNX runtime for task image-to-text in Optimum pipelines for coming anytime soon` - Things got little delayed due to the NYE. I would work on the ONNXRuntime pipeline in optimum in the coming week.
>
> The decrease in accuracy may not be because of adding `iobinding` or `intra_op_threads`. Let me know if it is otherwise. The drop in accuracy is on both CPU and GPU (CUDAExecutionProvider)?
>
> Could you share which `atol` you have used for the ONNX export.
Hi @mht-sharma, --atol 1e-3 gives less accurate results than the actual model, and using 1e-4 onwards is not feasible. Can you please tell me which atol value to use for converting the models?<|||||>> Hi @umanniyaz ,
>
> `Is ONNX runtime for task image-to-text in Optimum pipelines for coming anytime soon` - Things got little delayed due to the NYE. I would work on the ONNXRuntime pipeline in optimum in the coming week.
>
> The decrease in accuracy may not be because of adding `iobinding` or `intra_op_threads`. Let me know if it is otherwise. The drop in accuracy is on both CPU and GPU (CUDAExecutionProvider)?
>
> Could you share which `atol` you have used for the ONNX export.
Any updates on this? @mht-sharma <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,643 | closed | Use encoder_last_hidden_states instead of tokens as input to do beam-search on text generation (BART cases) | ```
def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int):
"""
Shift input ids one token to the right.
"""
shifted_input_ids = input_ids.new_zeros(input_ids.shape)
shifted_input_ids[:, 1:] = input_ids[:, :-1].clone()
shifted_input_ids[:, 0] = decoder_start_token_id
if pad_token_id is None:
raise ValueError("self.model.config.pad_token_id has to be defined.")
# replace possible -100 values in labels by `pad_token_id`
shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id)
return shifted_input_ids
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
model.config.is_encoder_decoder = False
model.eval()
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
text = 'the team had decided to replace the rubber with plastic due to the budget limit .</s>when evaluating the material of the remote control , marketing admitted that sponginess was what most users desired , which was the feel given by rubber .</s>project manager agreed .</s>however, project manager pointed out that a plastic remote control was no worse than other remote controls in the market , so it would not be a step-back at least .</s>okey. That\'s great. '
inputs = tokenizer(text, return_tensors = "pt")["input_ids"]
decoder_input_ids = shift_tokens_right(inputs, model.model.config.pad_token_id, model.model.config.decoder_start_token_id)
encoder_outputs = model.model.encoder(inputs)
print(encoder_outputs)
decoder_outputs = model.model.decoder(input_ids = decoder_input_ids, encoder_hidden_states= encoder_outputs[0]).last_hidden_state
sepa_logits = model.lm_head(decoder_outputs) + model.final_logits_bias
logits = model(inputs).logits  # sanity check: this matches sepa_logits computed above
```
I want to run some experiments on text summarization tasks by separating the BART model and modifying its encoder outputs. I notice that the pipeline of the standard generate() function is: text tokens -> (encoder) -> encoder outputs -> (decoder + beam search) -> output tokens. Instead of the tokens, I want to take the encoder last hidden states, whose size is [batch_size, sequence_length, 1024], as the input and generate text using only the BART decoder with beam search. However, I don't know how to modify the generate() function to implement this.
The code above is there to make sure that the separation is correct. I want to take encoder_outputs[0] as input (right now it is the direct output of the input tokens, but later I want to modify it, which is why I say I can't use the tokens-to-tokens generate function), then use the decoder part to generate output tokens via beam search.
I believe it's possible in theory, but the generate() function is quite complicated and I need some hints on how to modify it. Thanks!
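(For reference, one possible route is to hand precomputed `encoder_outputs` to `generate()`, which then only runs the decoder with beam search. A rough, untested sketch, assuming the default `is_encoder_decoder=True` config rather than the override above:)

```python
from transformers.modeling_outputs import BaseModelOutput

with torch.no_grad():
    encoder_outputs = model.get_encoder()(inputs)
modified_hidden = encoder_outputs.last_hidden_state  # apply custom modifications here

summary_ids = model.generate(
    encoder_outputs=BaseModelOutput(last_hidden_state=modified_hidden),
    num_beams=4,
    max_length=142,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```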
@patrickvonplaten | 12-07-2022 09:07:28 | 12-07-2022 09:07:28 | Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only. |
transformers | 20,642 | closed | pin TF 2.11 in docker files | # What does this PR do?
Same as #20635 but for dockerfiles for GH actions.
(I already built the images and re-launch the daily CI) | 12-07-2022 07:49:16 | 12-07-2022 07:49:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,641 | closed | Speed up git-lfs detection on error | Prevent read and discard of entire checkpoint file.
# What does this PR do?
Mutates an error handler that checks only 7 bytes, to only read those 7 bytes rather than an entire checkpoint file.
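Roughly, the handler goes from reading the whole file to reading just the 7-byte prefix that identifies a git-lfs pointer file (illustrative sketch, not the exact diff):

```python
# before: loads the entire (possibly multi-GB) checkpoint into memory just to inspect it
with open(checkpoint_file) as f:
    is_lfs_pointer = f.read().startswith("version")

# after: the first 7 characters are enough to detect a git-lfs pointer file
with open(checkpoint_file) as f:
    is_lfs_pointer = f.read(7) == "version"
```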
Fixes # (issue)
Issue not opened. I encountered a memory allocation crash here when exploring disk offloading.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 12-07-2022 07:48:04 | 12-07-2022 07:48:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,640 | closed | Convert the data type of embeddings and masks to bfloat16 for torch amp | ### Motivation
Add an attribute `use_torch_bfloat16_embeddings` to `PretrainedConfig` to indicate whether the bfloat16 data type is used for embeddings and masks, and convert the data type of embeddings and masks to bfloat16 accordingly.
This will reduce the number of data type conversions between float and bfloat16 when running models with `torch.cpu.amp.autocast(dtype=torch.bfloat16)` and improve performance with little accuracy regression. This is because models contain many residual modules, which lead to data type promotion in binary operations implemented by TensorIterator in PyTorch.
For example: out = tensor1 + tensor2
If the data type of tensor1 is float and tensor2 is bfloat16, PyTorch will convert tensor2 to float and produce a float output. When running models with amp for bfloat16, this conversion results in additional `to` operations, which reduce performance, as illustrated below.
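A small, self-contained illustration of that promotion (assuming CPU autocast with bfloat16):

```python
import torch

a = torch.randn(4, 4)                        # float32, e.g. an embedding or mask
b = torch.randn(4, 4, dtype=torch.bfloat16)  # bfloat16 activation
with torch.cpu.amp.autocast(dtype=torch.bfloat16):
    out = a + b
print(out.dtype)  # torch.float32: the bfloat16 operand was upcast, i.e. an extra `to`
```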
### Testing
- Number of `to` operations
Model | wo/ bf16 embedding and masks| w/ bf16 embedding and masks
-- | -- | --
albert | 22 | 11
bert | 49 | 10
bart | 65 | 38
gpt2 | 56 | 29
distilbert | 40 | 19
roberta | 54 | 15
- Accuracy testing
Model | fp32 | amp bf16 | amp bf16 w/ bf16 embedding
-- | -- | -- | --
masked-language-modeling+bert-base-cased | 0.4819 | 0.4818 | 0.4819
masked-language-modeling+distilbert-base-cased | 0.3143 | 0.3158 | 0.3152
multiple-choice+distilbert-base-cased | 0.246 | 0.2461 | 0.2454
multiple-choice+google-electra-base-discriminator | 0.1193 | 0.1194 | 0.1201
text-classification+google-electra-base-generator | 0.6901 | 0.6838 | 0.6838
token-classification+google-electra-base-generator | 0.0414 | 0.0411 | 0.041
token-classification+gpt2 | 0.0379 | 0.0379 | 0.0379
albert | 0.453431373 | 0.428921569 | 0.446078431
distilbert | 0.681372549 | 0.681372549 | 0.681372549
roberta | 0.683823529 | 0.683823529 | 0.683823529
xlm-roberta | 0.637254902 | 0.637254902 | 0.639705882
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-07-2022 06:55:58 | 12-07-2022 06:55:58 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your PR!
We're not really interested in adding optimizations like this in each model file, as if we were to do this for all possible hardwares and dtypes, the code would be unreadable. Since each model is defined in its own file, it's easy for a user to customize the code for their specific need (like here for bfloat16).<|||||>@sgugger Thank you for your comments! Yes, adding optimizations like this in each model file is not general. Users can customize the code for their need based on existing models. I posted the PR for testing and discussing, and also want to see if there is any way in huggingface to avoid such additional data type conversions since for some tasks like masked-language-modeling+bert-base-cased there may **be 30% performance drop**.<|||||>@sgugger May I know if there is any way in huggingface to avoid such additional data type conversions ? Thanks for your any advice ! |
transformers | 20,639 | closed | Added type hints to modeling_tf_encoder_decoder.py | # What does this PR do?
This pull request adds type hints for modeling_tf_encoder_decoder.py as outlined in Issue #16059
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Rocketknight1
| 12-07-2022 02:27:26 | 12-07-2022 02:27:26 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,638 | closed | ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`labels` in this case) have excessive nesting (inputs type `list` where type `int` is expected). | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.10.133+-x86_64-with-glibc2.27
- Python version: 3.8.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes (Tesla T4)
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger maybe you could help?
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
# Information
I am using the implementation of text classification given in official [documentation](https://huggingface.co/docs/transformers/tasks/sequence_classification) from huggingface and one given by @lewtun in his book.
I retrained an instance of sentence-transformers using contrastive loss on an unsupervised data dump and now want to finetune the above model on a labeled, binary dataset.
[This](https://github.com/huggingface/transformers/issues/15505) issue is similar, and I followed the fix but to no avail.
# To reproduce
1. Run [this notebook](https://colab.research.google.com/drive/1VMl5l1O4lrgSMiGTh4yKIWEY2XGUgSIm?usp=sharing)
2. Trainer.train() should produce the following error:
```
ValueError Traceback (most recent call last)
[/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in convert_to_tensors(self, tensor_type, prepend_batch_axis)
716 if not is_tensor(value):
--> 717 tensor = as_tensor(value)
718
ValueError: too many dimensions 'str'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
9 frames
[<ipython-input-75-ce45916ac715>](https://localhost:8080/#) in <module>
7 )
8
----> 9 trainer.train()
[/usr/local/lib/python3.8/dist-packages/transformers/trainer.py](https://localhost:8080/#) in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1525 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1526 )
-> 1527 return inner_training_loop(
1528 args=args,
1529 resume_from_checkpoint=resume_from_checkpoint,
[/usr/local/lib/python3.8/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1747
1748 step = -1
-> 1749 for step, inputs in enumerate(epoch_iterator):
1750
1751 # Skip past any already trained steps if resuming training
[/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py](https://localhost:8080/#) in __next__(self)
679 # TODO(https://github.com/pytorch/pytorch/issues/76750)
680 self._reset() # type: ignore[call-arg]
--> 681 data = self._next_data()
682 self._num_yielded += 1
683 if self._dataset_kind == _DatasetKind.Iterable and \
[/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py](https://localhost:8080/#) in _next_data(self)
719 def _next_data(self):
720 index = self._next_index() # may raise StopIteration
--> 721 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
722 if self._pin_memory:
723 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)
[/usr/local/lib/python3.8/dist-packages/torch/utils/data/_utils/fetch.py](https://localhost:8080/#) in fetch(self, possibly_batched_index)
50 else:
51 data = self.dataset[possibly_batched_index]
---> 52 return self.collate_fn(data)
[/usr/local/lib/python3.8/dist-packages/transformers/data/data_collator.py](https://localhost:8080/#) in __call__(self, features)
247
248 def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, Any]:
--> 249 batch = self.tokenizer.pad(
250 features,
251 padding=self.padding,
[/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in pad(self, encoded_inputs, padding, max_length, pad_to_multiple_of, return_attention_mask, return_tensors, verbose)
3015 batch_outputs[key].append(value)
3016
-> 3017 return BatchEncoding(batch_outputs, tensor_type=return_tensors)
3018
3019 def create_token_type_ids_from_sequences(
[/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in __init__(self, data, encoding, tensor_type, prepend_batch_axis, n_sequences)
208 self._n_sequences = n_sequences
209
--> 210 self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
211
212 @property
[/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in convert_to_tensors(self, tensor_type, prepend_batch_axis)
731 "Please see if a fast version of this tokenizer is available to have this feature available."
732 )
--> 733 raise ValueError(
734 "Unable to create tensor, you should probably activate truncation and/or padding with"
735 " 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your"
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`labels` in this case) have excessive nesting (inputs type `list` where type `int` is expected).
```
### Expected behavior
The model should train without failure, | 12-07-2022 02:10:35 | 12-07-2022 02:10:35 | Please use the [forums](https://discuss.huggingface.co/) to help debug your code. We also have a [step by step guide](https://huggingface.co/course/chapter8/4?fw=pt) to help debug issues with the `Trainer`.
In this instance you did not convert your labels from strings to integers, so the data collator cannot build a batch. Also you shouldn't share your huggingface token in a notebook like this, I recommend you invalidate it :-)<|||||>> Please use the [forums](https://discuss.huggingface.co/) to help debug your code. We also have a [step by step guide](https://huggingface.co/course/chapter8/4?fw=pt) to help debug issues with the `Trainer`.
>
> In this instance you did not convert your labels from strings to integers, so the data collator cannot build a batch. Also you shouldn't share your huggingface token in a notebook like this, I recommend you invalidate it :-)
Thank you for the suggestions @sgugger
I do have a quick question - shouldn't the below snippet take care of converting labels to ids and back?
```
id2label=id2label,
label2id=label2id,
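# (illustrative note: these kwargs only store the label mapping in the model config;
#  the dataset's string labels still have to be converted to integers beforehand,
#  e.g. `dataset = dataset.class_encode_column("label")` or an equivalent `map` call)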
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I am having the same issue without labels. It works for the first 24 iterations but then suddenly it stops padding. I have two images `[image1 PIL, image2 PIL]` and two sentences `[sentence1, sentence2]`.
`inputs = preprocess([image1 PIL, image2 PIL], [sentence1, sentence2], return_tensors="pt", padding=True, truncation=True).to(device)`
The first iterations produce the correct output:
`[[101, 1037, 6302, 1997, 1037, 3287, 5093, 1012, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1037, 6302, 1997, 1037, 2931, 5093, 1012, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]
`
Then suddenly it does not:
`[[101, 1037, 6302, 1997, 1037, 13755, 2492, 1012, 102], [101, 1037, 6302, 1997, 1037, 3103, 15909, 2492, 1012, 102]]
`
Both have padding='True'. The error is:
`Traceback (most recent call last):
File "anaconda3/envs/SRI/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 718, in convert_to_tensors
tensor = as_tensor(value)
ValueError: expected sequence of length 9 at dim 1 (got 10)`
I cannot quite figure out why it suddenly stops padding to the same length. I have even tried setting the max length and the same thing happens. I have checked both the text and the images to make sure nothing changes there.
transformers | 20,637 | closed | added model resources for xlm-roberta | # What does this PR do?
Fixes [20055](https://github.com/huggingface/transformers/issues/20055)
- I created a link to task guides for causal language modeling and text classification. I think they are useful and applicable but not directly related to the xlm-roberta model class per se.
- For causal language modeling, should I write it under the "text-generation" pipeline tag or create a subheader like multiple choice?
- I've checked notebooks from the community but so far none of them do a tutorial on xlm-roberta. Hopefully, there'll be one soon!
- I've also found a few blog posts related to roberta but not xlm-roberta. Should we include them, since they are technically the same architecture, just that one is multilingual and the other is not?
## Before submitting
- [x] This PR improves the docs of xlm-roberta by adding common and most used resources
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
@stevhliu please check the work and let me know if I need to do any changes. thanks | 12-07-2022 02:05:18 | 12-07-2022 02:05:18 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,636 | closed | CLIP not releasing GPU memory after each inference batch | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.31
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
# `glob`, `Dataset` and `Image` are used below but were missing from the original snippet
import glob

import torch
from datasets import Dataset, Image
from transformers import CLIPTokenizerFast, CLIPProcessor, CLIPModel
model_id = 'openai/clip-vit-base-patch32'
device = 'cuda'
tokenizer = CLIPTokenizerFast.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)
model = CLIPModel.from_pretrained(model_id).to(device)
images = glob.glob('/data/index/abo/images/small/*/*.jpg')
dataset = Dataset.from_dict({'image': images}).cast_column('image', Image())
for i in range(0, len(dataset), 500):
print(i)
batch = processor(
text=None,
images=dataset[i:i+500]['image'],
return_tensors='pt'
)['pixel_values'].to(device)
model.get_image_features(batch)
```
### Expected behavior
Each time I call `model.get_image_features(batch)` about 20GB of GPU memory is consumed. However, the GPU memory is never cleared, so I quickly run into a `CUDA out of memory` error. This memory also isn't cleared if I manually call `torch.cuda.empty_cache()`.
It's possible I'm missing a step, but to me it looks like there may be a bug in the model code causing it not to free GPU memory after it finishes inference on a batch? | 12-07-2022 01:56:12 | 12-07-2022 01:56:12 | Maybe it's missing a `torch.no_grad`? Not sure though, cc @amyeroberts and @ArthurZucker if you have time to dive into this a bit more :-)<|||||>@sgugger thanks for the tip; I think that's the source of the issue! When I wrap my code in a `with torch.no_grad():` context it starts releasing GPU memory correctly. Not going to close the issue just yet since I'm not sure whether this *should* be the caller's responsibility or not when calling `get_image_features`, but at any rate the problem is solved for me. 🙂<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
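For reference, the resolved version of the loop above looks roughly like this:

```python
import torch

with torch.no_grad():  # no autograd graph is kept, so memory is released between batches
    for i in range(0, len(dataset), 500):
        batch = processor(
            text=None,
            images=dataset[i : i + 500]["image"],
            return_tensors="pt",
        )["pixel_values"].to(device)
        image_features = model.get_image_features(batch)
```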
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,635 | closed | Pin TensorFlow to the next release | # What does this PR do?
Pin TensorFlow to the next release, which should fix the current errors on the CI when trying to install `tensorflow-text`. | 12-06-2022 22:57:36 | 12-06-2022 22:57:36 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Merging without approval to fix main since all tests are passing. |
transformers | 20,634 | closed | Migrate torchdynamo to torch.compile | # What does this PR do?
This PR migrates the current integration with PyTorch 2.0 to use the entry point they introduced: `torch.compile`. As a consequence, the `torchdynamo` argument is deprecated to the profit of `torch_compile_backend` and `torch_compile_mode`. Setting either will trigger a model compilation. | 12-06-2022 21:13:06 | 12-06-2022 21:13:06 | _The documentation is not available anymore as the PR was closed or merged._ |
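Based on the description above, enabling compilation through the `Trainer` looks roughly like this (sketch; the backend/mode values are illustrative):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    torch_compile_backend="inductor",  # setting this (or the mode) triggers torch.compile
    torch_compile_mode="default",
)
```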
transformers | 20,633 | closed | Fix link to speech encoder decoder model in speech recognition readme | # What does this PR do?
Current README documentation aims to `https://huggingface.co/docs/transformers/main/en/model_doc/speechencoderdecoder#speech-encoder-decoder-models`, which redirects to a 404 Not found. The actual link seems to be `https://huggingface.co/docs/transformers/main/en/model_doc/speech-encoder-decoder#speech-encoder-decoder-models` | 12-06-2022 20:32:42 | 12-06-2022 20:32:42 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20633). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,632 | closed | fix natten installation | # What does this PR do?
I made a mistake in #20546 and it ended up with
```bash
# For `dinat` model
RUN python3 -m pip install --no-cache-dir natten
RUN python3 -m pip install --no-cache-dir natten -f https://shi-labs.com/natten/wheels/$CUDA/
```
so the CUDA version was not installed (due to `Requirement already satisfied`) | 12-06-2022 20:01:47 | 12-06-2022 20:01:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,631 | closed | Add missing is_decoder parameter | This PR fixes #20452 by adding the missing `is_decoder` parameter to the `BertConfig` docstring and other model docs with the same issue. | 12-06-2022 19:40:49 | 12-06-2022 19:40:49 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,630 | closed | Fixed num_channels!=3 normalization training | Fixes #20580 and #19913
## Who can review?
@NielsRogge
| 12-06-2022 19:26:44 | 12-06-2022 19:26:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Not exactly, the issue with your CircleCI permissions, the tests won't run.
Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>I have been now stuck on this for a while. I refreshed the permissions and re-ran the CircleCI checks, and I get the error:
"Resource class docker for xlarge is not available for your project, or is not a valid resource class. This message will often appear if the pricing plan for this project does not support docker use."
<|||||>You might need to push an empty commit to re-trigger the tests after refreshing your permissions.<|||||>Hi @layjain let me know whether you could pick this up :)<|||||>FYI I pushed an empty commit to trigger CI<|||||>Hi @NielsRogge , I have fixed the CircleCI permissions, can this be merged.<|||||>@layjain The CI is currently running under your profile and not the Hugging Face profile, and as such our tests are mostly not run (there should be 22 checks here). If you rebase your PR on main you will see a new check failing (we added a fix to detect this recently). |
transformers | 20,628 | closed | `past_time_features` attribute for TimeSeriesTransformer is not optional | ### System Info
Hello,
I am trying to use `TimeSeriesTransformer` with `past_time_features=None` but I don't see anything in the code taking into account when this parameter is not defined, for example inside the method `create_network_inputs`:
https://github.com/huggingface/transformers/blob/v4.25.1/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py#L1566
```
# time feature
time_feat = (
torch.cat(
(
past_time_features[:, self._past_length - self.config.context_length :, ...],
future_time_features,
),
dim=1,
)
if future_values is not None
else past_time_features[:, self._past_length - self.config.context_length :, ...]
)
```
We should either update the documentation to make it mandatory, or update `create_network_inputs` and construct the inputs differently if `past_time_features` is not present.
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
# imports added for completeness; they were implicit in the original snippet
import numpy as np
from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerModel
number_qty = 3
number_features = 100
prediction_length = 10
context_length = 10
nrows = 15
static_real_features = np.zeros((nrows, 100))
future_values = ...
past_values = ...
configuration = TimeSeriesTransformerConfig(input_size=number_qty,
num_static_real_features=number_features,
prediction_length=prediction_length,
context_length=context_length,
# past_time_features
)
model = TimeSeriesTransformerModel(configuration)
# model
model.forward(past_values,
static_real_features=static_real_features,
future_values=future_values,
past_time_features=None,
past_observed_mask=None,
static_categorical_features=None
)
```
### Expected behavior
We expect the `forward` method to works without past_time_features defined | 12-06-2022 19:18:07 | 12-06-2022 19:18:07 | cc @kashif and @NielsRogge <|||||>Thank you for the issue! My feeling was that, since the transformer is a permutation equivariant layer, time features should be mandatory. For the case when you do not have date times, you can add positional encoding of a size of your choosing.
What are your thoughts about this @simonMoisselin ?
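(Concretely, the suggestion above amounts to something like the following sketch, assuming `num_time_features=1` in the config and reusing the names from the reproduction snippet:)

```python
import torch

past_length = configuration.context_length + max(configuration.lags_sequence)
batch_size = past_values.shape[0]

# a simple positional "age" feature in place of real date-time covariates
past_time_features = (
    torch.arange(past_length, dtype=torch.float32).repeat(batch_size, 1).unsqueeze(-1)
)
future_time_features = (
    torch.arange(past_length, past_length + configuration.prediction_length, dtype=torch.float32)
    .repeat(batch_size, 1)
    .unsqueeze(-1)
)
```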
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Is it possible to reopen this.
I'm personally in favor of having the interface support both datasets that have {past,future}_time_features and those that do not contain them.
However, it's not my call. But would it be possible to update the documentation if the behaviour is not changed? In its current state it is misleading: https://huggingface.co/docs/transformers/model_doc/time_series_transformer#transformers.TimeSeriesTransformerModel.forward.past_observed_mask
<|||||>thanks, @nathanhack just to confirm, the discussion above was about the `past_time_features` being required and the like you have is for the observation mask... can you kindly clarify?<|||||>Correct. The link I gave was a mistake. past_observation_mask is right below past_time_feature. When I copied the link my page was correctly displaying past_time_features. Which is clearly wrong. Thank you for catching it and I'm sorry it was confusing. The correct link should have been:
https://huggingface.co/docs/transformers/model_doc/time_series_transformer#transformers.TimeSeriesTransformerModel.forward.past_time_features<|||||>Not that this is the right place but, but lags_sequence also says it's optional but it can't be None and can't be an empty list as it will cause an exception on the following line: https://github.com/huggingface/transformers/blob/75a208ef66c0176fc12a4c98922728ced5befbf9/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py#L1445
self.config.context_length + max(self.config.lags_sequence)<|||||>@kashif we should probably make lags sequence optional as these are just additional features<|||||>so `lags_sequence` is set to optional since if you do not specify it, it will default to a pre-specified array, namely [1, 2, 3, 4, 5, 6, 7]. I believe you can just ignore this option, and everything should work... It serves to offset the input so that we train to predict the next time step, as well as the "output dim size" of a "token embedding" so that the input vectors have some dimension to them (especially in the univariate setting) and also allows us to trade-off sequence length with feature size. If you do not want lags you can set the `lags_sequence=[1]` for example.<|||||>> Correct. The link I gave was a mistake. past_observation_mask is right below past_time_feature. When I copied the link my page was correctly displaying past_time_features. Which is clearly wrong. Thank you for catching it and I'm sorry it was confusing. The correct link should have been:
>
> https://huggingface.co/docs/transformers/model_doc/time_series_transformer#transformers.TimeSeriesTransformerModel.forward.past_time_features
Fixed in PR #21020 |
transformers | 20,627 | closed | When Pillow is not installed, importing from transformers.image_transforms raises an unclear NameError | ### System Info
- `transformers` version: 4.25.1
- Platform: macOS-11.6.8-x86_64-i386-64bit
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@amyeroberts @NielsRogge
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
1. Create and activate clean Python environment
```sh
python3 -m venv venv
source venv/bin/activate
```
2. Install `transformers` and its direct dependencies
```sh
pip install transformers
```
3. Attempt to import `transformers.image_transforms` or [one of its publicly-documented members](https://huggingface.co/docs/transformers/internal/image_processing_utils#transformers.image_transforms.center_crop)
```sh
python -c 'from transformers.image_transforms import center_crop'
```
4. Encounter a `NameError`
```
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/miliu/venv/lib/python3.10/site-packages/transformers/image_transforms.py", line 51, in <module>
def to_channel_dimension_format(image: np.ndarray, channel_dim: Union[ChannelDimension, str]) -> np.ndarray:
NameError: name 'ChannelDimension' is not defined
```
### Expected behavior
Rather than a `NameError` on import (caused by [`to_channel_dimension_format()`'s signature type annotation containing `ChannelDimension`](https://github.com/huggingface/transformers/blob/7586a1a367f5974e099e1be2fa8a751aa766179f/src/transformers/image_transforms.py#L51), which is [conditionally imported](https://github.com/huggingface/transformers/blob/7586a1a367f5974e099e1be2fa8a751aa766179f/src/transformers/image_transforms.py#L29) only [when `PIL` is available](https://github.com/huggingface/transformers/blob/7586a1a367f5974e099e1be2fa8a751aa766179f/src/transformers/utils/import_utils.py#L566-L567)), I would've expected something like `transformers.rescale`'s user experience, where it helpfully recommends installing `Pillow` when one attempts to use it:
```python
>>> from transformers import rescale
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
>>> rescale()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/miliu/venv/lib/python3.10/site-packages/transformers/utils/dummy_vision_objects.py", line 14, in rescale
requires_backends(rescale, ["vision"])
File "/Users/miliu/venv/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 997, in requires_backends
raise ImportError("".join(failed))
ImportError:
rescale requires the PIL library but it was not found in your environment. You can install it with pip:
`pip install pillow`. Please note that you may need to restart your runtime after installation.
``` | 12-06-2022 19:04:24 | 12-06-2022 19:04:24 | cc @amyeroberts <|||||>Thanks for raising @convoliution ! I agree both the imports and error message could be improved and your suggestion.
It's highlighted another thing that needs to be addressed: adding center_crop to the `transformers` [module init](https://github.com/huggingface/transformers/blob/4f78bcb2871e0c51bec55edb87aadcaedce58069/src/transformers/__init__.py#L745). (I realised that if we import `rescale` using `from transformers.image_transforms import rescale` we get the same `ChannelDimension` error).
I'll open up PRs to add to the init and to address the imports issue. <|||||>^--- the second PR will need to be merged in to be fully resolved. <|||||>Closing as the issue is now resolved: all image transforms can be safely imported and raise a clear error if Pillow is not installed in the environment if required.
@convoliution Thanks again for raising. One change to note is that some transforms that were previously importable directly from `transformers` can now only be imported through the `image_transforms` module e.g.:
`from transformers.image_transforms import rescale` c.f. #20704
|
transformers | 20,626 | closed | add in layer tf clip text tokenizer | # What does this PR do?
- Adds in layer `TFCLIPTokenizer` to enable serialization and serving it with TF Serving
Addresses first step of #19992
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. -> https://github.com/huggingface/transformers/issues/19992
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-06-2022 19:02:42 | 12-06-2022 19:02:42 | Just need to figure out where to append `eos_token` and `bos_token` within the tokenizers.<|||||>cc @Rocketknight1 so it's on your radar when the PR is ready :-) <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20626). All of your documentation changes will be reflected on that endpoint.<|||||>Clip `</w>` formatting is too cursed, I'm thinking of jump ship and do the tokenizer from Roberta instead haha.<|||||>I was gonna jump ship, but then this absolute beast @pedrogengo came in and found a magic way to make it work. Making him a coauthor of the PR bc of that.
We are not supporting batches (yet).<|||||>We now have batch tokenization implemented and working :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>We are still working on it!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,625 | closed | Fix donut image processor | # What does this PR do?
This PR addresses failing integration tests for the Donut image processor which involves four main changes:
* Resolve bug where `size` wasn't passed to `do_align_axis`
* Remove a bug in the `get_resize_output_image_size` function which wouldn't take account of `max_size` ([inherited from previous resize without fixing](https://github.com/huggingface/transformers/blob/7586a1a367f5974e099e1be2fa8a751aa766179f/src/transformers/image_utils.py#L451))
* Update logic for getting output size in `thumbnail` method - ensuring the image dimensions are never increased.
* Update test values to reflect changes in resizing logic for thumbnail creation - see notes below.
### Changing resizing logic for `thumbnail` method
The DonutFeatueExtractor used the [Pillow thumbnail functionality](https://github.com/huggingface/transformers/blob/6cc06d17394f5715cdf2d13a1ef7680bedaee9e2/src/transformers/models/donut/feature_extraction_donut.py#LL109C18-L109C18) to resize images which was [replaced with reusing `resize`](https://github.com/huggingface/transformers/blob/bf9a5882a7125a6050aaad0f52257f07df062d6a/src/transformers/models/donut/image_processing_donut.py#L226) in the image_transforms library. This was done primarily as `image.thumbnail` modifies in place and uses [Pillow's resize](https://github.com/python-pillow/Pillow/blob/1e28c8cffd8492af6bf5df2045e7ffe08b124033/src/PIL/Image.py#LL2538C13-L2538C13) with some additional logic for calculating the output size. Unlike `resize` which will resize an image to the requested `(height, width)`, `thumbnail` will produce an image which is no larger than the original image or requested size i.e. it will scale down an image preserving the aspect ratio c.f. [Pillow docs](https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.Image.thumbnail).
This is a similar behaviour to torchvision when resizing:
* the shortest image edge is resized to `size` (int for torchvision, `min(requested_height, requested_width)` for Pillow)
* the other edge is resized to preserve the aspect ratio
* if the longest edge > `max_size`, the longest edge is resized to `max_size` and the shortest edge resized to preserve the aspect ratio.
The calculation of the other dimension to preserve the aspect ratio is slightly different between the libraries. In pytorch the length of the edge is found [using `int` to round](https://github.com/pytorch/vision/blob/511924c1ced4ce0461197e5caa64ce5b9e558aab/torchvision/transforms/functional.py#L383), whereas Pillow [rounds to the value which produces an aspect ratio closest to the original image](https://github.com/python-pillow/Pillow/blob/1e28c8cffd8492af6bf5df2045e7ffe08b124033/src/PIL/Image.py#L2505). The torchvision resizing logic is replicated in our image transforms library [here](https://github.com/huggingface/transformers/blob/ae1cffaf3cd42d0ab1d7529e3b3118725bca0bcf/src/transformers/image_transforms.py#L155).
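To make the behaviour concrete, here is a minimal sketch of the torchvision-style output size computation now used (illustrative only; the function and variable names are mine, not the library's):

```python
def get_thumbnail_output_size(height, width, shortest_edge, max_size=None):
    """Shortest edge -> `shortest_edge`, the other edge keeps the aspect ratio,
    optionally capped by `max_size` (torchvision-style rounding with `int`)."""
    short, long = (height, width) if height <= width else (width, height)
    new_short = min(shortest_edge, short)  # never upscale when creating a thumbnail
    new_long = int(new_short * long / short)
    if max_size is not None and new_long > max_size:
        new_long = max_size
        new_short = int(max_size * short / long)
    return (new_short, new_long) if height <= width else (new_long, new_short)
```

Pillow's `thumbnail` differs only in how the aspect-ratio-preserving edge is rounded, which is why the two approaches can disagree by one pixel.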
In the test [`tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py::DonutModelIntegrationTest::test_inference_docvqa`](https://github.com/huggingface/transformers/blob/6cc06d17394f5715cdf2d13a1ef7680bedaee9e2/tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py#L816), the input image to the `thumbnail` method has dimension `(3713, 1920)`. The requested size is `(2560, 1920)`. `image.thumbnail` will resize to `(2560, 1373)` and our resizing logic (matching torchvision) will resize to `(2560, 1374)`.
Since the torchvision resizing logic is more consistent with the rest of the library, since Donut is the only model in the library that used the Pillow thumbnail functionality, and since Donut is more experimental than other models, I considered this to be an acceptable change.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 12-06-2022 17:39:25 | 12-06-2022 17:39:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks @amyeroberts . LGTM, but I don't see any change related to
>
> ```
> Resolve bug where size wasn't passed to do_align_axis
> ```
>
> Do I miss anything?
Nope - I've pushed it now :) <|||||>@sgugger @ydshieh This also uncovered another sneaky bug when resizing:
* When resizing, the image is converted to `PIL.Image.Image` from numpy. The channel dimension format of the input image before resizing is also inferred.
* When the image is converted back to numpy the image is always in `"ChannelDimension.LAST"` format
* A final `to_channel_dimension_format` call is made to make sure the output resized image is in the same channel dimension format as the input.
* In `to_channel_dimension_format`, the channel dimension format of the input image (here, the resized image) is inferred and compared to the requested format.
* If the `height` dimension is of size 3 or 1, then the format is incorrectly inferred as `ChannelDimension.FIRST`
* This resulted in images in the incorrect format being returned after resizing
For practical purposes, this doesn't cause an issue as it's very unlikely an image has a height dimension of 3. However, it results in flaky tests and is a bug.
I've added an optional `input_channel_dimension` argument to `to_channel_dimension_format` which resolves this and additional tests for our `resize` functionality which previously failed and now pass with this update. |
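As a toy illustration of why the inference is ambiguous (this is not the library code, just the failure mode described above):

```python
import numpy as np

# A channels-last image whose height happens to be 3 looks exactly like a
# channels-first image to a naive shape-based check:
image = np.zeros((3, 224, 3))  # (height=3, width=224, channels=3), channels-last

def naive_infer_format(img):
    if img.shape[0] in (1, 3):
        return "channels_first"
    if img.shape[-1] in (1, 3):
        return "channels_last"
    raise ValueError("Could not infer channel dimension format")

print(naive_infer_format(image))  # "channels_first", which is wrong for this array
```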
transformers | 20,624 | closed | Whisper doesn't compute positional embeddings properly when given batches of prompt tokens | ### System Info
v4.25.1 on M1 Mac with python 3.8
### Who can help?
@sanchit-gandhi @patrickvonplaten @anton-l
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When we want to run Whisper generation for a batch of samples with different prompt lengths (prefix tokens given to the decoder), positional embeddings for the decoder are improperly computed. It assumes all sequences have the same `past_key_values_length`, but this is not true in general.
Scenario:
`decoder_input_ids = [50361, 45431, 2584, 28682, 13, 50258, 50257, 50257]`
(`"<|startofprev|>Something completely irrelevant.<|startoftranscript|><|pad|><|pad|>"`)
`model.generate(input_features, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask)` will not give the correct output because, at the beginning of decoding, the pad tokens won't be taken into account, so the positional embedding will be off.
### Expected behavior
Instead of tracking `past_key_values_length`, it should use the attention mask to compute position ids. The current implementation is more based off of encoder-decoder architectures that would never do decoder prompting, but it should take more inspiration from decoder-only models to handle prompting. This is done for the Flax implementation in #20479 | 12-06-2022 17:34:14 | 12-06-2022 17:34:14 | cc @ArthurZucker <|||||>Thanks for opening this good issue 🤗 I'll have a proper look, I think you insight is pretty good. <|||||>I have similar issue while using whisper with `padding=True` and I got this error:
```
RuntimeError: The size of tensor a (359) must match the size of tensor b (1500) at non-singleton dimension 1
```
However, there isn't any issue if I use `padding=max_length`.<|||||>The padding in whisper should always be set to `max_length` , and you should not really modify it. We should probably prevent people from using just `True`. <|||||>@hannan72's issue is separate to what I'm describing. But yes, padding should always be `max_length` - the issue I'm describing arises as a result of pad tokens being added to shorter sequences in batches (and won't raise any errors - it's just that Whisper's handling of multiple sequence lengths under the hood is flawed and would be fixed by computing `position_ids` based off `attention_mask`).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Any update on this issue?<|||||>@samuelazran See https://github.com/huggingface/transformers/pull/21455 - feel free to run with that and give any fixes. My Flax PR also shows how to handle this.<|||||>> @samuelazran See #21455 - feel free to run with that and give any fixes. My Flax PR also shows how to handle this.
Thank you! I will test it.
Could you provide a code example of using prompts for training / inference?
I have an implementation but not sure yet:
https://discuss.huggingface.co/t/adding-prompt-context-to-whisper-with-huggingface-transformers/31070<|||||>just like @samuelazran, would really like to see the example and the #21455 in, being able to use 🤗 transformers directly is very helpful compared to using the external (original) library.<|||||>Prompting is an ongoing PR here: https://github.com/huggingface/transformers/pull/22496
Regarding #21455 - I think this should be handled by the aforementioned PR |
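For anyone landing here later, a minimal sketch of the attention-mask-based position ids suggested in this issue (this is the pattern used by decoder-only models such as GPT-2; the variable names are illustrative, not Whisper's actual code):

```python
import torch

decoder_attention_mask = torch.tensor([
    [1, 1, 1, 1, 1, 1, 0, 0],  # prompt padded with two pad tokens
    [1, 1, 1, 1, 1, 1, 1, 1],  # full-length prompt
])

position_ids = decoder_attention_mask.long().cumsum(-1) - 1
position_ids.masked_fill_(decoder_attention_mask == 0, 1)
# When generating with a cache, only the positions of the new tokens are needed,
# e.g. position_ids[:, -1:] for the latest step.
print(position_ids)
```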
transformers | 20,623 | closed | Update summarization `run_pipeline_test` | # What does this PR do?
Update summarization `run_pipeline_test`.
A few more models can handle longer sequences, and won't give the expected exception at this place:
```python
with self.assertRaises(Exception):
outputs = summarizer("This " * 1000)
```
So we need to ignore those model config classes. | 12-06-2022 15:53:44 | 12-06-2022 15:53:44 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,622 | closed | Improved logo display in dark mode | ### Feature request

Use [github's](https://github.blog/changelog/2021-11-24-specify-theme-context-for-images-in-markdown/) features to improve how the logo is displayed
### Motivation
...
### Your contribution

 | 12-06-2022 13:52:45 | 12-06-2022 13:52:45 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,621 | closed | fix past_key_values in GPTNeoXForCausalLM.prepare_inputs_for_generation | # What does this PR do?
@gante @sgugger
Fixes `past_key_values` in `GPTNeoXForCausalLM.prepare_inputs_for_generation`. Passing `past_key_values` to `model.generate` had no effect whatsoever, since the argument was swallowed. Described in Issue #20347 (note that the validation bug was fixed in PR #20353, but the argument was still not passed along to the forward method)
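Roughly speaking, the fix just makes sure the cache is forwarded to the model inputs; an illustrative sketch (not the exact diff) of the intended behaviour:

```python
# Sketch only: the key point is that `past_key_values` must end up in the returned dict
# so that it reaches `forward`, and that only the last token is fed when a cache is present.
def prepare_inputs_for_generation(self, input_ids, past_key_values=None, attention_mask=None, **kwargs):
    if past_key_values is not None:
        input_ids = input_ids[:, -1:]
    return {
        "input_ids": input_ids,
        "attention_mask": attention_mask,
        "past_key_values": past_key_values,  # previously this was dropped, so generate() ignored it
    }
```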
The attached commit fixes the issue on my end, i.e. I now get different results when passing `past_key_values` to `generate`, as opposed to before. | 12-06-2022 13:37:01 | 12-06-2022 13:37:01 | _The documentation is not available anymore as the PR was closed or merged._<|||||>After doing some more testing, I noticed another issue that might or might not be a bug. Currently, it's not possible to use anything else than `1` for `num_return_sequences`. Here is a MWE:
```
import torch
from transformers import GPTNeoXForCausalLM, AutoTokenizer
# Load model
s = "NinedayWang/PolyCoder-160M"
model = GPTNeoXForCausalLM.from_pretrained(s)
tokenizer = AutoTokenizer.from_pretrained(s, pad_token="<|PAD|>")
# Create random prompt
N_TOKENS = 100
BATCH_SIZE=1
NUM_RETURN_SEQUENCES=8
pkv = torch.rand(
(
BATCH_SIZE, # batch size
N_TOKENS, # number of tokens
2 * model.config.num_hidden_layers,
model.config.num_attention_heads,
model.config.hidden_size // model.config.num_attention_heads
)
).permute([2, 0, 3, 1, 4]).split(2)
# Tokenize
enc = tokenizer("Hello world", return_tensors="pt")
enc["attention_mask"] = torch.cat((torch.ones((1, N_TOKENS)), enc["attention_mask"]), dim=1)
# Generate
print(
tokenizer.decode(
model.generate(
**enc,
past_key_values=pkv,
max_new_tokens=100,
pad_token_id=tokenizer.pad_token_id,
do_sample=True,
num_return_sequences=NUM_RETURN_SEQUENCES
)[0],
skip_special_tokens=True
)
)
```
Leads to
```
Traceback (most recent call last):
File "stuff/test.py", line 32, in <module>
num_return_sequences=2
File "/home/st/st_us-052400/st_st175337/conda/envs/thesis/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/pfs/data5/home/st/st_us-052400/st_st175337/thesis/transformers/src/transformers/generation/utils.py", line 1581, in generate
**model_kwargs,
File "/pfs/data5/home/st/st_us-052400/st_st175337/thesis/transformers/src/transformers/generation/utils.py", line 2538, in sample
output_hidden_states=output_hidden_states,
File "/home/st/st_us-052400/st_st175337/conda/envs/thesis/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/pfs/data5/home/st/st_us-052400/st_st175337/thesis/transformers/src/transformers/models/gpt_neox/modeling_gpt_neox.py", line 663, in forward
return_dict=return_dict,
File "/home/st/st_us-052400/st_st175337/conda/envs/thesis/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/pfs/data5/home/st/st_us-052400/st_st175337/thesis/transformers/src/transformers/models/gpt_neox/modeling_gpt_neox.py", line 552, in forward
output_attentions=output_attentions,
File "/home/st/st_us-052400/st_st175337/conda/envs/thesis/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/pfs/data5/home/st/st_us-052400/st_st175337/thesis/transformers/src/transformers/models/gpt_neox/modeling_gpt_neox.py", line 325, in forward
output_attentions=output_attentions,
File "/home/st/st_us-052400/st_st175337/conda/envs/thesis/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/pfs/data5/home/st/st_us-052400/st_st175337/thesis/transformers/src/transformers/models/gpt_neox/modeling_gpt_neox.py", line 148, in forward
key = torch.cat((past_key, key), dim=-2)
RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 1 but got size 2 for tensor number 1 in the list.
```
Is that expected behavior? I can fix it by creating multiple prompts (see below) per input, but it seems unintuitive, and I don't see anything about it in the docs. Perhaps the docs should simply mention that.
```
pkv = torch.rand(
(
BATCH_SIZE * NUM_RETURN_SEQUENCES, # <--- expand the batch size
N_TOKENS, # number of tokens
2 * model.config.num_hidden_layers,
model.config.num_attention_heads,
model.config.hidden_size // model.config.num_attention_heads
)
).permute([2, 0, 3, 1, 4]).split(2)
```
<|||||>Hey @ValeKnappich 👋
Thank you for the addition, I really think we should do this for all models for a better interface. In fact, the argument should be `past_key_values` and not `past`, [as mentioned in the original issue](https://github.com/huggingface/transformers/issues/20347#issuecomment-1346255761), but that's a deeper change. This PR is a quick fix for the problem, so I approve it.
As for `num_return_sequences`, let's open a new issue for it to avoid mixing too many things here :D<|||||>Hi, has this issue been resolved? I tried running the code snippet above:
```
import torch
from transformers import GPTNeoXForCausalLM, AutoTokenizer
# Load model
s = "NinedayWang/PolyCoder-160M"
model = GPTNeoXForCausalLM.from_pretrained(s)
tokenizer = AutoTokenizer.from_pretrained(s, pad_token="<|PAD|>")
# Create random prompt
N_TOKENS = 100
BATCH_SIZE=1
NUM_RETURN_SEQUENCES=8
pkv = torch.rand(
(
BATCH_SIZE, # batch size
N_TOKENS, # number of tokens
2 * model.config.num_hidden_layers,
model.config.num_attention_heads,
model.config.hidden_size // model.config.num_attention_heads
)
).permute([2, 0, 3, 1, 4]).split(2)
# Tokenize
enc = tokenizer("Hello world", return_tensors="pt")
enc["attention_mask"] = torch.cat((torch.ones((1, N_TOKENS)), enc["attention_mask"]), dim=1)
# Generate
print(
tokenizer.decode(
model.generate(
**enc,
past_key_values=pkv,
max_new_tokens=100,
pad_token_id=tokenizer.pad_token_id,
do_sample=True,
num_return_sequences=NUM_RETURN_SEQUENCES
)[0],
skip_special_tokens=True
)
)
```
and it returned with
```
RuntimeError: The size of tensor a (101) must match the size of tensor b (102) at non-singleton dimension 3
```
Is this a different error?<|||||>@ardywibowo the script I paste below works. But keep in mind that it is probably not doing what you expect: when `past_key_values` is passed, only the latest input token is considered (the all other previous tokens are supposed to be encoded in `past_key_valies`) -- in other words, "Hello" in "Hello world" is ignored when generating the next token, despite being present in the output text.
To understand why, you would have to dive into [this blog post](https://jalammar.github.io/illustrated-gpt2/) and into our `generate` code :)
____________________________
```py
import torch
from transformers import GPTNeoXForCausalLM, AutoTokenizer
# Load model
s = "NinedayWang/PolyCoder-160M"
model = GPTNeoXForCausalLM.from_pretrained(s)
tokenizer = AutoTokenizer.from_pretrained(s, pad_token="<|PAD|>")
# Create random prompt
N_TOKENS = 100
BATCH_SIZE=1
pkv = torch.rand(
(
BATCH_SIZE, # batch size
N_TOKENS, # number of tokens
2 * model.config.num_hidden_layers,
model.config.num_attention_heads,
model.config.hidden_size // model.config.num_attention_heads
)
).permute([2, 0, 3, 1, 4]).split(2)
# Tokenize
enc = tokenizer("Hello world", return_tensors="pt")
enc["attention_mask"] = torch.ones((1, N_TOKENS+1))
# Generate
print(
tokenizer.decode(
model.generate(
**enc,
past_key_values=pkv,
max_new_tokens=100,
pad_token_id=tokenizer.pad_token_id,
do_sample=True,
)[0],
skip_special_tokens=True
)
)
``` |
transformers | 20,620 | closed | Whisper Timestamp processor and prediction | # What does this PR do?
This will add support for correct `timestamp` prediction during generation, and should update the ASR pipeline to use these timestamps when generating on longer audio files.
The rough idea is that when the timestamps are generated, the model is more *aware* of the timing and generates `<|endoftext|>` tokens to fill in the silence. So the token-to-time mapping is approximately a linear regression, and provides valuable information for matching the beginning and end of chunks of a longer audio file.
By using both the fact that **timestamp** tokens always come in pairs when separating two sentences, and the approximate **token-to-time** mapping (see [here](https://github.com/openai/whisper/blob/main/whisper/transcribe.py#L134)), we should improve performance and also get timestamp predictions.
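As a reference for reviewers, a minimal sketch of the token-to-time mapping being relied on (assuming, as in the OpenAI reference implementation, that timestamp tokens are laid out contiguously after the text vocabulary and each step corresponds to 0.02 s):

```python
TIME_PRECISION = 0.02  # seconds per timestamp token step

def timestamp_token_to_seconds(token_id: int, timestamp_begin: int, chunk_offset: float = 0.0) -> float:
    """`timestamp_begin` is the id of the <|0.00|> token (model dependent);
    `chunk_offset` is the start time of the current chunk within the full audio."""
    return (token_id - timestamp_begin) * TIME_PRECISION + chunk_offset
```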
| 12-06-2022 12:40:40 | 12-06-2022 12:40:40 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Looking forward to this PR<|||||>Example output, HF vs openai :
Model : `openai/whisper-medium`.
```python
{'chunks': [{'text': " Je m'appelle Claude.", 'timestamp': (0.0, 2.0)},
{'text': ' Je te coupe, Plow.', 'timestamp': (2.0, 4.0)},
{'text': " Let's just try it again.", 'timestamp': (8.0, 10.0)},
{'text': " Je m'appelle Claude.", 'timestamp': (10.0, 12.0)},
{'text': ' Je te plie, Mlew.', 'timestamp': (12.0, 14.0)},
{'text': " Huh. It's not quite what I'm saying.",'timestamp': (16.0, 20.0)},
{'text': ' Really?', 'timestamp': (20.0, 22.0)},
{'text': ' Sounds exactly the same to me.','timestamp': (22.0, 24.0)},
{'text': ' It does? Really?', 'timestamp': (24.0, 26.0)},
{'text': ' Yeah.', 'timestamp': (26.0, 28.0)},
{'text': " Let's try it again. Really listen.",'timestamp': (29.0, 30.88)},
{'text': ' Got it.', 'timestamp': (30.88, 32.48)},
{'text': " Je m'appelle Claude.",'timestamp': (32.480, 35.24)},
{'text': ' Je te flou-flee.', 'timestamp': (35.24, 37.28)},
{'text': ' Oh, mon Dieu.', 'timestamp': (39.28, 40.28)},
{'text': ' Oh, de fouf.', 'timestamp': (40.28, 41.88)},
{'text': " Je m'appelle Claude.",'timestamp': (43.48, 46.6)},
{'text': ' Je te call blue.', 'timestamp': (46.6, 48.24)},
{'text': ' No!', 'timestamp': (48.24, 50.44)},
{'text': ' Okay, maybe if we just break it down.','timestamp': (50.44, 53.28)},
{'text': " Okay, let's just try it one syllable at a time.",'timestamp': (53.28, 56.08)},
{'text': ' Okay, so repeat after me.', 'timestamp': (56.08, 58.08)},
{'text': ' Pardon me.', 'timestamp': (58.0, 59.0)},
{'text': ' Je...', 'timestamp': (59.0, 60.0)},
{'text': ' Je...', 'timestamp': (60.0, 61.0)},
{'text': ' Ma...', 'timestamp': (61.0, 62.0)},
{'text': ' Ma...', 'timestamp': (62.0, 63.0)},
{'text': ' Pelle.', 'timestamp': (63.0, 64.0)},
{'text': ' Pelle.', 'timestamp': (64.0, 65.0)},
{'text': ' Great!', 'timestamp': (65.0, 66.0)},
{'text': ' Okay, faster.', 'timestamp': (66.0, 67.0)},
{'text': ' Je...', 'timestamp': (67.0, 68.0)},
{'text': ' Je...', 'timestamp': (68.0, 69.0)},
{'text': ' Ma...', 'timestamp': (69.0, 70.0)},
{'text': ' Pelle.', 'timestamp': (70.0, 71.0)},
{'text': ' Pelle.', 'timestamp': (71.0, 72.0)},
{'text': " Je m'appelle.", 'timestamp': (72.0, 73.0)},
{'text': ' Mais pour pour?', 'timestamp': (73.0, 74.0)},
{'text': " It's too hard.", 'timestamp': (74.0, 75.0)},
{'text': " I can't teach you.", 'timestamp': (75.0, 76.0)},
{'text': ' What are you doing?', 'timestamp': (76.0, 77.0)},
{'text': ' I have to go before I put your head through a wall.','timestamp': (77.0, 78.0)},
{'text': " Don't go!", 'timestamp': (78.0, 79.0)},
{'text': " Don't go!", 'timestamp': (79.0, 80.0)},
{'text': ' I need you!', 'timestamp': (80.0, 81.0)},
{'text': ' My addition is tomorrow!', 'timestamp': (81.0, 82.0)},
{'text': ' Cha-blu-bla!', 'timestamp': (82.0, 83.0)},
{'text': ' Mille-la-pille!', 'timestamp': (83.0, 84.0)},
{'text': ' Oum-bla!', 'timestamp': (84.0, 85.0)},
{'text': ' Hola!', 'timestamp': (82.56, 83.4)}],
'text': " Je m'appelle Claude. Je te coupe, Plow. Let's just try it again. Je "
"m'appelle Claude. Je te plie, Mlew. Huh. It's not quite what I'm "
'saying. Really? Sounds exactly the same to me. It does? Really? '
"Yeah. Let's try it again. Really listen. Got it. Je m'appelle "
"Claude. Je te flou-flee. Oh, mon Dieu. Oh, de fouf. Je m'appelle "
'Claude. Je te call blue. No! Okay, maybe if we just break it down. '
"Okay, let's just try it one syllable at a time. Okay, so repeat "
'after me. Pardon me. Je... Je... Ma... Ma... Pelle. Pelle. Great! '
"Okay, faster. Je... Je... Ma... Pelle. Pelle. Je m'appelle. Mais "
"pour pour? It's too hard. I can't teach you. What are you doing? I "
"have to go before I put your head through a wall. Don't go! Don't "
'go! I need you! My addition is tomorrow! Cha-blu-bla! '
'Mille-la-pille! Oum-bla! Hola! Boo.'}
```
```
[(" Je m'appelle Claude.", 0.0, 2.0),
(' Je te coupe, Plow.', 2.0, 4.0),
(" Let's just try it again.", 8.0, 10.0),
(" Je m'appelle Claude.", 10.0, 12.0),
(' Je te plie, Mlew.', 12.0, 14.0),
(" Huh. It's not quite what I'm saying.", 16.0, 20.0),
(' Really?', 20.0, 22.0),
(' Sounds exactly the same to me.', 22.0, 24.0),
(' It does? Really?', 24.0, 26.0),
(' Yeah.', 26.0, 28.0),
(" Let's try it again. Really listen.", 28.0, 30.0),
(' Got it.', 30.0, 32.0),
(" Je m'appelle Claude.", 32.0, 34.0),
(' Je te plie, Mlew.', 34.0, 36.0),
(' Oh, mon Dieu.', 38.0, 40.0),
(' Oh, de fouf.', 40.0, 42.0),
(" Je m'appelle Claude.", 42.0, 44.0),
(' Je te coupe, Mlew.', 44.0, 46.0),
(' No!', 46.0, 48.0),
(' Okay.', 48.0, 50.0),
(' Maybe if we just break it down.', 50.0, 52.0),
(" Okay, let's just try it one syllable at a time.", 52.0, 54.0),
(' Okay, so repeat after me.', 54.0, 56.0),
(" Je m'appelle.", 56.0, 60.0),
(' Great. Okay, faster.', 60.0, 62.0),
(" Je m'appelle.", 62.0, 64.0),
(" Je m'appelle.", 64.0, 66.0),
(' Mais pour pour?', 66.0, 68.0),
(" It's too hard. I can't teach you.", 70.0, 72.0),
(' What are you doing?', 72.0, 74.0),
(' I have to go before I put your head through a wall.', 74.0, 76.0),
(" Don't go. I need you.", 76.0, 78.0),
(' My audition is tomorrow.', 78.0, 80.0),
(' Cha-blah-blah.', 80.0, 82.0),
(' Mela-pi.', 82.0, 84.0),
(' Hola!', 84.0, 86.0),
(' Boo!', 86.0, 114.0)]
```
Note that the differences in the text are related to the logit processors that they updated. But overall it is very similar, but 3x faster 😉 <|||||>Looking close!!!
🙏🏾🤞🏾 this review gets pushed through!! <|||||>> Looking close!!! 🙏🏾🤞🏾 this review gets pushed through!!
I'm too thirsty lol. Love yall and appreciate all the work being done! <|||||>> > Looking close!!! 🙏🏾🤞🏾 this review gets pushed through!!
>
> I'm too thirsty lol. Love yall and appreciate all the work being done!
@Narsil 🫠👀<|||||>🙌🏾<|||||>Really appreciate the constant updates to get this finalized! Thanks! <|||||>This should just need a little code cleaning / documenting and will be good for a final review! <|||||>Wow! That was a lot of refactoring. Ready for a final review @Narsil <|||||>@TheExGenesis the current implementation should not really be different with the `Trie` approach. Problem with the `Trie` was that it does not really keep track of the longest, but rather the first longest common sequence. It also assumed that the entire sequence had to be present (an extra loop would have had to be added).
We can still discuss cases with a term appearing twice, in the current implementation the last occurence would be chosen for merge. Do you have a specific example in mind? <|||||>@ArthurZucker I'm choosing the first occurrence rather than the last one and it's working well. Otherwise, if an expression is the actual prefix, and occurs later in the sequence, the beginning of the sequence gets eaten. Also, I think you should be discounting stride_right as well when incrementing chunk time. I apologize for not making specific code recommendations right now, I'm a little short on time and working in my own messy environment.<|||||>I think it is pretty random and would need a heuristic for small sequences. If you merge on a single term, you should probably be using just a little bit more chunks length. Have you tried both versions? 😉 <|||||>> I think it is pretty random and would need a heuristic for small sequences. If you merge on a single term, you should probably be using just a little bit more chunks length. Have you tried both versions? 😉
I'm sorry I don't understand, are you responding to the first or second point? If to the second point, you really want the timestamps to be accurate otherwise they won't match up with the audio.<|||||>No I was talking about the first point, did you try taking the last occurence as well? Just wondering if you have some kind of experimental benchmark on this.
The `stride_right` is not used, based on [this](https://huggingface.co/blog/asr-chunking), the stride right is part of the speech that is disregarded. It is not very intuitive, but basically the stride right does not influence the beginning time of the next sequence.<|||||>> No I was talking about the first point, did you try taking the last occurence as well? Just wondering if you have some kind of experimental benchmark on this.
>
> The `stride_right` is not used, based on [this](https://huggingface.co/blog/asr-chunking), the stride right is part of the speech that is disregarded. It is not very intuitive, but basically the stride right does not influence the beginning time of the next sequence.
Yeah I tried taking the last occurrence, it ate like 10s of audio, and then I changed it.
re stride_right - [chunk_iter does use stride_left and stride_right ](https://github.com/huggingface/transformers/blob/15573920bccf879d621e99e21367e216017adf7d/src/transformers/pipelines/automatic_speech_recognition.py#L59), and I verified this empirically, the timestamps are only correct when I take both left and right strides into account. <|||||>Oh okay thanks. Would be awesome if you have a sample audio on which I could work on 😉
I think you are making a good point, `chunk_iter` is indeed stepping w.r.t the stride right! My bad. <|||||>> Oh okay thanks. Would be awesome if you have a sample audio on which I could work on 😉 I think you are making a good point, `chunk_iter` is indeed stepping w.r.t the stride right! My bad.
All good :) I've been using the first few minutes of [this podcast](https://www.iheart.com/podcast/256-global-voices-podcast-31091854/episode/bangladeshs-new-years-celebration-of-diversity-63758318/) |
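For readers following this thread, a deliberately simplified sketch of the overlap-based chunk merging being discussed (the actual implementation is more tolerant than an exact match and also uses the timestamp information):

```python
def merge_chunks(prev_tokens: list, next_tokens: list) -> list:
    """Keep the longest exact overlap between the end of the previous chunk
    and the start of the next one, then concatenate."""
    best = 0
    for length in range(1, min(len(prev_tokens), len(next_tokens)) + 1):
        if prev_tokens[-length:] == next_tokens[:length]:
            best = length  # keep the longest matching overlap
    return prev_tokens + next_tokens[best:]
```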
transformers | 20,619 | closed | Stripping last some words from output of model.generate() method | ### System Info
Hi All,
I am using the pretrained "gec-t5_small" model for grammar error correction, but the output from the model.generate() method is being truncated. Please can anyone suggest a solution for this?
**Code**
```
model = T5ForConditionalGeneration.from_pretrained("Unbabel/gec-t5_small", torch_dtype="auto")
tokenizer = T5Tokenizer.from_pretrained('t5-small',model_max_length=1024, torch_dtype="auto")
sentence = "600 character length sentence"
sentence = sentence.strip()
tokenized_sentence = tokenizer('gec: ' + sentence , max_length=1024, truncation=True, return_tensors='pt',add_special_tokens=True)
model_output = model.generate(
input_ids = tokenized_sentence.input_ids,
attention_mask = tokenized_sentence.attention_mask,
max_new_tokens = 1024,  # max_length also tried
use_cache=True,
num_beams=3,  # tried num_beams=5 as well
early_stopping=True,
do_sample=False,
)
corrected_sentence = tokenizer.decode(
model_output[0],
skip_special_tokens=True,
clean_up_tokenization_spaces=True
)
```
**Output**
sentence : "Manager Hi sir I am Srikanth so all ready the team is discussed to develop a new feature of XYZ so that's why we discussed a that new features about that project so in that case in my team to learn about the new projects topics to based on our discussion the project so it's take's time to learn to that features and to implement and as off you know my team members are very fast to learn about new things and in work purpose also my team members are very fast and so please give additional to complete about new features of our project and sorry for the delay but you give me additional time my team members are to give our more best about to our new features."
corrected_sentence : "Manager Hi sir I am Srikanth so all ready the team is discussed to develop a new feature of XYZ so that's why we discussed those new features about that project so in that case in my team to learn about the new projects topics to based on our discussion the project so it's take's time to learn to those features and to implement and as of you know my team members are very fast to learn about new things and in work purpose also my team members are very fast and so please give additional information to complete"
Please suggest a solution for this. Is there any way to increase the length of the output?
Note: the platform is Azure Databricks.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps:
1) Use the above-mentioned code.
2) Use the sentence as input.
3) Check that the output is missing some words from the end of the string.
### Expected behavior
Corrected should give fully inputted sentence with correct grammar. | 12-06-2022 12:03:18 | 12-06-2022 12:03:18 | cc @gante <|||||>Hi @ancil009 👋
I've spent some time debugging, and here's what's happening:
The output is shorter because the model outputs `eos_token_id` at that point. In other words, it thinks it is done. In fact, if I modify the code to ignore `eos_token_id`, the output is as follows.
```
Manager Hi sir I am Srikanth so all ready the team is discussed to develop a new feature of XYZ so that's why we discussed those new features about that project so in that case in my team to learn about the new projects topics to based on our discussion the project so it's take's time to learn to those features and to implement and as of you know my team members are very fast to learn about new things and in work purpose also my team members are very fast and so please give additional information to complete Manager Hi sir I am Srikanth. I am ready to discuss the team is discussed to develop a new feature of XYZ. The team is discussed. So, I am ready to discuss the new features about that project, so in that case in my team to learn about the new projects topics to based on our discussion the project so it's take's time to learn about the new features and in that case in the Manager Hi sir. Hi sir I am Sri Lankan. Hi sir. I am Sri Lankan. Hi sir. I am Sri Lankan. So, I am Sri Lankan. So, I am Sri Lankan. So, I am Sri Lankan. So, I am very fast and so please give me the team members are very fast and Manager Hi sir. Hi sir, Hi sir, Hi sir, Hi sir, Hi sir, Hi sir, I am Sri Lankan. Hi sir, I am Sri Lankan. Hi sir, I am Sri Lankan. Hi sir, I am Sri Lankan. Hi sir, I am Sri Lankan. Hi sir, I am Sri Lankan. So, I am very fast and so please give me additional information about the new features of XYZ. Manager Hi sir. [...]
```
In other words, the model starts repeating itself, which isn't helpful. From this, we can rule out code-related problems.
Despite accepting infinite sequences, T5 has a relatively small attention window. Depending on the dataset the model was fine-tuned with, it might also be biased towards short sequences. Can you try splitting your input into multiple (smaller) sequences, to see if it helps?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
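For reference, a minimal sketch of the sentence-splitting workaround suggested earlier in this thread (it assumes NLTK for splitting and reuses the `model`/`tokenizer`/`sentence` from the original snippet):

```python
from nltk.tokenize import sent_tokenize  # requires nltk.download("punkt") once; any splitter works

corrected_parts = []
for part in sent_tokenize(sentence):
    enc = tokenizer("gec: " + part, return_tensors="pt", truncation=True, max_length=1024)
    out = model.generate(**enc, max_new_tokens=256, num_beams=3)
    corrected_parts.append(tokenizer.decode(out[0], skip_special_tokens=True))

corrected_sentence = " ".join(corrected_parts)
```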
transformers | 20,618 | closed | Incremental Training on model of my domain which I have fine tuned using run_mlm | ### Model description
I have used run_mlm to fine-tune a model for my own domain. What I want now is to pass an incremental flag to run_mlm: if that flag is true, then instead of fine-tuning the base model I want to continue training the already trained model of my own domain that we obtained before. What changes do we need to make in run_mlm?
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_ | 12-06-2022 11:46:16 | 12-06-2022 11:46:16 | Please use the [forums](https://discuss.huggingface.co/) to ask such questions as we keep issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,617 | closed | pre | ### Model description
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_ | 12-06-2022 11:38:01 | 12-06-2022 11:38:01 | close |
transformers | 20,616 | closed | Cpmant test | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-06-2022 11:20:48 | 12-06-2022 11:20:48 | |
transformers | 20,615 | closed | return_tensors and return_text in TextGenerationPipeline don't work or partially work | ### System Info
- transformers version: 4.24.0
- python version: 3.8.11
### Who can help?
Library:
- Text generation: @patrickvonplaten, @Narsil, @gante
- Pipelines: @Narsil
Documentation: @sgugger, @stevhliu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. initialize TextGenerationPipeline, assume we call it `pipeline` below
2. run the following code snippets:
```python
results = pipeline(text_input, return_text=True, return_full_text=False, return_tensors=False)[0]
```
```python
results = pipeline(text_input, return_text=True, return_full_text=False, return_tensors=True)[0]
```
```python
results = pipeline(text_input, return_text=False, return_full_text=False, return_tensors=True)[0]
```
```python
results = pipeline(text_input, return_text=False, return_full_text=False, return_tensors=False)[0]
```
3. all four code snippets return the same dict with only one key, `generated_text`
### Expected behavior
1. when `return_text=True` and `return_tensors=False`, return a dict containing only one key, `generated_text`
2. when `return_text=False` and `return_tensors=True`, return a dict containing only one key, `generated_token_ids`
3. when `return_text=True` and `return_tensors=True`, return a dict contains both `generated_text` and `generated_token_ids` | 12-06-2022 10:58:26 | 12-06-2022 10:58:26 | This is perfectly normal as any value being set will choose its value in order.
boolean were a bad choice since some combinations don't mean anything.
https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/text_generation.py#L132<|||||>@Narsil, thanks for responding!
Well then I think there may be some misguidance in the [documentation](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.TextGenerationPipeline.__call__), which suggests that `return_text`, `return_full_text` and `return_tensors` are booleans defaulting to True or False; also, there is no parameter called `return_type` in `__call__`, but under the hood it's the real one that decides what will be returned. And the documentation also does not clearly explain the relationship between `return_text` and `return_tensors`.
And I remember that back in earlier versions (4.1x and earlier, I think) we could decide what would be returned (only `generated_text`, only `generated_token_ids`, or both of them) by using combinations of the three parameters.<|||||>I may be wrong, but I think `return_type` is an internal parameter, but you can still decide what to return with the other three parameters.
As far as I can tell, you can't return a combination of `generated_text` and `generated_token_ids`. You can only return one or the other, which I guess is why some of those combinations don't do anything. Would it help if there was a note in the docs about this?<|||||>> I may be wrong, but I think `return_type` is an internal parameter, but you can still decide what to return with the other three parameters.
>
> As far as I can tell, you can't return a combination of `generated_text` and `generated_token_ids`. You can only return one or the other, which I guess is why some of those combinations don't do anything. Would it help if there was a note in the docs about this?
@stevhliu thanks for replying! Now I'm clear on the functionality and relationship between `return_text` and `return_tensors`, and I think it would be clearer to more people if the documentation also pointed this out. 😄 <|||||>Yes the docs could use some polish here, maybe even soft deprecate `return_text` & co in favor of `return_type`.
Soft deprecate meaning we don't ever have to actually remove them, just don't make them as prominent since they are indeed confusing. |
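To make the precedence concrete, here is a rough sketch of how a single return type gets resolved from the three flags; this is illustrative only, not the library's exact code (which lives in the file linked above), but it matches the behaviour reported in the issue, where the first flag that is explicitly set wins:

```python
def resolve_return_type(return_full_text=None, return_tensors=None, return_text=None):
    if return_full_text is not None:
        return "full_text" if return_full_text else "new_text"  # -> "generated_text"
    if return_tensors:
        return "tensors"                                         # -> "generated_token_ids"
    if return_text is not None:
        return "new_text"                                        # -> "generated_text"
    return "full_text"
```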
transformers | 20,614 | closed | Can we add an augment `min_new_tokens` to the `generate` function? | ### Feature request
Can we add a new parameter `min_new_tokens` to the `generate` function to limit the length of newly generated tokens? The current parameter `min_length` limits the length of `prompt + newly generated tokens`, not the length of `newly generated tokens`.
### Motivation
We already have `max_new_tokens` to limit the max length of the generated tokens, i.e., `max_length = max_new_tokens + prompt`.
Why not add `min_new_tokens` to limit the min length of the generated tokens? (i.e., `min_length = min_new_tokens + prompt`)
I know this is just another piece of syntactic sugar, but it would be much more convenient to have this parameter.
### Your contribution
I can sumbit a PR. | 12-06-2022 09:50:31 | 12-06-2022 09:50:31 | the `min_length` already does what you want the `min_new_tokens` does under the hood, so personally I don't understand why you like to add a new `min_new_tokens` and change what `min_length` original mean. But add `min_new_tokens` as an alias of `min_length` may be a good idea (but not necessary).<|||||>For my understanding, in the current implementation, `min_length` set the length limit of `len(promt) + len(generated tokens)`
See the implementation of [`MinLengthLogitsProcessor`](https://github.com/huggingface/transformers/blob/28f3d431d4b8b74a458a5583297d5101483edb74/src/transformers/generation/logits_process.py#L119):
```python
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
cur_len = input_ids.shape[-1]
if cur_len < self.min_length:
scores[:, self.eos_token_id] = -float("inf")
return scores
```
`input_ids` in the previous code block refers to `prompt + generated tokens`. For example, see the implemented of some decoding method for how logits processors are called. (See [`beam_search()`](https://github.com/huggingface/transformers/blob/28f3d431d4b8b74a458a5583297d5101483edb74/src/transformers/generation/utils.py#L2818) or [`greedy_decoding()`](https://github.com/huggingface/transformers/blob/28f3d431d4b8b74a458a5583297d5101483edb74/src/transformers/generation/utils.py#L2298))
It will be more convenient if we set an argument `min_new_tokens` to **only** limit the length of `generated tokens`, not `prompt + generated tokens`.
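A minimal sketch of what such a processor could look like, mirroring `MinLengthLogitsProcessor` above but offsetting by the prompt length (the class name and details are my assumptions, not a final API):

```python
class MinNewTokensLogitsProcessor:
    """Block `eos` until at least `min_new_tokens` tokens have been generated
    beyond the prompt (whose length is recorded when generation starts)."""

    def __init__(self, prompt_length: int, min_new_tokens: int, eos_token_id: int):
        self.prompt_length = prompt_length
        self.min_new_tokens = min_new_tokens
        self.eos_token_id = eos_token_id

    def __call__(self, input_ids, scores):
        new_tokens = input_ids.shape[-1] - self.prompt_length
        if new_tokens < self.min_new_tokens:
            scores[:, self.eos_token_id] = -float("inf")
        return scores
```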
---
Please correct me if I have missed something
<|||||>cc @gante <|||||>> For my understanding, in the current implementation, `min_length` set the length limit of `len(promt) + len(generated tokens)`
>
> See the implementation of [`MinLengthLogitsProcessor`](https://github.com/huggingface/transformers/blob/28f3d431d4b8b74a458a5583297d5101483edb74/src/transformers/generation/logits_process.py#L119):
>
> ```python
> def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
> cur_len = input_ids.shape[-1]
> if cur_len < self.min_length:
> scores[:, self.eos_token_id] = -float("inf")
> return scores
> ```
>
> `input_ids` in the previous code block refers to `prompt + generated tokens`. For example, see the implemented of some decoding method for how logits processors are called. (See [`beam_search()`](https://github.com/huggingface/transformers/blob/28f3d431d4b8b74a458a5583297d5101483edb74/src/transformers/generation/utils.py#L2818) or [`greedy_decoding()`](https://github.com/huggingface/transformers/blob/28f3d431d4b8b74a458a5583297d5101483edb74/src/transformers/generation/utils.py#L2298))
>
> It will be more convenient if we set an argument `min_new_tokens` to **only** limit the length of `generated tokens`, not `prompt + generated tokens`.
>
> Please correct me if I have missed something
@silverriver yes you are right, its my mistake 😂
But I still think `min_new_tokens` and `min_length` should mean the same thing and also to `max_new_tokens` and `max_length` (though they are actually different now), because most people who use `model.generate` would think `min_length` means to 'at least generate min_length tokens' and `max_length` means to 'generate tokens no more than max_length'<|||||>@PanQiWei I agree, but I think it is impossible to change the current implementation of `max_length` and `min_length` for the conern of back compatibility.<|||||>Hey @silverriver @PanQiWei 👋
Having `min_new_tokens` would certainly be a welcome change, for the same reason as `max_new_tokens`. It is clear what it does, regardless of the type of model, where `min_tokens`/`max_tokens` are not. In the long run, we'd like to deprecate `min_tokens`/`max_tokens` in favor of `min_new_tokens`/`max_new_tokens`.
I'll have a look at your PRs :)<|||||>I have closed my original PR and made a new one (#21044 ) to avoid messing with other commits when I tried to rebase my change. |
transformers | 20,613 | closed | Ci-jukebox | # What does this PR do?
Just skips the 5b test as there is not enough RAM on the CI instance.
Keeping the test is important for local testing IMO | 12-06-2022 09:31:51 | 12-06-2022 09:31:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,612 | closed | Add DPT hybrid | # What does this PR do?
Adds DPT Hybrid support to `transformers`
Do not merge until #20550 gets merged
cc @NielsRogge @sgugger @patrickvonplaten @patil-suraj
| 12-06-2022 09:11:17 | 12-06-2022 09:11:17 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20612). All of your documentation changes will be reflected on that endpoint.<|||||>You should have opened your PR to go on the branch adding BiT and VitHybrid as the PR is not easy to review as it is.<|||||>Yes sorry :/ Let me open a PR on the other branch<|||||>Here is a much cleaner version of the PR: https://github.com/NielsRogge/transformers/pull/51 ;) <|||||>Closing in favor of https://github.com/huggingface/transformers/pull/20645 |
transformers | 20,611 | closed | ImportError: cannot import name 'TFGenerationMixin' from 'transformers.generation' | ### System Info
# Info
- `transformers` version: 4.25.1
- Platform: Linux-6.0.8-1-MANJARO-x86_64-with-glibc2.36
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
# Problem
`ImportError: cannot import name 'TFGenerationMixin' from 'transformers.generation'`
I am getting this error while loading a pretrained TensorFlow model as shown below.
```python
import tensorflow,torch
from transformers import AutoTokenizer, AutoModel
model = AutoModel.from_pretrained("model-name", from_tf=True)
```
Model is located in a local folder
```
model-name\
config.json
tf-model.h5
```
# Stacktrace
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[20], line 5
2 from transformers import AutoTokenizer, AutoModel
4 # load the model
----> 5 model = AutoModel.from_pretrained("model-name", from_tf=True)
File ~/project/venv/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:463, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
461 elif type(config) in cls._model_mapping.keys():
462 model_class = _get_model_class(config, cls._model_mapping)
--> 463 return model_class.from_pretrained(
464 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
465 )
466 raise ValueError(
467 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
468 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
469 )
File ~/project/venv/lib/python3.10/site-packages/transformers/modeling_utils.py:2344, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
2341 try:
2342 from .modeling_tf_pytorch_utils import load_tf2_checkpoint_in_pytorch_model
-> 2344 model, loading_info = load_tf2_checkpoint_in_pytorch_model(
2345 model, resolved_archive_file, allow_missing_keys=True, output_loading_info=True
2346 )
2347 except ImportError:
2348 logger.error(
2349 "Loading a TensorFlow model in PyTorch, requires both PyTorch and TensorFlow to be installed."
2350 " Please see https://pytorch.org/ and https://www.tensorflow.org/install/ for installation"
2351 " instructions."
2352 )
File ~/project/venv/lib/python3.10/site-packages/transformers/modeling_tf_pytorch_utils.py:359, in load_tf2_checkpoint_in_pytorch_model(pt_model, tf_checkpoint_path, tf_inputs, allow_missing_keys, output_loading_info)
355 raise
357 import transformers
--> 359 from .modeling_tf_utils import load_tf_weights
361 logger.info(f"Loading TensorFlow weights from {tf_checkpoint_path}")
363 # Instantiate and load the associated TF 2.0 model
File ~/project/venv/lib/python3.10/site-packages/transformers/modeling_tf_utils.py:42
40 from .configuration_utils import PretrainedConfig
41 from .dynamic_module_utils import custom_object_save
---> 42 from .generation import TFGenerationMixin
43 from .tf_utils import shape_list
44 from .utils import (
45 DUMMY_INPUTS,
46 SAFE_WEIGHTS_INDEX_NAME,
(...)
63 working_or_temp_dir,
64 )
ImportError: cannot import name 'TFGenerationMixin' from 'transformers.generation'
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Can be reproduced by loading a tensorflow model in local storage using
```
AutoModel.from_pretrained()
```
### Expected behavior
I have trained a TFBertForMaskedLM model with a custom dataset on google colab. I saved weights of this model by calling
```python
model.save_pretrained()
```
Now I want to load it in my local machine and use it. | 12-06-2022 08:57:59 | 12-06-2022 08:57:59 | cc @gante if you have any idea.<|||||>Hi @Furknn 👋
I have tried to reproduce this in my local machine (current `main` branch) in a local notebook (with `transformers==4.25.1`), using the following script:
```python
import tensorflow,torch
from transformers import AutoTokenizer, AutoModel
model = AutoModel.from_pretrained("gpt2", from_tf=True)
```
In both cases, no exception was thrown. Can I ask you to reinstall `transformers` and, if the issue persists, to share a script I can call on my end where I can reproduce the issue? :)<|||||>I have reinstalled transformers version 4.25.1 and tried. It works correctly now.
Thanks |
transformers | 20,610 | closed | MBART pretrained model is unable to produce output in the target language | Hi,
I am using mbart-large-50 for a generation task. The source language is Hindi and the target language is Gujarati. However, I am always getting the output in Hindi. It is expected to get a few tokens in the target language even though it's a pretrained model, since I am forcing the BOS token to the target language.
Sharing the code that I am using for this task.
```python
# translate Hindi to Gujarati
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50")
tokenizer.src_lang = "hi_IN"
article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
encoded_hi = tokenizer(article_hi, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["gu_IN"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```
@patrickvonplaten | 12-06-2022 08:55:28 | 12-06-2022 08:55:28 | You also need to set the `tokenizer.tgt_lang` I believe.
Also cc @ArthurZucker <|||||>I think you are just using the wrong checkpoint.
Using the `"facebook/mbart-large-50-many-to-many-mmt"` I obtain the following :
```યુનાઇટેડ સ્ટેટ્સ ઓફ અમેરિકાના પ્રાંતિકારી کہتے हैं कि सीरिया में कोई सैन्य समाधान नहीं है```
which, according to Google is Gujarati!. <|||||>@ArthurZucker "facebook/mbart-large-50-many-to-many-mmt" is fine tuned checkpoint. I am trying with a pretrained checkpoint which is "facebook/mbart-large-50".
The pretrained checkpoint should also be able to give output in the target language if we force the BOS token to the target language. The output may be little bit distorted but that's fine. Here, its giving the output same as the source language. <|||||>> The pretrained checkpoint should also be able to give output in the target language if we force the BOS token to the target language
I think this depends on the language since it is a `pretrained checkpoint` as mentioned on the model card :
> `mbart-large-50` is pre-trained model and primarily aimed at being fine-tuned on translation tasks. It can also be fine-tuned on other multilingual sequence-to-sequence tasks. See the [model hub](https://huggingface.co/models?filter=mbart-50) to look for fine-tuned versions.
Since it works totally fine with the fine-tuned checkpoint, this is not a bug.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, is there some mismatch between the tokenizer of `facebook/mbart-large-50` and `shift_tokens_right` of `MBartForConditionalGeneration`? Since the tokenizer of `facebook/mbart-large-en-ro` would give **X [eos, src_lang_code]** while `facebook/mbart-large-50`'s tokenizer would give **[src_lang_code] X [eos]**, but they both use the same `shift_tokens_right` method which I believe is only suitable for input like this **X [eos, src_lang_code]** :
```python
def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int):
"""
Shift input ids one token to the right, and wrap the last non pad token (the <LID> token) Note that MBart does not
have a single `decoder_start_token_id` in contrast to other Bart-like models.
"""
prev_output_tokens = input_ids.clone()
if pad_token_id is None:
raise ValueError("self.model.config.pad_token_id has to be defined.")
# replace possible -100 values in labels by `pad_token_id`
prev_output_tokens.masked_fill_(prev_output_tokens == -100, pad_token_id)
index_of_eos = (prev_output_tokens.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
decoder_start_tokens = prev_output_tokens.gather(1, index_of_eos).squeeze()
prev_output_tokens[:, 1:] = prev_output_tokens[:, :-1].clone()
prev_output_tokens[:, 0] = decoder_start_tokens
return prev_output_tokens
```<|||||>Indeed. But as mentioned in the documentation :
> The text format for MBart-50 is slightly different from mBART. For MBart-50 the language id token is used as a prefix for both source and target text i.e the text format is [lang_code] X [eos], where lang_code is source language id for source text and target language id for target text, with X being the source or target text respectively.
While
> For MBart [...] the source text format is X [eos, src_lang_code] where X is the source text. The target text format is [tgt_lang_code] X [eos]. bos is never used.
Which is why they don't have the same tokenization scheme.
I checked that when generating, the `forced_decoder_id` properly works, and I think this issue can be closed as there is no guarantee that a certain pair of languages will produce intelligible results as the checkpoints are pretrained.
<|||||>Hi, thanks for the comments!
It is true that using MBart-50 to do generation with the proper `forced_decoder_id` works. But it doesn't work in supervised learning scenarios. When there are no `decoder_input_ids` for training, MBart-50 would automatically create `decoder_input_ids` from `labels`, which follows the tokenization scheme of MBart rather than MBart-50. And I think this should be fixed.
<img width="770" alt="MBart and MBart-50 2023-01-30 17-49-21" src="https://user-images.githubusercontent.com/38466901/215444119-90199c9d-baa2-421d-86be-0d0e4e585e2c.png">
<|||||>I am not sure I understand. When the `decoder_input_ids` are created from the `labels`, they are a shifted version.
Let's use the example:
- src_text : `'en_XX UN Chief Says There Is No Military Solution in Syria</s>'`
- labels : `'ro_RO Şeful ONU declară că nu există o soluţie militară în Siria</s>'`
- [shifted labels](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mbart/modeling_mbart.py#L1348-L1349) : `'</s>ro_RO Şeful ONU declară că nu există o soluţie militară în Siria'` (= decoder_inputs_ids)
This means that the `shifted_labels` will follow the correct pattern (which you enforce when generating).
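To make the shifting concrete, here is a small sketch using the `shift_tokens_right` helper quoted above (importable from the modeling file); the token ids are made up for illustration and are not real vocabulary entries:
```python
import torch
from transformers.models.mbart.modeling_mbart import shift_tokens_right

# toy ids: 250020 stands in for the ro_RO code, 2 for </s>, 1 for <pad>
labels = torch.tensor([[250020, 47711, 7839, 2, 1, 1]])  # [lang] X </s> <pad> <pad>
print(shift_tokens_right(labels, pad_token_id=1))
# tensor([[     2, 250020,  47711,   7839,      2,      1]])  i.e. </s> [lang] X </s> <pad>
```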
<|||||>Sorry, my bad. You are right. I mistakenly thought the generation schema of MBart-50 is the same as MBart, whose `decoder_start_token_id` is the `lang_id`. |
transformers | 20,609 | closed | Data to text representation considers only first 2 triplets | Hello,
I trained t5-base with the WebNLG 2020 dataset, which takes the data in the form of multiple triplets. When a query is made to the model in the same format, it describes only the first 2 triplets and ignores the rest. Is it a config issue? | 12-06-2022 08:15:46 | 12-06-2022 08:15:46 | Please use the [forums](https://discuss.huggingface.co/) to ask such questions as we keep the issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,608 | open | Is it possible to add simple custom pytorch-crf layer on top of TokenClassification model. It will make the model more robust. | ### Model description
Is it possible to add simple custom `pytorch-crf` layer on top of `TokenClassification model`. It will make the model more robust.
There should be a simple notebook tutorial which teaches us how to add our own custom layer on top of Hugging Face models for
- Classification
- Token Classification ( BIO)
Take `dslim/bert-base-NER` as an example, then add a CRF layer (`from torchcrf import CRF`) on top of it.
I am planning to do this, but I don't know how to get this feature coded. Any leads or Notebook example would be helpful.
```python
import torch
import torch.nn as nn
from torch.nn.functional import log_softmax as log_soft
from torchcrf import CRF
from transformers import BertConfig, BertForTokenClassification, BertTokenizer

model_checkpoint = "dslim/bert-base-NER"
tokenizer = BertTokenizer.from_pretrained(model_checkpoint, add_prefix_space=True)
config = BertConfig.from_pretrained(model_checkpoint, output_hidden_states=True)
bert_model = BertForTokenClassification.from_pretrained(
    model_checkpoint, id2label=id2label, label2id=label2id, ignore_mismatched_sizes=True
)

class BERT_CRF(nn.Module):
    def __init__(self, bert_model, num_labels):
        super(BERT_CRF, self).__init__()
        self.bert = bert_model
        self.dropout = nn.Dropout(0.25)
        self.classifier = nn.Linear(4 * 768, num_labels)
        self.crf = CRF(num_labels, batch_first=True)

    def forward(self, input_ids, attention_mask, labels=None, token_type_ids=None):
        outputs = self.bert(input_ids, attention_mask=attention_mask)
        # this is the line that fails: `outputs` only contains the logits here
        sequence_output = torch.cat((outputs[1][-1], outputs[1][-2], outputs[1][-3], outputs[1][-4]), -1)
        sequence_output = self.dropout(sequence_output)
        emission = self.classifier(sequence_output)  # [32, 256, 17]
        labels = labels.reshape(attention_mask.size()[0], attention_mask.size()[1])
        if labels is not None:
            loss = -self.crf(log_soft(emission, 2), labels, mask=attention_mask.type(torch.uint8), reduction="mean")
            prediction = self.crf.decode(emission, mask=attention_mask.type(torch.uint8))
            return [loss, prediction]
        else:
            prediction = self.crf.decode(emission, mask=attention_mask.type(torch.uint8))
            return prediction
```
```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    "spanbert_crf_ner-pos2",
    # evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    num_train_epochs=1,
    weight_decay=0.01,
    per_device_train_batch_size=8,
    # per_device_eval_batch_size=32
    fp16=True,
    # bf16=True  # Ampere GPU
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_data,
    # eval_dataset=train_data,
    # data_collator=data_collator,
    # compute_metrics=compute_metrics,
    tokenizer=tokenizer,
)
```
I get an error on the line `sequence_output = torch.cat((outputs[1][-1], outputs[1][-2], outputs[1][-3], outputs[1][-4]), -1)`.
As `outputs = self.bert(input_ids, attention_mask=attention_mask)` only gives the logits for token classification, how can we get the hidden states, so that I can concatenate the last 4 hidden states and do `outputs[1][-1]`?
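For what it's worth, a minimal self-contained sketch (not from the original issue) of one way to obtain and concatenate the last four hidden states, by requesting them in the forward call:
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER")

enc = tokenizer("HuggingFace is based in New York City", return_tensors="pt")
outputs = model(**enc, output_hidden_states=True)

hidden_states = outputs.hidden_states                    # tuple: embeddings + one tensor per layer
sequence_output = torch.cat(hidden_states[-4:], dim=-1)  # shape [batch, seq_len, 4 * 768]
print(sequence_output.shape)
```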
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_ | 12-06-2022 06:13:29 | 12-06-2022 06:13:29 | Hi,
Please use the forum for these kind of questions. We'd like to keep Github issues for bugs and feature requests.
Thanks!<|||||>> Hi,
>
> Please use the forum for these kind of questions. We'd like to keep Github issues for bugs and feature requests.
>
> Thanks!
This is kind of a feature request only. @NielsRogge <|||||>Models are fully defined in each modeling file in an independent fashion, so you can easily copy/paste them and then customize them to your needs :-) |
transformers | 20,607 | closed | Documentation fixes | # What does this PR do?
This PR just fixes some typos in the documentation.
Please note: Apart from the typos in the *paragraphs*, the other changes were because of significantly differing results I got from running the examples. For instance, "aweful" didn't result in a high negative score, but "awful" did.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 12-06-2022 03:31:40 | 12-06-2022 03:31:40 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,606 | closed | Adding anchor links to Hindi README | # What does this PR do?
1. Adding anchor links to Hindi README
| 12-06-2022 02:52:39 | 12-06-2022 02:52:39 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,605 | closed | Clip floating point constants to bf16 range to avoid inf conversion | When running HuggingFace BERT (any size) fine-tuning tutorial with transformers version >= 4.21.0 and using XLA_USE_BF16=1 or XLA_DOWNCAST_BF16=1, I see NaNs in the loss after the first step.
# What does this PR do?
This PR addresses the issue where the model code passes a value that is out of range for XLA_USE_BF16=1 or XLA_DOWNCAST_BF16=1, so the conversion would cast it to -inf.
The NaNs likely come from the transformers library change: https://github.com/huggingface/transformers/pull/17306 . This PR replaced many lines which used to be -float(inf) (or other small constants) with torch.finfo().min. For torch.float32 the min value is -3.4028234663852886e+38 which is smaller than the bfloat16 minimum of -3.3895313892515355e+38. So the problem is that torch.finfo(torch.float32).min = -3.4028234663852886e+38 gets converted to -inf. When the original encoder_extended_attention_mask is 1, then encoder_extended_attention_mask becomes (1.0 - 1.0 ) * -inf which becomes NaN (via IEEE rule Inf * 0.0 = NaN).
This PR ensures torch.finfo(torch.bfloat16).min = -3.3895313892515355e+38 and not -inf. Then the results would not have Nans.
The following lines checks for XLA_USE_BF16 or XLA_DOWNCAST_BF16 environment variable and sets the dtype accordingly:
```
if is_torch_tpu_available():
if os.environ.get("XLA_USE_BF16") == 1:
return torch.bfloat16
if os.environ.get("XLA_DOWNCAST_BF16") == 1:
if t.dtype == torch.float:
return torch.bfloat16
if t.dtype == torch.double:
return torch.float32
```
Referencing related issues: https://github.com/aws-neuron/aws-neuron-sdk/issues/593 and https://github.com/pytorch/xla/issues/4152
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-06-2022 00:10:29 | 12-06-2022 00:10:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,604 | closed | Fix test for file not found | # What does this PR do?
The test for file not found in the TensorFlow auto model tests is failing on main as the message does not match exactly (see [here](https://app.circleci.com/pipelines/github/huggingface/transformers/53015/workflows/6ea05b10-a541-46db-bcce-b93dc654610e/jobs/636205)). This PR fixes that. | 12-05-2022 23:11:15 | 12-05-2022 23:11:15 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Merging to have tests passing on main, but I will address any comment in followup PRs :-) |
transformers | 20,603 | closed | Update the list of contributors to reflect current organization | # What does this PR do?
This PR updates the list of who to tag on PRs/Issues. With the growing number of models, I chose to split them through modality. | 12-05-2022 21:06:24 | 12-05-2022 21:06:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,602 | closed | Fix dtype of weights in from_pretrained when device_map is set | # What does this PR do?
As reported in #20390, the dtype of the weights after `from_pretrained` is used for a checkpoint is inconsistent between `device_map=None` or `device_map` set:
- `device_map=None` (which uses `nn.Module.load_state_dict`) will have the dtype of the model stay the same, even if the checkpoints are in a different dtype (so loading a float16 checkpoint in a float32 model gives a float32 model)
- `device_map` set (which manually sets the parameters) will change the dtype of the model to the dtype of the checkpoint (so loading a float16 checkpoint in a float32 model gives a float16 model).
This PR addresses this. | 12-05-2022 19:58:03 | 12-05-2022 19:58:03 | _The documentation is not available anymore as the PR was closed or merged._<|||||>There is no more safetensors at this stage, (`is_safetensors` means the checkpoint comes from safetensors, but the state dict is a dictionary name to parameter in this case as well). |
transformers | 20,601 | closed | updating T5 and BART models to support Prefix Tuning | # What does this PR do?
1. updating T5 and BART models to support Prefix Tuning. Currently, passing `past_key_value` fails. This PR fixes it. Doesn't impact any current functionality. | 12-05-2022 18:52:52 | 12-05-2022 18:52:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey, just for reference could you provide a link to an issue or something explaining what `prefix tuning` is? |
transformers | 20,600 | closed | Add-whisper-conversion | # What does this PR do?
Add the conversion script from whisper which was deleted during the sprint. See this [commit](https://github.com/huggingface/transformers/pull/19166/commits/f92b9a8181f9a84114becd31a5a4210723cdf1ad).
This will help for the Whisper Event! | 12-05-2022 18:28:43 | 12-05-2022 18:28:43 | See the new checkpoints : https://huggingface.co/openai/whisper-large-v2 <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20600). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,599 | closed | [Whisper] Fix decoder ids methods | # What does this PR do?
The previous PR https://github.com/huggingface/transformers/pull/20589 incorrectly returned a list of forced decoder ids:
```python
from transformers import WhisperProcessor
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
print(processor.get_decoder_prompt_ids(task="transcribe"))
```
**Print Output:**
```
[50257, 50358, 50362]
```
The correct format is a nested list of decoder ids, where the first element of each list specifies the position of the forced token and the second the token id:
```python
print(processor.get_decoder_prompt_ids(task="transcribe"))
```
**Print Output:**
```
[(1, 50257), (2, 50358), (3, 50362)]
```
(at position 1 we force token 50257, at 2 we force 50358, at 3 we force 50362)
The PR also implements a test, thus making sure that no such error can be made again 😅
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-05-2022 17:59:07 | 12-05-2022 17:59:07 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,598 | closed | Fix `get_decoder_prompt_ids` in whisper | # What does this PR do?
Hi @sanchit-gandhi,
I think there is one line missing in https://github.com/huggingface/transformers/pull/20589. I've added it back in this PR.
The `forced_decoder_ids` should be something like `[[<token/position>, <token/id>], ...]`. But `prefix_tokens` only returns token ids.
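Schematically (an illustrative sketch, not the actual diff), the missing step turns the bare prefix token ids into position/id pairs; the ids below are the whisper-tiny.en values quoted in PR #20599:
```python
prefix_tokens = [50257, 50358, 50362]  # bare token ids, as returned by the tokenizer
forced_decoder_ids = [(rank + 1, token) for rank, token in enumerate(prefix_tokens)]
print(forced_decoder_ids)              # [(1, 50257), (2, 50358), (3, 50362)]
```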
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-05-2022 17:52:08 | 12-05-2022 17:52:08 | I found this cause I got the following error when running the code in main branch
```
File ~/transformers/src/transformers/generation/utils.py:867, in GenerationMixin._get_logits_processor(self, repetition_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, input_ids_seq_length, encoder_input_ids, bad_words_ids, min_length, max_length, eos_token_id, forced_bos_token_id, forced_eos_token_id, prefix_allowed_tokens_fn, num_beams, num_beam_groups, diversity_penalty, remove_invalid_values, exponential_decay_length_penalty, logits_processor, renormalize_logits, suppress_tokens, begin_suppress_tokens, forced_decoder_ids)
865 begin_index = begin_index if (input_ids_seq_length > 1 or forced_bos_token_id is None) else begin_index + 1
866 if forced_decoder_ids is not None:
--> 867 begin_index += forced_decoder_ids[-1][0] # generation starts after the last token that is forced
868 processors.append(SuppressTokensAtBeginLogitsProcessor(begin_suppress_tokens, begin_index))
869 if forced_decoder_ids is not None:
TypeError: 'int' object is not subscriptable
```<|||||>Duplicate of https://github.com/huggingface/transformers/pull/20599<|||||>Hey @bofenghuang! Sorry about that, hoping to merge the fix ASAP<|||||>@sanchit-gandhi no problem, thanks for the quick fix! |
transformers | 20,597 | closed | Fix `AutomaticSpeechRecognitionPipelineTests.run_pipeline_test` | # What does this PR do?
Fix `AutomaticSpeechRecognitionPipelineTests.run_pipeline_test` which was changed in #19570 and #20104.
See the comments in this PR changes.
I detected this when working on improving pipeline tests using tiny models. Previously, `Speech2TextConfig` is not used in ASR pipeline tests, but now it does, and gives errors without this PR. | 12-05-2022 17:44:54 | 12-05-2022 17:44:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you ! |
transformers | 20,596 | closed | Remove unused `classifier_dropout` in configs | # What does this PR do?
Similar to #20554, but this time for `classifier_dropout`.
The existing checkpoints with this attribute in their config files could still be loaded via the `**kwargs` --> so won't fail.
@sgugger If you would prefer me to cleanup multiple different unused config attributes in a single PR, let me know 😉
| 12-05-2022 16:02:04 | 12-05-2022 16:02:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,595 | closed | Fix whisper and speech to text doc | # What does this PR do?
Previously the documentation was badly indented for both models and indicated that
> If `decoder_input_ids` and `decoder_inputs_embeds` are both unset, `decoder_inputs_embeds` takes the value of `inputs_embeds`.
Which is on valid for the forward pass of the `ForConditionnalGeneration` not for the model alone. | 12-05-2022 15:40:18 | 12-05-2022 15:40:18 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,594 | closed | Transformers model inference via pipeline not releasing memory after 2nd call. Leads to memory leak and crash in Flask web app | ### System Info
- `transformers` version: 4.22.0
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@Narsil
@Lysa
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am using a locally saved model to perform ``token-classification``. I saved the model files using the below code
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("dslim/bert-large-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-large-NER")
tokenizer.save_pretrained('./modelfiles')
model.save_pretrained('./modelfiles')
```
I am using the model in a Flask web app to take in text, perform ``token-classification`` and return the result. The minimal example of that is given below
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
from flask import Flask
import gc
app = Flask(__name__)
def model_test(text):
tokenizer = AutoTokenizer.from_pretrained("./modelfiles")
model = AutoModelForTokenClassification.from_pretrained("./modelfiles")
nlp = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
ner_results = nlp(text)
del model
del tokenizer
del nlp
gc.collect() # adding this releases the memory after first call only..
return text
@app.route('/')
def memory_test():
text = "Adam is going to London with Mark and then to Paris with Mary."
output_text = model_test(text)
return output_text
if __name__ == '__main__':
app.run()
```
The above script creates a simple flask web app and then calls the ``model_test()`` every time the page is refreshed.
The memory is not released after each call. What's interesting is that after adding ``gc.collect()`` in the function, it is released after the first call only; from the second call onwards it does not release memory, as can be seen from the memory usage graph screenshot. Without ``gc.collect()``, even the first function call does not release memory.

### Expected behavior
As can be seen from the screenshot, the memory is released after the first call. but for some reason it just keeps accumulating after 2nd call and this leads to a crash.
The models are expected to release memory after each call as is done after first. | 12-05-2022 14:44:25 | 12-05-2022 14:44:25 | Hello you are loading the model twice.
Depending on how you launch your Flask webserver, you will use threads or processes. Each request may reach a different thread/process, and each will load all dependencies (including torch), which by itself is around 300 MB. So you could indeed easily blow past the amount of memory available.
What we usually recommend is this (it will soon be in the actual docs): https://github.com/huggingface/transformers/pull/20437
Making sure you have your model loaded once on a single thread/process. This can be achieved in many ways.
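(For illustration, not part of the original reply: one minimal sketch of that idea is to create the pipeline once and funnel every request through a single worker thread. The route and payload shape below are made up; the `./modelfiles` path comes from the report above.)
```python
from queue import Queue
from threading import Thread

from flask import Flask, request
from transformers import pipeline

app = Flask(__name__)
# created exactly once, at import time
nlp = pipeline("token-classification", model="./modelfiles", aggregation_strategy="simple")

requests_q = Queue()

def worker():
    # a single thread owns the pipeline, so inference calls never overlap
    while True:
        text, results_q = requests_q.get()
        results_q.put(nlp(text))

Thread(target=worker, daemon=True).start()

@app.route("/", methods=["POST"])
def predict():
    results_q = Queue()
    requests_q.put((request.json["text"], results_q))
    entities = results_q.get()
    # scores come back as numpy floats; cast them so Flask can JSON-serialize the response
    for ent in entities:
        ent["score"] = float(ent["score"])
    return {"entities": entities}
```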
Here you are loading your model at runtime which will make requests much slower than intended too. I would recommend loading it beforehand during load time of the actual webserver. It doesn't really apply if you want to run models dynamically, but you could still apply the 1 thread techniques which should limit your memory requirements.
Does that answer your question ?<|||||>Thank you @Narsil for your suggestion. You were right and loading the model only once at the app level solved the memory issue and is also much faster in handling every request :)
The example you showed in documentation is using ``starlette``. I achieved the same in Flask using below.. just adding the model loading lines at app level instead of inside a function. Below is the updated version of my minimal example:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
from flask import Flask
app = Flask(__name__)
tokenizer = AutoTokenizer.from_pretrained("./modelfiles")
model = AutoModelForTokenClassification.from_pretrained("./modelfiles")
def model_test(text):
nlp = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
ner_results = nlp(text)
print(ner_results)
return text
@app.route('/')
def memory_test():
text = "Adam is going to London with Mark and then to Paris with Mary."
output_text = model_test(text)
return output_text
if __name__ == '__main__':
app.run()
```<|||||>If I have to host the inference code using FastAPI and transformers Pipeline, should I be creating new instances of Pipeline every time I get a request? I can ensure the model loaded only once. Also, is Pipeline thread safe?<|||||>> should I be creating new instances of Pipeline every time I get a request?
That would be super wasteful. The pipeline creates the tokenizer, feature_extractor and model for you. Even if you ensure the model is loaded only once, those other resources will probably be re-created each time.
It seems simpler to just cache the creation of the pipeline directly.
> is Pipeline thread safe?
No. The pipeline itself doesn't do anything too fancy, so you should be OK, but PyTorch is not thread safe itself (it **should** be for reading). Torch is already using all your cores for inference, so there is nothing to gain by multiplexing the inference itself. And for GPU it's even worse, since you cannot multiplex the kernels either, and you could end up entangling requests from the pipeline (leading to worse latency for all requests).
In general, in my experience, playing with threads and torch is just asking for trouble. I would go for a single pipeline-owning thread (or process) and communicate your requests with it. That seems to work much better in almost all cases. Note that torch itself is not async, so it will block the main thread if you're using async. |
transformers | 20,593 | closed | How to convert a gradio text-geno script to run on gpu | I've been at this a while so I've decided to just ask.
```
import gradio as gr
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/galactica-125m")
text2text_generator = pipeline("text-generation", model=model, tokenizer=tokenizer, num_workers=2)
def predict(text, max_length=64, temperature=0.7, do_sample=True):
text = text.strip()
out_text = text2text_generator(text, max_length=max_length,
temperature=temperature,
do_sample=do_sample,
eos_token_id = tokenizer.eos_token_id,
bos_token_id = tokenizer.bos_token_id,
pad_token_id = tokenizer.pad_token_id,
)[0]['generated_text']
out_text = "<p>" + out_text + "</p>"
out_text = out_text.replace(text, text + "<b><span style='background-color: #ffffcc;'>")
out_text = out_text + "</span></b>"
out_text = out_text.replace("\n", "<br>")
return out_text
iface = gr.Interface(
fn=predict,
inputs=[
gr.inputs.Textbox(lines=5, label="Input Text"),
gr.inputs.Slider(minimum=32, maximum=256, default=64, label="Max Length"),
gr.inputs.Slider(minimum=0.0, maximum=1.0, default=0.7, step=0.1, label="Temperature"),
gr.inputs.Checkbox(label="Do Sample"),
],
outputs=gr.HTML(),
description="Galactica Base Model",
examples=[[
"The attention mechanism in LLM is",
128,
0.7,
True
],
[
"Title: Attention is all you need\n\nAbstract:",
128,
0.7,
True
]
]
)
iface.launch()
```
That's what I want to run on my GPU; here's what I've got that doesn't work.
```
import gradio as gr
import torch
from transformers import pipeline
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
#tokenizer.pad_token_id = 1
#tokenizer.padding_side = 'left'
#tokenizer.model_max_length = 2020
model = OPTForCausalLM.from_pretrained("facebook/galactica-1.3b", device_map="auto")
text2text_generator = pipeline("text-generation", model=model, tokenizer=tokenizer, num_workers=1, device_map="auto")
device = torch.device('cuda')
model.to(device)
def predict(text, max_length=64, temperature=0.7, top_k=25, top_p=0.9, no_repeat_ngram_size=10, do_sample=True):
text = text.strip()
#input_ids = tokenizer(text, return_tensors="pt").input_ids.to("cuda")
out_text = text2text_generator(text,
max_length=max_length,
temperature=temperature,
top_k=top_k,
top_p=top_p,
no_repeat_ngram_size=10,
do_sample=do_sample,
eos_token_id = tokenizer.eos_token_id,
bos_token_id = tokenizer.bos_token_id,
pad_token_id = tokenizer.pad_token_id,
return_tensors="pt",
)[0]['generated_text']
out_text=out_text.to(device)
out_text = "<p>" + out_text + "</p>"
out_text = out_text.replace(text, text + "<b><span style='background-color: #ffffcc;'>")
out_text = out_text + "</span></b>"
out_text = out_text.replace("\n", "<br>")
return out_text
iface = gr.Interface(
fn=predict,
inputs=[
gr.inputs.Textbox(lines=5, label="Input Text"),
gr.inputs.Slider(minimum=32, maximum=1024, default=64, label="Max Length"),
gr.inputs.Slider(minimum=0.0, maximum=1.0, default=0.7, step=0.05, label="Temperature"),
gr.inputs.Slider(minimum=1, maximum=99, default=25, step=5, label="Top k"),
gr.inputs.Slider(minimum=0.5, maximum=0.99, default=0.9, step=0.01, label="Top p"),
gr.inputs.Slider(minimum=1, maximum=999, default=10, step=1, label="No Repeat Ngram Size"),
gr.inputs.Checkbox(label="Do Sample"),
],
outputs=gr.HTML(),
description="Galactica Base Model",
examples=[[
"The attention mechanism in LLM is",
128,
0.7,
25,
0.9,
10,
True
],
[
"Title: Attention is all you need\n\nAbstract:",
128,
0.7,
25,
0.9,
10,
True
]
]
)
iface.launch()
```
Any pointers would be appreciated I'm rusty if you couldn't tell | 12-05-2022 14:24:10 | 12-05-2022 14:24:10 | cc @Narsil, @abidlabs and @dawoodkhan82 <|||||>What doesn't work ?
- Is the model not on GPU ?
- Does it crash ? If yes, can we see the stacktrace ?
This line is incorrect:
```python
out_text=out_text.to(device)
```
out_text is `str` so it can't be on a device (it's a pure python object :) )
```
model.to(device)
```
will also fail, since the model with device_map="auto" is supposed to be spread over multiple devices. (If one device is enough, just don't use it and pass `device=0` directly, for instance.)
For your loading logic:
```python
text2text_generator = pipeline( model="facebook/galactica-1.3b", num_workers=1, device_map="auto")
#
```
should be enough
Then `device_map="auto"` only works when accelerate is in the environment. Could you make sure it's there ?
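(As an illustrative side note, not part of the original reply: if GPU memory is tight, the same call can also load the weights in half precision. A sketch, assuming `accelerate` is installed so `device_map="auto"` can place the weights:)
```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="facebook/galactica-1.3b",
    torch_dtype=torch.float16,  # halves the memory footprint of the weights
    device_map="auto",
)
print(generator("The attention mechanism in LLM is", max_length=64)[0]["generated_text"])
```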
Does this help ?
If you had the space to show it might help also fetch some information about what is going wrong.
Thank you !
<|||||>From the `gradio` side, there should be no difference whether the model is running on cpu or gpu. Can you confirm that the `predict()` function correctly runs on GPU?<|||||>@Narsil Thank you it's now functional with the following:
```
import gradio as gr
import torch
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForCausalLM
#tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
#model = AutoModelForCausalLM.from_pretrained("facebook/galactica-125m")
text2text_generator = pipeline(model="facebook/galactica-1.3b", num_workers=1, device=0)
def predict(text, max_length=64, temperature=0.7, do_sample=True):
text = text.strip()
out_text = text2text_generator(text, max_length=max_length,
temperature=temperature,
do_sample=do_sample,
)[0]['generated_text']
out_text = "<p>" + out_text + "</p>"
out_text = out_text.replace(text, text + "<b><span style='background-color: #ffffcc;'>")
out_text = out_text + "</span></b>"
out_text = out_text.replace("\n", "<br>")
return out_text
torch.cuda.empty_cache()
iface = gr.Interface(
fn=predict,
inputs=[
gr.inputs.Textbox(lines=5, label="Input Text"),
gr.inputs.Slider(minimum=32, maximum=5160, default=64, label="Max Length"),
gr.inputs.Slider(minimum=0.0, maximum=1.0, default=0.7, step=0.1, label="Temperature"),
gr.inputs.Checkbox(label="Do Sample"),
],
outputs=gr.HTML(),
description="Galactica Base Model",
examples=[[
"The attention mechanism in LLM is",
128,
0.7,
True
],
[
"Title: Attention is all you need\n\nAbstract:",
128,
0.7,
True
]
]
)
iface.launch(share=True)
```
But I run out of memory when making it generate anything long, and I don't know how to make it clear the RAM once it gets a new prompt. I know `torch.dtype=torch.float16` but I'm not sure how to use it in this. Thank you for your help; I would share the space, but I'm always changing it so it won't be online.<|||||>You are clearing the cache AFTER the return, so it will never be run.
I think this code should be correct. But large prompts, large generation and even worse large beams (don't see them here) are really memory hungry, so it might just be a regular OOM. Have you tried using a larger GPU ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,592 | closed | Check if docstring is `None` before formating it | docstrings could be `None` if Python optimize level is set to 2.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #20591.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-05-2022 13:46:04 | 12-05-2022 13:46:04 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks again for your contribution! |
transformers | 20,591 | closed | AttributeError: 'NoneType' object has no attribute 'format' | ### System Info
transformers version: 4.21.3
OS: Windows 10
Python version: 3.10
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
- Install one of spaCy's transformer model.
```
$ python -m pip install spacy[cuda-autodetect]
$ python -m spacy download en_core_web_trf
```
- Set `PYTHONOPTIMIZE` to 2 or use `-OO` option.
- Load spaCy model:
```python
import spacy
spacy.load("en_core_web_trf")
```
- Get error similar to this:
```
File "C:\x\spacy\__init__.py", line 54, in load
File "C:\x\spacy\util.py", line 432, in load_model
File "C:\x\spacy\util.py", line 468, in load_model_from_package
File "C:\x\en_core_web_lg\__init__.py", line 10, in load
File "C:\x\spacy\util.py", line 649, in load_model_from_init_py
File "C:\x\spacy\util.py", line 506, in load_model_from_path
File "C:\x\spacy\util.py", line 554, in load_model_from_config
File "C:\x\spacy\language.py", line 1788, in from_config
File "C:\x\spacy\language.py", line 163, in __init__
File "C:\x\catalogue\__init__.py", line 119, in get_all
File "C:\x\catalogue\__init__.py", line 134, in get_entry_points
File "importlib\metadata\__init__.py", line 162, in load
File "importlib\__init__.py", line 126, in import_module
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\x\spacy_transformers\__init__.py", line 1, in <module>
File "C:\x\spacy_transformers\architectures.py", line 6, in <module>
File "C:\x\spacy_transformers\layers\__init__.py", line 1, in <module>
File "C:\x\spacy_transformers\layers\listener.py", line 4, in <module>
File "C:\x\spacy_transformers\data_classes.py", line 5, in <module>
File "C:\x\transformers\tokenization_utils.py", line 26, in <module>
File "C:\x\transformers\tokenization_utils_base.py", line 3646, in <module>
AttributeError: 'NoneType' object has no attribute 'format'
```
This could happen when running the code in a Python interpreter which has the optimize level set to 2 and it can't be changed, for example: [calibre](https://calibre-ebook.com).
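For reference, a tiny standalone illustration (not from the original report) of why `-OO` leads to this error:
```python
# save as demo.py and run with:  python -OO demo.py   (or with PYTHONOPTIMIZE=2)
def f():
    """a {placeholder} docstring"""

print(f.__doc__)                           # None under -OO, since docstrings are stripped
print(f.__doc__.format(placeholder="x"))   # AttributeError: 'NoneType' object has no attribute 'format'
```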
### Expected behavior
No error. | 12-05-2022 13:41:56 | 12-05-2022 13:41:56 | |
transformers | 20,590 | closed | Vision processors - replace FE with IPs | # What does this PR do?
Replaces feature extractors with image processors in the `Processor` class which bundle together tokenizers and feature extractor.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 12-05-2022 13:22:59 | 12-05-2022 13:22:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,589 | closed | [Whisper] Move decoder id method to tokenizer | # What does this PR do?
Moves the method `get_decoder_prompt_ids` from the processor to the tokenizer. The primary reason for this change is that the ASR pipeline class does not load the processor object, but rather the feature extractor and tokenizer separately (see [docs](https://github.com/huggingface/transformers/blob/699e90437f984d69ad3c9b891dd2e9d0fc2cffe4/src/transformers/pipelines/automatic_speech_recognition.py#L123)). Therefore, as things currently stand, pipeline does not have access to the processor method `get_decoder_prompt_ids`. By moving it to the tokenizer, it will be able to call this method with pipeline.
Note that this is not a breaking change: we retain a method `get_decoder_prompt_ids` in the processor. This method simply calls the `get_decoder_prompt_ids` from the tokenizer:
https://github.com/huggingface/transformers/blob/ca8b332d31a1b90e18f134620e69063418add69e/src/transformers/models/whisper/processing_whisper.py#L44-L45
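As a usage sketch (illustrative, not part of the PR description), the relocated method can then be reached directly from the tokenizer that the pipeline loads:
```python
from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny.en")
# the method now lives on the tokenizer, and the processor simply forwards to it
print(tokenizer.get_decoder_prompt_ids(task="transcribe"))  # the forced decoder ids consumed by generate()
```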
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-05-2022 12:11:21 | 12-05-2022 12:11:21 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,588 | closed | Ci-whisper-asr | # What does this PR do?
In a recent update, we followed the original code, which changed some of the suppress tokens for better performance. This led to a small change in the output of one particular case. Tested with the original code, and we have the correct output now!
Related to #20493 and #20512
See [here](https://huggingface.co/openai/whisper-large/commit/ed97120f929257fb801f99587ada69be0daf5b0a) for the particular commit | 12-05-2022 11:57:53 | 12-05-2022 11:57:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,587 | closed | [Vision] fix small nit on `BeitDropPath` layers | # What does this PR do?
Fixes a small nit for `DropPath` layers pointed out in: https://github.com/huggingface/transformers/pull/20550#discussion_r1039395745 & https://github.com/huggingface/transformers/pull/20550#discussion_r1039459045
Preferred to fix this separately in a PR to avoid modifying too much files in #20550
cc @NielsRogge @patrickvonplaten
| 12-05-2022 11:37:27 | 12-05-2022 11:37:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,586 | closed | Install `tensorflow_probability` for TF pipeline CI | # What does this PR do?
So tests like `TQAPipelineTests.test_integration_sqa_tf` or `TQAPipelineTests.test_slow_tokenizer_sqa_tf` could run. | 12-05-2022 11:33:21 | 12-05-2022 11:33:21 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20586). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,585 | closed | Add `require_torch` to 2 pipeline tests | # What does this PR do?
The 2 tests are for `pytorch`, but in the TF pipeline test CI job (where `torch` is not available), they run with TF models.
This is not expected.
Before #20149, these 2 tests were decorated with `require_torch_scatter`. After that PR, the tests try to run with TF, but fail with `TFTapasMainLayer requires the tensorflow_probability library but it was not found in your environment.`
(this is another thing to fix in docker file) | 12-05-2022 11:18:33 | 12-05-2022 11:18:33 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,584 | closed | Fix torch device issue | # What does this PR do?
The fixes in #20304 were somehow moved to the wrong places in #20160, and we got the torch device issues.
This PR fixes this device issue - just put `to` in the correct places. | 12-05-2022 10:17:28 | 12-05-2022 10:17:28 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,583 | closed | AttributeError: 'DataParallel' object has no attribute 'model' | ### System Info
System Info
- torch==1.8.1+cu101
- transformers==4.10.1
- Python 3.8
- "Ubuntu 18.04.6 LTS"
I am training on parallel GPUs and not using pretrained weights. However, during training, I got this issue and it breaks training:
```python
15%|█▌ | 246/1617 [09:01<48:36, 2.13s/it]
15%|█▌ | 247/1617 [09:03<48:21, 2.12s/it]
15%|█▌ | 248/1617 [09:05<48:20, 2.12s/it]
15%|█▌ | 249/1617 [09:07<48:17, 2.12s/it]
15%|█▌ | 250/1617 [09:10<48:35, 2.13s/it]***** Running Evaluation *****
Num examples = 1500
Batch size = 512
{'loss': 1.2497, 'learning_rate': 4.941249226963513e-05, 'epoch': 0.04}
{'loss': 0.6803, 'learning_rate': 4.879406307977737e-05, 'epoch': 0.07}
{'loss': 0.6134, 'learning_rate': 4.817563388991961e-05, 'epoch': 0.11}
{'loss': 0.5777, 'learning_rate': 4.7557204700061845e-05, 'epoch': 0.15}
{'loss': 0.5626, 'learning_rate': 4.6938775510204086e-05, 'epoch': 0.19}
{'loss': 0.5413, 'learning_rate': 4.6320346320346326e-05, 'epoch': 0.22}
{'loss': 0.5249, 'learning_rate': 4.570191713048856e-05, 'epoch': 0.26}
{'loss': 0.5015, 'learning_rate': 4.50834879406308e-05, 'epoch': 0.3}
{'loss': 0.5017, 'learning_rate': 4.4465058750773034e-05, 'epoch': 0.33}
{'loss': 0.4924, 'learning_rate': 4.3846629560915274e-05, 'epoch': 0.37}
{'loss': 0.4831, 'learning_rate': 4.3228200371057515e-05, 'epoch': 0.41}
{'loss': 0.4695, 'learning_rate': 4.2609771181199755e-05, 'epoch': 0.45}
Traceback (most recent call last):
File "examples/run_train.py", line 105, in <module>
main()
File "examples/run_train.py", line 99, in main
train_result = trainer.train()
File "/root/data/huyhuynh/clrcmd-master/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1340, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/root/data/huyhuynh/clrcmd-master/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1445, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/root/data/huyhuynh/clrcmd-master/venv/lib/python3.8/site-packages/transformers/trainer.py", line 2051, in evaluate
output = eval_loop(
File "/root/data/huyhuynh/clrcmd-master/venv/lib/python3.8/site-packages/transformers/trainer.py", line 2223, in evaluation_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/root/data/huyhuynh/clrcmd-master/src/clrcmd/trainer.py", line 29, in prediction_step
score = model.model(inputs1, inputs2)
File "/root/data/huyhuynh/clrcmd-master/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 947, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'DataParallel' object has no attribute 'model'
15%|█▌ | 250/1617 [09:10<50:11, 2.20s/it]
```
This is training code:
```python
import argparse
import logging
import os
import uuid

from transformers import TrainingArguments, set_seed

from clrcmd.data.dataset import (
    ContrastiveLearningCollator,
    NLIContrastiveLearningDataset,
    STSBenchmarkDataset,
)
from clrcmd.data.sts import load_stsb_dev
from clrcmd.models import create_contrastive_learning, create_tokenizer
from clrcmd.trainer import STSTrainer, compute_metrics
import torch

logger = logging.getLogger(__name__)

parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
# fmt: off
parser.add_argument("--data-dir", type=str, help="Data directory", default="data")
parser.add_argument("--model", type=str, help="Model", default="bert-cls",
                    choices=["bert-cls", "bert-avg", "bert-rcmd", "roberta-cls", "roberta-avg", "roberta-rcmd"])
parser.add_argument("--output-dir", type=str, help="Output directory", default="ckpt")
parser.add_argument("--temp", type=float, help="Softmax temperature", default=0.05)
parser.add_argument("--seed", type=int, help="Seed", default=0)
# fmt: on


def main():
    args = parser.parse_args()
    experiment_name = f"{args.model}-{uuid.uuid4()}"
    training_args = TrainingArguments(
        os.path.join(args.output_dir, experiment_name),
        per_device_train_batch_size=128,
        per_device_eval_batch_size=128,
        learning_rate=5e-5,
        num_train_epochs=3,
        fp16=True,
        logging_strategy="steps",
        logging_steps=20,
        evaluation_strategy="steps",
        eval_steps=250,
        save_strategy="steps",
        save_steps=250,
        metric_for_best_model="eval_spearman",
        load_best_model_at_end=True,
        greater_is_better=True,
        save_total_limit=1,
        seed=args.seed,
    )
    if training_args.local_rank == -1 or training_args.local_rank == 0:
        logging.basicConfig(
            level=logging.INFO,
            format="%(asctime)s - %(message)s",
            filename=f"log/train-{experiment_name}.log",
        )
    logger.info("Hyperparameters")
    for k, v in vars(args).items():
        logger.info(f"{k} = {v}")
    # Log on each process the small summary:
    logger.warning(
        f"Process rank: {training_args.local_rank}, "
        f"device: {training_args.device}, "
        f"n_gpu: {training_args.n_gpu}, "
        f"distributed training: {bool(training_args.local_rank != -1)}, "
        f"16-bits training: {training_args.fp16} "
    )
    # Set seed before initializing model.
    set_seed(training_args.seed)
    # Load pretrained model and tokenizer
    tokenizer = create_tokenizer(args.model)
    model = create_contrastive_learning(args.model, args.temp)
    ### model = torch.nn.DataParallel(model) --> tried but not fix ...
    model.train()
    train_dataset = NLIContrastiveLearningDataset(
        os.path.join(args.data_dir, "nli_for_simcse.csv"), tokenizer
    )
    eval_dataset = STSBenchmarkDataset(
        load_stsb_dev(os.path.join(args.data_dir, "STS", "STSBenchmark"))["dev"], tokenizer
    )
    trainer = STSTrainer(
        model=model,
        data_collator=ContrastiveLearningCollator(),
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        tokenizer=tokenizer,
        compute_metrics=compute_metrics,
    )
    train_result = trainer.train()
    logger.info(train_result)
    trainer.module.save_model(os.path.join(training_args.output_dir, "checkpoint-best"))


if __name__ == "__main__":
    main()
```
I searched for this problem, but I didn't find any solution.
Could you help me?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Wrap the model with train_result = trainer.train()
### Expected behavior
Can solve issue | 12-05-2022 06:00:30 | 12-05-2022 06:00:30 | contact me, I fixed it <|||||>Hi, @huynhhoanghuy. I think that clrcmd trainer is trying to access `model.model` while your model is wrapped into DataParallel, hence there is no `.model` attribute.
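For illustration, a minimal sketch of the unwrapping that is needed once the model is wrapped (names follow the script and traceback above; this is not the clrcmd code itself):
```python
import torch

wrapped = torch.nn.DataParallel(model)  # `model` is the clrcmd model built in the script above

# wrapped.model  -> AttributeError: DataParallel does not forward custom attributes
# the original object (which does have a `.model` attribute) is stored under `.module`
base = wrapped.module if isinstance(wrapped, torch.nn.DataParallel) else wrapped
score = base.model(inputs1, inputs2)  # inputs1/inputs2 as in the clrcmd prediction_step
```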
See addressed [issue](https://github.com/sh0416/clrcmd/issues/2) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,582 | closed | ValueError: Tokenizer class `NllbTokenizer` does not exist or is not currently imported when using NLLB (On Paperspace) | ### System Info
Hello,
When I use the code below:
```py
...
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
...
```
Models : `nllb-distilled-600M`
I got an error in my notebook instance (on `paperspace`). I thought the problem was with the `transformers` version (4.26.0.dev0), but even though I was on the right one it still didn't work.
🤗
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
...
def load_models():
    # build model and tokenizer
    model_name_dict = {'nllb-distilled-600M': 'facebook/nllb-200-distilled-600M',
                       #'nllb-1.3B': 'facebook/nllb-200-1.3B',
                       #'nllb-distilled-1.3B': 'facebook/nllb-200-distilled-1.3B',
                       #'nllb-3.3B': 'facebook/nllb-200-3.3B',
                       }

    model_dict = {}
    for call_name, real_name in model_name_dict.items():
        print('\tLoading model: %s' % call_name)
        model = AutoModelForSeq2SeqLM.from_pretrained(real_name)
        tokenizer = AutoTokenizer.from_pretrained(real_name)
        model_dict[call_name+'_model'] = model
        model_dict[call_name+'_tokenizer'] = tokenizer

    return model_dict
...
```
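For completeness, the version that the notebook actually imports can be double-checked with a quick sanity check:
```python
import transformers

print(transformers.__version__)  # NllbTokenizer needs a release that includes NLLB (v4.21.0 or later)
```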
### Expected behavior
See my model working as expected on the Gradio space | 12-04-2022 23:25:20 | 12-04-2022 23:25:20 | Bug resolved!
I relaunched my instance many times and ran this command: `!pip3 install git+https://github.com/huggingface/transformers.git`<|||||>I have the same problem. help!!! |
transformers | 20,581 | closed | Sensible default for Trainer's dataloader_num_workers argument | ### Feature request
As a relative beginner with Transformers and ML, it took me quite a bit of performance analysis and fiddling to figure out why my GPU was being vastly underutilized in training on image classification. I finally figured out that the bottleneck was the dataloaders (as is typical for image tasks, they apply a few image transformations), and I got a 10X performance increase by setting `dataloader_num_workers` to 16.
Could this have a higher default to avoid this gotcha? Maybe it could default to something like half the number of available CPUs?
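For illustration, the kind of default I have in mind (just a sketch of the idea; the half-the-CPUs heuristic is an assumption, not a benchmarked value):
```python
import os

from transformers import TrainingArguments

# today dataloader_num_workers defaults to 0 (data loading in the main process)
suggested_workers = max(1, (os.cpu_count() or 2) // 2)

args = TrainingArguments(
    output_dir="image-classification-run",
    dataloader_num_workers=suggested_workers,
)
```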
### Motivation
It's frustrating for a beginner to have training times super slow because of an unset parameter. It'd be great for the defaults to work well out-of-the-box.
### Your contribution
Happy to submit a PR if the idea is greenlit. | 12-04-2022 22:35:46 | 12-04-2022 22:35:46 | It's hard to have a nice default that would work everywhere: for NLP tasks you wouldn't need this since the preprocessing is fast. How about doing something in the vision examples?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,580 | closed | VideoMAE with `num_channels!=3` needs a small fix | ## Issue
The VideoMAE model doesn't work for non-RGB videos (when `num_channels!=3`). I believe this is caused by the hardcoded ImageNet means and stds in the following lines:
https://github.com/huggingface/transformers/blob/d51e7c7e8265d69db506828dce77eb4ef9b72157/src/transformers/models/videomae/modeling_videomae.py#L824L826
## Code to Reproduce
```
import torch
from transformers import VideoMAEConfig, VideoMAEForPreTraining
NUM_CHANNELS = 1
config = VideoMAEConfig(num_channels=NUM_CHANNELS)
model = VideoMAEForPreTraining(config)
pixel_values = torch.rand(1, 16, NUM_CHANNELS, 224, 224)
num_patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2
seq_length = (model.config.num_frames // model.config.tubelet_size) * num_patches_per_frame
bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool()
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss = outputs.loss
```
This produces the following error:
```
File ".../lib/python3.10/site-packages/torch/functional.py", line 74, in broadcast_tensors
    return _VF.broadcast_tensors(tensors)  # type: ignore[attr-defined]
RuntimeError: The size of tensor a (512) must match the size of tensor b (1536) at non-singleton dimension 2
```
Setting NUM_CHANNELS = 3 works fine.
## Potential Fix
Since we don't have a `_DEFAULT_MEAN/STD` for non-RGB images, we can just replace `frames=pixel_values` and disallow `norm_pix_loss=False` if `num_channels!=3`. With this modification, the code works on my machine. I am willing to fix and submit a PR. | 12-04-2022 21:30:38 | 12-04-2022 21:30:38 | Hi,
This is related to #19913. Would be great to open a PR! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,579 | closed | fail to import 'microsoft/swin-tiny-patch4-window7-224' in AutoConfig | ### System Info
OSError: microsoft/swin-tiny-patch4-window7-224 does not appear to have a file named config.json. Checkout 'https://huggingface.co/microsoft/swin-tiny-patch4-window7-224/None' for available files
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
NA
### Expected behavior
import 'microsoft/swin-tiny-patch4-window7-224' successful | 12-04-2022 19:21:53 | 12-04-2022 19:21:53 | Hi, @XZhang97666. It seems that config.json file is in the right place(https://huggingface.co/microsoft/swin-tiny-patch4-window7-224/blob/main/config.json).
Could you please tell what version of `transformers` you're using?
Also have you tried to run the snippet code that at https://huggingface.co/microsoft/swin-tiny-patch4-window7-224?<|||||>> Hi, @XZhang97666. It seems that config.json file is in the right place(https://huggingface.co/microsoft/swin-tiny-patch4-window7-224/blob/main/config.json). Could you please tell what version of `transformers` you're using? Also have you tried to run the snippet code that at https://huggingface.co/microsoft/swin-tiny-patch4-window7-224?
Yes. I tried https://huggingface.co/microsoft/swin-tiny-patch4-window7-224, which works. However, another task does not even with the same transformers version (4.22.2).<|||||>> Hi, @XZhang97666. It seems that config.json file is in the right place(https://huggingface.co/microsoft/swin-tiny-patch4-window7-224/blob/main/config.json). Could you please tell what version of `transformers` you're using? Also have you tried to run the snippet code that at https://huggingface.co/microsoft/swin-tiny-patch4-window7-224?
I found the issue. My task generates a local folder called "microsoft/..."<|||||>Closing this issue as it seems resolved, feel free to reopen. |
transformers | 20,578 | closed | Missing support for token sampling in XLMRobertaTokenizer (sentencepiece) | ### Feature request
Hi all, token sampling is supported by the sentencepiece library, but the kwargs required to enable it are blocked by the wrapper (`_tokenize` has no `**kwargs` param).
This simple fix will enable support for token sampling 🎉
### Motivation
Token sampling is awesome, it will enable learning a more robust model 👍
### Your contribution
In `XLMRobertaTokenizer` (i.e. `tokenization_xlm_roberta.py`):
```
def _tokenize(self, text, **kwargs):
    enable_sampling = kwargs.get("enable_sampling", False)
    if enable_sampling:
        return self.sp_model.sample_encode_as_pieces(text, nbest_size=kwargs['nbest_size'], alpha=kwargs['alpha'])
    else:
        return self.sp_model.EncodeAsPieces(text)
```
And in `tokenization_utils_base.py`:
Line 318 --> `def split_on_tokens(tok_list, text, **kwargs):`
Line 338 --> `self._tokenize(token, **kwargs) if token not in self.unique_no_split_tokens else [token]`
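For reference, this is roughly what subword regularization looks like when calling sentencepiece directly (the model path and the `nbest_size`/`alpha` values are just examples):
```python
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="sentencepiece.bpe.model")

# deterministic segmentation (what the wrapper does today)
print(sp.encode("transformers are great", out_type=str))

# sampled segmentation: each call may return a different tokenization
for _ in range(3):
    print(sp.encode("transformers are great", out_type=str, enable_sampling=True, nbest_size=-1, alpha=0.1))
```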
| 12-04-2022 11:41:29 | 12-04-2022 11:41:29 | +1<|||||>+1<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,577 | closed | Add OneFormer Model | # What does this PR do?
Adds the Code, Documentation, and Tests for OneFormer proposed in [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220). I have also opened a [PR](https://huggingface.co/datasets/huggingface/documentation-images/discussions/11) to add the documentation images to `huggingface/documentation-images`.
I have also made changes to the `ImageSegmentationPipeline` to accommodate OneFormer.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten @NielsRogge
I've got 2 main points for now:
## Backbones
However, there's no need to implement backbones from scratch again, as we've just added the `AutoBackbone` class, which allows to use frameworks like DETR, Mask R-CNN, and also OneFormer with all vision backbones available in the library. The idea is to add an `xxxBackbone` class to each vision model, see for instance [here](https://github.com/huggingface/transformers/blob/699e90437f984d69ad3c9b891dd2e9d0fc2cffe4/src/transformers/models/resnet/modeling_resnet.py#L434) for ResNet.
Next, the framework (like OneFormer) can use the `AutoBackbone` class as shown [here](https://github.com/huggingface/transformers/blob/699e90437f984d69ad3c9b891dd2e9d0fc2cffe4/src/transformers/models/maskformer/modeling_maskformer.py#L1385) for MaskFormer. This allows to mix-and-match backbones with a given framework.
The plan is to next add `SwinBackbone`, `ConvNextBackbone`, as well as `NatBackbone` and `DinatBackbone` => which will make sure OneFormer can use them.
## Auto class
I doubt there's a need for an `AutoModelForUniversalSegmentation` class, as OneFormer is probably the only class which will ever be supported by it. It'd be great to make OneFormer work with our existing image segmentation pipeline (cc @Narsil). This pipeline supports instance, semantic and panoptic segmentation, and uses the appropriate postprocess method.
Will soon do a more in depth review! Thanks already for all your work.
<|||||>> @praateekmahajan thank you for working on this! Seems like you already made very good progress, my main comments are:
>
> * As Niels suggested, you can create and/or leverage the XXXBackbone classes. The SwinBackbone PR will be merged shortly so you can just focus on the DinatBackbone class.
> * The current code is CUDA dependent (correct me if I'm wrong). I took a look at the paper and the Pixel Decoder seems very similar to that of Mask2Former (also uses multi-scale deformable attention). Perhaps you could use their PyTorch implementation to get rid of the CUDA scripts, here is the [relevant Mask2Former code.](https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/modeling/pixel_decoder/msdeformattn.py)
> * I think having a OneFormerForUniversalSegmentation class makes sense but we can add it to auto mapping for instance segmentation instead of creating a new mapping for simplicity.
>
> I will do a detailed review once the custom CUDA scripts are cleaned up.
>
> Thanks again :)
Thanks for the suggestions @alaradirik! I will work on using AutoBackbone classes everywhere. About the CUDA code, sure, the PyTorch code is already [there](https://github.com/praeclarumjj3/transformers/blob/cb9cba1bf6d0249401ffacfbe9eca54ba1c384c8/src/transformers/models/oneformer/modeling_oneformer.py#L1228), we just check for the presence of GPU. I will clean the CUDA files. Also, I believe you tagged the wrong person by mistake 😂.
> I think having a OneFormerForUniversalSegmentation class makes sense but we can add it to auto mapping for instance segmentation instead of creating a new mapping for simplicity.
I still think it's better to create a different `AutoMapping` class for OneFormer as it belongs to a whole new class of architecture which uses a single model for all three tasks. Is it possible for us to keep it? Hopefully, there will be follow-up works in the same direction as OneFormer's approach of training a single model.<|||||>> Thanks for the suggestions @alaradirik! I will work on using AutoBackbone classes everywhere. About the CUDA code, sure, the PyTorch code is already [there](https://github.com/praeclarumjj3/transformers/blob/cb9cba1bf6d0249401ffacfbe9eca54ba1c384c8/src/transformers/models/oneformer/modeling_oneformer.py#L1228),
Great, that makes things much easier then, and sorry about tagging the wrong person :)
>
> I still think it's better to create a different `AutoMapping` class for OneFormer as it belongs to a whole new class of architecture which uses a single model for all three tasks. Is it possible for us to keep it? Hopefully, there will be follow-up works in the same direction as OneFormer's approach of training a single model.
MaskFormer and Mask2Former (in progress in another PR) also feature universal segmentation architectures and I agree that new research will likely leverage the same paradigm. In retrospect, creating an auto mapping for universal segmentation and adding MaskFormer and Mask2Former along with OneFormer might be better. @NielsRogge what do you think about this?
<|||||>Hi @NielsRogge @alaradirik, I have the made all the suggested changes, please let me know if I missed anything. Only one thing remains: using Autobackbone for Dinat (will do after the PR for that is merged).
Also a reminder about merging this [PR](https://huggingface.co/datasets/huggingface/documentation-images/discussions/11) for documentation images :)
## Changes after Review
- [x] Replace Swin backbone file with AutoBackbone
- [x] Replace Dinat backbone file with AutoBackbone
- [x] Remove FeatureExtractor Class.
- [x] Remove CUDA dependency code.
- [x] Apply suggested changes to image segmentation `task_inputs` description.
- [x] Remove dataset info files and use json files hosted on hf_hub instead.
<|||||>@praeclarumjj3 Thanks for adding this model! ⭐
@praeclarumjj3 @NielsRogge @sgugger Yes, I think it would be better to add a `OneFormerProcessor` that contains both the tokenizer and image processor, similar to e.g. [OwlViT](https://github.com/huggingface/transformers/blob/94f8e21c7095430caa01272e16a367a421822e1c/src/transformers/models/owlvit/processing_owlvit.py#LL63C5-L63C5). In particular because the text processing is, as far as I can tell, independent of the processing of the images and it ensures accessing, loading and saving of the processing objects (tokenizer & image processor) is consistent across models. <|||||>Hi @NielsRogge @alaradirik @amyeroberts, I have made all the requested changes and added a new `OneFormerProcessor` class.<|||||>Sorry I actually pushed on this PR, I didn't mean to:
https://github.com/huggingface/transformers/pull/20851 (For some reason I could not create a PR on top of you PR)<|||||>This PR has become too massive to be merged safely. Could you split the model addition and the pipeline addition in two different PRs?<|||||>> This PR has become too massive to be merged safely. Could you split the model addition and the pipeline addition in two different PRs?
@NielsRogge do you mind taking care of it ?
Let's remove my commits from this branch and just ignore the pipeline, I will then rebase my own PR on top once this is merged.<|||||>Sure, I think @praeclarumjj3 can revert the pipeline commits since I don't have write access and then @sgugger can have a final review.<|||||>@NielsRogge, are you sure you don't have write access? If @Narsil managed to push on the branch I think we should all have write access. I think it would be nice to take care of this given that @praeclarumjj3 has already done a big amount of work on the PR :)
Thanks a lot!<|||||>@praeclarumjj3 opened a PR here to revert the pipeline updates: https://github.com/praeclarumjj3/transformers/pull/1<|||||>Thanks for all your work! Merging now.<|||||>Hi @praeclarumjj3 Thank you for adding this model!
There are a few examples in the docstrings failing the CI. For example, in `OneFormerForUniversalSegmentation.forward`
```python
>>> # you can pass them to feature_extractor for instance postprocessing
>>> predicted_instance_map = feature_extractor.post_process_instance_segmentation(
```
the `feature_extractor` is not defined.
Would you like to make them fixed 🙏 ? If so, you can run the doctest like
(if you have some change in the branch, stage them first)
```python
python3 utils/prepare_for_doc_test.py src docs
```
then
```bash
python3 -m pytest -v --make-reports doc_tests_gpu --doctest-modules src/transformers/models/gptj/modeling_gptj.py::transformers.models.gptj.modeling_gptj.GPTJForSequenceClassification.forward -sv --doctest-continue-on-failure --doctest-glob="*.mdx"
```
and also
```bash
python3 -m pytest -v --make-reports doc_tests_gpu --doctest-modules src/transformers/models/oneformer/modeling_oneformer.py::transformers.models.oneformer.modeling_oneformer.OneFormerModel.forward -sv --doctest-continue-on-failure --doctest-glob="*.mdx"
```
After running the doctests, discard the change produced by `prepare_for_doc_test.py`, and see if you need more changes in the branch.
Don't hesitate if you have further question, or if you could not find time on this at this moment (our team will fix it then) 🙏 Thank you
<|||||>Hi @ydshieh, thanks for pointing this out to me. I apologize for not fixing the docstrings in the original PR (missed the changes after changing the code in an older commit). I have opened a new PR with the inconsistencies fixed: #21215.
Please take a look and let me know if something's still broken. And thanks for letting me know about the doctests! ✌🏻 |
transformers | 20,576 | closed | Flan-T5 returns incomplete results | ### System Info
transformer version: 4.19.2
platform: Linux
python: 3.8.13
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
inputs = tokenizer("Summarize the following text: Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital. Since she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well. Therefore, Peter stayed with her at the hospital for 3 days without leaving.", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
>>> ['Peter and Elizabeth went to a party together. Elizabeth collapsed and was rushed to the']
```
### Expected behavior
The generated text isn't complete; it seems to be truncated. I just used the example code, so I have no idea what is causing this.
Thanks for your help :) | 12-04-2022 07:15:59 | 12-04-2022 07:15:59 | You can set min_length and max_length in model.generate() to adjust the length of generation settings. By default, max_length is not very long.<|||||>It works! Thanks!
> You can set min_length and max_length in model.generate() to adjust the length of generation settings. By default, max_length is not very long.
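For anyone landing here later, this is roughly what I ended up with (the length value is just an example):
```python
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```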
|
transformers | 20,575 | closed | model.generate() raises an exception | ### System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.15.0-53-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0+cu117 (False)
- Tensorflow version (GPU?): 2.10.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids)
print(f'{transcription=}')
```
1. Copy/paste the above code from [huggingface.co](https://huggingface.co/docs/transformers/model_doc/speech_to_text)
2. Run the script
3. Get an exception
```
Traceback (most recent call last):
File "/home/ymq/tmp/pretrained-models/test/t.py", line 16, in <module>
generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])
File "/home/ymq/py3/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/ymq/py3/lib/python3.10/site-packages/transformers/generation_utils.py", line 1208, in generate
self._validate_model_kwargs(model_kwargs.copy())
File "/home/ymq/py3/lib/python3.10/site-packages/transformers/generation_utils.py", line 910, in _validate_model_kwargs
raise ValueError(
ValueError: The following `model_kwargs` are not used by the model: ['input_ids'] (note: typos in the generate arguments will also show up in this list)
```
### Expected behavior
get stt result text | 12-04-2022 06:18:45 | 12-04-2022 06:18:45 | cc @sanchit-gandhi and @gante <|||||>Hey @yumoqing,
In this case, it's just the code snippet on the [model README](https://huggingface.co/facebook/s2t-small-librispeech-asr) that's wrong. Pasting a corrected version of the code snippet that you can use:
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
sample = ds[0]["audio"]
inputs = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt")
generated_ids = model.generate(input_features=inputs["input_features"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(transcription)
```
**Print Output:**
```
['mister quilter is the apostle of the middle classes and we are glad to welcome his gospel']
```
I've opened a PR to update the example on the model's README card on the Hub: https://huggingface.co/facebook/s2t-small-librispeech-asr/discussions/2/files |
transformers | 20,574 | open | [i18n-<languageCode>] Translating docs to <languageName> | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through)
- [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx).
## Tutorial section
- [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx)
- [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx)
- [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx)
- [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx)
- [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx)
- [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx)
- [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx)
<!--
Keep on adding more as you go 🔥
-->
| 12-04-2022 04:53:15 | 12-04-2022 04:53:15 | Hi @Mellobrainbox, could you fill the template with the language you are interested in? |
transformers | 20,573 | closed | Add Multi Resolution Analysis (MRA) | # Add Multi Resolution Analysis (MRA) for Approximate Self-Attention (Old PR)
This PR adds the MRA model to the repository.
Paper: [https://arxiv.org/pdf/2207.10284.pdf](https://arxiv.org/pdf/2207.10284.pdf)
Code: [https://github.com/mlpen/mra-attention](https://github.com/mlpen/mra-attention)
To-do:
- [ ] Improve loading cuda kernels
- [ ] Improve formatting and documentation
- [ ] Upload checkpoints | 12-04-2022 01:12:21 | 12-04-2022 01:12:21 | cc @amyeroberts and @NielsRogge <|||||>Hello @amyeroberts, thank you so much for going over the code! I've made changes to my branch and left some questions in the above suggestions. Please take a look at them when you are available. I also have a few additional questions and clarifications:
> Can you add any necessary optional dependencies
If I'm not mistaken MRA does not need any optional dependencies. All functions/ classes only require torch and the CUDA kernels. Unfortunately, unlike YOSO, MRA requires CUDA kernels - it cannot run without them. Could it be that the tests are failing because the kernels are not being loaded? If so, how can we handle this dependency on CUDA kernels in the HF implementation? <|||||>Hello @amyeroberts, pinging to follow up on this PR. <|||||>Hi @novice03 - thanks for the ping. Re-reviewing now! <|||||>Before I start reviewing more, could you:
- fix the conflicts
- make sure all tests pass (you can run them locally with pytest)
- make sure all quality checks pass (you can run `make fixup` for most changes that can be automated then look at `make repo-consistency` locally to see where other quality scripts are unhappy).<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20573). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you for the second review @amyeroberts. I've made most, if not all, the changes you suggested.
I'm now working on fixing the tests. `tests_torch` fails because the CUDA kernels are not being loaded correctly. I've added some extra code (for e.g. calling `load_cuda_kernels()` in `mra2_attention()`) for debugging purposes, which I'll remove it later. @sgugger I might need your help in understanding how to correctly load the kernels. I get `RuntimeError: Ninja is required to load C++ extensions` for `test_determinism`. Are Ninja and CUDA not available when running the tests?<|||||>No they are not, as most users won't have them installed. Both are only installed in the runners that run the nightly tests.<|||||>Thanks @sgugger. How can I use the nightly test runners instead? <|||||>@novice03 Unfortunately you can't use the nightly tests environment for the full test suite. As @sgugger notes, most users won't have ninja and cuda installed in their environment - this is something for which the model will need to be robust.
I mentioned in one of [my comments](https://github.com/huggingface/transformers/pull/20573/files#r1072316307) that deformable DETR has a safe way of loading the CUDA kernels. I think this would be the first things to address as this handles the case when ninja and cuda aren't in the environment. <|||||>Thank you @amyeroberts. I didn't notice that DETR can run using regular PyTorch - without CUDA kernels. MRA does not have this functionality yet, so I will be working on this. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello @amyeroberts, I've been talking to the authors about writing non-CUDA PyTorch code for MRA. It seems that writing a PyTorch alternative for MRA, especially the `sparse_max` function, will be extremely inefficient and infeasable. I am currently looking into other alternatives for running CUDA kernels on machines without Ninja. How about pre-compiling the CUDA kernels, add the .so .egg etc files in the repo, and import it during run-time? This way we can provide the pre-compiled kernels to users - we can compile it on a machine that has Ninja and import it later. <|||||>Hi @novice03 - thanks for the update.
I realise my previous comment might not have been completely clear and didn't catch that in your reply. The models relying on custom CUDA kernels don't have pytorch equivalents implemented. Rather, they have a safe way of importing the models if ninja and cuda aren't available e.g. [in deformable detr](https://github.com/huggingface/transformers/blob/a5392ee7470f34bb48417ca2af97b9189f0eda70/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L54), the `is_torch_cuda_available` and `is_ninja_available` functions are used to conditionally load the cuda kernels. If they aren't available, [dummy variables are used](https://github.com/huggingface/transformers/blob/a5392ee7470f34bb48417ca2af97b9189f0eda70/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L62). <|||||>Thanks for the clarification. I assumed we needed a PyTorch implementation since I saw one in [deformable detr](https://github.com/huggingface/transformers/blob/a5392ee7470f34bb48417ca2af97b9189f0eda70/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L548). However, if equivalent PyTorch functions are not required, then I can just add dummy variables/functions for MRA. <|||||>Hello @amyeroberts and @sgugger, I've added safe loading of the CUDA kernels and made sure all the tests pass. I also uploaded a checkpoint to the hub. Please take a look at the updated code. <|||||>Hi @sgugger and @amyeroberts, I've resolved all conflicts and ensured that all the tests pass. Can you please take a look at the updated code?<|||||>Thank you @sgugger. I've addressed all of your suggestions. Please take a look at the updated code. <|||||>Thanks for the corrections @sgugger! I've made all the changes suggested and taken another look at the code (fixed some urls and tests). It looks like there are merge conflicts because of the .mdx files on my branch. How do you recommend resolving the conflicts? Should I change all the .mdx files to .md?<|||||>Yes, you will need to merge main into your branch (or rebase if you prefer) to fix the conflicts and also switch all your mdx to md.
This is because GitHub recently made changes to the UI of the diffs for MDX files, which makes it really hard to review PRs, so we switched everything to Markdown. Sorry about that.<|||||>Hi @sgugger, I might need some help in correctly switching from mdx to md. I tried renaming and git mv, but this still creates a lot of conflicts. What do you suggest?<|||||>No you can't rename them in this PR, you need to rebase on main or merge the main branch into yours.<|||||>Continuing in #24513 |
transformers | 20,572 | closed | Add OneFormer Model | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds the Code, Documentation and Tests for OneFormer proposed in [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220). I have also opened a [PR](https://huggingface.co/datasets/huggingface/documentation-images/discussions/11) to add the documentation images to `huggingface/documentation-images`.
I have not integrated OneFormer into the [`image-segmentation`](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/image_segmentation.py) pipeline yet. As OneFormer takes two inputs (image and task token), I will need to create a new pipeline. Please let me know if I should add that to this PR or open a new one.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten @NielsRogge @amyeroberts @sgugger
| 12-04-2022 01:09:32 | 12-04-2022 01:09:32 | |
transformers | 20,571 | closed | Can not sample next tokens with GPT-2 model with GPT2Config `reorder_and_upcast_attn=True` | ### System Info
- `transformers` version: 4.24.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.12.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no (for debugging/generating, training done on GPU)
- Using distributed or parallel set-up in script?: no
### Who can help?
@patrickvonplaten (referenced in both GPT-2 and Text generation)
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The following code uses a GPT-2 model trained from scratch. It works without problems when `reorder_and_upcast_attn=False`(the default) but with `reorder_and_upcast_attn=True` it throws `RuntimeError: probability tensor contains either ´inf´, ´nan´ or element < 0` when calling `model.sample()`. You might need to call `sample()` several times (I did 100 calls in my experiments).
```python
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = PreTrainedTokenizerFast.from_pretrained(model_path)
mol = "CCC(C)(C)"
mol_encoded = tokenizer(
mol,
add_special_tokens=True,
padding=True,
)
input_ids = mol_encoded["input_ids"]
logits = model(**mol_encoded).logits[0]
assert not torch.any(torch.isinf(logits)) # just a safety check for debugging
assert not torch.any(torch.isnan(logits)) # just a safety check for debugging
# "Manual" sampling, works in both cases
probs = torch.nn.functional.softmax(logits, dim=-1)
next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
# Hugging Face sampling
next_tokens = model.sample(
input_ids,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
max_new_tokens=100,
) # This raises the RuntimeError exception when `reorder_and_upcast_attn=True`
```
### Expected behavior
I can think of the following options
- IF it's a bug, fix it ;-), i.e. `reorder_and_upcast_attn=True` should work for text generation
- if "massaging" the logits with `logits_processor` or `logits_warper` is required, update the docs
- improve my understanding of the option...
| 12-04-2022 00:00:06 | 12-04-2022 00:00:06 | cc @gante <|||||>Hi @hogru 👋 Having a popular project like `transformers` means we get many support and feature requests — if we want to maximize how much we help the community, the community has to help us stay productive 🙏
To that end, please share a *short* script where the issue is clearly reproducible on *any* computer. In your particular case, your example is missing the model itself, which can influence the `sample` call in many ways (e.g. depending on the model config). Thank you 🤗<|||||>Hi @gante, I get this. Since I can avoid the issue by not using that option and you hint at the issue being specific to my environment/config/model/... I save you and me some time and "close" the issue. I assumed this to be a generic issue and wanted to let you know. Solving it for my specific use case is not a priority.<|||||>Hey @hogru -- actually it may be an issue that happens on all sorts of environments and models :)
I didn't mean to sound dismissive. The limitation here is manpower: we have many issues per maintainer, so our focus is on 1) common issues; 2) issues where we can pin the issue. This is the first time I see this issue, so 1) doesn't apply. For 2) to happen, I need to be able to reproduce the issue quickly, otherwise it will be a huge time sink to find the exact problem so it can be fixed 🤗 That's where the short script comes in!<|||||>Hi @gante, thanks for reaching out, I did not perceive it as dismissive and my answer was intended to be in a friendly voice. But English is not my native language... And I am new to hugging face which means that (a) there's a chance that I overlook something obvious and (b) I need to figure out how to push the model to the hub, ahem. I know, probably pretty simple, but I am uncertain if it's intended to hold models for debugging. And again, following your argument, your time is likely better invested in other areas. So, all good here. |
transformers | 20,570 | closed | Add TFBartForSequenceClassification | # What does this PR do?
This adds a sequence classification head to the TensorFlow implementation of BART, following the pattern of `BartForSequenceClassification` (PyTorch version)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # [Issue 19653](https://github.com/huggingface/transformers/issues/19653)
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten, @patil-suraj | 12-03-2022 22:29:59 | 12-03-2022 22:29:59 | _The documentation is not available anymore as the PR was closed or merged._<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh @sgugger Thank you both for your insightful reviews. Pushed some changes and posed a question back. <|||||>Thanks again @ydshieh
I'm afraid the special handling is necessary as the test `test_save_load_after_resize_token_embeddings` does some extra magic to alter the input ids. I took @sgugger 's suggestion to overwrite the test in the BartTester and move that logic into the test itself. That should clear up common.<|||||>> Good to go with the nits, thanks for bearing with us!
Happy to take care of them! This is my first PR, thanks for all the help, and the seamless process.<|||||>> > Good to go with the nits, thanks for bearing with us!
>
> Happy to take care of them! This is my first PR, thanks for all the help, and the seamless process.
You are doing a great job! 💯
|
transformers | 20,569 | closed | Spanish translation of asr.mdx and add_new_pipeline.mdx | # What does this PR do?
Translates `asr.mdx` and `add_new_pipeline.mdx` into Spanish. Also updates the `_toctree.yml` file accordingly. Includes minor typo corrections for the original versions of both files and the translated version of a file I had previously worked on.
Related to #15947
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests) Pull Request section?
@osanseviero @sgugger | 12-03-2022 19:26:50 | 12-03-2022 19:26:50 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I think I committed all the suggested changes, thanks @osanseviero !<|||||>Thanks again for your contribution! |