| Column | Type | Values / lengths |
|---|---|---|
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
19,666
closed
How to implement beam search on logits output from BART model?
Hi, I've converted my BART-LARGE-CNN model to ONNX and now am trying to perform beam search and then decoding to generate a summary. The output from the logits layer is in the shape of [1,804,50265], which I'm assuming is [batch size, time step, vocab length]. I then perform softmax on each time step and log the result. If my understanding is correct, I can perform greedy search easily with this (at each time step, select the highest probability, then decode using the vocab to find the summary). But how can I implement beam search? For example, given the choice of a word (i.e. "The") at time step 1, the probability of each word following it at time step 2 should be different depending on the word selected at time step 1. But in this case the logits are fixed for every different word at time step 2? Sorry, I'm new to this and pretty confused. Any help would be appreciated.
10-17-2022 09:45:15
10-17-2022 09:45:15
Gently pinging @lewtun, and I'm unknowledgeable about the intersection between ONNX and text generation :) @ZiyueWangUoB yeah, you can implement the basic version of greedy search as you described. Beam search is more complex, a good reference is the following [blog post](https://huggingface.co/blog/how-to-generate). But I suspect we already have a solution for ONNX models!<|||||>@gante Yes I've read that article and understand the theory behind beam search. However I feel like I'm missing something with the onnx output, as the logits alone shouldn't be able to cover the beam search algorithm. <|||||>@ZiyueWangUoB There are several ways to kickstart beam search, but in all of them you have to do shenanigans at the start to obtain `N` (number of beams) sets of logits from a single input row. Option 1 - The first iteration is a normal greedy search where you keep the top_k (k=`N`) tokens Option 2 - You replicate your input `N` times, but set a large score penalty in all but the first row. You can use beam search from the first iteration. From there, run the usual beam search: obtain `[N, vocab_size]` logits, pick the top `N` based on the score. Following our code, especially our TF and JAX implementations, is great to understand everything that goes into making it right! Note: As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters (like these questions), we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗<|||||>@gante how would I go about getting N sets of logits? The model only outputs a [1,804,50265] set, which I’m assuming is 1 set of logits.<|||||>Thanks for the ping @gante ! @ZiyueWangUoB we actually have a beam search example with BART + TorchScript from the ONNX team that you can inspect here: https://github.com/huggingface/transformers/blob/main/examples/research_projects/onnx/summarization/bart_onnx/generation_onnx.py tl;dr implementing beam search from scratch is quite involved and essentially requires reimplementing large chunks of the `generate()` method we have in `transformers` If ONNX Runtime is an option, an alternative would be to run the generation with our `optimum` lib: ```python from transformers import AutoTokenizer, pipeline from optimum.onnxruntime import ORTModelForSeq2SeqLM model_id = "facebook/bart-large-cnn" tokenizer = AutoTokenizer.from_pretrained(model_id) model = ORTModelForSeq2SeqLM.from_pretrained(model_id, from_transformers=True) summarizer = pipeline("summarization", model=model, tokenizer=tokenizer) text = "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct." summarizer(text) ``` <|||||>@lewtun Thanks for the input! 
I've been looking at that example, but the problem with directly using that script is that the length of the encoded text is fixed. I'll look into the TorchScript beam search further, but as of right now it can't be directly used. As for optimum, I will try that. I'm currently trying to convert the model to TensorRT at the end, but if that's not reasonable I will use optimum. <|||||>> As for optimum, I will try that. I'm currently trying to convert the model to TensorRT at the end, but if that's not reasonable I will use optimum. Cool! We're hoping to integrate TensorRT as a backend in `optimum` when we get some bandwidth - in the meantime, you're more than welcome to open an issue/PR if you feel so inclined :)
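To make the beam-search discussion above concrete, here is a minimal, hedged sketch of a single beam expansion over step log-probabilities. It assumes the decoder has already been re-run once per beam so that a `[num_beams, vocab_size]` array of log-softmaxed logits is available for the current step (the key point from the thread: the logits must be recomputed after every chosen token); the function and variable names are illustrative, not from the `transformers` source.

```python
import numpy as np

def beam_step(beam_scores, step_log_probs, num_beams):
    """One beam-search expansion.

    beam_scores:    [num_beams] cumulative log-prob of each beam so far
    step_log_probs: [num_beams, vocab_size] log-softmaxed logits for the current step
    """
    vocab_size = step_log_probs.shape[-1]
    # Score of every (beam, next-token) continuation.
    candidates = beam_scores[:, None] + step_log_probs
    flat = candidates.reshape(-1)
    # Keep the num_beams best continuations overall.
    top = np.argsort(flat)[-num_beams:][::-1]
    beam_indices = top // vocab_size  # which beam each continuation extends
    token_ids = top % vocab_size      # which token to append to that beam
    return flat[top], beam_indices, token_ids
```

Each selected `(beam_index, token_id)` pair is appended to the corresponding hypothesis, the decoder is run again with the extended sequences, and the loop repeats until EOS or a length limit; finished beams are usually length-normalised before picking the final summary.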
transformers
19,665
closed
My customized function compute_metrics doesn't work when I train the CLIP model
### System Info **environment** transformers==4.17.0 **hopes** I would like to know how to output evaluation metrics when I train CLIP. I want to get `eval_loss` the way I get `train_loss`. Could anyone help me? **details** I have trained the CLIP demo (with the original training script `run_clip.py`: https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text) successfully and obtained the following logs: <img width="1243" alt="image" src="https://user-images.githubusercontent.com/27990344/196113496-3582dd41-32bf-47c5-939c-289b5d8fcef6.png"> There seem to be no evaluation metrics (such as `loss` or `acc`) in the output after I specify `--do_eval` before training. So I specified `compute_metrics=compute_metrics` in `Trainer` and got errors when CLIP does evaluation: <img width="737" alt="image" src="https://user-images.githubusercontent.com/27990344/196118789-3587ea57-1654-4075-a7ef-33f43505f15b.png"> <img width="1238" alt="image" src="https://user-images.githubusercontent.com/27990344/196117202-106ee438-254e-492f-a6ea-c332d9a4b7b4.png"> Besides the above errors, I also checked the source code (`line 3052`, https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py) and found that my customized function `compute_metrics` doesn't work when `all_labels is None`: <img width="855" alt="image" src="https://user-images.githubusercontent.com/27990344/196119730-ab1ae85d-324b-4580-bf64-ff322ee2a11e.png"> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the original training script `run_clip.py`: https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text ### Expected behavior I would like to know how to output evaluation metrics when I train CLIP. I want to get `eval_loss` the way I get `train_loss`.
10-17-2022 08:02:03
10-17-2022 08:02:03
@ydshieh Can you take a moment to help out? Thx a lot!<|||||>Hi @lchwhut . Thank you for reporting. I will take a look<|||||>Hi @lchwhut Could you check what you get as `all_preds` here https://github.com/huggingface/transformers/blob/b17a5e00749790895314ea33a4f156c918718dfe/src/transformers/trainer.py#L3071 Does it contain the loss value?<|||||>Well, a second look, I think you can comment out this block ```python # Metrics! if self.compute_metrics is not None and all_preds is not None and all_labels is not None: if args.include_inputs_for_metrics: metrics = self.compute_metrics( EvalPrediction(predictions=all_preds, label_ids=all_labels, inputs=all_inputs) ) else: metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels)) else: metrics = {} ``` and put `metrics = {}` before the line `metrics = denumpify_detensorize(metrics)`. Please let me know if this helps, thank you!<|||||>> Well, a second look, I think you can comment out this block > > ```python > # Metrics! > if self.compute_metrics is not None and all_preds is not None and all_labels is not None: > if args.include_inputs_for_metrics: > metrics = self.compute_metrics( > EvalPrediction(predictions=all_preds, label_ids=all_labels, inputs=all_inputs) > ) > else: > metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels)) > else: > metrics = {} > ``` > > and put `metrics = {}` before the line `metrics = denumpify_detensorize(metrics)`. > > Please let me know if this helps, thank you! Commenting out this block and putting `metrics = {}` before the line `metrics = denumpify_detensorize(metrics)` doesn't work. It will report error like this: ![image](https://user-images.githubusercontent.com/27990344/196848990-107c81db-525a-4f6e-985e-f065907dbab1.png) Actually, this way is equivalent to executing the `else` module because `all_labels ` is None. I checked the src code `loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)` This step seems to return (None, logits, None) when the model is CLIP,because in this function, `has_labels=False`. <|||||>Hi @lchwhut , could you check if the `inputs` at the line you mentioned have `"return_loss": True`, just like https://github.com/huggingface/transformers/blob/bbe2c8b126b04a3250e7089cf507ec62ce8d716c/examples/pytorch/contrastive-image-text/run_clip.py#L215 If it exists there and being `True`, the model itself should be able to return a loss value. We then have to check if `has_labels=False` will play a role not to return it back.<|||||>> Hi @lchwhut , could you check if the `inputs` at the line you mentioned have `"return_loss": True`, just like > > https://github.com/huggingface/transformers/blob/bbe2c8b126b04a3250e7089cf507ec62ce8d716c/examples/pytorch/contrastive-image-text/run_clip.py#L215 > > > If it exists there and being `True`, the model itself should be able to return a loss value. We then have to check if `has_labels=False` will play a role not to return it back. Hi @ydshieh, i think the `inputs` does have `"return_loss": True` because the `inputs` is generated by dataloader which conducts function `collate_fn`. 
<img width="940" alt="image" src="https://user-images.githubusercontent.com/27990344/197094616-779776fd-2ade-4ce2-95b5-5d9ec0e5d273.png"> `loss = None` will be specified if `have_label == False` <img width="833" alt="image" src="https://user-images.githubusercontent.com/27990344/197096010-32dac0c7-57d9-4fd2-8232-146ecee848dd.png"> The `logits` returned by `self.prediction_step` is the output of the CLIP which like: <img width="381" alt="image" src="https://user-images.githubusercontent.com/27990344/197096335-d91dfc31-8f61-41b1-8432-1b34de556cad.png"> The type of `text_model_output` and `vision_model_output` is not Tensor which causes(i think) the following error: <img width="612" alt="image" src="https://user-images.githubusercontent.com/27990344/197096740-eb542f27-6894-4e30-962a-e0fd661fd768.png"> <|||||>OK @lchwhut Thank you for all of these checks. I will figure it out a way.<|||||>Hi @lchwhut Since the commit [3951b9f39](https://github.com/huggingface/transformers/commit/3951b9f3908bfa30be7fd814cd2ad1039d3162d8) (PR #16526), we can get the evaluation loss. Could you try with the latest version? Thanks. (You can try with a tiny dummy training) ```bash ***** train metrics ***** epoch = 1.0 train_loss = 1.3123 train_runtime = 0:00:08.53 train_samples_per_second = 1.876 train_steps_per_second = 0.938 [INFO|trainer.py:2412] 2022-10-22 11:28:29,243 >> ***** Running Evaluation ***** [INFO|trainer.py:2414] 2022-10-22 11:28:29,243 >> Num examples = 16 [INFO|trainer.py:2417] 2022-10-22 11:28:29,243 >> Batch size = 2 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:01<00:00, 5.62it/s] ***** eval metrics ***** epoch = 1.0 eval_loss = 0.8159 eval_runtime = 0:00:01.63 eval_samples_per_second = 9.806 eval_steps_per_second = 4.903 ``` Dummy training like ```bash python ./run_clip.py \ --output_dir ./outputs \ --model_name_or_path ./clip-roberta \ --data_dir $PWD/data \ --dataset_name ydshieh/coco_dataset_script \ --dataset_config_name=2017 \ --image_column image_path \ --caption_column caption \ --remove_unused_columns=False \ --do_train \ --do_eval \ --num_train_epochs 1 \ --max_steps 8 \ --max_train_samples 16 \ --max_eval_samples 16 \ --per_device_train_batch_size 2 \ --per_device_eval_batch_size 2 \ --learning_rate="5e-5" \ --warmup_steps="0" \ --weight_decay 0.1 \ --overwrite_output_dir \ ```<|||||>> Hi @lchwhut > > Since the commit [3951b9f39](https://github.com/huggingface/transformers/commit/3951b9f3908bfa30be7fd814cd2ad1039d3162d8) (PR #16526), we can get the evaluation loss. Could you try with the latest version? Thanks. 
> > (You can try with a tiny dummy training) > > ```shell > ***** train metrics ***** > epoch = 1.0 > train_loss = 1.3123 > train_runtime = 0:00:08.53 > train_samples_per_second = 1.876 > train_steps_per_second = 0.938 > [INFO|trainer.py:2412] 2022-10-22 11:28:29,243 >> ***** Running Evaluation ***** > [INFO|trainer.py:2414] 2022-10-22 11:28:29,243 >> Num examples = 16 > [INFO|trainer.py:2417] 2022-10-22 11:28:29,243 >> Batch size = 2 > 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:01<00:00, 5.62it/s] > ***** eval metrics ***** > epoch = 1.0 > eval_loss = 0.8159 > eval_runtime = 0:00:01.63 > eval_samples_per_second = 9.806 > eval_steps_per_second = 4.903 > ``` > > Dummy training like > > ```shell > python ./run_clip.py \ > --output_dir ./outputs \ > --model_name_or_path ./clip-roberta \ > --data_dir $PWD/data \ > --dataset_name ydshieh/coco_dataset_script \ > --dataset_config_name=2017 \ > --image_column image_path \ > --caption_column caption \ > --remove_unused_columns=False \ > --do_train \ > --do_eval \ > --num_train_epochs 1 \ > --max_steps 8 \ > --max_train_samples 16 \ > --max_eval_samples 16 \ > --per_device_train_batch_size 2 \ > --per_device_eval_batch_size 2 \ > --learning_rate="5e-5" \ > --warmup_steps="0" \ > --weight_decay 0.1 \ > --overwrite_output_dir \ > ``` Hi @ydshieh Thanks for your help! I updated `transformers==4.17.0` to `transformers==4.21.3` and could get the evaluation loss now! <img width="1235" alt="image" src="https://user-images.githubusercontent.com/27990344/197668516-35e48897-0728-40bf-9179-80862ddd4941.png"> But the error `'BaseModelOutputWithPoolingAndCrossAttentions' object has no attribute 'detach'` will still reproduce if i customize `compute_metrics`. I check the code and find if don't customize `compute_metrics`, `prediction_loss_only=None` will be set and thus the `prediction_step` will return `(loss, None, None)` (just skip `nested_detach`) <img width="850" alt="image" src="https://user-images.githubusercontent.com/27990344/197669277-debf27d2-306e-43cb-a5f6-84ca95a4befe.png"> <img width="442" alt="image" src="https://user-images.githubusercontent.com/27990344/197668702-0506b9bd-6c73-453e-84b8-e9b1c5632773.png"> I think it doesn't matter because the `eval_loss` is enough for me. I can remove `text_model_output` and `vision_model_output` from `CLIPOutput` if i need customized compute_metrics to evaluate. Thank you very much! <|||||>Hi @lchwhut Yeah, `CLIP` is indeed special in terms of the output format :-) The `Trainer` class is designed to work with the most common use cases, but not a one-size-fits-all solution. Sometimes we need more customization <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
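As a hedged aside on the thread above: `Trainer.evaluate()` accepts an `ignore_keys` argument that drops selected model outputs inside `prediction_step` before they are detached and gathered, so the nested `text_model_output` / `vision_model_output` objects never reach `nested_detach` or a custom `compute_metrics`. A minimal sketch, assuming the `trainer` object built as in `run_clip.py`:

```python
# `trainer` is assumed to be the Trainer instance created in run_clip.py.
# Dropping the nested (non-tensor) CLIP outputs sidesteps the
# "'BaseModelOutputWithPoolingAndCrossAttentions' object has no attribute 'detach'" error.
metrics = trainer.evaluate(ignore_keys=["text_model_output", "vision_model_output"])
print(metrics)
```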
transformers
19,664
closed
Update README.md
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-17-2022 05:32:30
10-17-2022 05:32:30
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your PR, but we will keep the current wording.
transformers
19,663
closed
Add pillow to layoutlmv3 example requirements.txt
Adds the required pillow / PIL library to the layoutlmv3 training example, as it's used to load the images during training. @sgugger, @patil-suraj tagging you as I don't know who is responsible for that example.
10-16-2022 23:54:23
10-16-2022 23:54:23
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,662
closed
word replacement line #231
install->installation # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-16-2022 20:13:10
10-16-2022 20:13:10
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,661
closed
grammatical error line #218
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-16-2022 20:09:45
10-16-2022 20:09:45
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your PR, but the sentence is correct as it is. Your proposed modification changes the meaning.
transformers
19,660
closed
adding functionality to iterate pipelines over sentence pairs when using dataset
### Feature request To iterate over a dataset using pipelines, the docs mention this sample, [source](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#pipeline-batching) ``` import datasets from transformers import pipeline from transformers.pipelines.pt_utils import KeyDataset from tqdm.auto import tqdm pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0) dataset = datasets.load_dataset("superb", name="asr", split="test") # KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item # as we're not interested in the *target* part of the dataset. for out in tqdm(pipe(KeyDataset(dataset, "file"))): print(out) # {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"} # {"text": ....} # .... ``` Pipeline supports sending a sentence pair as a dict. [source](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/text_classification.py#L112) ``` pipe = pipeline('text-classification') out = pipe( {'text':'I like you', 'text_pair':'. I love you'}) ``` But there is no way to supply a pair of sentences when using Hugging Face datasets. We can do this by simply adding a KeyPairDataset in [pt_utils](https://github.com/huggingface/transformers/blob/2ef774211733f0acf8d3415f9284c49ef219e991/src/transformers/pipelines/pt_utils.py). Something like: ``` class KeyPairDataset(Dataset): def __init__(self, dataset: Dataset, key1: str, key2: str): self.dataset = dataset self.key1 = key1 self.key2 = key2 def __len__(self): return len(self.dataset) def __getitem__(self, i): return {'text':self.dataset[i][self.key1],'text_pair':self.dataset[i][self.key2]} ``` And then run inference with ``` dataset = Dataset.from_pandas(dataset_df[['sentence1', 'sentence2']]) pipe = pipeline('text-classification', model=args.input_path_model, device=0, num_workers=4) result = list(tqdm(pipe(KeyPairDataset(dataset, 'sentence1', 'sentence2'), batch_size=32), total=len(dataset))) ``` ### Motivation I am working on models that take a sentence pair as input. Just something that could make my life easier instead of making my own datasets and data loaders or overriding the preprocess function of the pipeline. ### Your contribution I can submit the PR.
10-16-2022 18:17:15
10-16-2022 18:17:15
cc @Narsil <|||||>Hi @rohit1998 , Sounds like a good addition. In general I think it's good if users understand how to create their own, but `KeyPairDataset` should fit quite nicely !<|||||>Sure, I will try to add it and tag you to pr soon<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Wouldn't it be worth to add some generalisation like: ```python class KeyPairDataset(Dataset): def __init__(self, dataset: Dataset, key1: str, key2: str): self.dataset = dataset self.key1 = key1 self.key2 = key2 def __len__(self): return len(self.dataset) def __getitem__(self, i): return {self.key1:self.dataset[i][self.key1],self.key2:self.dataset[i][self.key2]} ```<|||||>I believe (correct me if i am wrong), but `text` and `text_pair` are conventions used in transformers library for two sentence case. If yes, making it more general would require changes in pipeline api.
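Until a `KeyPairDataset` helper lands, a hedged workaround sketch (model name chosen only for illustration) is to stream `{"text", "text_pair"}` dicts into the pipeline from a plain generator, which keeps batching available without any changes to `transformers`:

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="roberta-large-mnli")

pairs = [("I like you", "I love you"), ("It is raining", "The sun is shining")]

def pair_generator():
    # Yield the dict format the text-classification pipeline already accepts.
    for first, second in pairs:
        yield {"text": first, "text_pair": second}

for out in pipe(pair_generator(), batch_size=2):
    print(out)
```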
transformers
19,659
closed
adding functionality to iterate pipelines over sentence pairs when using dataset
# What does this PR do? This PR adds a new torch data set class to pt_utils.py that helps with using pipelines and datasets for sentence pair tasks. Usage can be simply like ``` dataset = Dataset.from_pandas(dataset_df[['sentence1', 'sentence2']]) pipe = pipeline('text-classification', model=args.input_path_model, device=0, num_workers=4) result = list(tqdm(pipe(KeyPairDataset(dataset, 'sentence1', 'sentence2'), batch_size=32), total=len(dataset))) ``` @LysandreJik please have a look. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-16-2022 17:38:17
10-16-2022 17:38:17
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19659). All of your documentation changes will be reflected on that endpoint.
transformers
19,658
closed
[Doctest] Add `configuration_trocr.py`
* trocr Config for doctest * ran make style # What does this PR do? Add `configuration_trocr.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you please take a look at it? Thanks =)
10-16-2022 15:20:27
10-16-2022 15:20:27
_The documentation is not available anymore as the PR was closed or merged._
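For context, the docstring example these doctest PRs exercise typically has the following form (a minimal sketch; the model weights are randomly initialised):

```python
from transformers import TrOCRConfig, TrOCRForCausalLM

# Initialise a TrOCR config with default values, build a model from it,
# and read the config back from the model.
configuration = TrOCRConfig()
model = TrOCRForCausalLM(configuration)
configuration = model.config
```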
transformers
19,657
closed
Fix pipeline predict transform methods
# What does this PR do? This PR fixes the pipeline's predict and transform methods, which are wrappers around `__call__` of the pipeline class. `__call__` requires one positional argument; however, these wrapper methods pass the incoming argument as a keyword argument to the `__call__` method, which leads to failure. Hence, in this commit the keyword argument is changed to a positional argument, and basic tests are added to make sure this does not break again in the future. Fixes #19289 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik @Narsil
10-16-2022 15:12:03
10-16-2022 15:12:03
_The documentation is not available anymore as the PR was closed or merged._
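A hedged sketch of what the fix enables, using the default sentiment model of the `text-classification` pipeline; after the keyword-to-positional change, the scikit-learn-style wrappers should give the same result as a direct call:

```python
from transformers import pipeline

pipe = pipeline("text-classification")
texts = ["I love this.", "I hate this."]

# predict/transform are thin wrappers around __call__, so all three should agree.
print(pipe(texts))
print(pipe.predict(texts))
print(pipe.transform(texts))
```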
transformers
19,656
closed
Pooled output from DeBERTa(v2) model
### Feature request Add a context pooler and include the `pooler_output` in the result of the `forward` method of the `(TF)Deberta(V2)Model`. Changes required: * introduce `add_pooling_layer` flag in the constructor * add pooler * change return type from `BaseModelOutput` to `BaseModelOutputWithPooling` * update heads which currently do their own context pooling ### Motivation * Simplification of use cases where the model is used to generate context embeddings (whether the embeddings being the end goal or only used as an input to additional layers in a custom model) * Currently multiple heads require the pooled output and hence it is re-implemented in multiple places. The complexity of the heads and the redundancy can be decreased by moving this operation to the base model * Consistency with BERT and RoBERTa implementations which already operate in this fashion. Additionally, this consistency can simplify use cases where the user works with context embeddings and wants to experiment with different models. E.g.: BERT and RoBERTa can be easily exchanged. On the other hand, swapping in DeBERTa requires the user to handle the context pooler ### Your contribution I implemented the necessary changes in my fork and can submit a PR
10-16-2022 10:22:02
10-16-2022 10:22:02
Thanks for your suggestion! This will sadly be a breaking change as it will change the way the weights are named, and thus break compatibility with all DeBERTa models on the hub. So even if your suggestion makes a ton of sense, I don't think we will be able to implement it.<|||||>@sgugger thank you for your reply! I see why the implementation of the heads should not be altered, nevertheless, wouldn't it still make sense to at least add the pooler to the base model (the deberta model without any head)? Motivation being the first and the last points from the list above. <|||||>We have plenty of other models (like ELECTRA) where the pooler is not implemented. As a general rule of thumb, it depends on whether it was in the pretraining objective or not.<|||||>I understand and thank you! I will close the issue then.
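For readers who land on this issue wanting sentence embeddings from DeBERTa today, a hedged sketch of doing the pooling by hand (mirroring what the heads' `ContextPooler` does: take the first token's hidden state, which the head then passes through a dense layer and activation; the checkpoint name is only an example):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModel.from_pretrained("microsoft/deberta-v3-base")

inputs = tokenizer("A sentence to embed.", return_tensors="pt")
with torch.no_grad():
    last_hidden = model(**inputs).last_hidden_state  # [batch, seq_len, hidden]

# Context pooling done manually: the [CLS]/first-token representation.
pooled = last_hidden[:, 0]
print(pooled.shape)
```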
transformers
19,655
closed
Removed Bert interdependency from Funnel transformer
# What does this PR do? Hi @sgugger, Fixes #19303 - The `BertTokenizer` dependency has been removed from `FunnelTokenizer` - The `BertTokenizerFast` dependency has been removed from `FunnelTokenizerFast` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger
10-16-2022 06:05:09
10-16-2022 06:05:09
_The documentation is not available anymore as the PR was closed or merged._<|||||>> You just need to remove those `:obj:` :-) Made all the necessary changes. Thanks for looking into it.
transformers
19,654
closed
Clean up deprecation warnings
The deprecation spring cleaning mentioned in #19371 Notes: Changed some strings in tests to raw strings, which will change the literal content of the strings as they are fed into whatever machine handles them. Test cases for past in the past/past_key_values switch changed/removed due to warning of impending removal Most of the warnings defined and thrown by transformers functions were left alone # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-15-2022 16:57:34
10-15-2022 16:57:34
_The documentation is not available anymore as the PR was closed or merged._<|||||>@Davidy22 Thanks for opening this PR and all of the fixes! 💪 Just taking a look now. It would really helpful as a reviewer, and for future reference if you could give a quick summary of the deprecation warnings you tackled. As a quick note regarding `PIL.Image.XXX` -> `PIL.Image.Resampling.XXX`, these changes might be breaking (c.f. [open transformers issue](https://github.com/huggingface/transformers/issues/19569) and [related PR in diffusers](https://github.com/huggingface/diffusers/pull/788) as it requires `Pillow>=9.1.0` which is not currently enforced. @sgugger What is the typical strategy for handling dependancy version updates? <|||||>Thanks for pointing this out Amy! In this case, Pillow 9.1.0 is too recent to be pinned as a minimum version (our general rule of thumb is to provide a support for 2 years and this is only from April). So we will need to move those objects to our `image_utils.py` where we can do some checks like: ```py if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"): # Define enum using `PIL.Image.Resampling` else: # Define enum using `PIL.Image` ``` As for the actual enum, we could name it `PILImageResampling`?<|||||>Oh whoops I probably should have taken a couple more notes, I only wrote down a couple of things that stood out as things that'd functionally change something. Summary from re-skimming through the listed changes: - np types swapped with equivalent python default types - PIL Resampling options switched to the new recommended location in the PIL library - Some strings with non-python escapes changed to r strings. - dict_type -> dict parameter in dictionary.Dictionary - Usages of past changed to past_key_values - topk -> top_k<|||||>Added PILImageResampling, dealt with some funky import issues in one file that weren't happening in any of the other files, don't know why specifically the flava test file would have issues importing from image_utils, hoping it's not some thing that actually also happens in other files but doesn't get surfaced because it's not covered or something
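Fleshing out the version check sketched in the discussion above, the backward-compatible alias could look roughly like this (a sketch under those assumptions, not necessarily the exact code that was merged):

```python
import PIL.Image
from packaging import version

if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"):
    # Pillow >= 9.1.0: the new Resampling enum.
    PILImageResampling = PIL.Image.Resampling
else:
    # Older Pillow: the module-level constants carry the same names.
    PILImageResampling = PIL.Image

# Callers can now write the same thing on either Pillow version.
resample = PILImageResampling.BILINEAR
```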
transformers
19,653
closed
TFBartForSequenceClassification
The TensorFlow version of `BartForSequenceClassification` seems to be missing. I've been putting a port together for another use case. Any interest in adding it to the repo?
10-15-2022 16:44:59
10-15-2022 16:44:59
Can you explain it in detail?<|||||>I have been working on a project for zero-shot classification using `facebook/bart-large-mnli`. After an initial prototype using the transformers pipeline, I began to work with the underlying Bart model directly. As our project is in TensorFlow, I soon noticed there was no equivalent of `class BartForSequenceClassification` on the TF side. I have stitched it together: adding the classification head and loading the weights from the torch.bin file. If you think it would be useful, I could put up a PR to add it to https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_tf_bart.py alongside the other TF ports for Bart.<|||||>Hey @uglyboxer 👋 It is indeed missing. If you have a working implementation, we'd be interested in adding it to our library!<|||||>Cool. I'll put something together this week.
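A rough, hedged sketch of the kind of port being discussed: pool the decoder hidden state at the final token (the PyTorch `BartForSequenceClassification` gathers the last `<eos>` position) and pass it through a small head mirroring `BartClassificationHead` (dense, tanh, projection). This is illustrative only, not the implementation that ended up in the library:

```python
import tensorflow as tf
from transformers import BartConfig, TFBartModel

config = BartConfig.from_pretrained("facebook/bart-large-mnli")
# Pass from_pt=True if the checkpoint only ships PyTorch weights.
bart = TFBartModel.from_pretrained("facebook/bart-large-mnli")

# Freshly initialised classification head (dense -> tanh -> projection).
dense = tf.keras.layers.Dense(config.d_model, activation="tanh")
out_proj = tf.keras.layers.Dense(config.num_labels)

def classify(input_ids, attention_mask):
    hidden = bart(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
    # Assume the last non-padded token is </s>, as produced by the BART tokenizer.
    eos_positions = tf.reduce_sum(attention_mask, axis=1) - 1
    pooled = tf.gather(hidden, eos_positions, batch_dims=1)  # [batch, d_model]
    return out_proj(dense(pooled))
```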
transformers
19,652
closed
[Doctest] Add configuration_trocr.py
# What does this PR do? Add `configuration_trocr.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487
10-15-2022 16:10:01
10-15-2022 16:10:01
transformers
19,651
closed
[Doctest] Add configuration_transfo_xl.py
# What does this PR do? Add `configuration_transfo_xl.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you please take a look at it? Thanks =)
10-15-2022 15:56:12
10-15-2022 15:56:12
_The documentation is not available anymore as the PR was closed or merged._<|||||>@thliang01 Could you try to resolve the conflict :-) thank you 🙏 <|||||>Hi @thliang01 I tried to fix the conflict and make it clean. It should work now - I will merge once the CI are all good. Thank you again for your contribution 👍 💯 !
transformers
19,650
closed
[Doctest] Add configuration_xlnet.py
# What does this PR do? Add configuration_xlnet.py to utils/documentation_tests.txt for doctest. Based on issue #19487 @sgugger could you please take a look at it? Thanks =)
10-15-2022 15:42:33
10-15-2022 15:42:33
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,649
closed
[Doctest] Add configuration_xlnet.py
Add configuration_xlnet.py to utils/documentation_tests.txt for doctest. Based on #19487 @ydshieh could you please check it? Thanks :)
10-15-2022 15:29:11
10-15-2022 15:29:11
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,648
closed
fix image2text args forwarding
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #19628 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-15-2022 14:20:55
10-15-2022 14:20:55
_The documentation is not available anymore as the PR was closed or merged._<|||||>I took the liberty of making the changes I was talking about. Feel free to revert if this doesn't suit you. Being stateless in the pipeline is essential; we really cannot use `self` to pass around information (it messes with threading and batching).
transformers
19,647
closed
[Doctest] Add `configuration_clip.py`
Add `configuration_clip.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 Noticed the model initialization and config initialization lines were switched so made some additional changes to correct it. @ydshieh could you please check if it's okay? Thank you =)
10-15-2022 13:55:43
10-15-2022 13:55:43
_The documentation is not available anymore as the PR was closed or merged._<|||||>@daspartho Very cool! Would you like to take the challenge to add doc example to `CLIPConfig`, which will use `from_text_vision_configs`. This will help a lot of the library users 🔥 Let me know :-)<|||||>@ydshieh Sure! I'd like to take on the task =)<|||||>@ydshieh made some changes; could you please check if it looks good?<|||||>@ydshieh added an example using the `from_text_vision_configs` method; could you please review the changes to see if they're okay? Thanks :)<|||||>Just a final comment and we are ready to merge!<|||||>@ydshieh made the suggested changes; good to go =)<|||||>Thank you for the PR and your patience :-) @daspartho
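For reference, the pattern that `from_text_vision_configs` enables looks roughly like this (a sketch with default sub-configs; the resulting model is randomly initialised):

```python
from transformers import CLIPConfig, CLIPModel, CLIPTextConfig, CLIPVisionConfig

# Build a combined CLIP config from separately defined text and vision configs.
text_config = CLIPTextConfig()
vision_config = CLIPVisionConfig()
config = CLIPConfig.from_text_vision_configs(text_config, vision_config)

model = CLIPModel(config)
```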
transformers
19,646
closed
[Doctest] Add configuration_realm.py
<!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Add configuration_realm.py to utils/documentation_tests.txt for doctest. Based on issue #19487 @ydshieh could you please check it? Thank you
10-15-2022 13:05:00
10-15-2022 13:05:00
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @ak04p Thank you for the PR. We should also change this line ``` from transformers import RealmEmbedder, RealmConfig ```<|||||>Thank you for the feedback, I'll correct it.
transformers
19,645
closed
Fix a typo in the preprocessing tutorial
# What does this PR do? Fixed a typo in the transformers [tutorials > preprocess](https://huggingface.co/docs/transformers/preprocessing) page. Currently the code and its output do not match. Please see the last two cells of this [colab notebook](https://colab.research.google.com/drive/18WjgFPtQu4n8k6qAWtpsjVwDmICDrcfD#scrollTo=4LsMdS8plf-K); `dataset[0]["image"]` is a PIL image, not a dictionary as present in the current version of the tutorial. It should be `dataset[0]`. (Please ignore the change at line 490; GitHub is showing the wrong diff. There are no actual changes.) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
10-15-2022 12:40:28
10-15-2022 12:40:28
My fault, there's nothing wrong with the tutorial. 😮‍💨<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19645). All of your documentation changes will be reflected on that endpoint.
transformers
19,644
closed
Improve DETR models
# What does this PR do? This PR: - [x] fixes Deformable DETR's loss function - [x] adds more copied from statements for consistency - [x] fixes Conditional DETR's integration tests. As pointed out in #18948, Deformable DETR uses the same loss function and Hungarian matcher as Conditional DETR (use of sigmoid instead of softmax and not including the no-object class). This PR also improves the (original; conditional; deformable) DETR models by improving docs, adding more Copied from statements.
10-15-2022 12:32:18
10-15-2022 12:32:18
_The documentation is not available anymore as the PR was closed or merged._
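For readers following the loss-function fix, here is a hedged sketch of the sigmoid (focal-style) classification cost that the Conditional/Deformable DETR matcher uses in place of the original DETR's softmax cost; tensor shapes and the default `alpha`/`gamma` follow the usual focal-loss convention, and the function name is illustrative:

```python
import torch

def sigmoid_class_cost(logits, target_ids, alpha=0.25, gamma=2.0):
    """logits: [num_queries, num_classes]; target_ids: [num_targets] class indices."""
    prob = logits.sigmoid()
    neg_cost = (1 - alpha) * (prob**gamma) * (-(1 - prob + 1e-8).log())
    pos_cost = alpha * ((1 - prob) ** gamma) * (-(prob + 1e-8).log())
    # Cost of assigning each query to each target's class (no explicit "no-object" column).
    return pos_cost[:, target_ids] - neg_cost[:, target_ids]
```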
transformers
19,643
closed
[Doctest] Add configuration_convbert.py
Add configuration_convbert.py to utils/documentation_tests.txt for doctest. Based on #19487 @ydshieh could you please check it? Thanks :)
10-15-2022 12:09:54
10-15-2022 12:09:54
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you again, @AymenBer99 🚀
transformers
19,642
closed
get rid of bart attention copy-paste
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-15-2022 11:42:38
10-15-2022 11:42:38
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19642). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for your PR, but having the models be independent of each other is a core principle of the philosophy of the library. You can learn more about it in [this blog post](https://huggingface.co/blog/transformers-design-philosophy).<|||||>> Thanks for your PR, but having the models be independent of each other is a core principle of the philosophy of the library. You can learn more about it in [this blog post](https://huggingface.co/blog/transformers-design-philosophy). Even small functions like `expand_mask`? Also, in some files this PR caught cases where functions/classes had been copied without the `#Copied from` mechanism. P.S. In a way this PR is not that different from your ~D~RY philosophy, because the "easy to patch one file" argument still holds: anyone can change the Attention for their own file however they want by just copying and editing it.
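For readers unfamiliar with the `#Copied from` mechanism mentioned above, here is a minimal sketch of how it is used (the Marian/BART pairing is a real example, but the snippet is trimmed): the comment marks duplicated code so that `utils/check_copies.py` can keep the copies in sync while the models stay independent at runtime.

```python
from torch import nn

# The consistency checker looks for this exact comment and verifies the class body
# still matches the BART original (with "Bart" renamed to "Marian").

# Copied from transformers.models.bart.modeling_bart.BartAttention with Bart->Marian
class MarianAttention(nn.Module):
    """Multi-headed attention from 'Attention Is All You Need'."""
    ...
```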
transformers
19,641
closed
[Doctest] Add configuration_conditional_detr.py
Add configuration_conditional_detr.py to utils/documentation_tests.txt for doctest. Based on #19487 @ydshieh could you please check it? Thanks :)
10-15-2022 11:30:12
10-15-2022 11:30:12
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,640
closed
Fixed the docstring and type hint for forced_decoder_ids option in Ge…
# What does this PR do? This PR fixes #19602 where the docstring and type hint for forced_decoder_ids option in GenerationMixin.generate were inconsistent with the actual implementation. Fixes #19602 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante has suggested me to send a PR for the issue #19602.
10-15-2022 09:37:28
10-15-2022 09:37:28
_The documentation is not available anymore as the PR was closed or merged._<|||||>@gante Thanks for the comment! I've taken in your suggestion in the new commit. Please proceed with the merge if it is looking good.<|||||>@koreyou Awesome, thank you for the changes! I will merge as soon as CI turns to green :)
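For context on what the corrected type hint describes, a hedged usage sketch (the checkpoint and the forced token ids below are purely illustrative): `forced_decoder_ids` is a list of `[generation_index, token_id]` pairs, each forcing a specific token at a specific decoder position.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: How are you?", return_tensors="pt")

# Force (illustrative) token ids at decoder positions 1 and 2; each entry is [position, token_id].
outputs = model.generate(**inputs, forced_decoder_ids=[[1, 644], [2, 229]])
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```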
transformers
19,639
closed
Circleci project setup
null
10-15-2022 08:08:34
10-15-2022 08:08:34
Not too sure what the goal of this PR is :-)
transformers
19,638
closed
Add return types for tensorflow GPT-J, XLM, and XLNet
# What does this PR do? Adds return types for model classes in tensorflow GPT-J, XLM, and XLNet as tasked in #16059. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @Rocketknight1 <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-15-2022 05:11:48
10-15-2022 05:11:48
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,637
closed
[Doctest] Add `configuration_data2vec_vision.py`
Add `configuration_data2vec_vision.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you take a look at it? Thank you =)
10-15-2022 03:27:38
10-15-2022 03:27:38
_The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh made the required changes :)
transformers
19,636
closed
[Doctest] Add `configuration_data2vec_text.py`
Add `configuration_data2vec_text.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you please check it? Thanks :)
10-15-2022 03:26:38
10-15-2022 03:26:38
@ydshieh made the suggested changes =)<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
19,635
closed
[Doctest] Add `configuration_data2vec_audio.py`
Add `configuration_data2vec_audio.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you please take a look at it? Thanks :)
10-15-2022 03:25:32
10-15-2022 03:25:32
Could you pull the latest `main` from the remote (which include your PR with other data2vec config files), and rebase your PR branch on the new `main` 🙏 . We need to fix the conflict changes<|||||>@ydshieh rebased the branch, it should resolve the conflict :)<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
19,634
closed
Marian docstring
# What does this PR do? related to #16292 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ydshieh, @patrickvonplaten Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-15-2022 02:25:30
10-15-2022 02:25:30
I need some help. This is my first time contributing to a project ever. I'm getting familiar with the git workflow, and I've tried to follow the instructions laid out in the [Community Event] Doc Tests Sprint #16292. I ran the doc test locally and received two errors. But upon further inspection, I can't seem to find what the issue is. Please, if you can, offer some guidance on what I'm doing wrong and how to progress. I'm working on modeling_tf_marian.py <|||||>![Screenshot 2022-10-13 223655](https://user-images.githubusercontent.com/75712292/195965122-349001fc-b173-488c-a790-cede71b8bf4a.png) <|||||>@traveler-pinkie sorry for being late here. Are you still interested in working on Marian config files?<|||||>@ydshieh Thanks for commenting back. But I think at the moment I should probably study a little bit more. It looks like I was having more difficulty than I should have with my current skills, so I think it's best if someone else works on it. Sorry and thank you
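For anyone hitting the same wall: to the best of my knowledge these are the commands the doc-test sprint expects you to run locally (the exact flags are from memory, so double-check them against `utils/prepare_for_doc_test.py`):

```shell
# put the docstrings into a doctest-friendly form
python utils/prepare_for_doc_test.py src docs

# run the doctests for the file you are working on
pytest --doctest-modules src/transformers/models/marian/modeling_tf_marian.py -sv --doctest-continue-on-failure

# undo the temporary changes afterwards
python utils/prepare_for_doc_test.py src docs --remove_new_line
```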
transformers
19,633
closed
[Doctest] Add configuration_codegen.py
# What does this PR do? Add configuration_codegen.py to utils/documentation_tests.txt for doctest. Based on #19487 @sgugger @ydshieh could you please check it? Thanks :)
10-15-2022 01:03:19
10-15-2022 01:03:19
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,632
closed
Proof of Concept - Better Transformers
# What does this PR do? A Proof of Concept of the Better Transformers integration into `transformers` - more details coming soon. Also comparing this implementation with an integration in optimum https://github.com/huggingface/optimum/pull/422 cc @HamidShojanazeri https://github.com/huggingface/transformers/pull/19553
10-14-2022 21:33:03
10-14-2022 21:33:03
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19632). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,631
open
Add EDSR and MDSR
### Model description EDSR (Enhanced Deep Residual Networks for Single Image Super-Resolution) is a model for image super-resolution; here's the [paper](https://arxiv.org/abs/1707.02921). ### Open source status - [x] The model implementation is available - [x] The model weights are available ### Provide useful links for the implementation Official implementation: https://github.com/sanghyun-son/EDSR-PyTorch and https://github.com/LimBee/NTIRE2017 ## Your contribution I'd like to work on incorporating this architecture into HuggingFace. Please let me know if you think it's worth adding. @NielsRogge can you review this issue so that I can get started?
10-14-2022 20:02:21
10-14-2022 20:02:21
@venkat-natchi <|||||>I'm going through the paper and the existing implementation. I will open the PR in few days. <|||||>> I'm going through the paper and the existing implementation. I will open the PR in few days. That's great, I have already gone through the paper and the architecture and started the implementation!<|||||>Sorry, I was away due to festival season here. I am done with the paper. Shall I start transforming [this](https://github.com/sanghyun-son/EDSR-PyTorch/blob/9d3bb0ec620ea2ac1b5e5e7a32b0133fbba66fd2/src/model/edsr.py) one into the HuggingFace model standards?<|||||>Go ahead and let me know if you need any help; I will be adding the MDSR model to the hub, and I believe the code should be similar to this [Swin2sr](https://github.com/huggingface/transformers/pull/19784) PR also we can use it for reference.<|||||>Sure, thanks<|||||>Can we add both EDSR and MDSR in this PR?<|||||>No, I guess a separate PR is required.<|||||>Hello, can I work on this issue? Although I'm new to open-source contributions, I've worked on super-resolution models in the past and I was wondering why HuggingFace did not have these. I am familiar with PyTorch.<|||||>Thanks @asrimanth for the interest. I have an active PR going on for this issue. #19952 Kindly leave your comments there if you could.
transformers
19,630
closed
num_proc in dataloader affect F1 score in squad
### System Info transformers 4.18.0 (on CPU) python 3.9 ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction In the example question answering script, `cd examples/pytorch/question-answering/` Running the below two cmds will yield different F1 score report (even though their difference is just num workers in dataloader): 1. `python run_qa_no_trainer.py --model_name_or_path csarron/bert-base-uncased-squad-v1 --dataset_name squad --max_seq_length 384 --doc_stride 128 --num_train_epochs 0 --preprocessing_num_workers 1 --output_dir ~/tmp/debug_squad` 2. `python run_qa_no_trainer.py --model_name_or_path csarron/bert-base-uncased-squad-v1 --dataset_name squad --max_seq_length 384 --doc_stride 128 --num_train_epochs 0 --preprocessing_num_workers 4 --output_dir ~/tmp/debug_squad` ### Expected behavior 1. `Evaluation metrics: {'exact_match': 80.90823084200568, 'f1': 88.22754061399627}` 2. `Evaluation metrics: {'exact_match': 76.51844843897824, 'f1': 83.40809222646291}` They have different results on squad.
10-14-2022 18:51:40
10-14-2022 18:51:40
Hi there! Could you make your model public? I cannot reproduce this on [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) (I get the same scores for the two commands).<|||||>Hi, I believe the one I'm using `csarron/bert-base-uncased-squad-v1` is public. I also tried `bert-large-uncased-whole-word-masking-finetuned-squad` and it's showing the same F1 score mismatch. Thanks!<|||||>Ah, just caught the problem in the logs. `preprocessing_num_workers` is the number of workers sent to `Dataset.map`. It should be left as 1 when using a fast tokenizer. When you change it, you change the way the dataset is preprocessed. To change the number of workers in the dataloader, you should use `dataloader_num_workers`.<|||||>Thanks! The default value is still set to 4 [here](https://github.com/huggingface/transformers/blob/5fda1fbd4625e93d023fe02153ec4a05b26b16cc/examples/pytorch/question-answering/run_qa_no_trainer.py#L111), which should be 1 according to your finding.<|||||>Indeed. Do you want to make a PR to fix this?<|||||>Done.
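To make the distinction above concrete, a minimal sketch (the preprocessing is simplified compared to the example script): `num_proc` controls how many processes `Dataset.map` uses for preprocessing, while `num_workers` on the `DataLoader` only parallelises batch loading and does not change the preprocessed data.

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
raw = load_dataset("squad", split="validation")

def preprocess(examples):
    return tokenizer(examples["question"], examples["context"], truncation=True, max_length=384)

# Preprocessing: with a fast tokenizer, keep num_proc at 1 (or simply omit it).
processed = raw.map(preprocess, batched=True, remove_columns=raw.column_names, num_proc=1)

# Data loading: this is where extra workers are safe to add.
loader = DataLoader(processed, batch_size=8, num_workers=4, collate_fn=DataCollatorWithPadding(tokenizer))
```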
transformers
19,629
closed
[WIP] Making modifications Open source that are live for `BLOOM` inference.
# What does this PR do? Many things but the biggest are: - TP enabled model (need to pass around a `ProcessGroup` everywhere. There were some discussions back&forth but currently this is not breaking anything, and at least it makes everything quite explicit to load the model in a sharded fashion - Adding 1 custom kernel. Followed roughly DeformatDETR way of distributing this Exception is that the custom kernel is OPT-in, it's not loaded by default (The custom kernel does not support backward for instance). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-14-2022 17:44:46
10-14-2022 17:44:46
@thomasw21 I tagged you to maybe get advice on the `generation` modifications. Should they get merged back into `main`? (They do seem necessary.)<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19629). All of your documentation changes will be reflected on that endpoint.<|||||>> @thomasw21 I tagged you to maybe get advice on the generation modifications. Should they get merged back into main ? (They do seem necessary). If I remember correctly the only thing that should be needed is the logit post processor.<|||||>@sgugger @LysandreJik We'd like some input as we don't necessarily agree with @thomasw21. Basically this PR intends to be: - No breaking change - Enable users to use TP with BLOOM with much less effort, even if in a slightly more contrived way - Enable the same performance we get in production (be as open source as possible). This is where we disagree. @thomasw21 thinks that adding the custom kernel to transformers `main` forces us to be backward compatible for eternity and creates potential headaches, because the kernel has only been tested on A100 and might have some unforeseen caveats (one we know of is that it's limited to 4096 tokens). My point is that we should strive to be open, so making everything we did accessible should be a goal. Now I agree that maintaining that kernel should NOT be in scope, but I argue that: - Enabling it actively by users (the user has to write `bloom.use_custom_kernel()` in order to use it) - Making a proper warning when using that function - Eventually marking this function as private would enable us to merge it and still allow us to break it whenever we want. We would obviously mark it as `unstable`/`beta`. One added benefit in the back of my mind is that allowing us to ship custom kernels could enable us (across our ecosystem) to boost performance where needed. Making it core `transformers` is not necessarily a goal, but knowing how to enable it seems OK. `torch.fx` seems unhappy: it does not seem to play well with `torch.jit.script`.<|||||>First things first, I strongly disagree with most of the modifications done in the modeling code, which make the model code less readable. I think of: - paths with a test using `if fused_bloom_attention_cuda` - the model gaining a `process_group` argument at init - use of a `tp_rank` attribute Keep in mind that we do not let users add a flag to select if the layernorm should be applied at the beginning or the end of the block and request a new architecture instead, for instance, or Thom's comments on why the Mistral code in GPT-2 should never have been merged. This should either: - be a "fast" modeling file of its own in the same bloom folder - a research project of its own (which would have the advantage of not being constrained by any BC considerations)<|||||>> be a "fast" modeling file of its own in the same bloom folder That seems reasonable. It's not `fast` vs `non-fast`, to be clear: it's a TP vs single-GPU code difference. And since TP is the best way to get latency optimizations, it would feel quite nice if it were included in `transformers`. Actually anywhere in the HF ecosystem would be nice, as long as we could point to our code and say it's "there". But since it's been 3 months and we still haven't figured it out, I decided to go in that fashion.
Keeping it in a remote branch is ok, but I feel like a fork (https://github.com/huggingface/transformers_bloom_parallel/issues/8) **Separate file is all good for me!** Btw, we don't know of any way to enable TP without modifying the code. @thomasw21 checked out torch.fx but it seemed to be a pain (and it was the most successful approach). > paths with a test using if fused_bloom_attention_cuda Would you have 3 files (regular, TP, TP + custom kernel)? Seems like a stretch, but fine to me. What is so bad about this particular if? It's very similar to this: https://github.com/huggingface/transformers/blob/main/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L662-L675 The custom kernel can go. It's a shame imo, and it would be nice if it were easily pluggable, but I can understand the logic. > model gaining a process_group argument at init A new modeling file would solve that for sure.<|||||>I think two files is good, one for the "generic" model code and one with custom improvements for TP/TP+custom CUDA kernel. I mainly want to avoid a researcher coming to take the BLOOM code and being annoyed at all the special paths for TP. > What is so bad about this particular if ? It's very similar to this: https://github.com/huggingface/transformers/blob/main/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L662-L675 We want to avoid those kinds of paths that hurt readability, at least in mainstream models. Deformable DETR didn't use to have those paths (at first it was custom CUDA kernel only) and is not a mainstream model (maybe in the future?). In any case, most of the code can be copied in the new modeling file and the copied-from statements will ensure they stay up to date with the original.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
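Not speaking for the PR itself, but for readers wondering what a TP-enabled layer roughly looks like, here is a minimal, inference-only sketch of a column-parallel linear layer built on plain `torch.distributed` (all names are illustrative; the real BLOOM implementation differs):

```python
import torch
import torch.distributed as dist
from torch import nn


class ColumnParallelLinear(nn.Module):
    """Each rank holds a slice of the output features; results are gathered afterwards."""

    def __init__(self, in_features: int, out_features: int, process_group: dist.ProcessGroup):
        super().__init__()
        self.process_group = process_group
        world_size = dist.get_world_size(process_group)
        assert out_features % world_size == 0
        self.local_out = out_features // world_size
        self.weight = nn.Parameter(torch.empty(self.local_out, in_features))
        nn.init.normal_(self.weight, std=0.02)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Local matmul over this rank's slice of the output features.
        local = hidden_states @ self.weight.t()
        # Inference-only: all_gather here does not propagate gradients.
        gathered = [torch.empty_like(local) for _ in range(dist.get_world_size(self.process_group))]
        dist.all_gather(gathered, local, group=self.process_group)
        return torch.cat(gathered, dim=-1)
```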
transformers
19,628
closed
No way to pass max_length/max_new_tokens to generate for image-to-text pipeline
### System Info Transformers 4.23.1 ### Who can help? @narsil ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import pipeline nlp = pipeline("image-to-text") nlp("image.jpg", max_length=20) ``` Results in ``` transformers/generation_utils.py:1301: UserWarning: Neither `max_length` nor `max_new_tokens` has been set ``` ### Expected behavior No warning or a way to disable the warning. This warning is also raised with the `automatic-speech-recognition` pipeline when using the OpenAI Whisper model.
10-14-2022 17:22:44
10-14-2022 17:22:44
The warning is there, and can be safely ignored in both situations IMO. The generation should stop when the model says it's OK, not when you decide it's too big. Having `max_new_tokens` in place would definitely help prevent computing indefinitely if the generation never hits EOS, though; this is what the warning is trying to tell you. @gante @patrickvonplaten What do you think we should do in that situation? I think calling `generate` with neither `max_length` nor `max_new_tokens` is perfectly OK if you expect EOS to be hit. You are running the risk of having an infinite loop (well, it would crash when the model OOMs or runs into its max_length capacity...). What do you think about defaulting `max_length` to the model's maximum capacity and silencing the warning? Since we modified this behavior not too long ago, I understand why the current warning is there, so we could silence the warning much later or make it opt-in (so that users who know what they are doing can silence it, but others are still warned). Being able to choose `max_new_tokens` in the pipeline should always be doable, since that's a very easy way to prevent an application from randomly crashing when you know that the generation for images should never be too long, for instance. @OlivierDehaene also, since you worked on `image-to-text`. <|||||>@Narsil @patrickvonplaten default `max_length` strikes again :D Note: any change would have to happen in the context of a major version change. That being said, both defaults have their shortcomings: defaulting to `20` results in short outputs that might be misinterpreted as poor outputs; defaulting to the model's maximum length might not be feasible (e.g. T5), or cause crashes due to memory requirements. A third option would be to make it a required argument (in `generate()`), but that would add friction to text generation 🤔 I honestly don't know which would be the best option.<|||||>I think the whisper models should define a `max_length` or `max_new_tokens` in the config (ideally in the "future" generation config). Regarding Whisper, the model cannot process more than 30 seconds of speech, which means that max_length/max_new_tokens almost never goes over 256, so a good/reasonable default for Whisper would be 256. Until we have better generation configs I think we should set the model config to 256. Also cc'ing @ArthurZucker and @sanchit-gandhi here FYI. To understand better: we get the warning here only because the user passes `max_length=20`, which happens to be exactly equal to our default-default max_length in `configuration_utils.py` here: https://github.com/huggingface/transformers/blob/c7edde1a692012eda23bc2b837588557b97ad729/src/transformers/configuration_utils.py#L278 no?
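In the meantime, a hedged workaround (not an official recipe; the feature-extractor choice is an assumption about this particular checkpoint) is to skip the pipeline and call `generate` directly, which accepts `max_new_tokens`:

```python
from PIL import Image
from transformers import AutoFeatureExtractor, AutoTokenizer, VisionEncoderDecoderModel

checkpoint = "ydshieh/vit-gpt2-coco-en"
model = VisionEncoderDecoderModel.from_pretrained(checkpoint)
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

pixel_values = feature_extractor(images=Image.open("image.jpg"), return_tensors="pt").pixel_values
# Bounds the caption length explicitly, so the warning no longer applies.
generated_ids = model.generate(pixel_values, max_new_tokens=20)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```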
transformers
19,627
closed
Tokenizer not loaded for image-to-text pipelines when specified
### System Info Transformers 4.23.1 ### Who can help? @Narsil ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import pipeline nlp = pipeline("image-to-text", model="ydshieh/vit-gpt2-coco-en", tokenizer="ydshieh/vit-gpt2-coco-en") nlp("image.jpg") ``` Results in: ``` File "transformers/pipelines/image_to_text.py", line 89, in postprocess "generated_text": self.tokenizer.decode( AttributeError: 'str' object has no attribute 'decode' ``` ### Expected behavior Caption to be returned. It appears something around this line is being tripped up - https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/__init__.py#L733
10-14-2022 17:08:47
10-14-2022 17:08:47
cc @Narsil <|||||>If that is an unexpected issue, I can debug it and try to fix. <|||||>Hi @davidmezzetti , The "string" is not resolved in `pipeline` when both `model` and `tokenizer` are sent. It should work when you send only 1. You could definitely create a PR to try and resolve `tokenizer` too in that situation. It's tricky magic code so be careful.<|||||>Thanks @narsil. In reviewing the way I'm calling pipelines, there is no reason to pass both a `model` and `tokenizer` as they are the same. It appears that this line: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/__init__.py#L737 could be updated to also check if the tokenizer is a (str, tuple) for multi models. But it seems like a highly niche/possibly unnecessary use case. <|||||>`pipeline` is MAGICAL by nature, I'm not against adding even more magic to it. As long as the actual pipeline classes, stay much more down to earth and less magical. Magic is super nice when it works, but much harder to work with/evolve when it doesn't :)<|||||>Sounds good, I'll go ahead and close this issue.
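For completeness, the workaround that sidesteps the string-resolution magic entirely is to load the tokenizer yourself and pass the object instead of a string:

```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("ydshieh/vit-gpt2-coco-en")
nlp = pipeline("image-to-text", model="ydshieh/vit-gpt2-coco-en", tokenizer=tokenizer)
print(nlp("image.jpg"))
```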
transformers
19,626
closed
Tokenizer from_pretrained should not use local files named like token…
…izer files This fixes the issue reported in #19488. Basically, if a user has a local file in the working directory named like any of the files the tokenizer is looking for in `from_pretrained`, for instance `tokenizer.json`, that file is going to be used instead of the file in the repo/folder passed along. The added test fails on current main and is fixed by the PR. Fixes #19488
10-14-2022 16:39:23
10-14-2022 16:39:23
_The documentation is not available anymore as the PR was closed or merged._
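A minimal sketch of the failure mode this PR fixes (the checkpoint name is chosen arbitrarily): before the change, an unrelated `tokenizer.json` sitting in the working directory could shadow the file belonging to the requested repo.

```python
import os
from transformers import AutoTokenizer

# An unrelated file whose name collides with one of the tokenizer files...
with open("tokenizer.json", "w") as f:
    f.write("{}")

# ...used to be picked up instead of the file from the requested checkpoint.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer("hello world")["input_ids"])

os.remove("tokenizer.json")
```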
transformers
19,625
closed
How to use model.prunes when you are using transformers.T5ForConditionalGeneration
### System Info - `transformers` version: 4.20.1 - Platform: macOS-12.4-arm64-arm-64bit - Python version: 3.9.10 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.13.0.dev20220709 (False) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @patrickvonplaten ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I use this code to prune the model from `T5ForConditionalGeneration`, but it went wrong. Many thanks for your time!:) ``` from transformers import T5ForConditionalGeneration model = T5ForConditionalGeneration.from_pretrained('t5-base') prune_heads = {} prune_heads[0] = [0,1] model.prune_heads(prune_heads) ``` ### Expected behavior ``` Traceback (most recent call last): File "/Users/caffrey/Documents/research/FiD/prunetest.py", line 8, in <module> model.prune_heads(prune_heads) File "/Users/caffrey/miniforge3/envs/tongji/lib/python3.9/site-packages/transformers/modeling_utils.py", line 1507, in prune_heads self.base_model._prune_heads(heads_to_prune) File "/Users/caffrey/miniforge3/envs/tongji/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1261, in __getattr__ raise AttributeError("'{}' object has no attribute '{}'".format( AttributeError: 'T5ForConditionalGeneration' object has no attribute '_prune_heads' ```
10-14-2022 16:08:45
10-14-2022 16:08:45
Gently pinging @ArthurZucker <|||||>Thanks @ArthurZucker<|||||>Gently pinging @LysandreJik<|||||>Hey! I'll have a look sorry for the long wait 😃 <|||||>Anyone can help? Thanks! @gante @ArthurZucker <|||||>Hey! It seems that the `T5ForConditionalGeneration` is simply missing the `_prune_heads`. We should add the following few lines : ``` def _prune_heads(self, heads_to_prune): """ Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base class PreTrainedModel """ for layer, heads in heads_to_prune.items(): self.modules()[layer].layer[0].SelfAttention.prune_heads(heads) ``` Or something along those lines! Would you like to open a PR for that? Otherwise I will take care of it 🤗<|||||>Hi @ArthurZucker, I will give it a try :) <|||||>Hi @ArthurZucker , there seems a little more problem. Since in this line https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/t5/modeling_t5.py#L533 The model report error when I successfully pruned some heads, it says ``` File "/home/user/anaconda3/envs/uw/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 533, in forward mask[list(self.pruned_heads)] = 0 IndexError: index 8 is out of bounds for dimension 0 with size 8 ``` Are we gonna modify the `forward`? Since I try to print the `mask.shape` and `self.pruned_heads)` It says ``` torch.Size([12]) {8, 2, 10, 6} torch.Size([8]) {8, 2, 10, 6} Traceback (most recent call last): ``` <|||||>Hi @ArthurZucker , I open a PR here. https://github.com/huggingface/transformers/pull/19975 We can see the test on a colab https://colab.research.google.com/drive/1b9mHjtn2UxuHU_Sb_RXts12rDzbebBX0#scrollTo=hUSe4a1oOp6D I use `opendelta` to visualize the pruning process. But we seems to be a forward problem<|||||>We can conclude that the problem is between L531~L548 The difference is whether use head_mask > Hi @ArthurZucker , there seems a little more problem. Since in this line > > https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/t5/modeling_t5.py#L533 > > > The model report error when I successfully pruned some heads, it says > ``` > File "/home/user/anaconda3/envs/uw/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 533, in forward > mask[list(self.pruned_heads)] = 0 > IndexError: index 8 is out of bounds for dimension 0 with size 8 > ``` > > Are we gonna modify the `forward`? > > Since I try to print the `mask.shape` and `self.pruned_heads)` > > It says > > ``` > torch.Size([12]) > {8, 2, 10, 6} > torch.Size([8]) > {8, 2, 10, 6} > Traceback (most recent call last): > ``` This one occur in this code ``` outputs = model.forward( input_ids=context_ids, attention_mask=context_mask, labels=labels, return_dict=True, # head_mask=head_mask, # decoder_head_mask=decoder_head_mask ) ``` > Hi @ArthurZucker , I open a PR here. #19975 > > We can see the test on a colab https://colab.research.google.com/drive/1b9mHjtn2UxuHU_Sb_RXts12rDzbebBX0#scrollTo=hUSe4a1oOp6D > > I use `opendelta` to visualize the pruning process. > > But we seems to be a forward problem This one use the code ``` outputs = model.forward( input_ids=context_ids, attention_mask=context_mask, labels=labels, return_dict=True, head_mask=head_mask, decoder_head_mask=decoder_head_mask ) ```<|||||>Hi @ArthurZucker , what is the function of `position_bias` ? 
It seems in this line https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/t5/modeling_t5.py#L513 We calculate it, and it seems only modify `position_bias` only when the first block. I want to delete `if` or put the `scores += position_bias_masked` into `if`, which means we calculate `position_bias` only in the first block or we calculate `position_bias` all the blocks and it should be the same to the score. You can see the code here ``` scores = torch.matmul( query_states, key_states.transpose(3, 2) ) # equivalent of torch.einsum("bnqd,bnkd->bnqk", query_states, key_states), compatible with onnx op>9 print("Score",scores.shape) if position_bias is None: print("A", position_bias) if not self.has_relative_attention_bias: position_bias = torch.zeros( (1, self.n_heads, real_seq_length, key_length), device=scores.device, dtype=scores.dtype ) print("B",position_bias.shape) if self.gradient_checkpointing and self.training: position_bias.requires_grad = True else: position_bias = self.compute_bias(real_seq_length, key_length, device=scores.device) print("C", position_bias.shape) # if key and values are already calculated # we want only the last query position bias if past_key_value is not None: position_bias = position_bias[:, :, -hidden_states.size(1) :, :] if mask is not None: position_bias = position_bias + mask # (batch_size, n_heads, seq_length, key_length) if self.pruned_heads: position_bias_masked = position_bias # print(self.pruned_heads) # mask = torch.ones(position_bias.shape[1]) # mask[list(self.pruned_heads)] = 0 # print("Position bias",position_bias.shape) # position_bias_masked = position_bias[:, mask.bool()] # print("Position bias masked",position_bias_masked.shape) else: position_bias_masked = position_bias scores += position_bias_masked ``` And output is here ``` Score torch.Size([1, 8, 2, 2]) A None C torch.Size([1, 8, 2, 2]) query_states torch.Size([1, 8, 2, 64]) key_states.transpose(3, 2) torch.Size([1, 8, 64, 200]) Score torch.Size([1, 8, 2, 200]) A None B torch.Size([1, 8, 2, 200]) query_states torch.Size([1, 9, 2, 64]) key_states.transpose(3, 2) torch.Size([1, 9, 64, 2]) Score torch.Size([1, 9, 2, 2]) Traceback (most recent call last): File "/Users/caffrey/Documents/research/FiD/prunetest2.py", line 72, in <module> outputs = model.forward( File "/Users/caffrey/miniforge3/envs/huggingface/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1658, in forward decoder_outputs = self.decoder( File "/Users/caffrey/miniforge3/envs/huggingface/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "/Users/caffrey/miniforge3/envs/huggingface/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1050, in forward layer_outputs = layer_module( File "/Users/caffrey/miniforge3/envs/huggingface/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "/Users/caffrey/miniforge3/envs/huggingface/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 683, in forward self_attention_outputs = self.layer[0]( File "/Users/caffrey/miniforge3/envs/huggingface/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "/Users/caffrey/miniforge3/envs/huggingface/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 589, in forward attention_output = 
self.SelfAttention( File "/Users/caffrey/miniforge3/envs/huggingface/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "/Users/caffrey/miniforge3/envs/huggingface/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 548, in forward scores += position_bias_masked RuntimeError: The size of tensor a (9) must match the size of tensor b (8) at non-singleton dimension 1 ``` So `position_bias` cannot have the same shape as `scores`. I also added code in `_prune_heads` to re-define `self.relative_attention_bias`. <|||||>In the PR https://github.com/huggingface/transformers/pull/19975/ , I deleted the `if` so that it matches the shape of `scores`. (The Colab notebook runs), but I do not know whether the result is semantically right! Basically, the only problem is to make `position_bias` have the same shape as `scores`<|||||>gently ping @patrickvonplaten @ArthurZucker Many thanks<|||||>I will have a look 🤗<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
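For reference, this is roughly the pattern encoder-only models such as BERT already use for `_prune_heads` (a simplified sketch of the existing code, not the final T5 fix; T5 additionally has to handle the relative `position_bias` discussed above):

```python
def _prune_heads(self, heads_to_prune):
    """heads_to_prune: dict of {layer_num: list of heads to prune in this layer}."""
    for layer, heads in heads_to_prune.items():
        # Forward the request to the attention module of the given layer.
        self.encoder.layer[layer].attention.prune_heads(heads)
```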
transformers
19,624
closed
Fixing DeformableDETR the easy but not the best way (IMO).
# What does this PR do? This PR fixes the pipeline to accomodate DeformableDETR. The core of the issue is https://github.com/huggingface/transformers/blob/main/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L178-L181 Both these tensor don't use `batch_size` in the first place, so the magic batchig/debatching of the pipeline is confused. It's not entirely clear why the tensor has this weird shape though. @NielsRogge is there any reason ? Regardless we can't really change that this it would be a breaking change. The culprit lines are here: https://github.com/huggingface/transformers/blob/585f9c6d9efa9f6e93888b6adf84912ba3f98dfc/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L1397-L1398 @sgugger I though I remembered some tensors where not returned by default with `return_loss=False`. Would that be something acceptable ? The pipeline fix is fine to avoid all complications, but it's brittle since during batching/debatching the pipeline has only access to the tensor name, so if any other model reuses the same name we're going to have a bigger issue. Fixes: https://github.com/huggingface/transformers/issues/19024 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-14-2022 16:01:23
10-14-2022 16:01:23
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19624). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for your PR, so the issue would be resolved if these tensors have `batch_size` in the first dimension (rather than the second)? I think we can still fix that.<|||||>> Thanks for your PR, so the issue would be resolved if these tensors have batch_size in the first dimension (rather than the second)? Yes it should. When unknown tensors are seen, the batching/unbatching assumes the first dimension is the batch_size.<|||||>We fixed it correctly
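As a small illustration of why the first dimension matters here (shapes are made up): the pipeline un-batches by slicing unknown tensors along dimension 0, so anything that is not batch-first gets split along the wrong axis.

```python
import torch

batch_size, levels = 2, 4
batch_first = torch.zeros(batch_size, levels, 2)
levels_first = torch.zeros(levels, batch_size, 2)

# Un-batching slices along dim 0: correct for batch-first tensors only.
print(len(batch_first.unbind(0)))   # 2 items, one per sample
print(len(levels_first.unbind(0)))  # 4 "items" -> the un-batching logic gets confused
```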
transformers
19,623
closed
Add doctest info in testingmdx
# What does this PR do? Finding information on how to properly test the docstring is currently pretty hard! As far as I know, the only good explanation is in `transformers/utils/prepare_for_doc_test.py` but it does not appear in the doc. I hope this will help people debug their docstring examples.
10-14-2022 15:32:13
10-14-2022 15:32:13
_The documentation is not available anymore as the PR was closed or merged._<|||||>Yes, gonna clean the github history
transformers
19,622
closed
[Doctest] Add `configuration_levit.py`
Add `configuration_levit.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could please check it? Thanks =)
10-14-2022 15:03:30
10-14-2022 15:03:30
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,621
closed
[Doctest] Add `configuration_distilbert.py`
Add `configuration_distilbert.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you please take a look at it? Thanks :)
10-14-2022 15:02:50
10-14-2022 15:02:50
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,620
closed
[Doctest] Add `configuration_resnet.py`
Add `configuration_resnet.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you take a look at it? Thanks =)
10-14-2022 15:02:18
10-14-2022 15:02:18
_The documentation is not available anymore as the PR was closed or merged._<|||||>Could you run `make style` to see what's wrong 🙏 ?
transformers
19,619
closed
[Doctest] Add configuration_big_bird.py
1. Change the import order of the model and configuration classes 2. Add `configuration_big_bird.py` to `utils/documentation_tests.txt` for doctest. Documentation edit according to #19487 @sgugger could you have a look on this?
10-14-2022 13:04:15
10-14-2022 13:04:15
This is already being fixed in https://github.com/huggingface/transformers/pull/19606<|||||>Hey @Xabilahu ok, I did not see this, sorry. I don't fully understand though: your pull request covers `configuration_bigbird_pegasus.py` and mine `configuration_big_bird.py`. How is it that these belong to the same PR?<|||||>I address both models in my PR. <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19619). All of your documentation changes will be reflected on that endpoint.<|||||>Just saw it. Sorry, I did not pay enough attention! <|||||>Hi @lappemic Still, thank you. If you want to contribute, you can check [documentation_tests.txt](https://github.com/huggingface/transformers/blob/main/utils/documentation_tests.txt) on the main branch and see which ones are still missing :-)
transformers
19,618
closed
Type hints MCTCT
Hi @Rocketknight1: this PR looks to add type hints to MCTCT models and addresses #16059. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
10-14-2022 12:57:13
10-14-2022 12:57:13
_The documentation is not available anymore as the PR was closed or merged._<|||||>Ah, careful! `attention_mask` and `head_mask` **were** optional in the main model methods, but not in the encoder. The encoder mostly doesn't have `Optional` arguments, but the other classes do. The changes to the main model methods is why you're getting that error now! This is much easier to see if you look at the 'files changed' interface on GitHub - we want this PR to only add type annotations, but not change any default arguments.<|||||>I see - my apologies! Will make some changes now to fix this :)
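For future contributors, this is the kind of diff the type-hint sprint expects, in a hedged sketch (the signature is illustrative, not copied from MCTCT): annotations are added while every default value stays exactly as it was.

```python
from typing import Optional, Tuple, Union

import torch

# Before: def forward(self, input_features, attention_mask=None, output_attentions=None): ...
# After: only annotations change; the defaults (and therefore behaviour) are untouched.
def forward(
    self,
    input_features: torch.Tensor,
    attention_mask: Optional[torch.Tensor] = None,
    output_attentions: Optional[bool] = None,
) -> Union[Tuple, "BaseModelOutput"]:
    ...
```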
transformers
19,617
closed
[Doctest] Add configuration_longformer.py
# What does this PR do? Fixes # 19487 Add configuration_longformer.py to utils/documentation_tests.txt for doctest. Based on issue https://github.com/huggingface/transformers/issues/19487 @sgugger could you take a look at it? Thanks :)
10-14-2022 11:14:06
10-14-2022 11:14:06
Hi @AShreyam This PR is not ready to merge. It contains other changes that are not for this config doctest sprint. <|||||>Hi @AShreyam This PR is not ready to merge. It contains other changes that are not for this config doctest sprint. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,616
closed
Cast masks to np.uint8 before converting to PIL.Image.Image
# What does this PR do? In the recent update to the image segmentation pipeline, the numpy arrays converted to `PIL.Image.Image` with mode `"L"` weren't converted to type `np.uint8`, resulting in corrupted masks. Running the following: ``` import requests from PIL import Image from transformers import pipeline url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) pipe = pipeline("image-segmentation", model="facebook/detr-resnet-50-panoptic") results = pipe(image) ``` **Output masks on `main`:** ![main_0_cat](https://user-images.githubusercontent.com/22614925/195832894-95ef005f-3e0a-4b09-9c10-88539a732d95.png) ![main_1_couch](https://user-images.githubusercontent.com/22614925/195832898-bc40d2fc-a33e-4a26-a97e-380b0a6f37b7.png) ![main_2_remote](https://user-images.githubusercontent.com/22614925/195832899-96cdae09-864f-420b-a902-2da2482d46e4.png) ![main_3_blanket](https://user-images.githubusercontent.com/22614925/195832901-9fa9f65b-089a-4c00-b216-7eabbbcd76a8.png) **Output masks on this branch**: ![fix_0_cat](https://user-images.githubusercontent.com/22614925/195832923-3835a988-e0b0-4f3e-a282-011daa95f4f0.png) ![fix_1_couch](https://user-images.githubusercontent.com/22614925/195832926-b7963092-4648-456d-b863-1820ea619f09.png) ![fix_2_remote](https://user-images.githubusercontent.com/22614925/195832929-f5cc2ca9-cf51-4dca-9954-5bf1500e156e.png) ![fix_3_blanket](https://user-images.githubusercontent.com/22614925/195832932-07ecf096-1c1e-4725-97a6-cf239a83e925.png) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
10-14-2022 11:10:34
10-14-2022 11:10:34
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @Narsil, would it be possible to update the inference widgets to include this PR? We were notified on Twitter that the inference widgets were broken: https://twitter.com/levelsio/status/1580573108431646720?t=d9BlnF9Q2nvRaQFNei6KLw&s=19
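The core of the fix, as a stand-alone sketch (the toy mask below is random; real masks come from the segmentation model): scale the float mask and cast it to `np.uint8` before handing it to `Image.fromarray` with mode `"L"`.

```python
import numpy as np
from PIL import Image

mask = np.random.rand(64, 64)  # toy probability mask in [0, 1]

# Without the cast, building a mode "L" image from a float array yields corrupted masks.
pil_mask = Image.fromarray((mask * 255).astype(np.uint8), mode="L")
pil_mask.save("mask.png")
```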
transformers
19,615
closed
Adding -> Configuration_flava.py
# What does this PR do? Add configuration_flava.py to utils/documentation_tests.txt for doctest. Based on issue https://github.com/huggingface/transformers/issues/19487 @sgugger could you please take a look at it? Thanks =)
10-14-2022 10:29:10
10-14-2022 10:29:10
Hi @AShreyam This model config is done in #19724. Sorry if I forgot to follow this PR earlier. Going to close the PR though. Thank you however.
transformers
19,614
closed
Add table transformer [v2]
# What does this PR do? This PR adds [Table Transformer](https://github.com/microsoft/table-transformer) by Microsoft, as a separate model, rather than tweaking the existing DETR implementation.
10-14-2022 09:48:25
10-14-2022 09:48:25
@sgugger I've removed the non-Table Transformer related stuff to #19644 ;)<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger PR is ready, feel free to approve :)
transformers
19,613
closed
Add configuration_flava.py
null
10-14-2022 09:41:13
10-14-2022 09:41:13
transformers
19,612
closed
fix: small error
I fixed only a small typo error
10-14-2022 08:24:17
10-14-2022 08:24:17
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,611
closed
[Doctest] Add `configuration_ernie.py`
Add configuration_ernie.py to utils/documentation_tests.txt for doctest. Based on issue #19487 @sgugger @ydshieh Thanks :)
10-14-2022 08:10:40
10-14-2022 08:10:40
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,610
closed
[Doctest] Add `configuration_xlm_roberta_xl.py`
Add configuration_xlm_roberta_xl.py to utils/documentation_tests.txt for doctest. Based on issue #19487 @sgugger @ydshieh could you please check it? Thanks :)
10-14-2022 08:02:17
10-14-2022 08:02:17
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,609
closed
[Doctest] Add `configuration_xlm_roberta.py`
Add configuration_xlm_roberta.py to utils/documentation_tests.txt for doctest. Based on issue #19487 @sgugger @ydshieh could you please check it? Thanks :)
10-14-2022 08:01:17
10-14-2022 08:01:17
transformers
19,608
closed
Fix whisper doc
# What does this PR do? Fixes the Whisper doc-test. I used `add_code_sample_docstrings` but didn't properly check that it does not support the Whisper model (it assumes the model is given `input_ids`).
10-14-2022 07:46:16
10-14-2022 07:46:16
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks, I missed that 😅 <|||||>> Thanks, I missed that 😅 Me too, I am a bad guy!
transformers
19,607
closed
[Time Series Transformer] Add doc tests
# What does this PR do? This PR improves the code snippets in the docs of Time Series Transformer and makes sure they are tested.
10-14-2022 07:38:15
10-14-2022 07:38:15
_The documentation is not available anymore as the PR was closed or merged._<|||||>LGTM! thanks!<|||||>@ydshieh always better to learn more than earn more<|||||>@NielsRogge for generation you can also use the other test batch if you like
transformers
19,606
closed
[Doctest] Add `configuration_bigbird_pegasus.py` and `configuration_big_bird.py`
Add `configuration_bigbird_pegasus.py` and `configuration_big_bird.py` to `utils/documentation_tests.txt` for doctest. Based on issue https://github.com/huggingface/transformers/issues/19487 @ydshieh could you please check it? Thank you :)
10-14-2022 07:31:23
10-14-2022 07:31:23
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the amazing work you guys do <3
transformers
19,605
closed
[Doctest] Add `configuration_visual_bert.py`
Add configuration_visual_bert.py to utils/documentation_tests.txt for doctest. Based on issue #19487 @sgugger @ydshieh Thanks :)
10-14-2022 06:38:09
10-14-2022 06:38:09
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,604
closed
ONNX conversion from VisionEncoderDecoderModel?
# Description I'd like to convert a VisionEncoderDecoder model to ONNX using the feature that was recently merged in #19254. However, it produces the errors below. What am I missing? # Environment ```python import transformers import torch import sys !echo "OS: $(cat /etc/issue)" !echo "Arch.: $(arch)" print('python:', sys.version) print('transformers:', transformers.__version__) print('torch', torch.__version__) ``` OS: Ubuntu 20.04.4 LTS \n \l Arch.: x86_64 python: 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] transformers: 4.23.1 torch 1.12.1+cu102 # Reproduce ## Model Loading ```python model = torch.load('221002_203253.pt') # TrOCR model that I have trained. model.save_pretrained('trocr') # To show they're same type. model2 = transformers.VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten") type(model), type(model2) ``` Output: <pre>Some weights of VisionEncoderDecoderModel were not initialized from the model checkpoint at microsoft/trocr-base-handwritten and are newly initialized: ['encoder.pooler.dense.bias', 'encoder.pooler.dense.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. (transformers.models.vision_encoder_decoder.modeling_vision_encoder_decoder.VisionEncoderDecoderModel, transformers.models.vision_encoder_decoder.modeling_vision_encoder_decoder.VisionEncoderDecoderModel)</pre> ## Check Support ```python try: transformers.onnx.FeaturesManager.check_supported_model_or_raise(model) except Exception as e: print(type(e), e) print('='*100) try: transformers.onnx.FeaturesManager.check_supported_model_or_raise(model2) except Exception as e: print(type(e), e) ``` Output: <pre>&lt;class 'ValueError'&gt; vision-encoder-decoder doesn't support feature default. Supported values are: {'vision2seq-lm': functools.partial(&lt;bound method OnnxConfig.from_model_config of &lt;class 'transformers.models.vision_encoder_decoder.configuration_vision_encoder_decoder.VisionEncoderDecoderOnnxConfig'&gt;&gt;, task='vision2seq-lm')} ==================================================================================================== &lt;class 'ValueError'&gt; vision-encoder-decoder doesn't support feature default. Supported values are: {'vision2seq-lm': functools.partial(&lt;bound method OnnxConfig.from_model_config of &lt;class 'transformers.models.vision_encoder_decoder.configuration_vision_encoder_decoder.VisionEncoderDecoderOnnxConfig'&gt;&gt;, task='vision2seq-lm')} </pre> ## Conversion to ONNX ```shell python3 -m transformers.onnx -m trocr onnx/ ``` Output: <pre>Local PyTorch model found. Framework not requested. Using torch to export to ONNX. 
Traceback (most recent call last): File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.8/dist-packages/transformers/onnx/__main__.py", line 180, in &lt;module&gt; main() File "/usr/local/lib/python3.8/dist-packages/transformers/onnx/__main__.py", line 72, in main model = FeaturesManager.get_model_from_feature( File "/usr/local/lib/python3.8/dist-packages/transformers/onnx/features.py", line 666, in get_model_from_feature model = model_class.from_pretrained(model, cache_dir=cache_dir) File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/auto_factory.py", line 466, in from_pretrained raise ValueError( ValueError: Unrecognized configuration class &lt;class 'transformers.models.vision_encoder_decoder.configuration_vision_encoder_decoder.VisionEncoderDecoderConfig'&gt; for this kind of AutoModel: AutoModel. Model type should be one of AlbertConfig, BartConfig, BeitConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, CanineConfig, CLIPConfig, CodeGenConfig, ConditionalDetrConfig, ConvBertConfig, ConvNextConfig, CTRLConfig, CvtConfig, Data2VecAudioConfig, Data2VecTextConfig, Data2VecVisionConfig, DebertaConfig, DebertaV2Config, DecisionTransformerConfig, DeformableDetrConfig, DeiTConfig, DetrConfig, DistilBertConfig, DonutSwinConfig, DPRConfig, DPTConfig, ElectraConfig, ErnieConfig, EsmConfig, FlaubertConfig, FlavaConfig, FNetConfig, FSMTConfig, FunnelConfig, GLPNConfig, GPT2Config, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, GroupViTConfig, HubertConfig, IBertConfig, ImageGPTConfig, LayoutLMConfig, LayoutLMv2Config, LayoutLMv3Config, LEDConfig, LevitConfig, LongformerConfig, LongT5Config, LukeConfig, LxmertConfig, M2M100Config, MarianConfig, MarkupLMConfig, MaskFormerConfig, MBartConfig, MCTCTConfig, MegatronBertConfig, MobileBertConfig, MobileViTConfig, MPNetConfig, MT5Config, MvpConfig, NezhaConfig, NystromformerConfig, OpenAIGPTConfig, OPTConfig, OwlViTConfig, PegasusConfig, PegasusXConfig, PerceiverConfig, PLBartConfig, PoolFormerConfig, ProphetNetConfig, QDQBertConfig, ReformerConfig, RegNetConfig, RemBertConfig, ResNetConfig, RetriBertConfig, RobertaConfig, RoFormerConfig, SegformerConfig, SEWConfig, SEWDConfig, Speech2TextConfig, SplinterConfig, SqueezeBertConfig, SwinConfig, Swinv2Config, T5Config, TapasConfig, TimeSeriesTransformerConfig, TrajectoryTransformerConfig, TransfoXLConfig, UniSpeechConfig, UniSpeechSatConfig, VanConfig, VideoMAEConfig, ViltConfig, VisionTextDualEncoderConfig, VisualBertConfig, ViTConfig, ViTMAEConfig, ViTMSNConfig, Wav2Vec2Config, Wav2Vec2ConformerConfig, WavLMConfig, WhisperConfig, XCLIPConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, YolosConfig, YosoConfig. </pre>
10-14-2022 03:58:04
10-14-2022 03:58:04
Hi, 1) did you use Transformers from the main branch? 2) you should probably do the following ``` model_ckpt = "path_to_your_checkpoint" !python -m transformers.onnx --model={model_ckpt} --feature=vision2seq-lm onnx/ --atol 1e-3 ```<|||||>@NielsRogge ` --feature=vision2seq-lm` worked for me. Thank you!<|||||>@NielsRogge . I would like to get the inference script after onnx conversion of VisionEncoderDecoder model. Any suggestion please? <|||||>@kangsan0420 looking at ``` model_ckpt = "path_to_your_checkpoint" !python -m transformers.onnx --model={model_ckpt} --feature=vision2seq-lm onnx/ --atol 1e-3 ``` if downloading from the pretrained trocr model, where is the path to the checkpoint?<|||||>@NielsRogge I have fine-tuned the trocr small printed model on a custom single-line text dataset. After training I converted the model to ONNX format using the following [PR](https://github.com/huggingface/transformers/pull/19254#issue-1392234601). Converting the model to ONNX results in a drastic decrease in accuracy. Also, if I use the pretrained trocr small printed model and convert it to ONNX using exactly the same procedure, there is a very small change in accuracy. Can someone explain why there is so much change in accuracy? Please reply if you need additional information. Thanks.<|||||>Hello guys, does an ONNX model for TrOCR exist? Someone answer me please, and thank you.<|||||>> @NielsRogge . I would like to get the inference script after onnx conversion of VisionEncoderDecoder model. Any suggestion please? @NielsRogge thanks, I have been able to convert my Donut model to ONNX format. Any idea how I can proceed to perform inference for the ONNX model?<|||||>@Mir-Umar @Kamilya2020 Please open issues in the Optimum repository: https://github.com/huggingface/optimum
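Regarding the follow-up question about where the checkpoint path comes from, a short sketch (the directory name is just an example): save the model to a local folder and point the export command at that folder.

```python
from transformers import VisionEncoderDecoderModel

# Either a fine-tuned model or a hub checkpoint; saving it locally gives the
# directory to pass as --model to the export command.
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")
model.save_pretrained("trocr")
# then: python -m transformers.onnx --model=trocr --feature=vision2seq-lm onnx/ --atol 1e-3
```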
transformers
19,603
closed
Flax `.from_pretrained` fails to use `subfolder`
### System Info - transformers 4.23.1 - Ubuntu 22.04 - jax 0.3.23 - huggingface_hub 10.0.1 `!transformers-cli env`: <details> <summary>ModuleNotFoundError: No module named 'datasets'</summary> ``` Traceback (most recent call last): File "🏠/venv.lab/bin/transformers-cli", line 5, in <module> from transformers.commands.transformers_cli import main File "🏠/venv.lab/lib/python3.10/site-packages/transformers/commands/transformers_cli.py", line 24, in <module> from .pt_to_tf import PTtoTFCommand File "🏠/venv.lab/lib/python3.10/site-packages/transformers/commands/pt_to_tf.py", line 21, in <module> from datasets import load_dataset ModuleNotFoundError: No module named 'datasets' ``` </details> ### Who can help? @patil-suraj for Flax and also CLIP ### Information - My own modified scripts ### Reproduction ```py from transformers import FlaxCLIPTextModel text_encoder = FlaxCLIPTextModel.from_pretrained( "CompVis/stable-diffusion-v1-4", revision="flax", subfolder="text_encoder", ) ``` fails with > HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/flax/config.json > OSError: CompVis/stable-diffusion-v1-4 does not appear to have a file named config.json. Checkout 'https://huggingface.co/CompVis/stable-diffusion-v1-4/flax' for available files. ### Expected behavior a FlaxCLIPTextModel is created from https://huggingface.co/CompVis/stable-diffusion-v1-4/blob/flax/text_encoder/config.json related: #18184
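As noted in the comments below, the equivalent PyTorch call does honour `subfolder`; a quick sketch of the working path for comparison (this is what the Flax `.from_pretrained` should eventually match):

```python
from transformers import CLIPTextModel

# Works today: the PyTorch from_pretrained resolves files inside the subfolder.
text_encoder = CLIPTextModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="text_encoder"
)
```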
10-14-2022 03:16:06
10-14-2022 03:16:06
note that this _does_ work with the non-Flax CLIPTextModel.<|||||>cc @sanchit-gandhi <|||||>Hey @keturn! The command `transformers-cli env` is failing as you don't have `datasets` installed. You can install `datasets` through: ``` pip install datasets ``` or from main: https://github.com/huggingface/datasets With regards to the Flax `.from_pretrained()` method failing with `subfolder`, the PR you've mentioned implemented the `subfolder` feature for PyTorch but not Flax! Would you like to have a go at implementing this feature in Flax? The PR can largely follow the changes made in https://github.com/huggingface/transformers/pull/18184.<|||||>Hey @keturn, just following up here! Let me know if you'd be keen to open a PR - I can help you with with pointers and any questions you might have! Otherwise I can take look next week 🤗<|||||>oh, I don't expect to get to this anytime soon myself. <|||||>Hey @keturn - have added it to my TODOs!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,602
closed
Documentation and implementation are inconsistent for forced_decoder_ids option in GenerationMixin.generate
### System Info - `transformers` version: 4.23.0 - Platform: macOS-12.6-arm64-arm-64bit - Python version: 3.9.13 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.11.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? Text generation: @patrickvonplaten, @Narsil, @gante Documentation: @sgugger, @stevhliu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('t5-small') model = AutoModelForSeq2SeqLM.from_pretrained('t5-small') input = 'This is a dummy input.' decoder_start_text = 'But is should still work, because' input_ids = tokenizer.encode(input, return_tensors='pt') decoder_start_ids = tokenizer.encode(decoder_start_text, add_special_tokens=False) # This raises an error as attached below outputs = model.generate( input_ids, forced_decoder_ids=decoder_start_ids ) # This is against the documentation but works outputs = model.generate( input_ids, forced_decoder_ids={i: id for i, id in enumerate(decoder_start_ids)} ) ``` ### Expected behavior According to [the documentation](https://github.com/huggingface/transformers/blob/3d320c78c32334f66d72d57ff6322d9e3a7dc00b/src/transformers/generation_utils.py#L1124-L1125), `GeneratorMixin.generate` accepts a list of int for `forced_decoder_ids `. However, above reproduction raises the following error: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Input In [10], in <cell line: 1>() ----> 1 outputs = model.generate( 2 input_ids, 3 forced_decoder_ids=decoder_start_ids 4 ) File ~/.pyenv/versions/3.9.13/envs/dummy_proj/lib/python3.9/site-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs) 24 @functools.wraps(func) 25 def decorate_context(*args, **kwargs): 26 with self.clone(): ---> 27 return func(*args, **kwargs) File ~/.pyenv/versions/3.9.13/envs/dummy_proj/lib/python3.9/site-packages/transformers/generation_utils.py:1353, in GenerationMixin.generate(self, inputs, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, typical_p, repetition_penalty, bad_words_ids, force_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, logits_processor, renormalize_logits, stopping_criteria, constraints, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, exponential_decay_length_penalty, suppress_tokens, begin_suppress_tokens, forced_decoder_ids, **model_kwargs) 1348 raise ValueError( 1349 "Diverse beam search cannot be used in sampling mode. Make sure that `do_sample` is set to `False`." 1350 ) 1352 # 7. 
prepare distribution pre_processing samplers -> 1353 logits_processor = self._get_logits_processor( 1354 repetition_penalty=repetition_penalty, 1355 no_repeat_ngram_size=no_repeat_ngram_size, 1356 encoder_no_repeat_ngram_size=encoder_no_repeat_ngram_size, 1357 input_ids_seq_length=input_ids_seq_length, 1358 encoder_input_ids=inputs_tensor, 1359 bad_words_ids=bad_words_ids, 1360 min_length=min_length, 1361 max_length=max_length, 1362 eos_token_id=eos_token_id, 1363 forced_bos_token_id=forced_bos_token_id, 1364 forced_eos_token_id=forced_eos_token_id, 1365 prefix_allowed_tokens_fn=prefix_allowed_tokens_fn, 1366 num_beams=num_beams, 1367 num_beam_groups=num_beam_groups, 1368 diversity_penalty=diversity_penalty, 1369 remove_invalid_values=remove_invalid_values, 1370 exponential_decay_length_penalty=exponential_decay_length_penalty, 1371 logits_processor=logits_processor, 1372 renormalize_logits=renormalize_logits, 1373 suppress_tokens=suppress_tokens, 1374 begin_suppress_tokens=begin_suppress_tokens, 1375 forced_decoder_ids=forced_decoder_ids, 1376 ) 1378 # 8. prepare stopping criteria 1379 stopping_criteria = self._get_stopping_criteria( 1380 max_length=max_length, max_time=max_time, stopping_criteria=stopping_criteria 1381 ) File ~/.pyenv/versions/3.9.13/envs/dummy_proj/lib/python3.9/site-packages/transformers/generation_utils.py:786, in GenerationMixin._get_logits_processor(self, repetition_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, input_ids_seq_length, encoder_input_ids, bad_words_ids, min_length, max_length, eos_token_id, forced_bos_token_id, forced_eos_token_id, prefix_allowed_tokens_fn, num_beams, num_beam_groups, diversity_penalty, remove_invalid_values, exponential_decay_length_penalty, logits_processor, renormalize_logits, suppress_tokens, begin_suppress_tokens, forced_decoder_ids) 784 processors.append(SuppressTokensAtBeginLogitsProcessor(begin_suppress_tokens, begin_index)) 785 if forced_decoder_ids is not None: --> 786 processors.append(ForceTokensLogitsProcessor(forced_decoder_ids)) 787 processors = self._merge_criteria_processor_list(processors, logits_processor) 788 # `LogitNormalization` should always be the last logit processor, when present File ~/.pyenv/versions/3.9.13/envs/dummy_proj/lib/python3.9/site-packages/transformers/generation_logits_process.py:742, in ForceTokensLogitsProcessor.__init__(self, force_token_map) 741 def __init__(self, force_token_map): --> 742 self.force_token_map = dict(force_token_map) ``` It is clear that implementation is expecting `Dict[int, str] `as shown in [here](https://github.com/huggingface/transformers/blob/3d320c78c32334f66d72d57ff6322d9e3a7dc00b/src/transformers/generation_logits_process.py#L741-L742). Hence I believe that implementation and documentation are inconsistent. FYI, [other functions in `GeneratorMixin`](https://github.com/huggingface/transformers/blob/3d320c78c32334f66d72d57ff6322d9e3a7dc00b/src/transformers/generation_utils.py#L782-L783) seems to expect `List[int]` as in the documentation.
10-14-2022 02:03:19
10-14-2022 02:03:19
Hi @koreyou 👋 The documentation is indeed incorrect -- it accepts a list of pairs of integers (`List[List[int]]`) that is convertible to a `Dict[int, int]`, containing the generation index and the token to be forced, respectively (e.g. [this list of lists](https://huggingface.co/openai/whisper-large/blob/main/config.json#L23)). Would you like to open a PR to fix the documentation? 🤗 (cc @ArthurZucker @patrickvonplaten)
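To make the accepted format concrete, a small sketch based on the reproduction above (indices start at 1 here since position 0 is taken by the decoder start token, as in the linked Whisper config; treat the exact indexing as an assumption for your own use case):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

input_ids = tokenizer.encode("This is a dummy input.", return_tensors="pt")
decoder_start_ids = tokenizer.encode("But it should still work, because", add_special_tokens=False)

# forced_decoder_ids takes pairs of [generation index, token id];
# generate() converts this list of pairs into a dict internally.
forced_decoder_ids = [[i, token_id] for i, token_id in enumerate(decoder_start_ids, start=1)]
outputs = model.generate(input_ids, forced_decoder_ids=forced_decoder_ids)
```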
transformers
19,601
closed
Image Format (BGR/RGB) bug for lxmert example
### System Info Repository code (accessed 10/13/2022) in examples/research_projects/lxmert ### Who can help? @LysandreJik ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run demo.ipynb in examples/research_projects/lxmert (or instead use [this colab notebook](https://colab.research.google.com/drive/1N0z-mplcu-20TZPr7-TNkZUCASmmP90i?usp=sharing)) but instead of using a URL, upload an arbitrary jpg image and use a local file (i.e. change the line `frcnn_visualizer = SingleImageViz(URL,id2obj=objids, id2attr=attrids)` to `frcnn_visualizer = SingleImageViz('pic.jpg',id2obj=objids, id2attr=attrids)` where `pic.jpg` is an arbitrary jpg file you have). Then the result will have flipped red and blue colors. ### Expected behavior The result has flipped red and blue colors. Technically, the image input to the FRCNN is RGB instead of BGR (but the FRCNN uses BGR), so this is probably due to doing BGR2RGB one extra time in the image preprocessing step. ### Source This bug was discovered through the [MultiViz paper](https://arxiv.org/pdf/2207.00056.pdf)
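As a rough illustration of the suspected root cause (not the exact lines in the example code; the file name is a placeholder): the FRCNN expects a BGR array, so an extra channel flip in preprocessing hands it RGB instead.

```python
import cv2

# cv2.imread already returns the image in BGR channel order, which is what the FRCNN expects.
img_bgr = cv2.imread("pic.jpg")

# An extra conversion like the following would feed an RGB array into a BGR pipeline --
# the kind of "one BGR2RGB too many" suspected in this issue -- and should be dropped.
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
```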
10-14-2022 01:23:22
10-14-2022 01:23:22
Note that we do not maintain the research project examples, so you will have better luck pinging the original author :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,600
closed
fix doc test for megatron bert
# What does this PR do? Add configuration_megatron_bert.py to utils/documentation_tests.txt for doctest. Based on issue #19487 @sgugger / @ydshieh <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-13-2022 22:54:13
10-13-2022 22:54:13
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,599
closed
Collator only gets keys from the dataset which are inputs to the model
### System Info - `transformers` version: 4.21.3 - Platform: macOS-12.1-arm64-arm-64bit - Python version: 3.9.10 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.10.2 (False) - Tensorflow version (GPU?): 2.7.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce: - Use a dataset with certain keys expected by a custom collator but not the model inputs - Pass the dataset and custom collator to the huggingface trainer - Run training and the collator will not be passed the correct keys ### Expected behavior Huggingface Trainer automatically removes keys from the dataset which aren't needed by the model, but doesn't allow for the possibility that the collator might take different inputs than the model. This is unexpected as the collator is passed the data prior to the model so when stripping unused keys it should be done after the collator not prior.
10-13-2022 21:51:59
10-13-2022 21:51:59
You can just use the option `remove_unused_keys=False` from your training arguments in this case.<|||||>Okay, thanks! I looked through before and didn't see anything; I actually CTRL-F'd `key` but there was nothing. I believe it has actually been renamed `remove_unused_columns`, which is why I didn't find it. Thanks for the help again! <|||||>Ah yes, sorry about the wrong name!<|||||>No worries, thanks for the help.
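For reference, a minimal sketch of the resolution discussed above (the argument name matches recent `transformers` versions):

```python
from transformers import TrainingArguments

# remove_unused_columns=False keeps every dataset column, so a custom data
# collator can receive fields that are not direct model inputs.
training_args = TrainingArguments(
    output_dir="./out",
    remove_unused_columns=False,
)
# Pass `training_args` to Trainer together with the custom collator as usual.
```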
transformers
19,598
closed
[Doctest] Add `configuration_sew_d.py`
Add `configuration_sew_d.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you please check it? Thank you :)
10-13-2022 20:24:52
10-13-2022 20:24:52
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,597
closed
[Doctest] Add `configuration_sew.py`
Add `configuration_sew.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you please take a look at it? Thank you :)
10-13-2022 20:23:26
10-13-2022 20:23:26
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,596
closed
[Doctest] Add `configuration_unispeech.py`
Add `configuration_unispeech.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you please check it? Thanks :)
10-13-2022 20:22:32
10-13-2022 20:22:32
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,595
closed
[Doctest] Add `configuration_swinv2.py`
Add `configuration_swinv2.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you check it? Thank you =)
10-13-2022 20:21:13
10-13-2022 20:21:13
transformers
19,594
closed
[Doctest] Add `configuration_swin.py`
Add `configuration_swin.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you take a look at it? Thanks :)
10-13-2022 20:20:10
10-13-2022 20:20:10
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,593
closed
The 54b MoE!
### System Info Hello, I (and I believe many others) am intrigued by the Mixture of Experts model. I've looked at the only documentation I could find here: https://github.com/facebookresearch/fairseq/blob/nllb/examples/nllb/modeling/README.md However, the arguments for generation / evaluation are unclear to me :) I will be starting a data analysis job shortly and I see some possible applications for the 54b model. Surely there are the other models, but I believe many enthusiasts are looking forward to trying translations with this model. I'm just looking for starting arguments for translating, i.e. eng to de. I saw the model is coming to Hugging Face at some point, so I'm definitely looking forward to that. I am also interested in running real-person evaluations of the 3.3b model, the 54b MoE model, DeepL and others to see how far the models have come :) Thank you so much for your work. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction - ### Expected behavior -
10-13-2022 19:56:11
10-13-2022 19:56:11
Pinging @ArthurZucker and @younesbelkada who have been working on contributing the Switch Transformer, an MoE, to `transformers`. I agree adding the 54B NLLB model would be quite cool too!<|||||>also personally very interested in support for large MoEs<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Any news about MoE?<|||||>[Switch Transformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers) has been added to the library.
transformers
19,592
closed
Sagemaker Estimator for fine tuning where all the transform code is in the train.py
### Feature request I work for a company that is a heavy user of AWS sagemaker. I am on a professional services team where I build a lot of examples for our data scientists to follow. I recently wanted to use the Sagemaker Huggingface estimator to fine tune a transformer and create a model for our custom NLP task. I had csv data in S3. I found several examples of fine tuning that involved pulling nicely curated datasets from HF hub down to the SM notebook and then transforming it into arrow with `save_to_disk` and pushing it to S3 as a dataset that could be read in the train.py file. I struggled mightily and never found a good example of how to start with just CSV files, use the existing HF tools to load the data and then pass it to the estimator. Furthermore, the examples I found have the user pulling the data over to the notebook and doing the conversion to arrow there. That seems inefficient when the point of an estimator is to utilize a small instance to host your notebook and a large instance to do the work. If I had a large amount of data to convert to arrow and I followed the given examples, I would need a large notebook instance and a large estimator instance. I wrote an example that puts all the transform code in the train.py and only invokes it from the notebook. In my train.py, I use load_dataset with the csv script to transform the data to arrow and do the save and load there. I wanted to use the arrow format for efficiency. I propose that I update your documentation with this unique example. ### Motivation I feel that the proposed documentation unifies several previously documented concepts into a single, useful example. ### Your contribution I would be happy to build the example and have you guys approve it. I have never contributed to HF before, so I would need a bit of guidance to get started.
10-13-2022 19:24:14
10-13-2022 19:24:14
WDYT @philschmid @sgugger ?<|||||>Hello @j2cunningham, Thank you for all of the information and it is super cool to hear that you are using SageMaker! We have over 20 examples for how to use transformers with SageMaker for inference and training: https://github.com/huggingface/notebooks/tree/main/sagemaker In there should be examples of how to use a CSV file directly for [batch transform](https://github.com/huggingface/notebooks/blob/main/sagemaker/12_batch_transform_inference/sagemaker-notebook.ipynb), and we also ran a whole [workshop series last year](https://github.com/philschmid/huggingface-sagemaker-workshop-series), where you have example of the doing the processing in the [train.py](https://github.com/philschmid/huggingface-sagemaker-workshop-series/blob/main/workshop_4_distillation_and_acceleration/scripts/train.py) Regarding your struggle with loading CSV compared to regular datasets, this quite easy. Instead of providing the huggingface hub id you can use `csv` and then provide the path to your files. This will then created a dataset you can seamlessly use with the examples: [Documentation](https://huggingface.co/docs/datasets/v2.6.0/en/loading#csv) ```python from datasets import load_dataset dataset = load_dataset("csv", data_files="my_file.csv" ``` For more SageMaker related question or ideas please use the forum next time: https://discuss.huggingface.co/c/sagemaker/17 <|||||>I totally get how to use load_dataset with csv data now. My observation is that there isn't an example that starts with just plain csv data, explains how to use load_dataset and why and then does some fine tuning. The examples I found would all start with nicely curated huggingface datasets, do a save to disk (not explain why) and then do read from disk in the train.py. I had to find snippets and documentation that used load_dataset for csv, snippets that explained what save_to_disk was doing and what arrow was, snippets that explained that you could do transformation in the notebook or in the train.py and then wrap it all up into working code. I just feel like starting from raw csv data or image data and doing most of the work in the train.py and not the notebook is pretty common pattern. There very well could be the perfect example from HF that I couldn't find or I could be off the mark when I think this is a common pattern outside of my company. I got this all working and am just offering to share my notebook and train.py with the community. I know I could do a medium article, but thought an example on the HF git would be most beneficial. Thanks<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,591
closed
Beam search indices calculation issue
### System Info In the generator, the final [beam_indices](https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/generation_utils.py#L2376) calculated here may not be the `beam_indices` with the best score. For example, assume a generated text ended at time step T, the output length = T, and it has best score in the `beam_hypo` in the end. The beam indices length is T. When in `T+1` step, the `beam_indices` still keeps adding next beam index from TopK, for example, the TopK returns a text with `T+1` length, the beam_indices will add this Top 1 beam idx in `T+1` even though it is not the best score. `beam_indices` length becomes `T+1`. So in the end, the `beam_indices` represents a longer sequence T+1, but the `generated_outputs.sequence` is a short sequence T with best score, and its beam indices are not stored in `beam_hypo` ### Who can help? @patrickvonplaten ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The logic is shown in the code. ### Expected behavior `beam_indices` should be consistent with and representing the best sequence.
10-13-2022 19:17:39
10-13-2022 19:17:39
cc @gante <|||||>Look like it was fixed in the latest version. The bug exists in Version 4.18<|||||>Hi @woshizouguo 👋 Glad to hear it is fixed in the most recent versions! Feel free to reopen this issue if you believe you have further queries (related to the latest version, as we can't change the past :) )
transformers
19,590
closed
Allow usage of TF Text BertTokenizer on TFBertTokenizer to make it servable on TF Serving
# What does this PR do? Fixes #19528. This PR introduces a flag that lets you use `tensorflow_text` `BertTokenizer` rather than `FastBertTokenizer`. This is important because as per https://github.com/tensorflow/serving/issues/2064 TF Serving does no support the `FastBertTokenizer` operations, so despite having the tokenizer in-graph, the model would not be servable in TF Serving ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @LysandreJik Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
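A minimal usage sketch of the behaviour this PR enables (the keyword name is the flag introduced here; if you are on a different version, treat it as an assumption):

```python
import tensorflow as tf
from transformers import TFBertTokenizer

# use_fast_bert_tokenizer=False selects tensorflow_text's BertTokenizer ops, which
# (unlike FastBertTokenizer) are supported by TF Serving, keeping the model servable.
tf_tokenizer = TFBertTokenizer.from_pretrained("bert-base-uncased", use_fast_bert_tokenizer=False)
tokenized = tf_tokenizer(tf.constant(["hello world"]))
```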
10-13-2022 18:00:22
10-13-2022 18:00:22
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @Rocketknight1 @gante <|||||>@gante, do I need to change anything els before it is possible to merge this PR?<|||||>@piEsposito It seems there is an issue with your CircleCI permissions, the tests won't run. Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)? Other than that, we need a review from @Rocketknight1 -- then we are good to merge :)<|||||>@gante @Rocketknight1 thank you for your you feedback, the tests just passed, so I think we are good to go. Let's hope TF Serving solves this on their end on the future too, but even if they do, they will only do it for TF >= 2.9, so to keep it servable on previous TF Serving versions, we will need the non-fast TF BertTokenizer anyway. Thanks!<|||||>Merged!
transformers
19,589
closed
[Doctests] add `configuration_blenderbot_small.py`
# What does this PR do? add `configuration_blenderbot_small.py` for doctests, addressing issue #19487. Please review @ydshieh. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-13-2022 17:33:53
10-13-2022 17:33:53
@ydshieh how can I rectify the failing check? Did not occur before in any other PRs.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> @ydshieh how can I rectify the failing check? Did not occur before in any other PRs. That test is somehow flaky. I re-ran it and now it pass.
transformers
19,588
closed
Jax/Flax pretraining of wav2vec2
There is a Jax/Flax-based script available for pretraining wav2vec2 [here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/jax-projects/wav2vec2). I have been trying to pretrain a new wav2vec2 model for Finnish on TPUs using that script, but it seems impossible to get the model to train properly. I know the script is under _research_projects_, so I am wondering if anyone has been able to successfully pretrain wav2vec2 models with it? Or if anyone has made their own updates to the script to fix potential problems? For me, it looks like the _codevector_perplexity_ will always collapse to a value of 2 and stay there, which I believe is not a good thing. Also, the contrastive loss is usually very unstable. I attached the image below showcasing those issues. In addition, I have tried pretraining wav2vec2 with the official fairseq implementation, where the training looks to be working fine without those issues. So I believe the HF Jax/Flax implementation is broken somehow. ![image](https://user-images.githubusercontent.com/19529125/195653322-b650b89b-310b-4a89-84ee-425674810893.png) Also, I think the HF Jax/Flax wav2vec2 implementation is not fully on par with the HF PyTorch wav2vec2 implementation. For example, I noticed this comment by @patrickvonplaten https://github.com/huggingface/transformers/issues/14471#issuecomment-982077705 and I think the comment's point number 1 is not implemented in the Jax/Flax version. Also, in the PyTorch wav2vec2 pretraining PR comment https://github.com/huggingface/transformers/pull/13877#discussion_r723197919 gradient scaling is implemented to avoid issues with multi-device training. I wonder if the same would be needed for the Jax/Flax script when training on 8 TPU cores? I tried implementing those myself, but then I found this script where @patrickvonplaten seemed to have already implemented point number 1: https://huggingface.co/patrickvonplaten/wav2vec2-german-flax/blob/main/run_wav2vec2_pretrain_flax.py Anyhow, even with those potential fixes I haven't been able to get the training to work properly. That's a real pity, since the Jax/Flax training would be really great when using TPUs.
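Much of the discussion below revolves around reproducing PyTorch's `hard=True` Gumbel-softmax (the straight-through estimator) in Flax. As a reference point, a self-contained JAX sketch of that operation could look like the following — it illustrates the technique itself, not the exact code in any of the linked scripts:

```python
import jax
import jax.numpy as jnp


def hard_gumbel_softmax(logits, rng, temperature=1.0):
    """Straight-through Gumbel-softmax: one-hot values forward, soft gradients backward."""
    gumbel_noise = jax.random.gumbel(rng, logits.shape)
    y_soft = jax.nn.softmax((logits + gumbel_noise) / temperature, axis=-1)
    # One-hot sample taken from the soft distribution.
    y_hard = jax.nn.one_hot(jnp.argmax(y_soft, axis=-1), logits.shape[-1], dtype=y_soft.dtype)
    # Forward value equals y_hard; gradients flow only through y_soft.
    return y_hard - jax.lax.stop_gradient(y_soft) + y_soft


rng = jax.random.PRNGKey(0)
logits = jnp.array([[1.0, 0.5, -0.2], [0.1, 0.3, 0.2]])
codevector_probs = hard_gumbel_softmax(logits, rng, temperature=2.0)
```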
10-13-2022 16:34:34
10-13-2022 16:34:34
cc @sanchit-gandhi <|||||>Hey @aapot! Cool to see you're trying out pre-training of Wav2Vec2 in Flax on Finnish 🇫🇮 Indeed, the script is under 'research projects' as it remains unsolved. Pre-training of Wav2Vec2 models is notoriously difficult due to issues with stability, giving rise to the phenomena you've experienced such as code vector collapse and unstable contrastive loss. AFAIK there isn't a working implementation for training Transformers W2V2 models in Flax, which makes it an interesting topic to pursue! You've done a great job at digging through issues and PRs to find the aforementioned points! Both the points you've raised look to be missing from the Flax Wav2Vec2 script. Did you try gradient scaling in your experiments? One thing we can try is running the PyTorch and Flax scripts step-by-step in parallel and inspecting where they diverge. We can do this with a tiny dummy model (`hf-internal-testing/tiny-random-wav2vec2` for instance) to make it fast to debug and the same training inputs. When we've identified a divergence between the two we can fix the Flax script by porting the corresponding PyTorch code. LMK if you'd be interested in doing this and I can provide further pointers!<|||||>Just for reference, I never got Wav2Vec2 to work in JAX, but it should def be possible (didn't spent too much time on it) <|||||>@sanchit-gandhi yep this would be interesting to get working! Yes, I also tried gradient scaling like it was implemented in the PyTorch pretrain script (basically multiply gradients with (num devices / total samples)) without luck. I'd be interested in putting some time into fixing this so feel free to provide further pointers. Training of these ASR and NLP models for Finnish is a free time hobby project with @R4ZZ3 so cannot promise anything yet but let's get this fixed 🤗 <|||||>Awesome @aapot, that's great to hear! Essentially what you want to do is run the PyTorch script and Flax script with identical args (for the model, data and training args). In doing this, the PyTorch and Flax models should receive identical inputs, and thus should compute identical losses if the training scripts are the same. What you want to then do is compare the outputs of the PT and Flax training scripts after each step of pre-training: 1. First check that the data collators are identical by inspecting the returned elements of the `batch` ("input_values", "attention_mask", "mask_time_indices") -> we need to make sure the inputs to the models are the same before we can assess the model outputs 2. Check Gumbel temp is the same 3. Check outputs of the models are the same (projected_quantized_states, projected_states, codevector_perplexity) 4. Check contrastive loss is the same 5. Check diversity loss is the same -> once all the losses match then we can move onto making sure the gradients and updates are the same (easier to verify, and very much likely to be the case if the losses are the same) It's likely the bug in the Flax script lies in 3, 4 or 5! Once you identify where the losses deviate, you can dig deeper into the code for PT and Flax and try to find the line(s) of code where the functionality is different. How you debug this is up to you. To make this quick and easy, I'd recommend using a dummy model (`hf-internal-testing/tiny-random-wav2vec2`) and a dummy dataset (`hf-internal-testing/librispeech_asr_dummy`) -> in total this is about 10MB of downloaded data and the script should run very fast. 
I'd also first run training on CPU only for both PT and Flax, such that the number of devices are fixed equal to one (no gradient scaling effects). For comparing the outputs, you can either run the scripts side-by-side and print intermediate values, or combine them into a single notebook and print cell outputs after each step (I can give you a template for this if you want to use a notebook). Print statements are easy to use, but don't provide much detail other than numeric values. What I'd do is first add print statements for each of the items listed in 1-5 to quickly see which values match ✅ and which values don't ❌. After that you can go deeper with either: more print statements, breakpoints (ipdb), or a debugger. It might take a bit of time to establish a good set-up for debugging quickly, but once you've got this set-up it should be a case of finding where the losses are different and then fixing for Flax! You might also need to disable shuffling of the training dataset to make sure the training inputs are passed in the same way to PT as Flax. These should make for good starting points (haven't tried them, but they're similar to the configs I use for debugging ASR fine-tuning): PT ``` python run_wav2vec2_pretraining.py \ --dataset_name="hf-internal-testing/librispeech_asr_dummy" \ --dataset_config_names="clean" \ --train_split_name="validation" \ --model_name_or_path="hf-internal-testing/tiny-random-wav2vec2" \ --output_dir="./" \ --max_train_steps="10" \ --num_warmup_steps="2" \ --learning_rate="0.005" \ --logging_steps="1" \ --save_strategy="no" \ --per_device_train_batch_size="8" \ --do_train ``` Flax ``` JAX_PLATFORM_NAME=cpu python run_wav2vec2_pretrain_flax.py \ --dataset_name="hf-internal-testing/librispeech_asr_dummy" \ --dataset_config_names="clean" \ --train_split_name="validation" \ --model_name_or_path="hf-internal-testing/tiny-random-wav2vec2" \ --output_dir="./" \ --max_train_steps="10" \ --num_warmup_steps="2" \ --learning_rate="0.005" \ --logging_steps="1" \ --save_strategy="no" \ --per_device_train_batch_size="8" \ --do_train ```<|||||>Thanks for those pointers @sanchit-gandhi, sounds reasonable! I'll start digging into this soon, will keep you updated here.<|||||>Hi, > For me, it looks like the codevector_perplexity will always collapse to value of 2 and stay there which I believe is not a good thing. Also, the constrastive loss is usually very unstable. I tried pre-training the jax wav2vec2 model on my own data and I came across similar problems. Tried with multiple huge chunks of my own dataset and the perplexity always collapsed to 2 while the loss fluctuated a lot. I also noticed that across all my datasets the eval loss was always 0.09969. So, if I finetune this pretrained model, will it give any good results? Also do you guys have any code to fine-tune this pretrained model that I can use?<|||||>For finetuning we have used these resources as base: https://huggingface.co/blog/fine-tune-wav2vec2-english https://huggingface.co/blog/wav2vec2-with-ngram Also we are trying out going to try out these. We just need to fix some of our datasets before that as we have lover case material. Luckily we have trained T5 model for casing + punctuation correction. https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#sequence-to-sequence<|||||>Hey @Aaryan369! 
Thanks for sharing your experience - it seems like there's an inherent bug in the JAX pre-training implementation with how the loss terms are computed leading to code vector perplexity collapse and unstable loss. You can certainly try fine-tuning a pre-trained Wav2Vec2 model. If your fine-tuning data is in-domain with the pre-training you can expect good results with very little data - as little as 10 minutes as shown by the Wav2Vec2 paper! If your fine-tuning data is more out-of-domain with the pre-training data, you can expect to require much more data to achieve good results. This is really on a case-by-case basis, so you'll have to make that decision based on what you know about your fine-tuning situation! In terms of pre-trained models, there are English-only checkpoints: - [base](https://huggingface.co/facebook/wav2vec2-base) - [large](https://huggingface.co/facebook/wav2vec2-large) - [large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) And multilingual ones (https://huggingface.co/facebook/wav2vec2-large-xlsr-53 for example). The English-only ones will fare better for English speech tasks, and the multilingual ones for most others. The resources @R4ZZ3 has kindly linked are perfect for fine-tuning in PyTorch. If you want to fine-tune in JAX, I'd advise you to try: https://github.com/sanchit-gandhi/seq2seq-speech/blob/main/run_flax_speech_recognition_ctc.py This script closely resembles the PyTorch one in Transformers: https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition It's on my list to add this JAX CTC fine-tuning script to Transformers over the coming weeks!<|||||>We are compiling a large speech corpus in Norwegian (100k+ hours). We expect it to be ready in roughly a month. Our plan is to pretrain a Wav2Vec2. We have access to TPUs through TRC and ideally we would like to train this in Flax instead of XLA/PT. This is a high priority project for us, and I am happy to assist in both testing and debugging here. <|||||>100k is mega! Very excited to see how pre-training JAX Wav2Vec2 in Finnish goes with this much data. Just out of interest, are you set on producing a pre-trained checkpoint in Finnish? Or is the end goal downstream ASR? The multilingual [Whisper](https://huggingface.co/models?search=whisper) models are pre-trained on 1066h of **labelled** Finnish audio-transcription data (out of 670,000h total). They get good results with zero-shot transfer learning (i.e. no fine-tuning) on Finnish Common Voice 9 (17.0% WER) and Finnish VoxPopuli (15.5% WER), _c.f._ Tables 11 and 12 from the [Whisper paper](https://cdn.openai.com/papers/whisper.pdf). You could definitely improve upon these results with fine-tuning! Might be a faster route to a performant, downstream Finnish ASR model than Wav2Vec2 pre-training + fine-tuning?<|||||>Just a quick update so I finally had time to start the actual debugging. More info to follow soon.<|||||>Great! Keep us posted!<|||||>Alright, here are some findings so far: Step 1: `mask_time_indices` and `sampled_negative_indices` were not same with the PT implementation. Fixed that by pretty much just copying functions for those from PT to Flax. Step 2: PT and Flax gumbel decay was using different step number, fixed that by deducting Flax step number by one for gumbel decaying. After that, gumbel temp seemed to remain same for about the first 5 steps, after that it started deviating tiny bit between Flax and PT which was weird. 
This is probably not our biggest problem at the moment, though. Step 3: Comparing model outputs seemed a bit hard, I guess because model weights are initialized differently at random? I have found a couple of differences between Flax and PT model code so far: 1. Flax was missing layerdrop functionality, fixed that. 2. Flax and PT gumbel softmax were implemented differently. The PT version uses the `hard=True` option with `torch.nn.functional.gumbel_softmax`, which returns samples as discretized one-hot vectors. The Flax gumbel softmax implementation returns soft samples. I tried to implement the `hard` option in Flax by copying it from the [PT code](https://pytorch.org/docs/stable/_modules/torch/nn/functional.html#gumbel_softmax). My current implementation looks like this:
```python
y_soft = nn.softmax((hidden_states + gumbels) / temperature)
index = y_soft.argmax(axis=-1)
y_hard = jnp.zeros_like(hidden_states).at[jnp.arange(len(hidden_states)), index].set(1.0)
codevector_probs = y_hard - y_soft + y_soft
```
when the [PT code](https://pytorch.org/docs/stable/_modules/torch/nn/functional.html#gumbel_softmax) looks like this:
```python
y_soft = gumbels.softmax(dim)
index = y_soft.max(dim, keepdim=True)[1]
y_hard = torch.zeros_like(logits, memory_format=torch.legacy_contiguous_format).scatter_(dim, index, 1.0)
ret = y_hard - y_soft.detach() + y_soft
```
At first, I also had the PT's `y_soft.detach()` implemented as `codevector_probs = y_hard - jax.lax.stop_gradient(y_soft) + y_soft`, but I noticed it seemed to make the codevector collapse again. Without it, based on some testing, the Flax codevector doesn't seem to collapse anymore (the `codevector_perplexity` is rising and staying at a high level, not collapsing close to zero as originally). The model still doesn't seem to learn properly though, so I bet there is still more to investigate and fix. It also could be that my Flax gumbel softmax `hard` option is not yet implemented correctly (a minimal reference sketch of the straight-through formulation is included at the end of this thread). In addition, I have made initial updates (some smaller updates could still be made) to the `run_wav2vec2_pretrain_flax.py` script to make it more up to date and comparable to the PT `run_wav2vec2_pretraining_no_trainer.py` script. My updates are available here on my fork and branch: https://github.com/aapot/transformers/tree/w2v2-jax-flax-pretrain<|||||>Continuing with the updates: Step 4: contrastive loss calculation is the same in Flax and PT. Step 5: diversity loss calculation looks to be the same, but I'll verify that later<|||||>Really great work, @aapot. I do however understand that there are still some issues here (since the contrastive loss starts to increase after a while), and that the issue most likely is related to the Flax gumbel implementation. Any chance that anyone at 🤗 can take a look at that? What do you think @sanchit-gandhi @patrickvonplaten ? When this is done I'll be glad to contribute with larger training, and finetuning/testing on downstream tasks.<|||||>> Comparing model outputs seemed a bit hard, I guess because model weights are initialized differently at random? You could hack into the code and load pre-trained weights!
I'd recommend the checkpoint at https://huggingface.co/hf-internal-testing/tiny-random-wav2vec2 PyTorch: ```python from transformers import Wav2Vec2ForPreTraining model = Wav2Vec2ForPreTraining.from_pretrained("hf-internal-testing/tiny-random-wav2vec2") ``` JAX: ```python from transformers import FlaxWav2Vec2ForPreTraining model = FlaxWav2Vec2ForPreTraining.from_pretrained("hf-internal-testing/tiny-random-wav2vec2", from_pt=True) ``` => this will initialise the models with the same weights! From the PyTorch code, it seems as though we should break `y_soft` from the computation graph in the `codevector_probs` calculation. Maybe worth quickly double checking what they do in fairseq here as well? `jax.lax.stop_gradient` can be a bit fiddly but I think it's the best option for stoping the backprop for a variable. Sounds like you're making good progress @aapot! Keep us posted with updates and questions, happy to help!<|||||>Oh one more question! You're running both on CPU right? JAX will definitely diverge from PT on GPU/TPU due to differences in the matmul precision (_c.f._ https://github.com/google/jax/issues/10413#issue-1212211265)<|||||>Thanks for the tips @sanchit-gandhi! Actually I also had in mind to use pre-trained weights to compare model outputs that way, will try it soon. Will also check fairseq implementation if that could reveal more stuff to fix. Yup, I am running both Jax and PT on my local laptop with CPU when debugging. <|||||>Okay great! Tiny pre-trained models on CPU is the way to go here!<|||||>After using pre-trained weights to continue pretraining for one more step with same input data, I think following is happening with model outputs: - `projected_states` has max difference of 0.276 (abs of Flax and PT matrices deducted from each other and max value of the deducted matrix) - `projected_quantized_states` has max difference of 0.588 - `codevector_perplexity` is same `projected_quantized_states` difference is due to the `GumbelVectorQuantizer` because its input `extract_features` from the `wav2vec2` module is actually matching for Flax and PT. Maybe the difference happening in `GumbelVectorQuantizer` is because of randomized gumbel sampling? In addition, I checked the fairseq gumbel softmax implementation and they are also using the PyTorch's `torch.nn.functional.gumbel_softmax` with the `hard=True` option. I am starting to think the main problem could be in this gumbel softmax implementation in Flax. If someone could verify if using `codevector_probs = y_hard - jax.lax.stop_gradient(y_soft) + y_soft` version will make the codevector to collapse (perplexity) that would be great. For me, I think using `codevector_probs = y_hard - y_soft + y_soft` won't make it collapse but not sure if that's the correct approach either for implementing the gumbel softmax in Flax. For example, with the local Flax VS PT testing with PT the codevector perplexity starts to rise from ~100 to ~400 over 5 epochs of pretraining from scratch. With Flax without using `jax.lax.stop_gradient` the perplexity rises very similarly. But if I use `jax.lax.stop_gradient` the perplexity rises only to ~250. Sometime ago I tried test the same with real base-sized w2v2 Flax model to pretrain with Finnish data and with `jax.lax.stop_gradient` the codevector perplexity seemed to collapse totally quite early at the training.<|||||>Fantastic work @aapot! I noticed the following comment in the pull from @patrickvonplaten to @ThomAub: _"PyTorch module to Flax? 
This might be a bit difficult and require some googling to see if others have already implement gumbel softmax in jax/Flax or not. If you could take a look at this, it would be very useful!"_ ([https://github.com/huggingface/transformers/pull/12271#issuecomment-867793046](https://github.com/huggingface/transformers/pull/12271#issuecomment-867793046)). May there be issues here?<|||||>> abs of Flax and PT matrices deducted from each other and max value of the deducted matrix This is exactly the way we want to compute differences between PT and Flax in the projected states space 👌 For reference, a matching implementation should have a max abs difference of 1e-5. > Maybe the difference happening in GumbelVectorQuantizer is because of randomized gumbel sampling? This seems logical! What I would do is dive into the GumbelVectorQuantizer and check the intermediate variables up to where the randomised sampling is performed. If they match up until the sampling that's a good sign. Forcing sampling between PT and Flax to be the same is a bit tricky... IMO we have two options: 1. Pre-define a sequence of 'pseudo-random' matrices. Hard code these in PT and Flax (e.g. 3 matrices of the correct dimension, pre-defined elements, with the same elements used in PT and Flax). Replace the sampled matrix with one of our pre-defined matrices in the GumbelVectorQuantizer at each training step: this ensures the matrices are the same in PT and Flax. 2. Temporarily use the PT implementation of the randomised Gumbel sampling in the Flax script such that the same seed is used and thus the same pseudo-random numbers. Will requires sampling a PyTorch tensor and then converting back to a jnp array. Unfortunately, both of these methods are a bit hacky. The first might be easier IMO - you don't have to define it to be anything too crazy, and just 2 or 3 different matrices would do (we just need to verify the outputs are the same over 2 or 3 training steps). > I am starting to think the main problem could be in this gumbel softmax implementation in Flax. Sounds like we're narrowing down! Maybe we can try forcing the same Gumbel quantiser outputs and then experiment with / without stop gradient. The fact that fairseq and HF PT use `y_hard` suggests we should use stop gradient! <|||||>Good point @peregilk - worth having a look to see if there are any OSS implementations of the Gumbel ops in JAX/Flax online! (as far as I'm aware there's not, but might be wrong!)<|||||>Please keep this issue open. It is still activity going on for solving this issue.<|||||>Hope the analysis is going ok @aapot, think you're doing a great job here! Feel free to share any updates / ask questions, more than happy to help!<|||||>Hi @sanchit-gandhi, unfortunately I have been very busy the past month so haven't had time to investigate more about this jax gumbel quantizer. Now that the recent Hugging Face Whisper finetuning event is over (where I participated too), I'll get back to debugging this wav2vec2 pretraining after a short Christmas break :) In any case, I am planning to create PR of my current work even if the Gumbel quantizer would not get fixed because my current branch has pretty much updated the Wav2vec2 flax model and pretraining code implemetation up to date with the Pytorch version. But I hope we get the Gumbel part fixed too.<|||||>Hey @aapot! Hope you had a nice Xmas break and that you enjoyed the Whisper event 🙂 Thanks for the update! Sounds good regarding opening a PR with the current changes - these are certainly welcome fixes! 
We can iterate on the PR to see if we can get the Gumbel part fixed too. Feel free to ping me here or on the new PR with questions / queries - more than happy to help and excited to see this one to completion!<|||||>@sanchit-gandhi quick update on the GumbelVectorQuantizer with the option 1 you mentioned earlier (replace gumbel sampled matrix with predefined matrix). First, I checked that `hidden_states` inside GumbelVectorQuantizer just before the actual gumbel sampling had diff of `5.7e-07` between Flax and PT for the first training step so that looks good. Next, I saved matrices of PT `nn.functional.gumbel_softmax` for the first three steps and then used them inside Flax GumbelVectorQuantizer for the first three steps. By doing that, model output's `projected_quantized_states` were actually the same between PT and Flax for the first training step (diff 0). But for the second step, the `projected_quantized_states` diff already jumped to 0.3 (although the diff before the linear projection layer was 0.01 so the linear projection adds some diff to the `projected_quantized_states` output. For the second step, `hidden_states` also had diff of 0.38 inside GumbelVectorQuantizer. For the third step diverging continues by having diff of 0.47 for `projected_quantized_states` (0.02 before linear projection), and diff of 0.43 for the `hidden_states` inside GumbelVectorQuantizer. Any ideas how to proceed?<|||||>Hey @aapot! Thanks for the update - really cool to see the progress you're making here! Sounds like you've got a nice system going for debugging and comparing the PT-FX outputs! That's great the `hidden_states` are equivalent before the Gumbel sampling ✅ And good to see that the `codevectors` had a diff of 0 - exactly what we wanted by forcing the sampled matrix! Was the `codevector_perplexity` also equivalent in this case? From the last experiment, it sounds pretty likely that the nn.Module's are equivalent now between PT and Flax (we're getting the same tensors out when we override the Gumbel sampling step). I would suggest we quickly verify that all the loss terms are equivalent with this non-deterministic set-up. If they match for the first training step, that's perfect, it means we should have a closely matching implementation. Note that with our 'forced sampling' method, we can verify that we get the same losses between PT and Flax, but since we change how the code vectors are computed in Flax (by forcing the sampled Gumbel matrix) we can't expect the gradients to be correct - forcing the Gumbel sampling is going to mess-up the backprop, so anything after the first parameter update is going to be divergent. So once we've verified that all the loss terms are the same (contrastive, diversity, total loss), I would re-instate stochastic sampling of the Gumbel matrix in Flax and see whether we can train a stable system! How does that sound?<|||||>@sanchit-gandhi sounds reasonable! `codevector_perplexity` diff was in the range of 1e-4 with fixed gumbels. I also checked loss terms and `constrast_loss` diff is 2e-3, `div_loss` diff is 2e-7, and `total_loss` is 2e-3. What would the best way to try train a stable system next?<|||||>Hey @aapot! Awesome - thanks for getting back with these results! Really enjoying hearing your updates here! Shall we double check that the code vector perplexity is being computed correctly? The diff for this value & the contrastive loss looks a little high for a dummy model (should be < 1e-5)! 
We can quickly check the code vector ppl function and verify that it matches PT (and correct if not!)<|||||>Hey @aapot! We're really close here! Any chance you've had the opportunity to look into the code vector perplexity and the error propagation onto the contrastive loss? Once we're confident with these we can start scaling up to full training runs<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
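For reference, here is a minimal, self-contained JAX sketch of the straight-through ("hard") Gumbel-softmax that this thread keeps coming back to. It is not the actual `FlaxWav2Vec2GumbelVectorQuantizer` code, just an illustration of the trick: `jax.lax.stop_gradient` plays the role of PyTorch's `.detach()`, so the forward pass sees one-hot vectors while gradients flow through the soft probabilities. Function and variable names are made up for the example.

```python
import jax
import jax.numpy as jnp


def hard_gumbel_softmax(rng, logits, temperature=1.0):
    """Straight-through Gumbel-softmax: one-hot samples forward, soft gradients backward."""
    # Soft (differentiable) sample.
    gumbels = jax.random.gumbel(rng, logits.shape)
    y_soft = jax.nn.softmax((logits + gumbels) / temperature, axis=-1)

    # Hard (one-hot) sample used in the forward pass.
    index = y_soft.argmax(axis=-1)
    y_hard = jax.nn.one_hot(index, logits.shape[-1], dtype=logits.dtype)

    # Straight-through estimator, mirroring PyTorch's `y_hard - y_soft.detach() + y_soft`:
    # the returned value equals y_hard, but the gradient is that of y_soft.
    return y_hard - jax.lax.stop_gradient(y_soft) + y_soft


if __name__ == "__main__":
    rng = jax.random.PRNGKey(0)
    logits = jnp.array([[2.0, 0.5, -1.0], [0.1, 0.2, 0.3]])
    print(hard_gumbel_softmax(rng, logits, temperature=2.0))
```

Whether the `stop_gradient` term helps or hurts the code vector perplexity in the full pre-training set-up is exactly the open question in the thread above, so treat this only as a reference for the formulation, not as the fix.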
transformers
19,587
closed
TF port of ESM
Working out the last few issues now! Models <3B parameters have been ported already, larger models will need to wait for #19124. This PR also includes fixes for a couple of issues in the original PyTorch ESM.
10-13-2022 16:22:00
10-13-2022 16:22:00
_The documentation is not available anymore as the PR was closed or merged._<|||||>Pipeline tests are failing because the model has no SEP token and doesn't work with multiple sequences. Working on it!<|||||>There's one final test remaining that's failing because of some arcane issue in the code that generates data batches for the pipeline. I'm trying to figure it out!<|||||>Tests are green, and #19124 has been merged! Going to use it to upload the remaining checkpoints and then merge this.
transformers
19,586
closed
[Doctest] Add configuration_trajectory_transformer.py
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes part of issue https://github.com/huggingface/transformers/issues/19487. Adds configuration_trajectory_transformer.py to Doc tests. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-13-2022 16:05:02
10-13-2022 16:05:02
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @ydshieh PTAL. Thanks,
transformers
19,585
closed
Re enable Nightly CI for upcoming PyTorch 1.13
# What does this PR do? Re-enable the Nightly CI for the upcoming PyTorch 1.13. These are the minimal changes; we might need to check whether the docker image can be built with these versions.
10-13-2022 15:55:15
10-13-2022 15:55:15
_The documentation is not available anymore as the PR was closed or merged._<|||||>Sorry, I forgot one docker image. Change PR to draft.<|||||>@LysandreJik I think we don't need to merge this PR. I could just build images and run the tests.
transformers
19,584
closed
A few CI fixes for DocumentQuestionAnsweringPipeline
# What does this PR do? Fixes a few issues caught by CI (see [comment](https://github.com/huggingface/transformers/pull/19204#issuecomment-1277106187)). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x ] Did you write any new necessary tests? ## Who can review? @ydshieh @Narsil @sgugger
10-13-2022 15:53:54
10-13-2022 15:53:54
I have not yet updated these tests: ``` FAILED tests/pipelines/test_pipelines_document_question_answering.py::DocumentQuestionAnsweringPipelineTests::test_large_model_pt_chunk - AssertionError: Lists differ: [{'score': 0.9974, 'answer': '1110212019', 'start': 23, [69 chars] 16}] != [{'score': 0.9967, 'answer': '1102/2019', 'start': 22, '[67 chars] 15}] FAILED tests/pipelines/test_pipelines_document_question_answering.py::DocumentQuestionAnsweringPipelineTests::test_large_model_pt_layoutlm_chunk - AssertionError: Lists differ: [{'sc[39 chars]t': 16, 'end': 16}, {'score': 0.9998, 'answer'[31 chars] 16}] != [{'sc[39 chars]t': 15, 'end': 15}, {'score': 0.9924, 'answer'[3... ``` as I want to inspect the CI failures first. I think both of these tests are caused by tesseract OCR errors (specifically I think the CI is running a diff. version of tesseract than my local machine).<|||||>This is what we have `tesseract-ocr` on our CI runners ```bash root@6aec4d26d7ac:/transformers# apt-show-versions tesseract-ocr bash: apt-show-versions: command not found root@6aec4d26d7ac:/transformers# apt show tesseract-ocr Package: tesseract-ocr Version: 4.1.1-2build2 Priority: optional Section: universe/graphics Source: tesseract Origin: Ubuntu Maintainer: Ubuntu Developers <[email protected]> Original-Maintainer: Alexander Pozdnyakov <[email protected]> Bugs: https://bugs.launchpad.net/ubuntu/+filebug Installed-Size: 1573 kB Depends: libarchive13 (>= 3.2.1), libc6 (>= 2.29), libcairo2 (>= 1.2.4), libfontconfig1 (>= 2.12.6), libgcc-s1 (>= 3.0), libglib2.0-0 (>= 2.12.0), libicu66 (>= 66.1~rc-1~), liblept5 (>= 1.75.3), libpango-1.0-0 (>= 1.37.2), libpangocairo-1.0-0 (>= 1.22.0), libpangoft2-1.0-0 (>= 1.14.0), libstdc++6 (>= 5.2), libtesseract4 (= 4.1.1-2build2), tesseract-ocr-eng (>= 4.00~), tesseract-ocr-osd (>= 4.00~) Replaces: tesseract-ocr-data Homepage: https://github.com/tesseract-ocr/ Download-Size: 262 kB APT-Manual-Installed: yes APT-Sources: http://archive.ubuntu.com/ubuntu focal/universe amd64 Packages ```<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you! Yes, I filed #347 at some point about this. Unfortunately it looks like a different (potentially flaky) test failure occurred in this run: ``` run_command(self._launch_args + testargs) result = get_results(tmp_dir) # Because we use --version_2_with_negative the testing script uses SQuAD v2 metrics. > self.assertGreaterEqual(result["eval_f1"], 28) E AssertionError: 21.428571428571427 not greater than or equal to 28 examples/pytorch/test_accelerate_examples.py:201: AssertionError ``` and so it did not repro the error. I'm going to spin up a VM or docker container that has tesseract 4, and then update the remaining tests there.<|||||>Thank you a lot @ankrgyl! If it's easier, we can get the new expected values from our runners, and see if it will pass with that in multiple runs. Let me know what you prefer :-)<|||||>Oh that is definitely easier. Could you help with that, or show me how to get those values?<|||||>I usually get values from report, say [here](https://github.com/huggingface/transformers/actions/runs/3231701081/jobs/5291537526) or its raw log version. Sometimes I need to run the tests inside the CI runner. 
I will do that tomorrow and check if I can get those updated tests pass in a consistent way.<|||||>Okay sounds great!<|||||>Hi @ankrgyl I am not able to push to your PR branch (you don't give us the permission I think 😢 ) Could you check [this branch](https://github.com/ydshieh/transformers/commit/33fff18d421187045197f1bbfcc6a4ed72cebe3c) and see if the new values work well on your side too 🙏 ? (I don't pay attention to the style, so you will need to re-style it before we can merge)<|||||>Hi @ydshieh the changes look good. I _think_ I just gave you write access to our fork (which should give you write permissions to the branch?). Would you mind checking if that worked?<|||||>Yes, I pushed, with the correct style. <|||||>With the latest commit, all `DocumentQuestionAnsweringPipelineTests` pass now. I can have a super happy weekend now.<|||||>Excellent! Let me know if there is anything else I can do to help.<|||||>@Narsil The tests with updated expected values in this PR are recently added in #19204. I would say this is just the environment difference (which gave the different values when @ankrgyl worked on #19204)<|||||>> I would say this is just the environment difference This is what we should be careful about :) If the environment provides such different results, which should either fix something so that the values are more consistent, or workaround the flaky dependency :) (If the test becomes bothering to maintain)<|||||>@Narsil the precise reason for the difference is that locally, I have tesseract 5 (the latest stable release), and the test runners have version 4, which produces slightly different OCR results. I filed https://github.com/huggingface/transformers/pull/347 some time ago about installing tesseract 5 in the docker containers used for spaces, which could help resolve the issue. In the meantime, I can think of a few ways to ensure the tests are more consistent/robust: - We can write up some instructions about how to update the tests within a docker container that has the same version of tesseract - We can freeze the tesseract/OCR results into the test so that tesseract is not actually run while evaluating them (small added benefit that tests will run a bit faster) - We can attempt to write a question and/or use a document where the results are the same b/w tesseract 4 and 5.<|||||>@ankrgyl thanks for the explanation. Since this is `transformers` not `tesseract` fixing the version being tested to `4` is OK. But if it becomes to hard to make stable, it's always possible to mock it in the tests so that we don't need to wipe up the whole program, and depend on its own instabilities/changes.
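As a rough sketch of the "freeze the OCR results / mock tesseract" options listed above: a test could patch `pytesseract.image_to_data` so the pipeline never calls the locally installed tesseract binary. This assumes the pipeline's OCR path goes through `pytesseract.image_to_data`; the frozen dictionary below is made up purely for illustration and is not real output from the test documents.

```python
from unittest.mock import patch

# Made-up, frozen OCR output shaped like pytesseract's DICT output (subset of keys).
FROZEN_OCR = {
    "text": ["invoice", "number:", "us-001"],
    "left": [10, 90, 160],
    "top": [20, 20, 20],
    "width": [70, 60, 60],
    "height": [12, 12, 12],
    "conf": ["96", "95", "93"],
}


def test_pipeline_with_frozen_ocr():
    with patch("pytesseract.image_to_data", return_value=FROZEN_OCR):
        # Build and run the document-question-answering pipeline inside this block;
        # any call that would normally shell out to tesseract now returns FROZEN_OCR,
        # so the expected answers no longer depend on the installed tesseract version.
        ...
```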
transformers
19,583
closed
[Doctest] Add configuration_vision_encoder_decoder.py
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes part of issue https://github.com/huggingface/transformers/issues/19487. Adds `configuration_vision_encoder_decoder.py` to `Doc tests`. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-13-2022 15:44:43
10-13-2022 15:44:43
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,582
closed
[Doctest] Add configuration_time_series_transformer.py
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes part of issue https://github.com/huggingface/transformers/issues/19487. Adds `configuration_time_series_transformer.py` to `Doc tests`. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-13-2022 15:35:08
10-13-2022 15:35:08
Hey @ydshieh, I just had a doubt > Change the import order of the model and configuration classes here, which order do you mean, the ascending or descending? because in `configuration_time_series_transformer.py`, it was already in ascending order.<|||||>@ydshieh, By mistake, I pushed some merged changes, I will revert them soon.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh PTAL. Thanks,<|||||>> A time series of pull request and pull accept! Thanks! That sounds AWESOME!!
transformers
19,581
closed
Inconsistent padding behavior for decoder_input_ids for Seq2Seq models
### System Info transformers : 4.18.0 torch: 1.12.0 Python 3.7.13 ### Who can help? @patrickvonplaten @patil-suraj ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer import torch models = [ "t5-small", "google/mt5-small", "facebook/m2m100_418M", "facebook/wmt19-ru-en", "facebook/bart-base", "facebook/blenderbot-400M-distill", "google/bigbird-pegasus-large-arxiv", "allenai/led-base-16384", "microsoft/prophetnet-large-uncased" ] for model_name in models: # load the seq2seq model model = AutoModelForSeq2SeqLM.from_pretrained(model_name) # tokenizer tokenizer = AutoTokenizer.from_pretrained(model_name) tokenizer.padding_side = "left" # sample sentence sample_sentence = "generate some numbers" encodings = tokenizer(sample_sentence, padding="max_length", max_length=5, return_tensors="pt", return_attention_mask=True, truncation=True) # decoder input ids (with a default start token for the model) decoder_input_ids = torch.ones(1,1, dtype=torch.int32) * model.config.decoder_start_token_id # model's forward without any padding for decoder_input_ids (hence without decoder_attn mask) outputs = model.forward(input_ids=encodings.input_ids, attention_mask=encodings.attention_mask, decoder_input_ids=decoder_input_ids, return_dict=True) next_token_logits = outputs["logits"][:,-1, :] # same decoder input ids but padded + decoder attention mask decoder_input_ids_with_padding = torch.ones(1,3, dtype=torch.int32) * tokenizer.pad_token_id decoder_input_ids_with_padding[:,-1] = model.config.decoder_start_token_id decoder_attn_mask = torch.zeros(1,3) decoder_attn_mask[:,-1] = 1 # model's forward with padding for decoder_input_ids (hence with decoder_attn mask) outputs_with_padding = model.forward(input_ids=encodings.input_ids, attention_mask=encodings.attention_mask, decoder_input_ids=decoder_input_ids_with_padding, decoder_attention_mask=decoder_attn_mask, return_dict=True) next_token_logits_with_padding = outputs_with_padding["logits"][:,-1,:] # check if padding affects the logits if torch.allclose(next_token_logits, next_token_logits_with_padding, atol=1e-3): print(f"No issues with model: {model_name}") else: print(f"Issues with model: {model_name}") ``` ### Expected behavior This issue is regarding seq2seq models for conditional text generation. There are differences in the output logits when padding is used for decoder_input_ids (by passing also decoder_attention_mask). This issue exists only for a few models (eg: BART, BlendorBot, Pegasus etc) and for other models there are no output differences (eg: T5, MT5 etc). Hence there is no consistency in the output across diff seq2seq models. To reproduce these differences, run the provided script which does the following: - Do one forward pass for a sample prompt (input_ids, attention_mask), additionally passing the default start token for the decoder. - Do another forward pass for the prompt (same input_ids and attention_mask). But this time, decoder_input_ids is left padded to a seq length of 3 with the same default start token as the last token. Additionally, decoder_attention_mask is passed to avoid attending to padded tokens. 
- Last token logits from these two forward passes are compared for equivalence (with a tolerance of 1e-3). This is done for several seq2seq models to see which models have these differences. Ideally, we would expect padding not to cause any such differences.
10-13-2022 15:31:04
10-13-2022 15:31:04
cc @ArthurZucker <|||||>@ArthurZucker let me know if you need help with this<|||||>@ArthurZucker I can have a look at this if it is not being looked at.<|||||>Hey! 🙌 it's on my to do list, but can't look at it right now so feel free to do so 😀🤗<|||||>@patrickvonplaten, I've had a look at this and stepped through BART. I think it's solely to do with positional embeddings. For T5, MT5 there are relational embeddings, so it doesn't occur. For certain types of models like the original Transformer where the positional embeddings are directly summed to the input embeddings. Any time there is left padding to the input, the positional encodings are not shifted. This happens for both the encoder and decoder forward pass with left side padding. So the left padding above actually affects the encoder output as well. When I shift the positional embeddings according to the mask the results are correct/same to unpadded case. It is not usually a good idea to pad on the left side. I'm not sure if there is an efficient way to resolve this, as the input attention mask could be variable after left padding. e.g. ``` tokenizer.padding_side = "left" encodings = tokenizer.batch_encode_plus(['sample_sentence', 'A much much much longer sentence.'], padding="max_length", max_length=10, return_tensors="pt", return_attention_mask=True, truncation=True) ``` So can't use a batch fold operation. Let me know if you think there should be a PR, as I would like to be involved as took me a while to work this out 😅 <|||||>Gently ping @ArthurZucker :-) Let me know if you'd like me to take over the issue if you have too much on your plate<|||||>Sure. I've found the root cause (positional embeddings aren't shifted along with the left padding) and I don't think it is necessarily an issue/resolvable. So only occurs with models that use non-relative positional embeddings e.g. BART @ArthurZucker I'm happy to help out more if you think there is a resolution. Perhaps a PR with a warning? <|||||>The same problem happens when trying to left pad BERT or any model with absolute position embeddings. I notice BERT has a warning in the docs under tips. I think this issue can be closed. I can draft a PR for adding to docs of other models with similar tip to BERT.<|||||>Hey! Really sorry for the late reply! Awesome work and debugging! 🤗 I totally get the gist of it 😅 Feel free to open a PR to either : - Add a Warning when padding is left that outputs might be incorrect (similar to BERT?) - Actually shift the positional embeddings when the padding is left. This might be a bit tricky Even if it is not really recommended, if people actually use left padding (either unconsciously or for a particular application) it makes sense to shift the input! <|||||>@jordiclive @ArthurZucker Thanks for looking into this. Is left padding not recommended only due to position embeddings? In general, for batch next tokens prediction, it is easier for users to get the logits from the last token for the entire batch with left padding. (I remember GPT-2 had a similar issue and the left padding support was added at some point which made batch generation easier) Also from the perspective of providing consistent behavior across many seq2seq models (through AutoModelForSeq2Seq API), shifting the positional embeddings in case of left padding is desired IMO. <|||||>@rajcscw. Yes, it is just because of the old-style positional embeddings. For gpt-2 and BERT, there is an optional kwarg for position_ids. 
This would be the only way to do it: the user would have to provide the position_ids, since they can vary for each input in the batch, and then the positional embeddings can be shifted accordingly. I am not sure about your exact use case for seq2seq models. Above, you have left padding with the tokenizer for the encoder input and then a manual left pad of the decoder input ids. This would require two position_ids kwargs (encoder and decoder) for the model, as they would likely be offset differently.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
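To make the `position_ids` workaround above concrete, here is a small sketch using GPT-2 (which, unlike BART, accepts a `position_ids` kwarg): position ids are derived from the attention mask so that left padding no longer shifts the absolute position embeddings. The prompts and the clamping choice are arbitrary for the example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"
model = AutoModelForCausalLM.from_pretrained("gpt2")

batch = tokenizer(["a short prompt", "a much much longer prompt"], padding=True, return_tensors="pt")

# Position 0 should land on the first real (non-padded) token of every row.
position_ids = batch.attention_mask.cumsum(dim=-1) - 1
position_ids = position_ids.clamp(min=0)  # padded slots get a dummy position

with torch.no_grad():
    outputs = model(
        input_ids=batch.input_ids,
        attention_mask=batch.attention_mask,
        position_ids=position_ids,
    )
# These now match what each row would produce without any left padding.
next_token_logits = outputs.logits[:, -1, :]
```

For seq2seq models the same idea would need a second, decoder-side `position_ids` argument, which, as noted above, models like BART do not currently expose.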
transformers
19,580
closed
[Doctest] Add configuration_vision_text_dual_encoder.py
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes part of issue https://github.com/huggingface/transformers/issues/19487. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-13-2022 14:44:36
10-13-2022 14:44:36
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @ydshieh , Please take a pass. Thanks,<|||||>Hi @SD-13 I left a comment. Also, does the doctest pass now (as you asked a question in another thread) The checks on this PR page doesn't run doctest. So it's important for the contributors to run it 🙏 please.<|||||>> Hi @SD-13 I left a comment. Also, does the doctest pass now (as you asked a question in another thread) > > The checks on this PR page doesn't run doctest. So it's important for the contributors to run it pray please. Hey @ydshieh, That totally makes sense. I am still getting the error and I am giving the whole error log here. please help me to debug this. Thanks, ============================= test session starts ============================== platform linux -- Python 3.10.6, pytest-7.1.3, pluggy-1.0.0 -- /home/pirate/Downloads/huggingFace/transformers/transformers/bin/python cachedir: .pytest_cache rootdir: /home/pirate/Downloads/huggingFace/transformers, configfile: setup.cfg collected 1 item src/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py::transformers.models.vision_text_dual_encoder.configuration_vision_text_dual_encoder.VisionTextDualEncoderConfig FAILED =================================== FAILURES =================================== _ [doctest] transformers.models.vision_text_dual_encoder.configuration_vision_text_dual_encoder.VisionTextDualEncoderConfig _ 062 063 >>> # Accessing the model configuration 064 >>> config_vision = model.config.vision_config 065 >>> config_text = model.config.text_config 066 067 >>> # Saving the model, including its configuration 068 >>> model.save_pretrained("my-model") 069 070 >>> # loading model and config from pretrained folder 071 >>> vision_text_config = VisionTextDualEncoderConfig.from_pretrained("vit-bert") UNEXPECTED EXCEPTION: OSError("vit-bert is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\nIf this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.") Traceback (most recent call last): File "/home/pirate/Downloads/huggingFace/transformers/transformers/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 213, in hf_raise_for_status response.raise_for_status() File "/home/pirate/Downloads/huggingFace/transformers/transformers/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/vit-bert/resolve/main/config.json The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/pirate/Downloads/huggingFace/transformers/src/transformers/utils/hub.py", line 409, in cached_file resolved_file = hf_hub_download( File "/home/pirate/Downloads/huggingFace/transformers/transformers/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1053, in hf_hub_download metadata = get_hf_file_metadata( File "/home/pirate/Downloads/huggingFace/transformers/transformers/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1359, in get_hf_file_metadata hf_raise_for_status(r) File "/home/pirate/Downloads/huggingFace/transformers/transformers/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 242, in 
hf_raise_for_status raise RepositoryNotFoundError(message, response) from e huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: WiAMp3MzQkYIQuEIq-5Wj) Repository Not Found for url: https://huggingface.co/vit-bert/resolve/main/config.json. Please make sure you specified the correct `repo_id` and `repo_type`. If the repo is private, make sure you are authenticated. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib/python3.10/doctest.py", line 1350, in __run exec(compile(example.source, filename, "single", File "<doctest transformers.models.vision_text_dual_encoder.configuration_vision_text_dual_encoder.VisionTextDualEncoderConfig[8]>", line 1, in <module> File "/home/pirate/Downloads/huggingFace/transformers/src/transformers/configuration_utils.py", line 531, in from_pretrained config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/home/pirate/Downloads/huggingFace/transformers/src/transformers/configuration_utils.py", line 558, in get_config_dict config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs) File "/home/pirate/Downloads/huggingFace/transformers/src/transformers/configuration_utils.py", line 613, in _get_config_dict resolved_config_file = cached_file( File "/home/pirate/Downloads/huggingFace/transformers/src/transformers/utils/hub.py", line 424, in cached_file raise EnvironmentError( OSError: vit-bert is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`. /home/pirate/Downloads/huggingFace/transformers/src/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py:71: UnexpectedException 063 >>> # Accessing the model configuration 064 >>> config_vision = model.config.vision_config 065 >>> config_text = model.config.text_config 066 067 >>> # Saving the model, including its configuration 068 >>> model.save_pretrained("my-model") 069 070 >>> # loading model and config from pretrained folder 071 >>> vision_text_config = VisionTextDualEncoderConfig.from_pretrained("vit-bert") 072 >>> model = VisionTextDualEncoderModel.from_pretrained("vit-bert", config=vision_text_config) UNEXPECTED EXCEPTION: NameError("name 'vision_text_config' is not defined") Traceback (most recent call last): File "/usr/lib/python3.10/doctest.py", line 1350, in __run exec(compile(example.source, filename, "single", File "<doctest transformers.models.vision_text_dual_encoder.configuration_vision_text_dual_encoder.VisionTextDualEncoderConfig[9]>", line 1, in <module> NameError: name 'vision_text_config' is not defined /home/pirate/Downloads/huggingFace/transformers/src/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py:72: UnexpectedException =========================== short test summary info ============================ FAILED src/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py::transformers.models.vision_text_dual_encoder.configuration_vision_text_dual_encoder.VisionTextDualEncoderConfig ============================== 1 failed in 5.48s =============================== <|||||>Hi! 
From the error message ``` 071 >>> vision_text_config = VisionTextDualEncoderConfig.from_pretrained("vit-bert") UNEXPECTED EXCEPTION: OSError("vit-bert is not a local folder and is not a valid model identifier listed on '[https://huggingface.co/models'\nIf](https://huggingface.co/models'%5CnIf) this is a private repository, make sure to pass a token having permission to this repo with use_auth_token or log in with huggingface-cli login and pass use_auth_token=True.") ``` It tells that "vit-bert" doesn't exist. And if you read the code a few lines above this line, you see ``` model.save_pretrained("my-model") ``` So the code save model in some name but try to load it with another name. Change it to ``` model.save_pretrained("vit-bert") ``` will work :-)<|||||>Yep it worked!! =========================================================== test session starts ============================================================ platform linux -- Python 3.10.6, pytest-7.1.3, pluggy-1.0.0 -- /home/pirate/Downloads/huggingFace/transformers/transformers/bin/python cachedir: .pytest_cache rootdir: /home/pirate/Downloads/huggingFace/transformers, configfile: setup.cfg collected 1 item src/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py::transformers.models.vision_text_dual_encoder.configuration_vision_text_dual_encoder.VisionTextDualEncoderConfig PASSED ============================================================ 1 passed in 10.89s ============================================================ Thanks for the explanation, I got your point. <|||||>Well, I need you push the necessary change in order to merge :-)
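For anyone else picking up files from #19487, a rough way to reproduce the run shown in the logs above from Python (it is simply the `pytest --doctest-modules ... -sv` invocation behind those logs; swap in whichever configuration file you are testing):

```python
import pytest

# Runs the doctest embedded in a single configuration file, as in the log output above.
exit_code = pytest.main([
    "--doctest-modules",
    "src/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py",
    "-sv",
])
print("doctest exit code:", exit_code)
```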
transformers
19,579
closed
Multi-target classification
### Feature request Is there a way to do multi-target classification, e.g. for text classification? For example: Input: text Output 1: Male/Female Output 2: Happy/Angry ### Motivation It's annoying to embed the outputs of the model into a custom model ### Your contribution Unfortunately not
10-13-2022 13:16:50
10-13-2022 13:16:50
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!
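For anyone landing here before a forum answer exists: one common way to get several targets from a single encoder is a shared backbone with one classification head per target and a summed loss. A minimal PyTorch sketch follows (the checkpoint name, pooling choice, and label sizes are arbitrary; this is not a built-in `transformers` feature):

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class MultiTargetClassifier(nn.Module):
    """Shared text encoder with one head per target (e.g. gender and emotion)."""

    def __init__(self, model_name="bert-base-uncased", num_gender=2, num_emotion=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.gender_head = nn.Linear(hidden, num_gender)
        self.emotion_head = nn.Linear(hidden, num_emotion)

    def forward(self, input_ids, attention_mask):
        hidden_states = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        pooled = hidden_states[:, 0]  # first-token ([CLS]-style) pooling
        return self.gender_head(pooled), self.emotion_head(pooled)


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultiTargetClassifier()
batch = tokenizer(["I am so happy today!"], return_tensors="pt")
gender_logits, emotion_logits = model(batch.input_ids, batch.attention_mask)
# Training would typically add one cross-entropy loss per head and sum them.
```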
transformers
19,578
closed
Implement BigBird in TensorFlow
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #19430 by implementing BigBird in TF ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? (WRITING TESTS IN PROGRESS) ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> PR implements BigBird based on implementations of: * [Original BigBird implementation](https://github.com/google-research/bigbird/blob/master/bigbird/core/attention.py) * [PyTorch BigBird implementation in PyTorch](https://github.com/huggingface/transformers/blob/main/src/transformers/models/big_bird/modeling_big_bird.py) * [TF version of Bert](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_tf_bert.py) Raising this as a draft PR while I work on tests and ironing out issues I run into while testing, but thought it might be useful to let others have visibility of this while working on it!
10-13-2022 13:16:17
10-13-2022 13:16:17
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,577
closed
[Doctests] add `configuration_blenderbot.py`
# What does this PR do? `configuration_blenderbot.py` for doctests, addressing issue #19487. Please review it @ydshieh. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-13-2022 13:11:36
10-13-2022 13:11:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,576
closed
[Doctests] Add `configuration_blenderbot.py`
# What does this PR do? Hi! This is for blenderbot config, for issue #19487 . Please review this as well @ydshieh <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-13-2022 13:02:19
10-13-2022 13:02:19
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19576). All of your documentation changes will be reflected on that endpoint.
transformers
19,575
closed
[Doctest] Add configuration_canine.py
Add `configuration_canine.py` to `utils/documentation_tests.txt` for doctests, based on issue [#19487](https://github.com/huggingface/transformers/issues/19487). @ydshieh, could you take a look at it? Thanks :)
10-13-2022 11:51:33
10-13-2022 11:51:33
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,574
closed
[Doctest] Add `configuration ctrl.py`
Hi! This is the CTRL config update, based on issue https://github.com/huggingface/transformers/issues/19487.
10-13-2022 10:53:45
10-13-2022 10:53:45
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi again @ydshieh! Another one is ready for review :)
transformers
19,573
closed
fix BLOOM ONNX config
Fixes the dynamic axes for `BloomOnnxConfig`. After PR https://github.com/huggingface/transformers/pull/18344, if `use_past` is used: * past/present keys should have the dynamic axes `{0: 'batch', 1: 'past_sequence + sequence'}` * past/present values should have the dynamic axes `{0: 'batch', 2: 'past_sequence + sequence'}` This should also fix the failing tests for BLOOM's ONNX export (tested using `RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k "bloom" -s -x`). cc @lewtun @ydshieh
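To make the axes above concrete, here is a small hedged sketch of how such a mapping could be assembled; it is illustrative only and not the actual `BloomOnnxConfig` code, and the `past_key_values.{i}.key/value` naming is an assumption made for the example:

```python
from collections import OrderedDict

def past_key_values_dynamic_axes(num_layers: int) -> OrderedDict:
    """Illustrative helper: builds the per-layer dynamic axes described in this PR.

    Keys carry the sequence dimension on axis 1 and values on axis 2, which is
    why the two mappings differ.
    """
    axes = OrderedDict()
    for i in range(num_layers):
        axes[f"past_key_values.{i}.key"] = {0: "batch", 1: "past_sequence + sequence"}
        axes[f"past_key_values.{i}.value"] = {0: "batch", 2: "past_sequence + sequence"}
    return axes

print(past_key_values_dynamic_axes(num_layers=2))
```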
10-13-2022 10:23:50
10-13-2022 10:23:50
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,572
closed
Fix fx symbolic tracing for deberta
# What does this PR do? DeBERTa cannot be symbolically traced with `torch.fx` when relative attention is used. This PR fixes the issue.
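For readers unfamiliar with fx tracing in `transformers`, the call this PR is meant to unblock looks roughly like the sketch below; the checkpoint and input names are illustrative:

```python
from transformers import DebertaModel
from transformers.utils.fx import symbolic_trace

# Relative attention is enabled in this checkpoint's config, which is the
# case that previously broke torch.fx symbolic tracing.
model = DebertaModel.from_pretrained("microsoft/deberta-base")
traced = symbolic_trace(model, input_names=["input_ids", "attention_mask"])
print(traced.graph)  # the captured computation graph
```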
10-13-2022 10:17:11
10-13-2022 10:17:11
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@michaelbenayoun With no response from @BigBird01 I think we can merge this. Can you just fix the conflict?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,571
closed
Proposal: Remove the weird `inspect` in ASR pipeline and make WhisperEncoder just nice to use.
# What does this PR do? It seems that accepting `attention_mask` is kind of an invariant of our models. For Seq2Seq ASR models, we had a special comment on how it actually was important to send it. `inspect`-ing seems like a pretty brittle way to handle this case. My suggestion is to simply add it as a kwarg and just ignore it, with the docstring explaining why it's ignored. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
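A minimal sketch of the proposed pattern (accept the argument, ignore it, and document why) is shown below with a toy module; it is not the actual `WhisperEncoder` code:

```python
import torch
from torch import nn

class ToyEncoder(nn.Module):
    """Toy encoder illustrating the 'accept but ignore attention_mask' pattern."""

    def __init__(self, feature_size: int = 80, d_model: int = 16):
        super().__init__()
        self.proj = nn.Linear(feature_size, d_model)

    def forward(self, input_features: torch.Tensor, attention_mask: torch.Tensor = None):
        """`attention_mask` is accepted only so callers such as pipelines can pass
        it uniformly; it is deliberately ignored because the input features are
        already padded/truncated to a fixed length."""
        return self.proj(input_features)

encoder = ToyEncoder()
features = torch.randn(1, 100, 80)
print(encoder(features, attention_mask=torch.ones(1, 100)).shape)
```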
10-13-2022 10:04:50
10-13-2022 10:04:50
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19571). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19571). All of your documentation changes will be reflected on that endpoint.
transformers
19,570
closed
Improve error messaging for ASR pipeline.
# What does this PR do? - ~~Raise the error early (in `_sanitize`) so users don't waste time trying to run queries with invalid params.~~ This is unfortunately not easy because the order in which things are resolved is tricky. - Fix the error raised after using `config.inputs_to_logits_ratio`, where our check was masked by a failing "property does not exist" error. - Added some manual checks on s2t for the error message. No non-CTC model seems to be used by the default runner (they are all skipped). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger @ArthurZucker Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-13-2022 09:50:10
10-13-2022 09:50:10
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,569
closed
DeprecationWarning from Pillow (with Pillow ≥ 9.1.0)
### System Info `transformers`: 4.22.2 `pillow`: 9.2.0 Python 3.9.9 ### Who can help? @NielsRogge @amyeroberts (tagged based on changes to `image_utils.py` in #18520, but the issue seems to span most of the repo) ## Reproduction [Pillow 9.1.0 deprecated a bunch of constants](https://pillow.readthedocs.io/en/stable/releasenotes/9.1.0.html#deprecations) such as `PIL.Image.BILINEAR`, leading to the following warning when importing the CLIP model Note: I ran python with warnings enabled (`python -W always`) ``` >>> from transformers import CLIPFeatureExtractor .../transformers/image_utils.py:239: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. def resize(self, image, size, resample=PIL.Image.BILINEAR, default_to_square=True, max_size=None): ``` Caused by [this line in `image_utils.py`](https://github.com/huggingface/transformers/blob/bbd150e92f84db72e7507d0c3ce69474b2948839/src/transformers/image_utils.py#L364) (though there's other instances where deprecated PIL constants are used) These constants are pending removal in Pillow 10 (July 2023). ## Action required Noticed that transformers doesn't currently enforce a Pillow version constraint in [setup.py](https://github.com/huggingface/transformers/blob/main/setup.py), so I've opened this issue to check if any action is required – **either enforce Pillow < 10, or migrate to using the new Pillow constants** --- Additional info: discovered this warning when importing https://github.com/huggingface/diffusers – simply running `import diffusers` on a fresh install (version 0.4.1) triggers this warning for me.
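A hedged migration sketch (a small compatibility shim, not necessarily what the library ended up doing) that works on both Pillow < 9.1 and >= 9.1 could look like this; the `resize` helper below is illustrative:

```python
import PIL.Image

# Pillow >= 9.1 moved the resampling constants to PIL.Image.Resampling;
# fall back to the old module-level names on earlier versions.
if hasattr(PIL.Image, "Resampling"):
    BILINEAR = PIL.Image.Resampling.BILINEAR
else:
    BILINEAR = PIL.Image.BILINEAR

def resize(image, size, resample=BILINEAR):
    """Illustrative wrapper: resizes a PIL image with a version-safe default filter."""
    return image.resize(size, resample=resample)
```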
10-13-2022 09:49:55
10-13-2022 09:49:55
cc @amyeroberts @alaradirik <|||||>Closing as this has been resolved in #19654
transformers
19,568
closed
Add Swin2SR
### Model description Swin2SR is a Swinv2-based model for image super resolution and compression. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://github.com/mv-lab/swin2sr
10-13-2022 08:53:49
10-13-2022 08:53:49
If there is consensus for this, can I work on it?<|||||>Sure!<|||||>Cool, I will start on this.<|||||>Check out some tips on contributing a model here: * https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command * https://huggingface.co/docs/transformers/contributing * https://huggingface.co/docs/transformers/add_new_model <|||||>Am I supposed to add the model here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/swinv2/modeling_swinv2.py Also, are there any super-resolution models already present in Transformers?<|||||>Hi, no. Each model in the library has its own folder and implementation files. We duplicate a lot of code in favor of easily readable code. There's no super-resolution model available in Transformers yet; it would be the first one.<|||||>Thanks. Can I reuse the code from the https://github.com/mv-lab/swin2sr repo in a new folder, or build on top of the model in `modeling_swinv2.py`?<|||||>You can start from `modeling_swinv2.py`: copy it over and tweak it for the new model.
transformers
19,567
closed
[Doctests] Add `configuration_vit_mae.py` and `configuration_yoso.py`
# What does this PR do? Add configuration_vit_mae.py to utils/documentation_tests.txt and configuration_yoso.py for doctest. Based on issue #19487 @ydshieh could you please review it? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-13-2022 08:43:35
10-13-2022 08:43:35
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @grgkaran03, do you intend to work on yoso in this PR? I see you reverted the change in a commit, then added it back in the last commit.<|||||>Hi! I got a little confused. I worked on yoso and vit_mae in this PR, if that's fine... @ydshieh