| column | dtype | range |
| --- | --- | --- |
| repo | stringclasses | 1 value |
| number | int64 | 1 - 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 - 487 |
| body | stringlengths | 0 - 234k |
| created_at | stringlengths | 19 - 19 |
| closed_at | stringlengths | 19 - 19 |
| comments | stringlengths | 0 - 293k |
transformers
18,157
closed
MaskFormer documentation - `is_thing_map`
The MaskFormer documentation states: "Both tasks can be solved using [MaskFormerForInstanceSegmentation](https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/maskformer#transformers.MaskFormerForInstanceSegmentation) output, the latter needs an additional is_thing_map to know which instances must be merged together.." However, `is_thing_map` does not appear in the source code, and it looks like this was replaced with `label_ids_to_fuse`.
07-16-2022 08:16:03
07-16-2022 08:16:03
cc @NielsRogge <|||||>@LysandreJik @NielsRogge I updated the MaskFormer docs to reflect the current code and added a [PR](https://github.com/huggingface/transformers/pull/18423) <|||||>I merged the PR, closing this issue!
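For readers who land on this issue, a minimal sketch of how the renamed argument is typically passed to MaskFormer's panoptic post-processing. The checkpoint name and the label ids are illustrative assumptions, and the exact signature of `post_process_panoptic_segmentation` should be checked against the installed transformers version.

```python
# Hypothetical sketch: passing `label_ids_to_fuse` (the argument that replaced the
# old `is_thing_map` wording) when post-processing panoptic segmentation.
import requests
import torch
from PIL import Image
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation

checkpoint = "facebook/maskformer-swin-base-coco"  # illustrative checkpoint
feature_extractor = MaskFormerFeatureExtractor.from_pretrained(checkpoint)
model = MaskFormerForInstanceSegmentation.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# "stuff" classes whose instances should be merged into a single segment;
# the exact ids depend on the dataset's label map (assumption for illustration).
result = feature_extractor.post_process_panoptic_segmentation(
    outputs, label_ids_to_fuse={0, 1}
)[0]
print(result.keys())
```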
transformers
18,156
closed
FIX: Typo
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-16-2022 01:59:12
07-16-2022 01:59:12
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,155
closed
Fix check for falsey inputs in run_summarization
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> In the PyTorch version of `run_summarization.py`, there is a check that excludes examples where the source document and target summary are both `None`. https://github.com/huggingface/transformers/blob/ccc089780415445768bcfd3ac4418cec20353484/examples/pytorch/summarization/run_summarization.py#L516-L520 I think this should be relaxed to check for _falsey_ inputs instead. I think this because some datasets, like MultiNews, contain examples with empty strings: ```python from datasets import load_dataset multi_news = load_dataset("multi_news", split="validation") assert not multi_news[4850]["document"] ``` and these are not caught by the `is not None` check. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger, @patil-suraj
07-16-2022 01:12:31
07-16-2022 01:12:31
_The documentation is not available anymore as the PR was closed or merged._
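As a self-contained illustration of the change proposed in this PR: an `is not None` check lets empty strings through, while a truthiness check also drops them. The toy dict below stands in for a batch from `datasets`; the column names are illustrative, not the exact ones in `run_summarization.py`.

```python
# Toy illustration of why `is not None` misses empty strings (the MultiNews case
# cited above) while a "falsey" check catches them.
examples = {"document": ["some article text", "", None], "summary": ["s1", "s2", "s3"]}
text_column, summary_column = "document", "summary"

kept_old = [i for i in range(len(examples[text_column]))
            if examples[text_column][i] is not None and examples[summary_column][i] is not None]
kept_new = [i for i in range(len(examples[text_column]))
            if examples[text_column][i] and examples[summary_column][i]]

print(kept_old)  # [0, 1] -> the empty-string document at index 1 slips through
print(kept_new)  # [0]    -> the relaxed check also drops it
```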
transformers
18,154
closed
add ONNX support for LeViT
# What does this PR do? This PR adds ONNX support for LeViT. Linked to [#16308](https://github.com/huggingface/transformers/issues/16308). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-15-2022 19:24:45
07-15-2022 19:24:45
_The documentation is not available anymore as the PR was closed or merged._<|||||>Pinging @lewtun and @sgugger for approval. One CI test failed but it doesn't seem to be related to this PR.<|||||>You are welcome, thank you for your work! Yes, following the doc I have already run this command and it passed all (slow) tests ;)
transformers
18,153
closed
Update serving code to enable `saved_model=True`
# What does this PR do? Fixes and adds any missing `serving` and `serving_output` code to our TF models to enable `model.save_pretrained(path, saved_model=True)`. I've added comments throughout the code to explain any areas where the TF refactor might not be obvious. I'm aware the diff of this PR is quite big, but most of it is repetitive changes to enable tests to pass, so I hope that's acceptable. ## Specifically: **1. Adds missing serving logic to: ResNet, Swin, TAPAS** **2. Adds missing `serving_output` logic to models** Some vision models didn't have `serving_output` implemented - `serving` returned the model outputs directly. This was to enable testing (see 4.) and to stay consistent with the rest of the library. **3. Update or add `input_signature` decorator for models** **4. Adds a test to check `serving_output` is implemented and return types are as expected** We can't test `model.serving` directly, i.e. this is not possible: ``` model = model_class(config) inputs = self._prepare_for_class(inputs_dict, model_class) serving_outputs = model.serving(inputs) ``` Running this on vision models raises the following: ``` E ValueError: The channel dimension of the inputs should be defined. The input_shape received is (None, None, None, None), where axis -1 (0-based) is the channel dimension, which found to be `None`. ``` This is because the input signature defined in the `tf.function` decorator for the `serving` method has all of the input dimensions defined as `None`: ``` @tf.function(input_signature=[{ "pixel_values": tf.TensorSpec((None, None, None, None), tf.float32, name="pixel_values"), }] ) def serving(self, inputs): .... ``` We can't hard code the channel dimension as e.g. `3` as we want to support both RGB and greyscale images (although testing this locally does work). **5. Moves `test_saved_model_creation` back into `test_modeling_tf_common` and adds explicit skips** There were quite a few models that couldn't be saved with `model.save_pretrained(path, saved_model=True)` and quite a few whose input signature or return types from `serving_output` were broken or inconsistent with the model. I think this is in part because the relevant test was moved to only be applied to certain core models, and those models didn't explicitly skip. See [#14415](https://github.com/huggingface/transformers/pull/14415), [#9478](https://github.com/huggingface/transformers/pull/9478) I've: * added it back to common so that it's added to models by default. CI currently running and passing. * added a `unittest.skip` decorator so it's counted as a skipped rather than passed test on all models that were previously skipping. **6. Update logic in models such that their graph can be created and saved.** Adds serving logic to enable saving of models and ensures their outputs are transformed in line with the rest of the library. ## Fixes https://github.com/huggingface/transformers/issues/18179 https://github.com/huggingface/transformers/issues/18164 https://github.com/huggingface/transformers/issues/17233 https://github.com/huggingface/transformers/issues/17285 https://discuss.huggingface.co/t/tfresnetforimageclassification-fails-with-save-pretrained-when-saved-model-is-true/20404 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
07-15-2022 18:47:38
07-15-2022 18:47:38
_The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts thank you for digging into this one. An important one albeit. > We can't hard code the channel dimensions as e.g. 3 as we want to support both RGB and greyscale images (although testing this locally does work). Could you shed more details on the locally testing you're referring to here?<|||||>> Could you shed more details on the locally testing you're referring to here? @sayakpaul Sure :) In the test `test_prepare_serving_output`, if serving outputs are got by calling `serving` directly i.e. ``` inputs = self._prepare_for_class(inputs_dict, model_class) serving_outputs = model.serving(inputs) ``` This will fail for vision models with the channel dim error I posted above. If I changed the input signature such that channel dimension is hard coded then it runs e.g. ``` @tf.function( input_signature=[ { "pixel_values": tf.TensorSpec((None, 3, None, None), tf.float32, name="pixel_values"), } ] ) ``` I run the test with ```pytest tests/models/{model_name}/test_modeling_tf_{model_name}.py::TF{ModelName}ModelTest::test_prepare_serving_output```<|||||>Thanks. A follow-up question. So if you run with (`pytest tests/models/{model_name}/test_modeling_tf_{model_name}.py::TF{ModelName}ModelTest::test_prepare_serving_output`) how does it get the hardcoded value for channels? Or do you first hard-code it and then run it? <|||||>> Thanks. A follow-up question. So if you run with (`pytest tests/models/{model_name}/test_modeling_tf_{model_name}.py::TF{ModelName}ModelTest::test_prepare_serving_output`) how does it get the hardcoded value for channels? Or do you first hard-code it and then run it? @sayakpaul Yes, I hardcode then run the tests. In this case, they pass. <|||||>Before merge, could you measure the timing for the tests `test_saved_model_creation` on **CPU**? You can run like ```python python -m pytest -v tests -k "test_saved_model_creation" --durations=0 --make-reports=tests_timing ``` and copy-paste the results from `reports/tests_timing/durations.txt `<|||||>@ydshieh We don't run the slow tests on CPU, only on GPU/multi-GPU.<|||||>> @ydshieh We don't run the slow tests on CPU, only on GPU/multi-GPU. I should double check the latest version. My memory was in a previoius commit `Add in tests (505cb774b1b7eb5c9a6c8e2bc63f12061824b8bd)` while I asked the question 😢 <|||||>@amyeroberts The `attention_mask` and `token_type_ids` in TFHubert / TFWav2Vec2 should be `int32` I believe. I think we don't put this clearly in their docstrings in TF models, but we have this information in their PyTorch model files.<|||||>@ydshieh I know from @sgugger's comment, we don't run on CPU, but I ran the tests for reference (Macbook Pro 2021 M1 Max) Based on this I disabled the test for Swin. The slowest tests - `test_saved_model_creation_extended` - are independent of this PR. 
``` slowest durations 242.92s call tests/models/convbert/test_modeling_tf_convbert.py::TFConvBertModelTest::test_saved_model_creation_extended 202.38s call tests/models/bert/test_modeling_tf_bert.py::TFBertModelTest::test_saved_model_creation_extended 177.82s call tests/models/gptj/test_modeling_tf_gptj.py::TFGPTJModelTest::test_saved_model_creation_extended 150.56s call tests/models/bart/test_modeling_tf_bart.py::TFBartModelTest::test_saved_model_creation_extended 82.65s call tests/models/gpt2/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_saved_model_creation_extended 61.12s call tests/models/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_saved_model_creation_extended 26.88s call tests/models/swin/test_modeling_tf_swin.py::TFSwinModelTest::test_saved_model_creation 20.14s call tests/models/clip/test_modeling_tf_clip.py::TFCLIPTextModelTest::test_saved_model_creation_extended 19.51s call tests/models/convbert/test_modeling_tf_convbert.py::TFConvBertModelTest::test_saved_model_creation 19.40s call tests/models/gptj/test_modeling_tf_gptj.py::TFGPTJModelTest::test_saved_model_creation 18.54s call tests/models/deberta_v2/test_modeling_tf_deberta_v2.py::TFDebertaModelTest::test_saved_model_creation 16.77s call tests/models/speech_to_text/test_modeling_tf_speech_to_text.py::TFSpeech2TextModelTest::test_saved_model_creation 16.72s call tests/models/clip/test_modeling_tf_clip.py::TFCLIPVisionModelTest::test_saved_model_creation_extended 14.51s call tests/models/roformer/test_modeling_tf_roformer.py::TFRoFormerModelTest::test_saved_model_creation 14.37s call tests/models/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_saved_model_creation 13.15s call tests/models/wav2vec2/test_modeling_tf_wav2vec2.py::TFWav2Vec2ModelTest::test_saved_model_creation 13.12s call tests/models/hubert/test_modeling_tf_hubert.py::TFHubertModelTest::test_saved_model_creation 12.79s call tests/models/mpnet/test_modeling_tf_mpnet.py::TFMPNetModelTest::test_saved_model_creation 12.47s call tests/models/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_saved_model_creation 12.16s call tests/models/layoutlm/test_modeling_tf_layoutlm.py::TFLayoutLMModelTest::test_saved_model_creation 12.16s call tests/models/electra/test_modeling_tf_electra.py::TFElectraModelTest::test_saved_model_creation 12.12s call tests/models/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_saved_model_creation 11.92s call tests/models/flaubert/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_saved_model_creation 11.54s call tests/models/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_saved_model_creation 11.42s call tests/models/bert/test_modeling_tf_bert.py::TFBertModelTest::test_saved_model_creation 11.36s call tests/models/rembert/test_modeling_tf_rembert.py::TFRemBertModelTest::test_saved_model_creation 11.12s call tests/models/distilbert/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_saved_model_creation 10.85s call tests/models/ctrl/test_modeling_tf_ctrl.py::TFCTRLModelTest::test_saved_model_creation 10.77s call tests/models/roberta/test_modeling_tf_roberta.py::TFRobertaModelTest::test_saved_model_creation 10.51s call tests/models/wav2vec2/test_modeling_tf_wav2vec2.py::TFWav2Vec2RobustModelTest::test_saved_model_creation 10.44s call tests/models/gpt2/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_saved_model_creation 10.30s call tests/models/hubert/test_modeling_tf_hubert.py::TFHubertRobustModelTest::test_saved_model_creation 10.23s call 
tests/models/deit/test_modeling_tf_deit.py::TFDeiTModelTest::test_saved_model_creation 10.20s call tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation 10.14s call tests/models/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_saved_model_creation 10.14s call tests/models/clip/test_modeling_tf_clip.py::TFCLIPTextModelTest::test_saved_model_creation 9.53s call tests/models/openai/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_saved_model_creation 9.11s call tests/models/vit_mae/test_modeling_tf_vit_mae.py::TFViTMAEModelTest::test_saved_model_creation 8.71s call tests/models/vit/test_modeling_tf_vit.py::TFViTModelTest::test_saved_model_creation 8.68s call tests/models/clip/test_modeling_tf_clip.py::TFCLIPVisionModelTest::test_saved_model_creation 8.44s call tests/models/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_saved_model_creation 8.14s call tests/models/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_saved_model_creation 7.54s call tests/models/data2vec/test_modeling_tf_data2vec_vision.py::TFData2VecVisionModelTest::test_saved_model_creation 6.70s call tests/models/convnext/test_modeling_tf_convnext.py::TFConvNextModelTest::test_saved_model_creation 6.21s call tests/models/regnet/test_modeling_tf_regnet.py::TFRegNetModelTest::test_saved_model_creation 4.62s call tests/models/resnet/test_modeling_tf_resnet.py::ResNetModelTest::test_saved_model_creation ```<|||||>I noticed a cool thing -- assuming the models with `@tooslow` also pass the tests (which I'm assuming they do, from [this](https://github.com/huggingface/transformers/pull/18153#issuecomment-1191706099) comment), this PR fixes: 1. https://github.com/huggingface/transformers/issues/17233, as the problematic line was rewritten with TF code in this PR 2. https://github.com/huggingface/transformers/issues/17285, as we know we can create a `SavedModel` (which contains a graph) for all models. @amyeroberts can you add these issues to your `Fixes` list above? :D<|||||>@gante Nice spot :D Yep - I just double checked, and all the models with the @tooslow decorator can be saved with `saved_model=True` - and so their graph can be built. Added the fixes. The only model we can't save at the moment is CLIP, due to the nested dict of outputs. <|||||>@ydshieh @Rocketknight1 Am I OK to merge?
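To make the goal of this PR concrete, here is a minimal sketch of exporting a TF model as a SavedModel and reloading its serving signature. The checkpoint name and the `saved_model/1` sub-directory follow the library's usual layout, but treat both as assumptions rather than guarantees.

```python
# Sketch: exporting a TF model with saved_model=True and reloading the graph.
import tempfile

import tensorflow as tf
from transformers import TFAutoModel

model = TFAutoModel.from_pretrained("bert-base-uncased")  # illustrative checkpoint

with tempfile.TemporaryDirectory() as tmp_dir:
    # This is the call the PR aims to make work for all TF models.
    model.save_pretrained(tmp_dir, saved_model=True)

    # The SavedModel (including the serving signature) is assumed to live under saved_model/1.
    loaded = tf.saved_model.load(f"{tmp_dir}/saved_model/1")
    serving_fn = loaded.signatures["serving_default"]
    print(serving_fn.structured_input_signature)
```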
transformers
18,152
open
dalle mega
# What does this PR do? This PR adds the DalleMega model from [dalle-mini](https://github.com/borisdayma/dalle-mini) for text-2-image generation. The VQGAN model required for converting the tokens to image is in this PR #18150 - [ ] override the `sample` method for classifier-free guidance. - [ ] port and upload weights on the hub - [ ] add tests - [ ] add docs - [ ] boom!
07-15-2022 18:11:57
07-15-2022 18:11:57
@patil-suraj - I can take over the PR if you want :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,151
closed
Confusing documentation for argument class_labels in MaskFormerForInstanceSegmentation.forward()
I'm trying to fine-tune MaskFormer for an instance segmentation problem. The documentation for MaskFormerForInstanceSegmentation.forward() lists the following optional parameters: * mask_labels (List[torch.Tensor], optional) — List of mask labels of shape (num_labels, height, width) to be fed to a model * class_labels (List[torch.LongTensor], optional) — list of target class labels of shape (num_labels, height, width) to be fed to a model. They identify the labels of mask_labels, e.g. the label of mask_labels[i][j] if class_labels[i][j]. The wording is confusing, especially at the end -- "the label of mask_labels[i][j] if class_labels[i][j]" is missing a verb. Additionally, other MaskFormer classes in the API accept `class_labels` of shape (labels) - one class label for each mask - e.g. the forward() method of MaskFormerLoss. It's not clear why this is different in this case.
07-15-2022 15:14:33
07-15-2022 15:14:33
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
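For illustration, a sketch of a training forward pass that follows the one-label-per-mask convention the issue describes for `MaskFormerLoss`, i.e. `class_labels[i]` of shape `(num_labels,)` rather than `(num_labels, height, width)`. The checkpoint, shapes, and class ids are assumptions, and whether this runs exactly as written depends on the installed transformers version.

```python
# Sketch (not the official documentation): mask_labels as per-image binary masks,
# class_labels as one class id per mask.
import torch
from transformers import MaskFormerForInstanceSegmentation

model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-coco")

batch_size, height, width = 2, 384, 384
pixel_values = torch.randn(batch_size, 3, height, width)

# one image with 4 annotated masks, one with 2 (all-zero masks just for shape illustration)
mask_labels = [torch.zeros(4, height, width), torch.zeros(2, height, width)]
class_labels = [torch.tensor([17, 17, 0, 5]), torch.tensor([3, 3])]  # one class id per mask

outputs = model(pixel_values=pixel_values, mask_labels=mask_labels, class_labels=class_labels)
print(outputs.loss)
```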
transformers
18,150
open
Add VQGAN
# What does this PR do? Adds the VQGAN model, the first step for adding the DalleMega model to transformers. - This model is different from most of the models available in `Transformers`: it's a U-Net-like encoder-decoder architecture with a vector quantizer bottleneck. - This is only the generator part of the GAN, intended only for inference. - It does not have common transformer-style embeddings, blocks and other attributes. - Currently it does not support `output_hidden_states` and `output_attentions`, since this is a complex architecture and it's not clear which `hidden_states` to return. Would love to hear your thoughts on whether we should support this.
07-15-2022 14:42:47
07-15-2022 14:42:47
@patil-suraj note that you can use the current main version of diffusers as a reference for how the code should look, and you can use the conversion script to convert the official weights<|||||>Taking over this PR
transformers
18,149
closed
Inference for TFMarianMTModel (en to Romance language translation) is slow and inaccurate
### System Info **System** macOS Monterey 12.2.1 ``` transformers==4.20.1 tensorflow==2.9.1 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import TFMarianMTModel, MarianTokenizer model_name = "Helsinki-NLP/opus-mt-en-ROMANCE" tokenizer = MarianTokenizer.from_pretrained(model_name) model = TFMarianMTModel.from_pretrained(model_name) text_in = ['>>fr<< hello'] batch = tokenizer(text_in, return_tensors='tf', padding=True) translated = model.generate(**batch) ``` Output: ``` - Qu'est-ce qu'il y a, là-bas, là-bas, là--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ``` ### Expected behavior I would expect similar performance to the PyTorch model. Inference requires about 120s on my machine and outputs an incorrect translation. In contrast, the PyTorch model (replacing `TFMarianMTModel` with `MarianMTModel` and changing `return_tensors` to `'pt'` in the code snippet) returns the correct translation ("Bonjour") and inference requires about 6s on my machine.
07-15-2022 12:49:15
07-15-2022 12:49:15
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry for missing this! Could you take a look at this, @gante, @Rocketknight1, @ydshieh?<|||||>Let me take a look on the quality issue. And possibly @gante or @Rocketknight1 for the speed issue, let's discuss it :-)<|||||>Actually, the performance issue comes from the quality issue. The TF version didn't stop the generation until 512 tokens. ```bash [[65000 25 2092 7 179 15 276 185 7 227 32 9 2 2538 15 5716 2 2538 15 5716 2 2538 15 15 15 15 15 15 15 15 15 15 15 15 15 15 ............ 0]], shape=(1, 512), dtype=int32) ```<|||||>I believe the current PT / TF checkpoints for "Helsinki-NLP/opus-mt-en-ROMANCE" doesn't contain the same weight. As if I change from ``` model = TFMarianMTModel.from_pretrained(model_name) ``` to ``` model = TFMarianMTModel.from_pretrained(model_name, from_pt=True) ``` I could get ``` [[65000 21959 3 0 65000 65000 65 ....] (still `512` tokens) ``` while the PyTorch version gives ``` tensor([[65000, 21959, 3, 0]]) ``` So: - we probably need to check which checkpoint is the correct one, and uploaded the new checkpoint - investigate why `TFMarianMTModel` doesn't stop earlier.<|||||>After a double check (see code below, where I use `from_pt=True`), I believe the current PT checkpoint is the correct one, but not the TF checkpoint. @gante Would you like to have a look too, upload a new TF checkpoint, and see why `TFMarianMTModel` doesn't stop the generation earlier as `MarianMTModel` does? ``` from transformers import MarianMTModel, MarianTokenizer, TFMarianMTModel model_name = "Helsinki-NLP/opus-mt-en-ROMANCE" tokenizer = MarianTokenizer.from_pretrained(model_name) # text_in = ['>>fr<< hello'] text_in = ['>>fr<< Hello, I am a student.'] model = MarianMTModel.from_pretrained(model_name) batch = tokenizer(text_in, return_tensors='pt', padding=True) translated = model.generate(**batch) o = tokenizer.batch_decode(translated, skip_special_tokens=True) print(translated) print(o) model = TFMarianMTModel.from_pretrained(model_name, from_pt=True) batch = tokenizer(text_in, return_tensors='tf', padding=True) translated = model.generate(**batch) o = tokenizer.batch_decode(translated, skip_special_tokens=True) print(translated) print(o) text_in = ['>>it<< I love dogs and cats.'] model = MarianMTModel.from_pretrained(model_name) batch = tokenizer(text_in, return_tensors='pt', padding=True) translated = model.generate(**batch) o = tokenizer.batch_decode(translated, skip_special_tokens=True) print(translated) print(o) model = TFMarianMTModel.from_pretrained(model_name, from_pt=True) batch = tokenizer(text_in, return_tensors='tf', padding=True) translated = model.generate(**batch) o = tokenizer.batch_decode(translated, skip_special_tokens=True) print(translated) print(o) ```<|||||>Hi there @ydshieh @danielenricocahall 👋 None of the Marian models can be successfully converted to TF -- they all fail when validating the hidden layers and outputs of the models. This is a shame since there are a ton of Marian models for translation :( This means there is something wrong with either the model architecture or with weight cross-loading. 
I haven't looked into it, other than noticing the issue when attempting to convert the weights from `Helsinki-NLP`<|||||>Thank you for looking into it @ydshieh and @gante !!! This is great information.<|||||>@danielenricocahall a fix was merged and new weights were pushed -- if you run from `main`, the translations should be much better now 🙌 <|||||>cc @gante We still have the generation issue ```python from transformers import MarianMTModel, MarianTokenizer, TFMarianMTModel model_name = "Helsinki-NLP/opus-mt-en-ROMANCE" tokenizer = MarianTokenizer.from_pretrained(model_name) text_in = ['>>fr<< hello'] # PT generates a few tokens then stops early -> very fast model = MarianMTModel.from_pretrained(model_name) batch = tokenizer(text_in, return_tensors='pt', padding=True) translated = model.generate(**batch) o = tokenizer.batch_decode(translated, skip_special_tokens=True) print(translated) print(o) # TF generates 512 tokens, although the decoded version gives the same result as PT -> very slow model = TFMarianMTModel.from_pretrained(model_name, from_pt=False) batch = tokenizer(text_in, return_tensors='tf', padding=True) translated = model.generate(**batch) o = tokenizer.batch_decode(translated, skip_special_tokens=True) print(translated) print(o) ```<|||||>@ydshieh Hi, I am experiencing the same issue. Expected the TF version would be faster than the PT version.
transformers
18,148
closed
core dumped
### System Info ```shell centos7 transformers==4.17.0 datasets==1.17.0 pytorch==1.6.0 GPUs==4*GTX 3090 ``` ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I modified the example code for text classification (run_glue.py) and trained it on my own dataset. When I run this code on the CPU, it runs fine (setting no_cuda to true). But when I run it on GPUs, it quits after finishing loading the data and generates the following output: run_burebert_ft.sh: line 16: 37055 Aborted (core dumped) CUDA_LAUNCH_BLOCKING=1 python run_br_pred.py --model_name_or_path ../plm/BureBERT --train_file ../dataset/priority_pred_data/priority_train.csv --validation_file ../dataset/priority_pred_data/priority_valid.csv --test_file ../dataset/priority_pred_data/priority_test.csv --cache_dir ./cache_dir --do_train --do_eval --do_predict --no_cuda false --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 5e-6 --num_train_epochs 10 --save_steps 10000 --output_dir ./results_priority I am very confused because it just gives limited information. Actually, I also ran the same code on another server with one Tesla V100 GPU, and it ran fine. So I wonder whether it requires extra settings when training on multiple GPUs. ### Expected behavior ```shell When I run the code, it should start to fine-tune BureBERT on my dataset for ten epochs. ``` ### Checklist - [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [X] I checked if a related official extension example runs on my machine.
07-15-2022 09:28:09
07-15-2022 09:28:09
I found that my problem was caused by the CUDA version. It seems that the GTX 3090 is not compatible with CUDA 10, and torch installed by pip comes with the default CUDA version (10). So I re-installed PyTorch with CUDA 11 and solved the problem.
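For anyone hitting a similar crash, a small sanity check that surfaces this mismatch: the 3090 is an Ampere card (compute capability 8.6), which requires a CUDA 11.x build of PyTorch.

```python
# Quick sanity check for the CUDA/GPU mismatch described above.
import torch

print(torch.__version__)       # e.g. 1.11.0+cu113 -> the suffix is the CUDA build
print(torch.version.cuda)      # CUDA version PyTorch was compiled against
print(torch.cuda.is_available())
if torch.cuda.is_available():
    # Ampere cards such as the 3090 report compute capability (8, 6),
    # which CUDA 10.x builds of PyTorch cannot target.
    print(torch.cuda.get_device_name(0), torch.cuda.get_device_capability(0))
```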
transformers
18,147
closed
[HPO] update to sigopt new experiment api
* follow https://docs.sigopt.com/experiments Signed-off-by: Wang, Yi A <[email protected]> # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes https://github.com/huggingface/transformers/issues/18145 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. trainer: @sgugger
07-15-2022 09:13:24
07-15-2022 09:13:24
@kding1 @yao-matrix @sgugger please have a review<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @sywangyi, thanks for your contribution 🤗. The current changes introduce a significant breaking change and we tend to limit them as much as possible. Still, we thought about this and we would like to propose 3 options in order to keep backward compatibility with previous sigopt versions: 1. Introduce an `if sigopt_version < x.y.z` -> previous behaviour, `else` -> new behaviour in this PR 2. Split the current implementation and the one introduced in this PR into two distinct functions and dispatch to the right one based on the sigopt version 3. Keep only the behaviour introduced in this PR but guard it with a version check, raising an error if the version is too old to inform the user to upgrade their dependency. Please pick one of the suggested options above (_or comment with potential other alternatives 🤓_) and we will be on track for merging 🤗. Thanks, Morgan<|||||>@mfuntowicz hi, thanks for the suggestions. I chose option 1 and the patch is uploaded<|||||>Thanks a lot @sywangyi! It looks good to me, the failure in the CI seems related but I will let @sgugger have the final word 😃
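A skeletal illustration of the version-gated dispatch chosen above (option 1). The `8.0.0` cutoff for the new SigOpt experience and the helper function names are assumptions for illustration, not the code actually merged in the PR.

```python
# Skeleton of a version-gated dispatch between the old and new SigOpt APIs.
# The 8.0.0 boundary and the two helper names are assumptions, not the real code.
import importlib.metadata

from packaging import version


def run_hp_search_sigopt(trainer, n_trials, direction, **kwargs):
    sigopt_version = version.parse(importlib.metadata.version("sigopt"))
    if sigopt_version < version.parse("8.0.0"):
        # previous behaviour, kept for backward compatibility
        return _run_hp_search_sigopt_legacy(trainer, n_trials, direction, **kwargs)
    # behaviour introduced in this PR (new experiment API)
    return _run_hp_search_sigopt_new(trainer, n_trials, direction, **kwargs)


def _run_hp_search_sigopt_legacy(trainer, n_trials, direction, **kwargs):
    ...  # old Connection-based flow


def _run_hp_search_sigopt_new(trainer, n_trials, direction, **kwargs):
    ...  # new experiment API flow
```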
transformers
18,146
closed
MLflow fails to log to a tracking server
### System Info Python 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:56:21) print(transformers.__version__) 4.20.1 print(mlflow.__version__) 1.27.0 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Install mlflow 2. Configure a vanilla training job to use a tracking server (os.environ["MLFLOW_TRACKING_URI"]="...") 3. Run the job You should see an error similar to: ``` Traceback (most recent call last): File "/home/ubuntu/train.py", line 45, in <module> trainer.train() File "/home/ubuntu/.local/lib/python3.9/site-packages/transformers/trainer.py", line 1409, in train return inner_training_loop( File "/home/ubuntu/.local/lib/python3.9/site-packages/transformers/trainer.py", line 1580, in _inner_training_loop self.control = self.callback_handler.on_train_begin(args, self.state, self.control) File "/home/ubuntu/.local/lib/python3.9/site-packages/transformers/trainer_callback.py", line 347, in on_train_begin return self.call_event("on_train_begin", args, state, control) File "/home/ubuntu/.local/lib/python3.9/site-packages/transformers/trainer_callback.py", line 388, in call_event result = getattr(callback, event)( File "/home/ubuntu/.local/lib/python3.9/site-packages/transformers/integrations.py", line 856, in on_train_begin self.setup(args, state, model) File "/home/ubuntu/.local/lib/python3.9/site-packages/transformers/integrations.py", line 847, in setup self._ml_flow.log_params(dict(combined_dict_items[i : i + self._MAX_PARAMS_TAGS_PER_BATCH])) File "/home/ubuntu/.local/lib/python3.9/site-packages/mlflow/tracking/fluent.py", line 675, in log_params MlflowClient().log_batch(run_id=run_id, metrics=[], params=params_arr, tags=[]) File "/home/ubuntu/.local/lib/python3.9/site-packages/mlflow/tracking/client.py", line 918, in log_batch self._tracking_client.log_batch(run_id, metrics, params, tags) File "/home/ubuntu/.local/lib/python3.9/site-packages/mlflow/tracking/_tracking_service/client.py", line 315, in log_batch self.store.log_batch( File "/home/ubuntu/.local/lib/python3.9/site-packages/mlflow/store/tracking/rest_store.py", line 309, in log_batch self._call_endpoint(LogBatch, req_body) File "/home/ubuntu/.local/lib/python3.9/site-packages/mlflow/store/tracking/rest_store.py", line 56, in _call_endpoint return call_endpoint(self.get_host_creds(), endpoint, method, json_body, response_proto) File "/home/ubuntu/.local/lib/python3.9/site-packages/mlflow/utils/rest_utils.py", line 256, in call_endpoint response = verify_rest_response(response, endpoint) File "/home/ubuntu/.local/lib/python3.9/site-packages/mlflow/utils/rest_utils.py", line 185, in verify_rest_response raise RestException(json.loads(response.text)) mlflow.exceptions.RestException: INVALID_PARAMETER_VALUE: Invalid value [{'key': 'logging_nan_inf_filter', 'value': 'True'}, {'key': 'save_strategy', 'value': 'epoch'}, {'key': 'save_steps', 'value': '500'}, {'key': 'save_total_limit', 'value': 'None'}, {'key': 'save_on_each_node', 'value': 'False'}, {'key': 'no_cuda', 'value': 'False'}, {'key': 'seed', 'value': '42'}, {'key': 'data_seed', 'value': 'None'}, {'key': 'jit_mode_eval', 'value': 'False'}, {'key': 'use_ipex', 'value': 'False'}, {'key': 'bf16', 'value': 'False'}, {'key': 'fp16', 'value': 'False'}, {'key': 'fp16_opt_level', 'value': 'O1'}, {'key': 'half_precision_backend', 'value': 
'auto'}, {'key': 'bf16_full_eval', 'value': 'False'}, {'key': 'fp16_full_eval', 'value': 'False'}, {'key': 'tf32', 'value': 'None'}, {'key': 'local_rank', 'value': '-1'}, {'key': 'xpu_backend', 'value': 'None'}, {'key': 'tpu_num_cores', 'value': 'None'}, {'key': 'tpu_metrics_debug', 'value': 'False'}, {'key': 'debug', 'value': '[]'}, {'key': 'dataloader_drop_last', 'value': 'False'}, {'key': 'eval_steps', 'value': 'None'}, {'key': 'dataloader_num_workers', 'value': '0'}, {'key': 'past_index', 'value': '-1'}, {'key': 'run_name', 'value': './output'}, {'key': 'disable_tqdm', 'value': 'False'}, {'key': 'remove_unused_columns', 'value': 'True'}, {'key': 'label_names', 'value': 'None'}, {'key': 'load_best_model_at_end', 'value': 'False'}, {'key': 'metric_for_best_model', 'value': 'None'}, {'key': 'greater_is_better', 'value': 'None'}, {'key': 'ignore_data_skip', 'value': 'False'}, {'key': 'sharded_ddp', 'value': '[]'}, {'key': 'fsdp', 'value': '[]'}, {'key': 'fsdp_min_num_params', 'value': '0'}, {'key': 'deepspeed', 'value': 'None'}, {'key': 'label_smoothing_factor', 'value': '0.0'}, {'key': 'optim', 'value': 'adamw_hf'}, {'key': 'adafactor', 'value': 'False'}, {'key': 'group_by_length', 'value': 'False'}, {'key': 'length_column_name', 'value': 'length'}, {'key': 'report_to', 'value': "['mlflow']"}, {'key': 'ddp_find_unused_parameters', 'value': 'None'}, {'key': 'ddp_bucket_cap_mb', 'value': 'None'}, {'key': 'dataloader_pin_memory', 'value': 'True'}, {'key': 'skip_memory_metrics', 'value': 'True'}, {'key': 'use_legacy_prediction_loop', 'value': 'False'}, {'key': 'push_to_hub', 'value': 'False'}, {'key': 'resume_from_checkpoint', 'value': 'None'}, {'key': 'hub_model_id', 'value': 'None'}, {'key': 'hub_strategy', 'value': 'every_save'}, {'key': 'hub_token', 'value': '<HUB_TOKEN>'}, {'key': 'hub_private_repo', 'value': 'False'}, {'key': 'gradient_checkpointing', 'value': 'False'}, {'key': 'include_inputs_for_metrics', 'value': 'False'}, {'key': 'fp16_backend', 'value': 'auto'}, {'key': 'push_to_hub_model_id', 'value': 'None'}, {'key': 'push_to_hub_organization', 'value': 'None'}, {'key': 'push_to_hub_token', 'value': '<PUSH_TO_HUB_TOKEN>'}, {'key': '_n_gpu', 'value': '1'}, {'key': 'mp_parameters', 'value': ''}, {'key': 'auto_find_batch_size', 'value': 'False'}, {'key': 'full_determinism', 'value': 'False'}, {'key': 'torchdynamo', 'value': 'None'}, {'key': 'ray_scope', 'value': 'last'}] for parameter 'params' supplied. Hint: Value was of type 'list'. See the API docs for more information about request parameters. 
``` Training script: ``` import os import numpy as np from datasets import load_dataset, load_metric from transformers import AutoTokenizer, Trainer, TrainingArguments, AutoModelForSequenceClassification train_dataset, test_dataset = load_dataset("imdb", split=['train', 'test']) tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased") def tokenize_function(examples): return tokenizer(examples["text"], padding="max_length", truncation=True) train_dataset = train_dataset.map(tokenize_function, batched=True) test_dataset = test_dataset.map(tokenize_function, batched=True) model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-cased", num_labels=2) metric = load_metric("accuracy") def compute_metrics(eval_pred): logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels) os.environ["HF_MLFLOW_LOG_ARTIFACTS"]="1" os.environ["MLFLOW_EXPERIMENT_NAME"]="trainer-mlflow-demo" os.environ["MLFLOW_FLATTEN_PARAMS"]="1" #os.environ["MLFLOW_TRACKING_URI"]=<MY_SERVER IP> training_args = TrainingArguments( num_train_epochs=1, output_dir="./output", logging_steps=500, save_strategy="epoch", ) trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=test_dataset, compute_metrics=compute_metrics ) trainer.train() ``` ### Expected behavior I would expect logging to work :)
07-15-2022 09:12:24
07-15-2022 09:12:24
cc @sgugger <|||||>I'm not the one who wrote or supports the ML Flow callback :-)<|||||>@noise-field wrote the integration two years ago, do you have an idea of why it doesn't seem to work anymore @noise-field?<|||||>@juliensimon, I had a similar error message (I think). I found that the issue was related to parameters with empty string values (https://github.com/mlflow/mlflow/issues/6253), and it looks like there is a patch in the upcoming MLflow version 1.28 (not yet released). In my case, I had to set `mp_parameters` to `None` instead of leaving it as an empty string (the default value), and I see your error message has `{'key': 'mp_parameters', 'value': ''}`. While a later MLflow version will fix this issue, I think setting `mp_parameters` to `None` instead of an empty string is cleaner. However, I'm not sure about the extent of this change. <|||||>OK, I'll give it a try and I'll let you know.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
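Until a fixed MLflow release is available, one possible workaround (purely illustrative, not part of the Trainer's MLflowCallback) is to drop empty-string parameters before they reach the tracking server:

```python
# Illustrative workaround: drop empty-string values before logging params,
# since some MLflow tracking servers reject them (see the linked mlflow issue).
import mlflow

raw_params = {"optim": "adamw_hf", "mp_parameters": "", "seed": 42}  # toy example

clean_params = {k: v for k, v in raw_params.items() if v != ""}

with mlflow.start_run():
    mlflow.log_params(clean_params)
```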
transformers
18,145
closed
the Sigopt api is outdated in transformers trainer.py, the old api could not work
### System Info - `transformers` version: 4.21.0.dev0 - Platform: Linux-5.8.0-43-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): 2.9.1 (False) - Flax version (CPU?/GPU?/TPU?): 0.5.0 (cpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Enable sigopt HPO in an example and run it. 2. The log shows a warning like "UserWarning: You're currently using the old SigOpt Experience. Try out the new and improved SigOpt experience by getting started with the docs today. You have until July 2022 to migrate over without experiencing breaking changes." ### Expected behavior HPO with the sigopt backend should work correctly without the warning
07-15-2022 08:58:46
07-15-2022 08:58:46
transformers
18,144
closed
Fix typo in pipelines/base.py
dictionnary -> dictionary # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-15-2022 04:58:11
07-15-2022 04:58:11
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,143
closed
DeBERTa for MaskedLM appears to be producing random results
### System Info @LysandreJik when I run the sample code in the docs for `DeBERTaV2ForMaskedLM` it appears to be returning random predictions ```python from transformers import DebertaV2Tokenizer, DebertaV2ForMaskedLM import torch tokenizer = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v3-base") model = DebertaV2ForMaskedLM.from_pretrained("microsoft/deberta-v3-base") inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # retrieve index of [MASK] mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] predicted_token_id = logits[0, mask_token_index].argmax(axis=-1) tokenizer.decode(predicted_token_id) ``` If I run this code block multiple times in a row, I get a different response every time, none of which are the correct response. If I run this code block with `BERT` or `RoBERTa` I get the correct answer every time. I also tried different models of DeBERTa such as `"microsoft/deberta-v2-xlarge"` and I get the same thing, random responses transformers version = 4.15 python version = 3.8.12 os = macOS 12.4 running on cpu ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce: run the code block in the description over and over again. The result will be different everytime. ```python from transformers import DebertaV2Tokenizer, DebertaV2ForMaskedLM import torch tokenizer = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v3-base") model = DebertaV2ForMaskedLM.from_pretrained("microsoft/deberta-v3-base") inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # retrieve index of [MASK] mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] predicted_token_id = logits[0, mask_token_index].argmax(axis=-1) tokenizer.decode(predicted_token_id) ``` example output ```python 'Independence' ``` ### Expected behavior I expect the result to at least be the same every time, but I also expect it to be correct.
07-15-2022 04:48:22
07-15-2022 04:48:22
Hi @alexdauenhauer , I think there's also at least one issue in the upstream repo: https://github.com/microsoft/DeBERTa/issues/74 However - this is not a rant - but the DeBERTa guys are not really responsive and e.g. pretraining code of v3 is still not available after months (but that's another story).<|||||>@stefan-it ah... ok thanks for the info, sounds like I'll just have to use a different model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,142
closed
Question for implementation of resize in image-classification examples.
### System Info - `transformers` version: 4.21.0.dev0 - Platform: Linux-4.15.0-175-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu113 (True) - Tensorflow version (GPU?): 2.9.1 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? Examples: maintained examples (not research project or legacy): @sgugger, @patil-suraj ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction * minimal code for reproduction. ```python ## partial import from image classification scripts from typing import Optional from dataclasses import dataclass, field from torchvision.transforms import ( CenterCrop, Compose, Resize, ) from transformers import ( MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING, AutoConfig, AutoFeatureExtractor, ) MODEL_CONFIG_CLASSES = list(MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING.keys()) MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES) @dataclass class ModelArguments: """ Arguments pertaining to which model/config/tokenizer we are going to fine-tune from. """ model_name_or_path: str = field( default="google/vit-base-patch16-224-in21k", metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}, ) model_type: Optional[str] = field( default=None, metadata={"help": "If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES)}, ) config_name: Optional[str] = field( default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"} ) cache_dir: Optional[str] = field( default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"} ) model_revision: str = field( default="main", metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."}, ) feature_extractor_name: str = field(default=None, metadata={"help": "Name or path of preprocessor config."}) use_auth_token: bool = field( default=False, metadata={ "help": ( "Will use the token generated when running `transformers-cli login` (necessary to use this script " "with private models)." ) }, ) ignore_mismatched_sizes: bool = field( default=False, metadata={"help": "Will enable to load a pretrained model whose head dimensions are different."}, ) # use defualt model_args model_args = ModelArguments() feature_extractor = AutoFeatureExtractor.from_pretrained( model_args.feature_extractor_name or model_args.model_name_or_path, cache_dir=model_args.cache_dir, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) # comment the ToTensor and normalize to check the PIL image. 
_val_transforms = Compose( [ Resize(feature_extractor.size), CenterCrop(feature_extractor.size) # ToTensor(), # normalize, ] ) ``` * get sample image ```python from datasets import load_dataset ds = load_dataset('imagenet-1k',use_auth_token=True, streaming=True) im = list(ds['train'].take(1))[0]['image'] ``` * original transform ```python original_transform = _val_transforms(im) original_transform ``` * new transform ```python _val_transforms_new = Compose( [ Resize((feature_extractor.size, feature_extractor.size)), CenterCrop(feature_extractor.size) # ToTensor(), # normalize, ] ) new_transform = _val_transforms_new(im) new_transform ``` ### Expected behavior I'm careful to say this because I'm a newbie in the field of vision, but the implementation for resize transformation in the `_val_transforms` function seems to be wrong in image classification example script.([here](https://github.com/huggingface/transformers/blob/8581a798c0a48fca07b29ce2ca2ef55adcae8c7e/examples/pytorch/image-classification/run_image_classification_no_trainer.py#L320) and [here](https://github.com/huggingface/transformers/blob/8581a798c0a48fca07b29ce2ca2ef55adcae8c7e/examples/pytorch/image-classification/run_image_classification.py#L301)) This transform may cut the object in validation step. ```python ... _val_transforms = Compose( [ Resize(feature_extractor.size), CenterCrop(feature_extractor.size), ToTensor(), normalize, ] ) ... ``` In order to maintain the shape of the object and only change the size of the image, I think the following code is right for `_val_transforms` function. ```python ... _val_transforms = Compose( [ Resize((feature_extractor.size, feature_extractor.size)), CenterCrop(feature_extractor.size), ToTensor(), normalize, ] ) ... ``` If I've misunderstood, please feel free to tell me about it.
07-15-2022 03:59:31
07-15-2022 03:59:31
cc @NielsRogge and @nateraw <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>cc @amyeroberts as well, as you've been working with similar objects lately :)<|||||>Hi @DataLama, thanks for raising the issue. In this script, the reason for the validation transformations being defined like this and in this order - resize then centre crop - is that we end up with an image of size `(feature_extractor.size, feature_extractor.size)`, but what's shown in the image has the same aspect ratio as the original i.e. the image isn't "squashed". In your suggestion: ``` ... _val_transforms = Compose( [ Resize((feature_extractor.size, feature_extractor.size)), CenterCrop(feature_extractor.size), ToTensor(), normalize, ] ) ... ``` the image would be resized to `(feature_extractor.size, feature_extractor.size)` first, changing the aspect ratio, and `CenterCrop(feature_extractor.size)` would then not have an effect. <|||||>Hi @amyeroberts, thanks for explanation. Now I understand what you intended. I'm closing this issue. the issue has been resolved.
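A minimal sketch of the aspect-ratio point discussed above, assuming torchvision is installed; a synthetic PIL image stands in for the imagenet-1k sample and 224 stands in for `feature_extractor.size`:
```python
from PIL import Image
from torchvision.transforms import CenterCrop, Compose, Resize

size = 224  # stand-in for feature_extractor.size
im = Image.new("RGB", (640, 480))  # synthetic non-square image

# Resize(int) scales the shorter side to `size` and keeps the aspect ratio,
# then CenterCrop cuts a `size` x `size` square out of the middle.
keep_ratio = Compose([Resize(size), CenterCrop(size)])

# Resize((size, size)) squashes the image to a square first,
# so the following CenterCrop no longer has anything to do.
squash = Compose([Resize((size, size)), CenterCrop(size)])

print(Resize(size)(im).size)          # (298, 224): aspect ratio preserved
print(Resize((size, size))(im).size)  # (224, 224): aspect ratio changed
print(keep_ratio(im).size, squash(im).size)  # both (224, 224) after the crop
```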
transformers
18,141
closed
[Bloom] Remove unused position_ids, improve modeling code (lm head, alibi, attention multiplication)
Notable changes: - Remove the `attention_mask` sum trick, and instead use `torch.masked_fill` - Simplify the causal attention creation - Move back to `baddbmm` instead of `bmm`. It was unclear why the change was necessary. - ~Remove~ Deprecate `position_ids` as they don't make sense in BLOOM. - Introduce an fp32 cast for lm_head (and consequently the word embeddings, in order to respect the weight sharing). The intuition is as follows: one of the things we're wondering about is something we'd like to call "max-collapse". Given that 16-bit can represent at most 65536 different values, with a vocabulary of 255k+ some values are going to collapse, i.e. multiple values are going to be equal. If that happens to the max value, greedy decoding can change between fp32 and fp16/bf16. - Move the test back to testing generation in 16-bit precision
07-14-2022 23:41:36
07-14-2022 23:41:36
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18141). All of your documentation changes will be reflected on that endpoint.<|||||>First of all, thanks for the quick review! Concerning the change in default behaviour, I'd advocate that this fixes a bug where we see poor generation specifically due to this issue. The argument is that at some level 255k+ vocab is just too big compared to the precision and we observe some sort of collapse when the top 2 values (2 being arbitrary) that are distinct in fp32 in fact collapse where you run them in fp16 or bf16, and so argmax takes the first value given that matches the max given the [documentation](https://pytorch.org/docs/stable/generated/torch.argmax.html). This issue with collapse actually breaks greedy generation really fast (we've seen cases where the first token generated don't match, and then it just gets a lot worse). Also maybe we can rename `force_word_embeddings_in_fp32` to `force_lm_head_in_fp32`? The realy reason why I cast the word embeddings in fp32 is because `word_embeddings` and `lm_head` are tied, and I need `lm_head` to be fp32. **Edit**: Actually after thinking about it, this might be a more generic issue where whenever you have too high a lm_head size you need to upcast before running the final linear layer. Typically models like mt5 might struggle due to this?<|||||>This PR tries to group way too many things together which is very bad practice as when we realize after merging it that everything is broken, we won't find the cause easily. Please break it down in at lest three parts: - clean up code without any change - removing/deprecating position Ids - the float32 upcasting (which is where all the heated discussion is, so **really** should be its own PR) I am veto-ing any merge of everything altogether as we've already had quite enough of "Oh the last BLOOM PR broke Xxx." 🙃 <|||||>Here are some comments so far: - >Move back to baddbmm instead of bmm. It was unclear why the change was necessary. - Here: https://github.com/huggingface/transformers/pull/17866#discussion_r921825354 - But if this PR solves the issue for FP16, OK for me to use `baddbmm`. - (**just wondering what are the difference, and if there is any reason you prefer to use `baddbmm`?**) - There are 4 `force_lm_head_in_fp32` in the test file. Other than the one in `test_force_lm_head_in_fp32_is_close_to_fp16`, I don't know why we set it to `False`. - Is it to keep the same behavior as before (the whole model in FP16)? - But `prepare_config_and_inputs` has default `force_lm_head_in_fp32=True`, so most tests now use `True`. It is a bit confusing to me we keep them `False` in a few places. - I agree with @sgugger that the default value for `force_lm_head_in_fp32` should be `False`. - Although `True` here is good for generation, this is kind special (casting parts of model weights to different dtype) - Also it's good to keep the previous behavior by default -> do not introduce surprise things to users <|||||>Also, I really appreciate your finding on "max-collapse" (especially being able to demonstrate it!), and glad that it improves the generation here. But I personally would not expect FP16 generations will **always** match FP32 generations (even with `force_lm_head_in_fp32=True`), and we don't need to have tests that compare results across FP16/FP32. 
(I don't remember if we have a common test doing so though).<|||||>> Here: https://github.com/huggingface/transformers/pull/17866#discussion_r921825354 But if this PR solves the issue for FP16, OK for me to use baddbmm. (just wondering what are the difference, and if there is any reason you prefer to use baddbmm?) I think @younesbelkada and @NouamaneTazi changed the original behaviour, it was unclear what it actually fixed. The reason why I want to use `baddbmm` is because the training codebase used `baddbmm` and so there's no reason to use `bmm`. > There are 4 force_lm_head_in_fp32 in the test file. Other than the one in test_force_lm_head_in_fp32_is_close_to_fp16, I don't know why we set it to False. Is it to keep the same behavior as before (the whole model in FP16)? But prepare_config_and_inputs has default force_lm_head_in_fp32=True, so most tests now use True. It is a bit confusing to me we keep them False in a few places. Yeah so I initially thought that upcasting would have much better inference (at least in greedy style). turns out that's not true at least for 176b (it was true on the small models in test), so as @sgugger and @patrickvonplaten I'll try to figure out more if that feature is actually necessary at all.<|||||>Woops forgot to answer some question: I agree that default should be `False` now :D > But I personally would not expect FP16 generations will always match FP32 generations (even with force_lm_head_in_fp32=True), and we don't need to have tests that compare results across FP16/FP32. (I don't remember if we have a common test doing so though). Well technically given checkpoints are in float16 of bfloat16, there should be little reason that generation don't match. I mean it's the promise of pretraining on those half precision: "use twice less compute/time to get more or less the same model". I would not be surprised that it doesn't match perfectly, but at the same time, now that they do, it's a great signal that the model is robust to numerical inacurracies. Consequently, I think the test matching fp16 (with fp32 lm_head) output with full fp32 output makes sense.<|||||>As https://github.com/huggingface/transformers/pull/18344#event-7125942979 has been merged, can you merge main into this branch?<|||||>Actually going to close this PR, any reason why you want this branch to still be alive? What should be missing if the `fp32` upcasting that I've done in another branch.<|||||>Good to be closed for me
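A small sketch of the "max-collapse" effect discussed in this thread, with made-up logit values rather than anything taken from a BLOOM checkpoint: two logits that are distinct in fp32 become equal after casting to fp16, so the greedy argmax can change.
```python
import torch

# fp16 keeps far fewer mantissa bits than fp32, so logits this close become equal after the cast.
logits_fp32 = torch.tensor([10.0001, 10.0002, 1.0])
logits_fp16 = logits_fp32.to(torch.float16)

print(logits_fp32.argmax().item())       # 1 -> the true top token in fp32
print(logits_fp16[0] == logits_fp16[1])  # tensor(True): the two logits collapsed to the same value
print(logits_fp16.argmax().item())       # 0 -> the tie resolves to the first index here, changing greedy decoding
```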
transformers
18,140
open
[TRACKER] Add alibi tests on BLOOM
### Feature request Add ALiBi tests on BLOOM! We should add several tests, simply to check that alibi has been created correctly: - test padding - test expected output @Narsil ### Motivation Build stronger CI tests ### Your contribution Design and build the tests mentioned above
07-14-2022 23:18:57
07-14-2022 23:18:57
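A rough sketch of what an expected-output check for ALiBi could look like; the helper below is a stand-alone reimplementation of the slope formula from the ALiBi paper, not the `build_alibi_tensor` function in `modeling_bloom.py`, and the head/sequence sizes are arbitrary.
```python
import math
import torch

def alibi_slopes(num_heads: int) -> torch.Tensor:
    # Geometric sequence starting at 2**(-8/base), as in the ALiBi reference code.
    base = 2 ** math.floor(math.log2(num_heads))
    start = 2 ** (-8.0 / base)
    slopes = [start ** (i + 1) for i in range(base)]
    if base != num_heads:
        # Interleave extra slopes for non-power-of-two head counts.
        extra_start = 2 ** (-4.0 / base)
        slopes += [extra_start ** (2 * i + 1) for i in range(num_heads - base)]
    return torch.tensor(slopes)

def build_alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # bias[h, j] = slope_h * j, broadcast over the query dimension when used.
    positions = torch.arange(seq_len, dtype=torch.float32)
    return alibi_slopes(num_heads)[:, None] * positions[None, :]

bias = build_alibi_bias(num_heads=8, seq_len=4)
assert bias.shape == (8, 4)
assert torch.allclose(bias[:, 0], torch.zeros(8))          # no penalty on the first position
assert torch.isclose(bias[0, 1], torch.tensor(2.0 ** -1))  # head 0 slope is 2**(-8/8) = 0.5
```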
transformers
18,139
closed
Fix BLOOM DeepSpeed inference issue
# What does this PR do? This PR tries to address a strange behaviour observed when running inference with the bloom-176b model using DeepSpeed! My intuitions are: - In the previous code we used `-10000` as the attention mask filling value, whereas we should use `fp32.min` as is done in the original cuda kernel of [`FusedScaledSoftmax`](https://github.com/bigscience-workshop/Megatron-DeepSpeed/blob/7b5f175b73a12d602cdadd3ee205ed666f6d4234/megatron/fused_kernels/scaled_masked_softmax.h#L288). This might lead to inconsistent results between the old version and the new version, but the new version should be considered the correct one - @RezaYazdaniAminabadi discovered that attention scores should not be multiplied by the attention mask after the softmax, which makes sense and could fix the issue cc @RezaYazdaniAminabadi @stas00 @thomasw21
07-14-2022 23:04:39
07-14-2022 23:04:39
@RezaYazdaniAminabadi did you try running inference with the elementwise multiplication after the softmax removed, as proposed in the PR? When running inference on 8xA100 80GB I obtained the same generations using the old code vs the new one with batch_size=1<|||||>> @RezaYazdaniAminabadi did you try running inference with the elementwise multiplication after the softmax removed, as proposed in the PR? When running inference on 8xA100 80GB I obtained the same generations using the old code vs the new one with batch_size=1 Hi @younesbelkada, I did try this on 16 A100-40GB previously and it was not giving similar results. I will try with this one and let you know. Anyhow, I think that multiply is not needed since the scores are already masked. Thanks<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18139). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you very much @RezaYazdaniAminabadi !!<|||||>Finally, after doing some tests, it appears that we do need the multiplication with the attention mask, because of the following: in some cases we have an attention mask like the one below ``` 0 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0 1 1 1 0 0 1 1 1 1 ``` After replacing all zeros by `torch.finfo(dtype).min`, the softmax will return the following on the first row: `0.2 0.2 0.2 0.2 0.2`, because we have the same values on the first row. To avoid using these wrong values in the calculation later, I had to multiply the attention scores by the original mask. cc @NouamaneTazi <|||||>@younesbelkada, ok, so we have the first row of `0.2 0.2 0.2 0.2 0.2` - let's follow through to the end: where does that manifest an issue? Let's perhaps use a small concrete example and use it to document why things are done the way they are - otherwise everybody will keep on questioning why this is done this way. <|||||>Is this because of padding? We should not care about the padding row, i.e. when the padding is the query. The wrong values don't matter when they are in the padding, no?<|||||>My guess was that this would impact the computation of the `context_layer` tensor [here](https://github.com/younesbelkada/transformers/blob/9ef1a4a52020854e02eea104a5bb8553f3de83e8/src/transformers/models/bloom/modeling_bloom.py#L319) in the case where we have padded inputs, as mentioned by @thomasw21 In the end you are right! It does impact the computation of this tensor, but I think that it does not matter at all. At the end we get a token-to-token correspondence for the computed hidden states - i.e. the context layer will have a shape `batch_size x seq_len x hidden_dim` and the hidden states corresponding to the padding tokens will not affect the prediction of the next token anyway. Do you think that this explanation makes sense? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
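A small sketch reproducing the observation above, assuming only PyTorch: a fully masked (padding) query row filled with the dtype minimum gives a uniform softmax, and multiplying the probabilities by the original mask zeroes those entries out. This mirrors the argument in the thread, not the actual BLOOM attention code.
```python
import torch

dtype = torch.float16
# The mask from the comment above: the first token is padding, so the first query row is all zeros.
mask = torch.tensor([
    [0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 1],
], dtype=torch.bool)

scores = torch.zeros(5, 5, dtype=dtype)
masked = scores.masked_fill(~mask, torch.finfo(dtype).min)
probs = torch.softmax(masked.float(), dim=-1).to(dtype)

print(probs[0])                          # tensor of 0.2s: the all-masked row survives the softmax
print((probs * mask.to(probs.dtype))[0]) # zeros: the extra multiplication removes those values
```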
transformers
18,138
closed
Please add a fast FlaubertTokenizer class as well, to leverage fast tokenizer methods
### System Info transformers version: 4.20.1 GPU: Nvidia Titan T4 ### Who can help? @LysandreJik I feel that you can help best, since the FlaubertTokenizer class directly inherits from XLM's tokenizer, which does not have a fast tokenizer class either! ### Information - [x] My own modified scripts - [ ] The official example scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction This method is not working: my AutoTokenizer class picks the default FlaubertTokenizer class, and since we don't have a FlaubertTokenizerFast class here, we are not able to leverage fast tokenizer functionalities! <img width="1245" alt="Screenshot 2022-07-14 at 4 39 38 PM" src="https://user-images.githubusercontent.com/78647606/179044514-e6081be4-591a-48bb-bf10-56ab90628ab0.png"> ### Expected behavior If FlaubertTokenizer had a fast version like BertTokenizerFast, we would be able to use class methods like word_ids, word_to_tokens, etc., specifically for the token classification task.
07-14-2022 17:35:19
07-14-2022 17:35:19
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
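For reference, a short sketch of the fast-only helpers this request is about, run with a checkpoint that already ships a fast tokenizer; `bert-base-cased` is used here purely as a stand-in for Flaubert, and the example sentence is arbitrary.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")  # resolves to BertTokenizerFast
assert tokenizer.is_fast

encoding = tokenizer("Flaubert wrote Madame Bovary", return_tensors="pt")

# word_ids() maps every token (including sub-words) back to its word index,
# which is exactly what token-classification label alignment needs.
print(encoding.word_ids())         # e.g. [None, 0, 1, 2, 3, 3, None], depending on the sub-word split
# word_to_tokens() gives the token span covering a given word index.
print(encoding.word_to_tokens(2))  # TokenSpan(start=..., end=...) for the third word
```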
transformers
18,137
closed
ViT modeling file is missing drop path present in PyTorch image models
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.10.1+cu113 (True) - Tensorflow version (GPU?): 2.9.1 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: No ### Who can help? @rwightman @NielsRogge ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. The Vision Transformer modeling file from PyTorch image models uses [drop path](https://github.com/rwightman/pytorch-image-models/blob/324a4e58b6b0365d16dcf8f93739be8f74cd7d37/timm/models/vision_transformer.py#L234) after the attention and feedforward layers. It is missing in the current Hugging Face modeling_vit.py file. We found that on the VTAB benchmark, having drop path significantly boosts the performance of the ViT-B model. 2. The epsilon value of the layer norm used in the pretrained ViTConfig is 1e-12, whereas vision_transformer.py in pytorch-image-models uses an epsilon value of 1e-6. Because of this, the google/vit-base-patch16-224 checkpoint performs slightly differently with modeling_vit.py from Hugging Face than with vision_transformer.py from pytorch-image-models in test mode. ### Expected behavior 1. Given that drop path is shown to perform well on the VTAB benchmark, it could be added to the current Hugging Face modeling file. 2. The epsilon value in the layer norm of the transformer could be made consistent with pytorch-image-models.
07-14-2022 16:36:15
07-14-2022 16:36:15
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
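A minimal stochastic-depth ("drop path") module, sketched after the timm implementation linked above; this is an illustration of the technique, not code from `modeling_vit.py`, and the drop probability and tensor shapes are arbitrary.
```python
import torch
from torch import nn

class DropPath(nn.Module):
    def __init__(self, drop_prob: float = 0.0):
        super().__init__()
        self.drop_prob = drop_prob

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.drop_prob == 0.0 or not self.training:
            return x
        keep_prob = 1.0 - self.drop_prob
        # One Bernoulli draw per sample: the whole residual branch is dropped or kept.
        shape = (x.shape[0],) + (1,) * (x.ndim - 1)
        mask = x.new_empty(shape).bernoulli_(keep_prob)
        return x * mask / keep_prob

# Inside a transformer block the residual connections would then read, schematically:
#   hidden = hidden + drop_path(attention(layernorm(hidden)))
#   hidden = hidden + drop_path(mlp(layernorm(hidden)))
drop_path = DropPath(drop_prob=0.1)
drop_path.train()
print(drop_path(torch.ones(4, 3, 8)).unique())  # each sample is either zeroed or scaled by 1/0.9
```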
transformers
18,136
closed
How to place xlnet as an embedding layer in front of other models
### Feature request I want to add the pretrained model xlnet as an embedding layer in front of a BiLSTM-CRF. How can it be added to the model construction as an embedding layer? For example, I have already built a bert-BiLSTM-CRF model with the bert4keras framework, and I would like to replace bert with xlnet and use xlnet as the model's embedding layer. Is that possible??? ------------------------- def bert_bilstm_crf(config_path,checkpoint_path,latm_units,drop_rate,learning_rate): # build the model structure ----------------------------------------------------------------------------------- can the BERT-related part below be replaced? bert=build_transformer_model( # load the BERT weights config_path=config_path, checkpoint_path=checkpoint_path, model='bert', return_keras_model=False ) x=bert.model.output #[batchsize,seq_len,768] x=keras.layers.Bidirectional( keras.layers.LSTM( # add the lstm layer latm_units, kernel_initializer='he_normal', # initialization scheme return_sequences=True ) )(x) #[batchsize,seq_len,lstm_units*2] ### Motivation Fixing an issue related to NER ### Your contribution Solving the model construction problem
07-14-2022 13:08:33
07-14-2022 13:08:33
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
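A rough PyTorch sketch of the idea in the question: use a pretrained XLNet from `transformers` as the embedding layer in front of a BiLSTM (the CRF head is omitted, `transformers` is used directly rather than the bert4keras framework from the question, and the model name, layer sizes and label count are placeholders).
```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class XLNetBiLSTM(nn.Module):
    def __init__(self, model_name: str = "xlnet-base-cased", lstm_units: int = 128, num_labels: int = 9):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)  # XLNet acts as the "embedding layer"
        hidden = self.encoder.config.d_model                  # XLNet's hidden size
        self.bilstm = nn.LSTM(hidden, lstm_units, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * lstm_units, num_labels)

    def forward(self, input_ids, attention_mask=None):
        # (batch, seq_len, hidden) sequence output from XLNet
        embeddings = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.bilstm(embeddings)
        return self.classifier(lstm_out)  # per-token logits, to feed a CRF or a softmax

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetBiLSTM()
batch = tokenizer(["replace BERT with XLNet"], return_tensors="pt")
print(model(batch["input_ids"], batch["attention_mask"]).shape)  # (1, seq_len, num_labels)
```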
transformers
18,135
closed
[bugfix] perceiverIO`PerceiverBasicDecoder` error when appending preprocessed inputs to decoder queries
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed.
07-14-2022 12:54:27
07-14-2022 12:54:27
_The documentation is not available anymore as the PR was closed or merged._<|||||>@julien-c @sgugger Let me know if I'm missing something 🙏
transformers
18,134
closed
FSDP integration enhancements and fixes
# What does this PR do? 1. Fixes #17681 and https://github.com/pytorch/pytorch/issues/79605 2. integrates new features of FSDP to auto wrap transformer blocks and support for mixed precision ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
07-14-2022 12:26:36
07-14-2022 12:26:36
_The documentation is not available anymore as the PR was closed or merged._
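A condensed sketch of the two underlying PyTorch FSDP features this PR wires into the Trainer - transformer-block auto wrapping and mixed precision - written against the raw `torch.distributed.fsdp` API (PyTorch >= 1.12). It needs an initialized process group (e.g. launched with `torchrun`), and the model and layer classes below are examples, not what the PR itself uses.
```python
import functools
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from transformers import AutoModelForSequenceClassification
from transformers.models.bert.modeling_bert import BertLayer

model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")

# Wrap each BertLayer in its own FSDP unit, instead of size-based wrapping.
auto_wrap_policy = functools.partial(transformer_auto_wrap_policy, transformer_layer_cls={BertLayer})

# Keep parameters, gradient reductions and buffers in bfloat16 during forward/backward.
bf16_policy = MixedPrecision(
    param_dtype=torch.bfloat16,
    reduce_dtype=torch.bfloat16,
    buffer_dtype=torch.bfloat16,
)

fsdp_model = FSDP(model, auto_wrap_policy=auto_wrap_policy, mixed_precision=bf16_policy)
```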
transformers
18,133
closed
Expected behaviour for MBartTokenizer as target tokenizer
Hello, I'm trying to fine-tune an MBart model on a multilingual dataset, but I'm facing some issues. The texts generated during training are really strange (mainly other languages not present in the dataset), and then I noticed that the input_ids of the target text do not follow the format [tgt_lang_code] [text tokens] [eos]. ### **System info** transformers==4.20.1 ### **Who can help?** @patrickvonplaten ### **Information** - [x] The official example scripts - [ ] My own modified scripts ### **Reproduction** Running the script below, I get both sequences of tokens in the format X [eos, src_lang_code]. ```python from transformers import MBartForConditionalGeneration, MBartTokenizer tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO") example_english_phrase = "UN Chief Says There Is No Military Solution in Syria" expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria" inputs = tokenizer(example_english_phrase, return_tensors="pt") with tokenizer.as_target_tokenizer(): labels = tokenizer(expected_translation_romanian, return_tensors="pt") print(inputs['input_ids']) #tensor([[ 8274, 127873, 25916, 7, 8622, 2071, 438, 67485, 53, # 187895, 23, 51712, 2, 250004]]) print(labels['input_ids']) #tensor([[ 47711, 7844, 127666, 8, 18347, 18147, 1362, 315, 42071, # 36, 31563, 8454, 33796, 451, 346, 125577, 2, 250020]]) ``` ### **Expected Behaviour** ```python print(labels['input_ids']) #tensor([[ 250020, 47711, 7844, 127666, 8, 18347, 18147, 1362, 315, 42071, # 36, 31563, 8454, 33796, 451, 346, 125577, 2]]) ```
07-14-2022 12:02:30
07-14-2022 12:02:30
I won't have time to look into this anytime soon. @ArthurZucker could you take a look here? <|||||>Hey! Really sorry for the long delay! 🤗 From what I understand based on the tests, this behaviour is actually intended: once the `labels` are passed to `MBartForConditionalGeneration`, they are shifted using `shift_tokens_right(labels)` (see [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mbart/modeling_mbart.py/#L1346-L1351)). This means that the model actually receives the expected input 😄 Now about the languages that are generated, there are a lot of possibilities: - Since the original model is trained on a lot of languages, these can belong to the dataset used to train that model. - You can suppress the prediction of a list of languages by using the `suppress_tokens` argument of the `generate` function (if you are using it). Otherwise, I can't really help with the actual fine-tuning! Tell me if that makes sense<|||||>PS: you can try the following: ```python from transformers import MBartForConditionalGeneration, MBartTokenizer from transformers.models.mbart.modeling_mbart import shift_tokens_right tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO") example_english_phrase = "UN Chief Says There Is No Military Solution in Syria" expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria" tokens = tokenizer(example_english_phrase, text_target=expected_translation_romanian, return_tensors="pt") shift_tokens_right(tokens["labels"], tokenizer.pad_token_id) ``` Should output ```python tensor([[250020, 47711, 7844, 127666, 8, 18347, 18147, 1362, 315, 42071, 36, 31563, 8454, 33796, 451, 346, 125577, 2]]) ```
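To make the shift described above concrete, here is an illustrative reimplementation (not the library function) of what the MBart-style right shift does to the labels from this issue; it assumes a single unpadded sequence, whereas the real `shift_tokens_right` also handles padding.
```python
import torch

def shift_right_mbart_style(labels: torch.Tensor) -> torch.Tensor:
    shifted = labels.clone()
    shifted[:, 1:] = labels[:, :-1]  # move every token one position to the right...
    shifted[:, 0] = labels[:, -1]    # ...and put the trailing language code first
    return shifted

labels = torch.tensor([[47711, 7844, 125577, 2, 250020]])  # [tokens..., eos, ro_RO], shortened for illustration
print(shift_right_mbart_style(labels))  # tensor([[250020, 47711, 7844, 125577, 2]])
```
So the tokenizer output `[tokens..., eos, tgt_lang_code]` is what the loss is computed against, while the decoder input the model actually sees starts with the target language code, which matches the expected behaviour in the issue.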
transformers
18,132
closed
model does not work after loss change
### System Info ```shell hello My model finetunes bert (specifically Roberta) using a lst fully connected layer of a binary text classification task. I was using cross entropy loss and the code worked well. However when I changed the loss the model stopped learning and predicted 0 for all the examples and did not learn. For other classification tasks the loss works fine. The loss is decreasing but the accuracy stayes the same and the prediction is always 0. I have tried different learning rate values and batch sizes and many other things but until now nothing worked. It happens when finetuning happens and also when the bert model is frozen. But not on other classification tasks. The loss functio is RCE: class ReverseCrossEntropy(torch.nn.Module): def __init__(self, num_classes, scale=1.0): super(ReverseCrossEntropy, self).__init__() self.device = device self.num_classes = num_classes self.scale = scale def forward(self, pred, labels): pred = F.softmax(pred, dim=1) pred = torch.clamp(pred, min=1e-7, max=1.0) label_one_hot = torch.nn.functional.one_hot(labels, self.num_classes).float().to(self.device) label_one_hot = torch.clamp(label_one_hot, min=1e-4, max=1.0) rce = (-1*torch.sum(pred * torch.log(label_one_hot), dim=1)) return self.scale * rce.mean() and I also tried NCE: class NormalizedReverseCrossEntropy(torch.nn.Module): def __init__(self, num_classes, scale=1.0): super(NormalizedReverseCrossEntropy, self).__init__() self.device = device self.num_classes = num_classes self.scale = scale def forward(self, pred, labels): pred = F.softmax(pred, dim=1) pred = torch.clamp(pred, min=1e-7, max=1.0) label_one_hot = torch.nn.functional.one_hot(labels, self.num_classes).float().to(self.device) label_one_hot = torch.clamp(label_one_hot, min=1e-4, max=1.0) normalizor = 1 / 4 * (self.num_classes - 1) rce = (-1*torch.sum(pred * torch.log(label_one_hot), dim=1)) return self.scale * normalizor * rce.mean() They are taken from the artical https://arxiv.org/abs/2006.13554 git: https://github.com/HanxunH/Active-Passive-Losses/blob/master/loss.py any help will be much appreciated. ``` ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction class Model(nn.Module): def __init__(self, device='cuda', lm='roberta', alpha_aug=0.8): super().__init__() if lm in lm_mp: self.bert = AutoModel.from_pretrained(lm_mp[lm]) else: self.bert = AutoModel.from_pretrained(lm) self.device = device # linear layer hidden_size = self.bert.config.hidden_size self.fc = torch.nn.Linear(hidden_size, 2) def forward(self, x1, x2=None): """Encode the left, right, and the concatenation of left+right. 
Args: x1 (LongTensor): a batch of ID's Returns: Tensor: binary prediction """ x1 = x1.to(self.device) # (batch_size, seq_len) enc = self.bert(x1)[0][:, 0, :] return self.fc(enc) creating the model: device = 'cuda' if torch.cuda.is_available() else 'cpu' model = Model(device=device, lm=hp.lm, alpha_aug=hp.alpha_aug) model = model.cuda() optimizer = AdamW(model.parameters(), lr=hp.lr) The training step is: #deciding the loss criterion = nn.CrossEntropyLoss() for i, batch in enumerate(train_iter): optimizer.zero_grad() if len(batch) == 2: x, y = batch prediction = model(x) else: x1, x2, y = batch prediction = model(x1, x2) loss = criterion(prediction, y.to(model.device)) if hp.fp16: with amp.scale_loss(loss, optimizer) as scaled_loss: scaled_loss.backward() else: loss.backward() optimizer.step() scheduler.step() if i % 10 == 0: # monitoring print(f"step: {i}, loss: {loss.item()}") del loss This works well then the only change I did was for the loss: criterion = ReverseCrossEntropy(2) instead of cross entropy. And this change does not work. ### Expected behavior ```shell The result for training with cross entropy is: step: 0, loss: 0.5812623500823975 Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 1: dev_f1=0.2772277227722772, f1=0.2745098039215686, best_f1=0.2745098039215686 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.3767085075378418 Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 2: dev_f1=0.36363636363636365, f1=0.35294117647058826, best_f1=0.35294117647058826 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.43073320388793945 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 3: dev_f1=0.2978723404255319, f1=0.2978723404255319, best_f1=0.35294117647058826 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.6784828305244446 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 4: dev_f1=0.5365853658536585, f1=0.43999999999999995, best_f1=0.43999999999999995 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.25015905499458313 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 5: dev_f1=0.43076923076923085, f1=0.4745762711864407, best_f1=0.43999999999999995 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. 
warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.329183429479599 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 6: dev_f1=0.8148148148148148, f1=0.7647058823529412, best_f1=0.7647058823529412 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.08995085209608078 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 7: dev_f1=0.88, f1=0.8333333333333333, best_f1=0.8333333333333333 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.18586984276771545 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 8: dev_f1=0.9032258064516129, f1=0.8750000000000001, best_f1=0.8750000000000001 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.007164476439356804 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 9: dev_f1=0.888888888888889, f1=0.8275862068965518, best_f1=0.8750000000000001 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.005751035641878843 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 10: dev_f1=0.9032258064516129, f1=0.8484848484848484, best_f1=0.8750000000000001 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.14081726968288422 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 11: dev_f1=0.8571428571428571, f1=0.9032258064516129, best_f1=0.8750000000000001 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0045958105474710464 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 12: dev_f1=0.896551724137931, f1=0.9032258064516129, best_f1=0.8750000000000001 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0023396878968924284 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 13: dev_f1=0.8333333333333333, f1=0.888888888888889, best_f1=0.8750000000000001 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. 
warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0017288422677665949 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 14: dev_f1=0.8750000000000001, f1=0.8750000000000001, best_f1=0.8750000000000001 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0025747090112417936 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 15: dev_f1=0.896551724137931, f1=0.896551724137931, best_f1=0.8750000000000001 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0030487636104226112 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 16: dev_f1=0.88, f1=0.888888888888889, best_f1=0.8750000000000001 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0015720207011327147 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 17: dev_f1=0.896551724137931, f1=0.896551724137931, best_f1=0.8750000000000001 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.001150735653936863 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 18: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0009454995160922408 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 19: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0007868938846513629 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 20: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0006980099133215845 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 21: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. 
warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0006197747425176203 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 22: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0006151695270091295 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 23: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0004854918224737048 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 24: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.000492772669531405 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 25: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0004389513051137328 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 26: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0003859938296955079 Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4096.0 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 27: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0004301978333387524 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 28: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0004772722895722836 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. 
warnings.warn("An input tensor was not cuda.") epoch 29: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0003848907945211977 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 30: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0003429920761846006 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 31: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0004783756739925593 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 32: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.00039960749563761055 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 33: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.00043797597754746675 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 34: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.00025380056467838585 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 35: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0003628128906711936 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 36: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.00036079881829209626 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. 
warnings.warn("An input tensor was not cuda.") epoch 37: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.00036769770667888224 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 38: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0003665930707938969 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 39: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.0002882482949644327 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 40: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931 The expectation was the the rusults will be similar but hen changed to reverse cross entropy the results are: step: 0, loss: 3.970363140106201 Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0 Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0 Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4096.0 Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 2048.0 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 1: dev_f1=0.28571428571428575, f1=0.30000000000000004, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 2.027850866317749 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 2: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 1.72965407371521 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 3: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 2.015202522277832 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 4: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. 
warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.5761911273002625 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 5: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 1.439455270767212 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 6: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 1.7271339893341064 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 7: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.8637082576751709 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 8: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 1.1514854431152344 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 9: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.863682746887207 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 10: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 1.7270889282226562 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 11: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.863652765750885 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 12: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. 
warnings.warn("An input tensor was not cuda.") step: 0, loss: 1.1514408588409424 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 13: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 1.15143883228302 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 14: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 2.0148658752441406 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 15: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 2.5904781818389893 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 16: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 2.0148520469665527 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 17: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 2.01485013961792 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 18: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 1.4391952753067017 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 19: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 1.7270371913909912 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 20: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. 
warnings.warn("An input tensor was not cuda.") step: 0, loss: 1.4392175674438477 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 21: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 1.4392108917236328 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 22: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 2.0148367881774902 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 23: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 2.302647113800049 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 24: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.8635783195495605 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 25: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.5757505297660828 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 26: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.5757474303245544 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 27: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 1.4391957521438599 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 28: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. 
warnings.warn("An input tensor was not cuda.") step: 0, loss: 2.0148279666900635 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 29: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 2.0148282051086426 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 30: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 1.4392008781433105 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 31: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.8635559678077698 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 32: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.8635714054107666 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 33: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 2.0148158073425293 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 34: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.8635637760162354 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 35: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.5757399201393127 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 36: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. 
warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.8635669946670532 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 37: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 1.1513622999191284 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 38: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 0.8635590076446533 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 39: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") step: 0, loss: 1.7269994020462036 /usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda. warnings.warn("An input tensor was not cuda.") epoch 40: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004 Thank you for the help. ``` ### Checklist - [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [X] I checked if a related official extension example runs on my machine.
07-14-2022 07:54:02
07-14-2022 07:54:02
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
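A side note on the repeated apex `UserWarning: An input tensor was not cuda.` in the log above: it is usually emitted when the batch tensors are still on the CPU when the forward pass runs. The snippet below is a generic, hedged illustration of the usual remedy; the names are placeholders and not taken from the reporter's script.

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def move_batch_to_device(batch: dict) -> dict:
    # Move every tensor in the batch onto the target device before calling model(**batch).
    return {name: tensor.to(device) for name, tensor in batch.items()}
```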
transformers
18,131
closed
Fixing a hard to trigger bug for `text-generation` pipeline.
# What does this PR do? This PR readds sending `attention_mask` to the `generate` function. In order to trigger the bug one would need: - To use the pipeline in `batch_size>1` mode. - Use a model that did not configure `pad_token_id`. (if it is configured, then generate just recovers the attention mask gracefully) Then the `generate` function would not be able to recover the attention mask, would warn about it but still generate something (most likely incorrect). Since the pipeline most likely already generated the `attention_mask`, we might as well send it along to `generate`. @sgugger <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
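For reference, a minimal sketch of what the described situation looks like from the user side and how forwarding the attention mask avoids it; the checkpoint and settings below are illustrative and not the pipeline internals.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Batched inputs require padding, so the attention_mask is what tells generate
# which positions are real tokens and which are padding.
inputs = tokenizer(["Hello world", "Hi"], return_tensors="pt", padding=True)
outputs = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_new_tokens=5,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```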
07-14-2022 07:35:02
07-14-2022 07:35:02
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,130
closed
model.generate doesn't validate kwargs
### Feature request I made a mistake in a script: ``` model.generate(**tokens, in_length=num_tokens) ``` missing `m` in `min_length` and I was puzzling over why I was getting unexpected results (as it was using the default value which was quite different from mine) Would it be possible to have `generate` validate its input and assert on unexpected args? Thank you! @patrickvonplaten
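As a rough illustration of the requested behavior (not the validation that was eventually implemented in the library), a check against the declared signature would already catch this kind of typo:

```python
import inspect

def check_generate_kwargs(generate_fn, **kwargs):
    # Illustrative only: flag keyword arguments that generate() does not declare.
    accepted = set(inspect.signature(generate_fn).parameters)
    unknown = [name for name in kwargs if name not in accepted]
    if unknown:
        raise TypeError(f"generate() got unexpected keyword arguments: {unknown}")

# check_generate_kwargs(model.generate, in_length=10)  # would raise on the typo above
```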
07-14-2022 04:21:53
07-14-2022 04:21:53
cc @gante as well<|||||>Hi @stas00 👋 -- we do have a plan for it. The rough sketch is [here](https://github.com/huggingface/transformers/pull/17196#issuecomment-1155002093), and I will pick it up after the last wrinkles related to TF generate have been ironed out (which should be very soon!)<|||||>excellent. Thank you, @gante! I guess let's keep this Issue open for tracking unless there is another one already? <|||||>Yeah, let's keep this one open!
transformers
18,129
open
DeltaLM
### Model description DeltaLM is a multilingual encoder-decoder architecture that regards the decoder as the task layer of off-the-shelf pre-trained encoders. This architecture introduces an interleaved decoder, which has a more consistent structure with the encoder. Weights from pre-trained multilingual encoders are used to initialise both the encoder and decoder models before training on monolingual and bilingual data. As of September 2021 DeltaLM ranks first on the [WMT21 multilingual translation task](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html). ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation The model implementation is available at: https://github.com/microsoft/unilm/tree/master/deltalm Model weights are available: [DeltaLM-base](https://deltalm.blob.core.windows.net/deltalm/deltalm-base.pt) [DeltaLM-large](https://deltalm.blob.core.windows.net/deltalm/deltalm-large.pt) Who are the authors: @shumingma @gitnlp I'd be happy to try work on contributing the model.
07-14-2022 02:48:02
07-14-2022 02:48:02
Any progress on this?<|||||>Hi, I've noticed there are some DeltaLM available in the Hub: - https://huggingface.co/IDEA-CCNL/Randeng-Deltalm-362M-En-Zh - https://huggingface.co/IDEA-CCNL/Randeng-Deltalm-362M-Zh-En The code they use seems to be here: https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/models/deltalm However, I'd be interested in using the original checkpoints available in the official repository: https://github.com/microsoft/unilm/tree/master/deltalm Do you have any idea on how to do it? Thanks!
transformers
18,128
closed
Gradual types
Test XGLM Model with gradual types. We annotate the second dimension of the model, trace using constraints and also generate constraints to migrate the first annotation of the model after tracing is complete.
07-13-2022 18:42:14
07-13-2022 18:42:14
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,127
closed
todo: enable CI to run torchdynamo/tensorrt tests
@ydshieh, let's talk about instrumenting one of the jobs to run tests for torchdynamo/tensorrt It's quite a handful of things to build, the instructions are in the OP: https://github.com/huggingface/transformers/pull/17765 I installed the environment locally, the tests work. I didn't want to get in the way of the PR, so doing it in a separate task. thank you.
07-13-2022 16:27:42
07-13-2022 16:27:42
Hi, @stas00 Sure! It sounds that you have already some tests written, and we just want to run them with CI, right? Let me know how you would like to proceed (discussion or PR review etc.).<|||||>Hi @stas00 Let me know if we should proceed :-)<|||||>Yes, please. Please let me know how I can help. Thank you, @ydshieh!<|||||>Hi! Basically I don't know what the task is. I didn't go through #17765 in detail, and only knows that PR merged. Is the (new) goal to install some libraries in docker files, write some specific test methods, or something else. You mentioned `I installed the environment locally, the tests work.`. It would be nice if you can let me know which test you mean here, and if you want me to `installed the environment` for the scheduled CI(s) to run those tests 🙏 Thank you, @stas00 !<|||||>1. The instructions of what needs to be installed are verbatim in the OP of https://github.com/huggingface/transformers/pull/17765 2. To test: ``` pytest tests/trainer/test_trainer.py -k torchdynamo ```<|||||>OK, I get it better now. So basically I need to - install what mentioned in `To reproduce and set up the environment` section in #17765 inside some docker files - have a job to run `torchdynamo` tests Do you think it's better to have a new docker image for this (just like `deepspeed`), or we can just put it in the `transformers-all-latest-gpu` (the one used for mode/tokenizer/generation tests)? I can try the later first in any case.<|||||>I'd say one docker image for all the extensions.<|||||>At some point it will become stable and will work with just the official released version.<|||||>@stas00 I have tried to build the image and run the tests. Here is the [run page](https://github.com/huggingface/transformers/runs/7878615152?check_suite_focus=true) It failed with ```bash > from torchdynamo.optimizations.training import aot_autograd_speedup_strategy E ImportError: cannot import name 'aot_autograd_speedup_strategy' from 'torchdynamo.optimizations.training' ``` I can't find `aot_autograd_speedup_strategy` in the latest [torchdynamo repo](https://github.com/pytorch/torchdynamo). cc @frank-wei <|||||>Thank you for trying, @ydshieh. It's too bad that the API appears to be unstable :( Let's wait for @frank-wei to reply<|||||>> torchdynamo repo Thanks for setting up the testing @stas00 and @ydshieh Looks like @Chillee or @anijain2305 had made some updates there for AOTAutoGrad. Could you elaborate here? <|||||>We have cleaned up some Aot Autograd related things in TorchDynamo repo. https://github.com/huggingface/transformers/blob/4eed2beca0fd8058a1c51684f68599522adf20c9/src/transformers/trainer.py#L652 The above line could be replaced with `return torchdynamo.optimize("aot_nvfuser")` Therefore, we do not need the import anymore on line 645 <|||||>> We have cleaned up some Aot Autograd related things in TorchDynamo repo. > > https://github.com/huggingface/transformers/blob/4eed2beca0fd8058a1c51684f68599522adf20c9/src/transformers/trainer.py#L652 > > The above line could be replaced with > > `return torchdynamo.optimize("aot_nvfuser")` 1. could you please make a PR that fixes things. 2. could you please include the relevant transformers tests in your CI, so that if you break things in the future you'd instantly know and then update the transformers side? Thank you. e.g. 
you can see how Deepspeed runs transformers/deepspeed integration tests on their CI https://github.com/microsoft/DeepSpeed/blob/master/.github/workflows/nv-transformers-v100.yml In your case it'd be cloning the latest `transformers` repo and running: ``` pytest tests/trainer/test_trainer.py -k torchdynamo ``` <|||||>I can change to `return torchdynamo.optimize("aot_nvfuser")` in my PR directly (to enable CI testing).<|||||>`import` issue fixed. But get `ResetRequired` from `../torchdynamo/torchdynamo/eval_frame.py:101: in __enter__ self.on_enter()`. See the full error below. Maybe I could just `torchdynamo.reset()` somewhere below ` # 2. TorchDynamo nvfuser`?? ```bash ________________ TrainerIntegrationTest.test_torchdynamo_memory ________________ self = <tests.trainer.test_trainer.TrainerIntegrationTest testMethod=test_torchdynamo_memory> @require_torch_non_multi_gpu @require_torchdynamo def test_torchdynamo_memory(self): # torchdynamo at the moment doesn't support DP/DDP, therefore require a single gpu class CustomTrainer(Trainer): def compute_loss(self, model, inputs, return_outputs=False): x = inputs["x"] output = model(x) if self.args.n_gpu == 1: return output.mean() return output class MyModule(torch.nn.Module): """Simple module that does aggressive fusion""" def __init__(self): super().__init__() def forward(self, x): for _ in range(20): x = torch.nn.functional.relu(x) return x mod = MyModule() # 1. without TorchDynamo (eager baseline) a = torch.ones(1024, 1024, device="cuda", requires_grad=True) a.grad = None trainer = CustomTrainer(model=mod) # warmup for _ in range(10): orig_loss = trainer.training_step(mod, {"x": a}) # resets gc.collect() torch.cuda.empty_cache() torch.cuda.reset_peak_memory_stats() orig_loss = trainer.training_step(mod, {"x": a}) orig_peak_mem = torch.cuda.max_memory_allocated() del trainer # 2. TorchDynamo nvfuser a = torch.ones(1024, 1024, device="cuda", requires_grad=True) a.grad = None args = TrainingArguments(output_dir="None", torchdynamo="nvfuser") trainer = CustomTrainer(model=mod, args=args) # warmup for _ in range(10): > loss = trainer.training_step(mod, {"x": a}) tests/trainer/test_trainer.py:1893: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ src/transformers/trainer.py:2479: in training_step with self.compute_loss_context_manager(): src/transformers/utils/generic.py:291: in __enter__ self.stack.enter_context(context_manager) /opt/conda/lib/python3.8/contextlib.py:425: in enter_context result = _cm_type.__enter__(cm) ../torchdynamo/torchdynamo/eval_frame.py:101: in __enter__ self.on_enter() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ def on_enter(): global most_recent_backend if ( most_recent_backend is not None and most_recent_backend is not compiler_fn ): > raise ResetRequired() E torchdynamo.exc.ResetRequired: E Must call `torchdynamo.reset()` before changing backends. Detected two calls to E `torchdynamo.optimize(...)` with a different backend compiler arguments. ../torchdynamo/torchdynamo/eval_frame.py:[183](https://github.com/huggingface/transformers/runs/7894266049?check_suite_focus=true#step:7:184): ResetRequired ```<|||||>That's why I asked of @anijain2305 to fix it and make a new PR that actually fixes the tests. It's not productive to keep going back and forth when we don't know what other things have changed. <|||||>Yes, @stas00 I am gonna send a PR to transformers to make the appropriate changes. Apologize for the failures. 
Regarding adding transformers in the CI, that's a very good idea. Let me see how much extra time it adds on the TorchDynamo side.<|||||>Thank you, @anijain2305! You can add it as a separate job, so it'd run in parallel with your other jobs and thus not add to the total CI runtime. It should be real fast to finish, at least with barely a few basic tests we have right now for torchdynamo. Or even tack it onto the existing job - the main overhead will be cloning `transformers` and installing its prerequisites. <|||||>Fixed by #19056
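For readers following this thread, a rough sketch of the backend usage discussed above, built only from the calls that appear in the comments (`torchdynamo.reset()` and `torchdynamo.optimize("aot_nvfuser")`); treat the backend name and call style as assumptions, since the torchdynamo API was still changing at the time.

```python
import torch
import torchdynamo  # requires the CUDA/nvfuser setup described in #17765

def relu_chain(x):
    # mirrors the aggressive-fusion toy module used in the trainer test
    for _ in range(20):
        x = torch.nn.functional.relu(x)
    return x

torchdynamo.reset()  # avoids the ResetRequired error when a different backend was used before
with torchdynamo.optimize("aot_nvfuser"):
    out = relu_chain(torch.ones(1024, 1024, device="cuda", requires_grad=True))
```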
transformers
18,126
closed
NLLB tokenizer
Adds the NLLB tokenizer. In order to run:
```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
>>> translator = pipeline('translation', model=model, tokenizer=tokenizer, src_lang="eng_Latn", tgt_lang='ron_Latn')
>>> translator("UN Chief says there is no military solution in Syria")
[{'translation_text': 'Şeful ONU spune că nu există o soluţie militară în Siria'}]
```
Closes https://github.com/huggingface/transformers/issues/18043
07-13-2022 15:54:23
07-13-2022 15:54:23
All models are now public, feel free to try it out @stefan-it. The generation seems good, have not tried fine-tuning yet.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I don't know of a better place to post (issue?), so I'll do it here :) @LysandreJik Thank you so much for adding support for the NLLB dense models! I pulled out this branch and tried all of them and they work awesome! There is the following place in the readme "This implementation contains dense models available in release. Let us know via GitHub if you want to see MoE models as well." So it would be really great if you could add MoE models! I tried to figure out the original repo, but it turned out to be unexpectedly difficult. I couldn't get MoE to run. So if you add MoE models, I'm sure it will make a lot of people happier, at least me :) <|||||>@LysandreJik Thanks a lot for your promt work! I tried using NLLB model from HuggingFace and noticed one problem: max_length does not set in config.json for any of the NLLB models, so it uses default value of max_length (20). https://github.com/huggingface/transformers/blob/33028f4c795e76f9e97226fc591bc7d0b8c7d815/src/transformers/configuration_utils.py#L125 As the result, your example code cannot generate more than 20 tokens. it is possible to set max_length higher when calling translation method, but it will be great to have meaningful default as well. For comparison, both for M2M and MBart50 models max_length set in config.json file to 200.<|||||>> @LysandreJik Thanks a lot for your promt work! I tried using NLLB model from HuggingFace and noticed one problem: > > max_length does not set in config.json for any of the NLLB models, so it uses default value of max_length (20). > > https://github.com/huggingface/transformers/blob/33028f4c795e76f9e97226fc591bc7d0b8c7d815/src/transformers/configuration_utils.py#L125 > > > As the result, your example code cannot generate more than 20 tokens. it is possible to set max_length higher when calling translation method, but it will be great to have meaningful default as well. > For comparison, both for M2M and MBart50 models max_length set in config.json file to 200. How is the default max_length determined per model? Or is it documented in their white papers? With this PR, I have started evaluating the extremely large model (facebook/nllb-200-3.3B) against GCP translation and so far it is doing really well despite the length of text I give it but I want to give it the best chance to perform so knowing the ideal max_length would help.<|||||>> > @LysandreJik Thanks a lot for your promt work! I tried using NLLB model from HuggingFace and noticed one problem: > > max_length does not set in config.json for any of the NLLB models, so it uses default value of max_length (20). > > https://github.com/huggingface/transformers/blob/33028f4c795e76f9e97226fc591bc7d0b8c7d815/src/transformers/configuration_utils.py#L125 > > > > As the result, your example code cannot generate more than 20 tokens. it is possible to set max_length higher when calling translation method, but it will be great to have meaningful default as well. > > For comparison, both for M2M and MBart50 models max_length set in config.json file to 200. > > How is the default max_length determined per model? Or is it documented in their white papers? 
With this PR, I have started evaluating the extremely large model (facebook/nllb-200-3.3B) against GCP translation and so far it is doing really well despite the length of text I give it but I want to give it the best chance to perform so knowing the ideal max_length would help. I think usual default for max_length is to be equal to max input length. Translation pipeline in transformers are checking that max_length at higher than 90% of input length. https://github.com/huggingface/transformers/blob/33028f4c795e76f9e97226fc591bc7d0b8c7d815/src/transformers/pipelines/text2text_generation.py#L272-L278<|||||>as mentionned here #19943 where did you guys see that the "</s> Langtoken" is added AFTER the tokens ? In the NLLB paper, it says only the "Langtoken" is placed BEFORE the tokens. (mBart does the opposite)<|||||>I've just seen this example - where the lang-token is prepended: https://github.com/facebookresearch/fairseq/blob/nllb/fairseq/data/multilingual/multilingual_data_manager.py#L78-L101 from original code base :thinking: <|||||>right. Also I am wondering why they use "</s>" which is "eos" as the start token of the source sequence. (in fact same for the target sequence). I would have expected: SRC = LangTok + tokens TGT = BOS + LangTok, tokens + EOS It seems they use EOS instead of BOS and that they put a EOS as the SRC start.
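Until a default lands in the configs, a hedged workaround for the 20-token cap discussed above is to pass `max_length` explicitly at call time; 200 mirrors the M2M100/MBart50 defaults mentioned earlier.

```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="ron_Latn",
)
print(translator("UN Chief says there is no military solution in Syria", max_length=200))
```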
transformers
18,125
closed
Make sharded checkpoints work in offline mode
# What does this PR do? This PR makes sharded checkpoints work in offline mode and adds more information to an error we return. The crux of the issue is that the `from_pretrained` method of the various models will catch `EntryNotFoundError` on the regular model weights file, but we return a `FileNotFoundError` in offline mode. I changed the error type at the root, to avoid making three modifications in the PyTorch/TF/Flax model classes, but can change it if you don't find this suitable.
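A hedged sketch of the scenario this PR targets: a sharded Hub checkpoint that is already in the local cache should load again without network access. The repo name below is an assumption; substitute any checkpoint stored as multiple weight shards.

```python
from transformers import AutoModelForSeq2SeqLM

name = "facebook/nllb-200-3.3B"
AutoModelForSeq2SeqLM.from_pretrained(name)                         # online call populates the local cache
AutoModelForSeq2SeqLM.from_pretrained(name, local_files_only=True)  # offline-style resolution from the cache
```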
07-13-2022 14:56:49
07-13-2022 14:56:49
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,124
closed
Bloom-6b3 not utilizing much from GPU
### System Info
GPU: Nvidia V100

### Who can help?
_No response_

### Information
- [X] The official example scripts
- [ ] My own modified scripts

### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction
Using the following code, GPU utilization during inference stays below 10%.
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("models/bloom_1b3")
model = AutoModelForCausalLM.from_pretrained(
    "models/bloom_1b3", device_map="auto", torch_dtype=torch.float16
)
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, device=torch.device(0))
```

### Expected behavior
GPU utilization should be larger than 50%.
07-13-2022 12:58:29
07-13-2022 12:58:29
Hi @farzanehnakhaee70 ! Thanks a lot for your message! Could you give us the output of `nvidia-smi` when running your script? Also could you share with us the version of `accelerate` you are using?<|||||>Hi @younesbelkada Thanks for your support. nvidia-smi: ``` +-----------------------------------------------------------------------------+ | NVIDIA-SMI 495.29.05 Driver Version: 495.29.05 CUDA Version: 11.5 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla V100-SXM2... Off | 00000000:3E:00.0 Off | 0 | | N/A 38C P0 54W / 300W | 10358MiB / 32510MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ ``` Also accelerate version is `0.10.0`. It should also be mentioned that the same behavior existed if I use deep-speed or even if I didn't use any of accelerate and deep-speed.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
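One hedged debugging step for the report above (the path comes from the reproduction script): inspect where `device_map="auto"` actually placed the weights, since layers offloaded to `"cpu"` or `"disk"` would explain low GPU utilization.

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "models/bloom_1b3", device_map="auto", torch_dtype=torch.float16
)
# Populated by accelerate when device_map is used; values are GPU indices, "cpu" or "disk".
print(model.hf_device_map)
```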
transformers
18,123
closed
Adding OPTForSeqClassification class
# What does this PR do? It add the class for OPTForSequenceClassification based on OPT model <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #17525 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-13-2022 11:51:18
07-13-2022 11:51:18
_The documentation is not available anymore as the PR was closed or merged._<|||||>Not sure how to make these failing test pass. Need help<|||||>The following three test are failing : FAILED tests/models/opt/test_modeling_opt.py::OPTModelTest::test_load_with_mismatched_shapes - AssertionError: RuntimeError not raised FAILED tests/models/opt/test_modeling_opt.py::OPTModelTest::test_model_common_attributes - NotImplementedError FAILED tests/models/opt/test_modeling_opt.py::OPTModelTest::test_resize_tokens_embeddings - NotImplementedError Need help on how to fix them .<|||||>@NielsRogge Can you help me on how to fix the tests? <|||||>@ArthurZucker Please point on how to fix these errors<|||||>Hey, thanks a lot for this PR ! sorry for the delay I was OOO. 2 of the failed tests are quite simple to solve, it seems that the class is missing a function `get_input_embeddings`. One test is probably unrelated to your PR : `PegasusStandaloneDecoderModelTest` for that I would just recommend merging the latest updates from the main branch. I would have to dig a little bit deeper for the last test `OPTModelTest.test_load_with_mismatched_shapes` but it might be related to a missing `set_input_embedding` function, will check that <|||||>@younesbelkada Some tests are failing due to same model(OPTForSeqClassification) available in torch not available for tensorflow. Should I add it part of this PR ? Can we close this PR and I will raise another PR with TFOPTForSeqClassification ? <|||||>Hey @oneraghavan ! Thanks for your comment, I am not sure if not having `TFOPTForSeqClassification` explains why those tests are failing since the test should not return nothing [here](https://github.com/oneraghavan/transformers/blob/d672d9c54a5a329220a8dabb6b6e3f961fbdca5b/tests/test_modeling_common.py#L1767). The error says `AttributeError: decoder.embed_tokens.weight not found in TF 2.0 model` so it might be possible that the modifications you made on `OPTModel` and `OPTPretrainedModel` classes broke those tests. Also the git history seems to be broken, could you please rebase to main or merge with force-push to clean the git histoiry (aka the number of modified files has increased) 💪 Thanks again for your help here! And let us know if anything is unclear or if you need any help<|||||>Hey, I think the history is a bit messed up but it is alright (issues with merging I guess). Also, we should not have to change the `OPTPretrainedModel` base prefix. This can be a backward compatibility issue and should definitely be avoided. If you could just revert on that change it would be great. I think that it will solve the failing test and we will be able to merge 👍🏻 <|||||>> Hey @oneraghavan ! Thanks for your comment, I am not sure if not having `TFOPTForSeqClassification` explains why those tests are failing since the test should not return nothing [here](https://github.com/oneraghavan/transformers/blob/d672d9c54a5a329220a8dabb6b6e3f961fbdca5b/tests/test_modeling_common.py#L1767). The error says `AttributeError: decoder.embed_tokens.weight not found in TF 2.0 model` so it might be possible that the modifications you made on `OPTModel` and `OPTPretrainedModel` classes broke those tests. Also the git history seems to be broken, could you please rebase to main or merge with force-push to clean the git histoiry (aka the number of modified files has increased) 💪 Thanks again for your help here! And let us know if anything is unclear or if you need any help I Tried to do pull rebase from my main to huggingface repo. 
How do you want me to leave this PR? Just leave my commits on top ? <|||||>@ArthurZucker All tests fixed, we can close this PR.<|||||>> Looks good! Only left a comment regarding one addition. Done<|||||>Hey @oneraghavan do you have any checkpoints we could use for the documentation? It seems that you added some expected loss value and expected outputs, wanted to know if we can maybe use the model checkpoints for documentation (otherwise it is totally ok! I will use a dummy model 😄 )<|||||>@ArthurZucker I do not have any checkpoints, I used a dummy model.
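With the class from this PR merged, a hedged usage sketch (the checkpoint is the base language model, so the classification head starts freshly initialized and needs fine-tuning before its outputs mean anything):

```python
from transformers import AutoTokenizer, OPTForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = OPTForSequenceClassification.from_pretrained("facebook/opt-125m", num_labels=2)

inputs = tokenizer("This movie was great", return_tensors="pt")
logits = model(**inputs).logits  # shape (1, 2); untrained head, values not yet meaningful
```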
transformers
18,122
closed
TypeError: TextInputSequence must be str
### System Info - `transformers` version: 4.20.1 - Platform: Linux-3.10.0-1160.6.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.7 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.11.0+cu102 (True) - Tensorflow version (GPU?): 2.7.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @LysandreJik @SaulLu ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction step1. I downloaded bert-base-cased from https://huggingface.co/models, then I placed all files(config.json pytorch_model.bin tokenizer_config.json tokenizer.json vocab.txt) in the directory /transformers/examples/pytorch/text-classification/bert-base-cased step2. from https://github.com/nyu-mll/GLUE-baselines/download_glue_data.py, I got train.tsv and dev.tsv, then converted them to train.csv and validation.csv(both are three columns, namely label, sentence1, sentence2). I placed these two files in the directory /transformers/examples/pytorch/text-classification/ step3. python run_glue.py --model_name_or_path bert-base-cased --train_file train.csv --validation_file validation.csv --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 1 --output_dir ./output/ Then I got this error as shown below: Running tokenizer on dataset: 0%| | 0/4 [00:00<?, ?ba/s] Traceback (most recent call last): File "/root/zhaozhifeng/transformers/examples/pytorch/text-classification/run_glue.py", line 613, in <module> main() File "/root/zhaozhifeng/transformers/examples/pytorch/text-classification/run_glue.py", line 442, in main raw_datasets = raw_datasets.map( File "/root/anaconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 770, in map { File "/root/anaconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 771, in <dictcomp> k: dataset.map( File "/root/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2376, in map return self._map_single( File "/root/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 551, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/root/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 518, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/root/anaconda3/lib/python3.9/site-packages/datasets/fingerprint.py", line 458, in wrapper out = func(self, *args, **kwargs) File "/root/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2764, in _map_single batch = apply_function_on_filtered_inputs( File "/root/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2644, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) File "/root/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2336, in decorated result = f(decorated_item, *args, **kwargs) File "/root/zhaozhifeng/transformers/examples/pytorch/text-classification/run_glue.py", line 434, in preprocess_function result = tokenizer(*args, padding=padding, max_length=max_seq_length, truncation=True) File 
"/root/anaconda3/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2495, in __call__ return self.batch_encode_plus( File "/root/anaconda3/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2686, in batch_encode_plus return self._batch_encode_plus( File "/root/anaconda3/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py", line 426, in _batch_encode_plus encodings = self._tokenizer.encode_batch( TypeError: TextInputSequence must be str ### Expected behavior run run_glue.py and fine-tune on the pre-trained model successfully.
07-13-2022 09:15:19
07-13-2022 09:15:19
What is contained in your CSV files? Would you have a reproducible code example we can run in colab?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,121
closed
how to freeze the TFGPT2LMHeadModel embedding matrix?
### System Info
When I train with the TFGPT2LMHeadModel architecture, I want to freeze the embedding matrix. How can I do it? @patil-suraj

### Who can help?
_No response_

### Information
- [ ] The official example scripts
- [X] My own modified scripts

### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction
```
config = AutoConfig.from_pretrained(pretrained_model_path, vocab_size=VOCAB_SIZE, n_positions=MAX_SEQ_LEN, n_ctx=MAX_SEQ_LEN, n_layer=1, n_embd=384, initializer_range=0.002)
model = TFGPT2LMHeadModel.from_config(config)
embedding = np.load(embed_data_path)
model.set_input_embeddings(embedding)
```

### Expected behavior
Help freezing the embedding layer.
07-13-2022 07:44:42
07-13-2022 07:44:42
Hi @Orient12 👋 As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗 (You can set the `trainable` attribute of a layer to `False`, see [this guide](https://keras.io/guides/transfer_learning/#freezing-layers-understanding-the-trainable-attribute))<|||||>> Actually, class TFGPT2LMHeadModel model can not split by layer,it just have one layer, so i can't set embedding matrix trainable=False <|||||>@Orient12 Sure you can, you just need to know the right attribute names (for which you might need to dig through the code). The following snippet freezes the embedding layer. ```python from transformers import TFGPT2LMHeadModel model = TFGPT2LMHeadModel.from_pretrained("distilgpt2") print(model.transformer.wte) print(model.transformer.wte.trainable) # setting embeddings to not trainable model.transformer.wte.trainable = False print(model.transformer.wte.trainable) ``` Our models do not use the Sequential nor the Functional API -- they use the [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) method.<|||||>Thanks for your help!I have known how to freeze it!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
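A small follow-up sketch to the answer above: after toggling `trainable`, (re)compile and check the weight buckets to confirm the embeddings are frozen. The optimizer choice is arbitrary here and the layout follows the attribute names already shown in the thread.

```python
from transformers import TFGPT2LMHeadModel

model = TFGPT2LMHeadModel.from_pretrained("distilgpt2")
model.transformer.wte.trainable = False
model.compile(optimizer="adam")  # changes to `trainable` take effect at (re)compile time

# The wte weights should now appear under non_trainable_weights.
print(len(model.trainable_weights), len(model.non_trainable_weights))
```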
transformers
18,120
closed
Pipelines returns inconsistent results when using non-default model
### System Info Transformers version 4.19.2 Python 3.7.13 Ubuntu 16.04.6 LTS ### Who can help? @Narsil I've noticed that `pipeline` returns inconsistent results, after re-instantiating it, when supplying a non-standard model. See code below. - What is being returned and why does it change? - What exactly does `pipeline` do when you give it a non-default model or a model not trained for the specific task? - Since it doesn't necessarily make sense to use `bert-base-uncased` for a sentiment analysis task, should pipeline allow this? I don't get a warning or error. Is there a recommended way to tell pipeline to fail if the supplied model doesn't make sense? ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` >>> from transformers import pipeline >>> pipe = pipeline("sentiment-analysis", model="bert-base-uncased") >>> pipe("This restaurant is awesome") [{'label': 'LABEL_0', 'score': 0.5899267196655273}] >>> pipe = pipeline("sentiment-analysis", model="bert-base-uncased") >>> pipe("This restaurant is awesome") [{'label': 'LABEL_0', 'score': 0.5623320937156677}] >>> pipe = pipeline("sentiment-analysis", model="bert-base-uncased") >>> pipe("This restaurant is awesome") [{'label': 'LABEL_1', 'score': 0.5405012369155884}] ``` ### Expected behavior I would expect pipeline to either fail or give a warning message if given a model not trained for the task.
07-13-2022 02:04:53
07-13-2022 02:04:53
Hi @sjgiorgi , Did you disable the logs somehow ? When running your code you can see: ``` Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForSequenceClassification: ['cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.weight', 'cls.predictions.decoder.weight'] - This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` which *does* warn you about uninitialized weights and potential issues. Maybe you deactivated the warnings ?<|||||>Ah, yes, I see those warnings now (I had changed the logging) and apologize. Thank you. My concerns are: - While the warning message says "You should probably TRAIN..." it does not _explicitly_ say "We don't recommend using this model for classification". - Results are still being returned, albeit with different labels. It's not clear what these numbers mean or how something like `bert-base-uncased` is being used in a classification setting. Is there documentation on what is happening? So it's not clear how I should be catching model mismatch given that pipeline is returning something that looks reasonably formatted. Also, blindly adding pipeline to some code could produce some weird results and new users might not fully understand the current warning's implications. I've handled this on my side, where I check the returned pipeline dictionary results against known keys which _should_ be in the results (e.g., `bert-base-uncased` in the `sentiment-analysis` pipeline returns a label of `LABEL_0` as opposed to `POSITIVE` or `NEGATIVE`): ``` PIPELINE_RESULTS_BY_TASK = { "text-classification": ["POSITIVE", "NEGATIVE"], "sentiment-analysis": ["POSITIVE", "NEGATIVE"], "question-answering": ["answer"], "translation": ["translation_text"], "summarization": ["summary_text"], "token-classification": ["entity"], "ner": ["entity"], "text-generation": ["generated_text"], } ``` but I'm not sure if this will catch everything. <|||||>@sjgiorgi I do agree that it's easy to miss warnings, especially when running setups automatically and serving them for instance, those warnings might not be readily visible to you. The real culprit here, is that the model architecture you are trying to load is actually very capable of running the pipeline. But the model weights themselves are missing the layers the architecture is looking for (here it doesn't have the classification head). Catching the warning would be the best way to be 100% sure it works that way. Pinging a core maintainer to see if we have other solutions. 
My personal idea would be to enable a flag to raise a hard error on mismatched weights instead of a warning, and using that flag in pipelines because we really don't want to load from pretrained an incomplete model. It's a different story in Model.from_pretrained where it's actually a desired feature if you intend to finetune, @sgugger maybe ?<|||||>100% agree on the culprit. People will think "i know bert does classification" and then blindly use this model. Which is what we were doing :) Yes, a flag like that would be very useful. Thank you. <|||||>Not really in favor of that flag, as you could have weights not in the checkpoint that are not actually used, and thus still having the pipeline work.<|||||>Really ? But we're using `AutoModelForCausalLM` for instance. So extraneous weights can be safely ignored, but missing weights are almost always necessarily used, no ? Do you have an example of architecture where that fails? Thanks for the answer, you're probably right, I just can't find an example from my experience.<|||||>What does "work" mean though? In the example above, I'm using `bert-base-uncased` in the sentiment pipeline. As @Narsil pointed out, there is no classification head but a result is returned so in some sense it "works" (maybe we are using different senses of "work"). What is that number? How is it being calculated? Why does it change when I re-instantiate the pipeline? Regardless of how to handle all of this, these answers are not clear from the documentation. A flag, which could default to the current behavior, would at least allow end users to have some control over this. Edit: I see your point @sgugger, and yes we are using different senses of "work". And this explains why the warning message doesn't explicitly say "don't do this" (because it may be the case that it _is_ okay to do this). Is there a way to distinguish what you had in mind from my example (which doesn't actually work even though it returns well-formatted results)? <|||||>> What is that number? How is it being calculated? Why does it change when I re-instantiate the pipeline? Regardless of how to handle all of this, these answers are not clear from the documentation. By default the classification is created randomly. Then the correct weights are placed onto your model. Since those weights are missing we just don't place them. That's why outputs change all the time. the head is different all the times.<|||||>The problem is that we have *a lot* of architectures and while there shouldn't be any warning in theory if everything has been coded right, I can't guarantee there is not one that shows some warning because some of the internal class variables for the weights that should be ignored in that warning are not properly set (those keys would be tensors that are not set randomly but deterministically like the `position_ids` of BERT). That's why I'm not too much in favor of erroring instead of warning.<|||||>Makes sense ! Thanks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
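For readers who hit the same situation, a hedged sketch of one way to detect it programmatically before serving, rather than parsing warnings or label names: `output_loading_info=True` exposes which weights had to be freshly initialized. This is one possible check, not an official pipeline flag.

```python
from transformers import AutoModelForSequenceClassification

model, info = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", output_loading_info=True
)
if info["missing_keys"]:
    raise ValueError(f"Checkpoint is missing task-specific weights: {info['missing_keys']}")
```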
transformers
18,119
closed
Better messaging and fix for incorrect shape when collating data.
# What does this PR do? I ran into an error related to an incorrect shape of inputs when using DataCollatorForSeq2Seq. I learned it had to do with having excessively nested inputs for my features. The error message was not particularly useful. This PR adds an assertion checking for incorrectly shaped inputs to be collated. The assertion also provides a solution by suggesting to use `remove_excess_nesting` util. `remove_excess_nesting` removes excessive nesting from features within a `DatasetDict`. Fixes #15505 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @stas00
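An illustrative-only sketch of the idea described above; the helper actually proposed in this PR may differ. A feature like `[[1, 2, 3]]` has one nesting level too many for the collator, which expects `[1, 2, 3]` per example.

```python
def remove_excess_nesting(example, columns=("input_ids", "attention_mask", "labels")):
    # Unwrap single-element outer lists so each feature is a flat token list.
    for col in columns:
        value = example.get(col)
        if isinstance(value, list) and len(value) == 1 and isinstance(value[0], list):
            example[col] = value[0]
    return example

# dataset = dataset.map(remove_excess_nesting)  # with a datasets.Dataset / DatasetDict
```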
07-13-2022 01:57:26
07-13-2022 01:57:26
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,118
open
Model parallelism for m2m100
### Model description The translation model m2m100 proposed by Facebook is too large to train with plain DDP. Is there any open solution for model parallelism of m2m100, similar to what exists for GPT2? Thank you. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No response_
07-13-2022 01:53:12
07-13-2022 01:53:12
We recommend using accelerate to achieve parallelisation now: https://github.com/huggingface/accelerate
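A minimal sketch of that recommendation (assuming `accelerate` is installed; the checkpoint name is just one of the public M2M100 checkpoints):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# device_map="auto" lets Accelerate shard and dispatch the weights across the
# available GPUs (and CPU memory if needed) at load time.
tokenizer = AutoTokenizer.from_pretrained("facebook/m2m100_1.2B")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/m2m100_1.2B", device_map="auto")
```

Note that this only covers dispatching a model that is too large for a single device; for training, Accelerate's launcher and its DeepSpeed/FSDP integrations are the usual route.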
transformers
18,117
closed
Add summarization name mapping for MultiNews
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Adds the `text_column` and `summary_column` names to the to the `summarization_name_mapping` dictionary in `run_summarization.py`. This allows a user to use the script with [MultiNews](https://huggingface.co/datasets/multi_news) without having to specify these variables explicitly. Admittedly this is a tiny change but benefits anyone using MultiNews with this script. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger, @patil-suraj
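For context, the change amounts to one extra entry in the mapping; the column names below are taken from the MultiNews dataset card, so treat this as a sketch rather than the exact diff:

```python
summarization_name_mapping = {
    # ...existing dataset entries...
    "multi_news": ("document", "summary"),  # (text_column, summary_column)
}
```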
07-13-2022 01:33:42
07-13-2022 01:33:42
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,116
closed
supported python versions reference
# What does this PR do? Provides a reference to the supported python versions to get a development environment working. Fixes #18112 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger
07-13-2022 00:09:47
07-13-2022 00:09:47
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger another thing to consider is that we are only referencing the line number, so the moment the file is updated and the lines shift, it will link to something else. Any fixes come to mind? Or will that do for the time being?
transformers
18,115
closed
Add custom config to quicktour
This PR updates the quicktour to include a section for building custom configurations that creates a randomly initialized model. Other changes include: - Added a brief section for `Trainer`. - Switched back to the code switcher (instead of code blocks) for some code examples which showed essentially the same thing and didn't have drastically different text associated with them. I think this will reduce the amount of scrolling and improve user experience. - Minor maintenance work to improve conciseness.
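A minimal sketch of the kind of example the new section covers (argument values are illustrative):

```python
from transformers import DistilBertConfig, DistilBertModel

# Build a custom configuration, then instantiate a randomly initialized model from it.
config = DistilBertConfig(n_heads=8, dim=512, hidden_dim=4 * 512)
model = DistilBertModel(config)
```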
07-12-2022 22:28:06
07-12-2022 22:28:06
_The documentation is not available anymore as the PR was closed or merged._<|||||>Oops, sorry about all the changes! How about we keep the custom config section here and roll back all the other changes, which we can discuss in a separate issue?<|||||>Yes please!
transformers
18,114
closed
Added a verification step to the development contribution guide
# What does this PR do? Informs the user that they need a supported Python version to get a development environment working. Fixes #18112 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger
07-12-2022 19:55:06
07-12-2022 19:55:06
_The documentation is not available anymore as the PR was closed or merged._<|||||>closing due to unrelated code
transformers
18,113
closed
LayoutLMv3 image preparation code snippet does not work with PDFs
### System Info - `transformers` version: 4.20.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.9.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @NielsRogge ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction This is not a bug per se, but I wasn't sure how else to file it. The official LayoutLMv3 Transformers documentation indicates that PDF files can be directly processed; however, they can't -- at least, not with the current code snippets. For example, this [code snippet](https://huggingface.co/docs/transformers/model_doc/layoutlmv3#transformers.LayoutLMv3FeatureExtractor.__call__.example) has the lines: ``` from PIL import Image image = Image.open("name_of_your_document - can be a png file, pdf, etc.").convert("RGB") ``` However, `PIL.Image` cannot open PDFs. In fact, the [Pillow documentation](https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html?highlight=pdf#:~:text=.palm.-,PDF,-%23) indicates that PDFs are only writable. Reproduction is trivial, but, for completeness: 1. Download this pdf: https://slicedinvoices.com/pdf/wordpress-pdf-invoice-plugin-sample.pdf 2. Install Pillow: `pip install pillow` 3. Run this code: ```python from PIL import Image image = Image.open(<path_to_invoice.pdf>).convert("RGB") ``` Expected error: ``` UnidentifiedImageError: cannot identify image file '/Users/joe/Downloads/wordpress-pdf-invoice-plugin-sample.pdf' ``` ### Expected behavior The documentation should provide a working solution for processing PDFs. I did notice that the `__call__` implementation of the `LayoutLMv3FeatureExtractor` has an `images` argument that accepts numpy arrays and torch tensors, in addition to Image objects. So, I assume one or more of the following options is the correct workflow: 1. Read PDFs into a python object that can be converted to an PIL.Image type. 2. Read/transform PDFs into an array as expected by the feature extractor. 3. Convert PDFs to an image and proceed with PIL.Image However, as I'm new to document intelligence and modeling PDFs, I'll have to do some digging to identify the right solution. So, it would be nice if the documentation was updated so that others won't have to do the same. One work-around (or solution?) is to just convert the PDF to an image, e.g.: ```python import io from wand.image import Image as WImage import PIL local_path = "/Users/joe/Downloads/wordpress-pdf-invoice-plugin-sample.pdf" img = WImage(filename=local_path, resolution=100) # bigger image = PIL.Image.open(io.BytesIO(img.make_blob("png"))).convert("RGB") ``` It also [looks like](https://stackoverflow.com/questions/47599012/how-to-convert-a-wand-image-object-to-numpy-array-without-opencv) Wand supports exporting to Numpy `array`.
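Another workaround that could be mentioned in the docs (a sketch, assuming the `pdf2image` package and its poppler dependency are installed) is to rasterize the PDF first and then feed PIL images to the feature extractor:

```python
from pdf2image import convert_from_path  # pip install pdf2image (requires poppler)

# Convert each PDF page to a PIL image, then proceed as with any PNG/JPEG input.
pages = convert_from_path("wordpress-pdf-invoice-plugin-sample.pdf", dpi=200)
image = pages[0].convert("RGB")
```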
07-12-2022 19:54:08
07-12-2022 19:54:08
cc @NielsRogge <|||||>Yes, feel free to improve the docs as was done for LayoutLMv2 in #15293<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,112
closed
Cannot set up development environment on Python 3.10
### System Info - `transformers` version: 4.20.1 - Platform: Windows-10-10.0.19043-SP0 - Python version: 3.10.2 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @aaugustin @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. under the same environment conditions 2. go through the steps described in https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests 3. when you run `pip install -e ".[dev]"` you will see the following error > ERROR: Could not find a version that satisfies the requirement ray[tune]; extra == "dev" (from transformers[dev]) (from versions: none) ERROR: No matching distribution found for ray[tune]; extra == "dev" the full traceback: > Obtaining file:///C:/Projects/transformers Installing build dependencies ... done Getting requirements to build wheel ... done Preparing wheel metadata ... done Requirement already satisfied: filelock in c:\projects\transformers\env\lib\site-packages (from transformers==4.21.0.dev0) (3.7.1) Requirement already satisfied: packaging>=20.0 in c:\projects\transformers\env\lib\site-packages (from transformers==4.21.0.dev0) (21.3) Requirement already satisfied: pyyaml>=5.1 in c:\projects\transformers\env\lib\site-packages (from transformers==4.21.0.dev0) (6.0) Collecting regex!=2019.12.17 Using cached regex-2022.7.9-cp310-cp310-win_amd64.whl (262 kB) Requirement already satisfied: tqdm>=4.27 in c:\projects\transformers\env\lib\site-packages (from transformers==4.21.0.dev0) (4.64.0) Requirement already satisfied: numpy>=1.17 in c:\projects\transformers\env\lib\site-packages (from transformers==4.21.0.dev0) (1.23.1) Requirement already satisfied: huggingface-hub<1.0,>=0.1.0 in c:\projects\transformers\env\lib\site-packages (from transformers==4.21.0.dev0) (0.8.1) Requirement already satisfied: requests in c:\projects\transformers\env\lib\site-packages (from transformers==4.21.0.dev0) (2.28.1) Collecting tokenizers!=0.11.3,<0.13,>=0.11.1 Using cached tokenizers-0.12.1-cp310-cp310-win_amd64.whl (3.3 MB) Collecting phonemizer Using cached phonemizer-3.2.1-py3-none-any.whl (90 kB) Collecting tensorflow>=2.3 Using cached tensorflow-2.9.1-cp310-cp310-win_amd64.whl (444.1 MB) Collecting dill<0.3.5 Using cached dill-0.3.4-py2.py3-none-any.whl (86 kB) Collecting sentencepiece!=0.1.92,>=0.1.91 Using cached sentencepiece-0.1.96-cp310-cp310-win_amd64.whl (1.1 MB) Collecting onnxconverter-common Using cached onnxconverter_common-1.9.0-py2.py3-none-any.whl (78 kB) Collecting pyctcdecode>=0.3.0 Using cached pyctcdecode-0.3.0-py2.py3-none-any.whl (43 kB) Collecting ipadic<2.0,>=1.0.0 Using cached ipadic-1.0.0.tar.gz (13.4 MB) Collecting torchaudio Using cached torchaudio-0.12.0-cp310-cp310-win_amd64.whl (969 kB) Collecting unidic-lite>=1.0.7 Using cached unidic-lite-1.0.8.tar.gz (47.4 MB) Collecting sigopt Using cached sigopt-8.5.0-py2.py3-none-any.whl (182 kB) Collecting timeout-decorator Using cached timeout-decorator-0.5.0.tar.gz (4.8 kB) Collecting fugashi>=1.0 Using cached fugashi-1.1.2-cp310-cp310-win_amd64.whl (497 
kB) Collecting protobuf<=3.20.1 Using cached protobuf-3.20.1-cp310-cp310-win_amd64.whl (903 kB) Collecting hf-doc-builder>=0.3.0 Using cached hf_doc_builder-0.3.0-py3-none-any.whl (56 kB) Collecting flake8>=3.8.3 Using cached flake8-4.0.1-py2.py3-none-any.whl (64 kB) Collecting cookiecutter==1.7.3 Using cached cookiecutter-1.7.3-py2.py3-none-any.whl (34 kB) Collecting tf2onnx Using cached tf2onnx-1.11.1-py3-none-any.whl (440 kB) Collecting parameterized Using cached parameterized-0.8.1-py2.py3-none-any.whl (26 kB) Collecting pytest-xdist Using cached pytest_xdist-2.5.0-py3-none-any.whl (41 kB) Collecting unidic>=1.0.2 Using cached unidic-1.1.0.tar.gz (7.7 kB) Collecting sacrebleu<2.0.0,>=1.4.12 Using cached sacrebleu-1.5.1-py3-none-any.whl (54 kB) ERROR: Could not find a version that satisfies the requirement ray[tune]; extra == "dev" (from transformers[dev]) (from versions: none) ERROR: No matching distribution found for ray[tune]; extra == "dev" WARNING: You are using pip version 21.2.4; however, version 22.1.2 is available. You should consider upgrading via the 'C:\Projects\transformers\env\Scripts\python.exe -m pip install --upgrade pip' command. (this error stems the process significantly to the extent that I couldn't run tests as a result) ### Expected behavior When running this command in Python 10 the whole development process run without errors like it does with `Python 3.8.8`.
07-12-2022 18:41:06
07-12-2022 18:41:06
As the error clearly mentions, this is because the ray package does not offer a distribution for Python 3.10, so I would open the issue there :-)<|||||>As a matter of GitHub etiquette, pinging random people ain't cool.<|||||>@sgugger yeah, I forgot to mention that to resolve it you need to downgrade as mentioned in this issue https://github.com/ray-project/tune-sklearn/issues/169 but I figured ideally you would want development to work on any Python distribution @aaugustin [![image](https://user-images.githubusercontent.com/37946988/178571219-c79968a4-cecf-4eef-a6f5-53a30b50282d.png)](https://github.com/huggingface/transformers/commit/3233b58ad4aceb9d048b3c48cad44ef526470b53) <|||||>Exactly my point. Just because someone did something 3 years ago doesn't mean you can ping them. If I want to contribute to Hugging Face Transformers for free, I'll follow the repo!
transformers
18,111
closed
Word offsets of some fast tokenizers are not compatible with token classification pipeline label aggregation
### System Info - `transformers` version: 4.21.0.dev0 - Platform: macOS-12.4-x86_64-i386-64bit - Python version: 3.9.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0 (False) - Tensorflow version (GPU?): 2.9.1 (False) - Flax version (CPU?/GPU?/TPU?): 0.5.2 (cpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 - Using GPU in script?: N - Using distributed or parallel set-up in script?: N ### Who can help? Tagging @Narsil for pipelines and @SaulLu for tokenization. Let me know if I should tag anyone for specific models, but it's not really a model issue, except in terms of tokenization. ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I noticed this issue with a DeBERTa model, but it also affects some others. The high level issue is that some tokenizers include leading spaces in the offset indices, some exclude them, and some are configurable with `trim_offsets`. When offsets include leading spaces (equivalent to `trim_offsets==False`), the pipeline [word heuristic](https://github.com/huggingface/transformers/blob/afe5d42d8d1d80af911ed980c2936bfe887078f6/src/transformers/pipelines/token_classification.py#L294) doesn't work. The result is aggregating all tokens in the sequence to one label. Simple example: ```python model_name = "brandon25/deberta-base-finetuned-ner" model = AutoModelForTokenClassification.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ner_aggregate = pipeline("ner", model=model, tokenizer=tokenizer, ignore_labels=[], aggregation_strategy="max") ner_aggregate("We're from New York") ``` Result: ``` [{'entity_group': 'O', 'score': 0.9999778, 'word': " We're from New York", 'start': 0, 'end': 19}] ``` ### Expected behavior Expected result, something like: ``` [{'entity_group': 'O', 'score': 0.9999778, 'word': " We're from", 'start': 0, 'end': 10}, {'entity_group': 'O', 'score': 0.9xxx, 'word': "New York", 'start': 11, 'end': 19}] ``` If you'd like to see actual output, here's a [colab notebook with relevant models](https://colab.research.google.com/drive/1bcWotnqSPNIuAaRNkELKmKiLQheudHu1?usp=sharing) for comparison. This affects at least these: - DeBERTa V1 - DeBERTa V2/3 - GPT2 (tested because `DebertaTokenizerFast` is a subclass of `GPT2TokenizerFast`) - Depending on config, Roberta (and any other tokenizer that honors `trim_offsets==False`) The easiest solution would be to update the heuristic. [Here is a change](https://github.com/davidbenton/transformers/commit/5c43c63d401f80818d95e9cafb627607680f4dff) that works for preceding space in sequence (like current heuristic) _or_ leading space in token. I can turn into a PR if desired. I know a lot of the default configuration matches reference implementations or published research, so I'm not sure where inconsistencies between tokenizers are desired behavior. I did notice, for example, that some sentencepiece tokenizers include leading spaces in offset indices (DeBERTa V2/3), and some don't (Albert, XLNet). I looked at the converter config and the rust code (which is pretty opaque to me), but it's not obvious to me why the offsets are different. Do you know, @SaulLu? Is that expected? I am comparing different architectures to replace a production Bert model and was evaluating models fine tuned on an internal dataset when I ran into this. 
I have my manager's blessing to spend some time on this (and already have! 😂), so I'm happy to work on a PR or help out how I can.
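To make the proposed change concrete, here is a sketch of the relaxed word-boundary check (not the exact code in the linked commit): a token starts a new word if either the character before its offset span or the first character inside the span is a space.

```python
def starts_new_word(sentence: str, start_ind: int) -> bool:
    """Sketch of the relaxed heuristic; works whether or not offsets are trimmed."""
    previous_is_space = start_ind == 0 or sentence[start_ind - 1] == " "
    offset_has_leading_space = sentence[start_ind : start_ind + 1] == " "
    return previous_is_space or offset_has_leading_space
```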
07-12-2022 15:57:06
07-12-2022 15:57:06
Thank you very much for the detailed issue, that's a good point! > I know a lot of the default configuration matches reference implementations or published research, so I'm not sure where inconsistencies between tokenizers are desired behavior. I did notice, for example, that some sentencepiece tokenizers include leading spaces in offset indices (DeBERTa V2/3), and some don't (Albert, XLNet). I looked at the converter config and the rust code (which is pretty opaque to me), but it's not obvious to me why the offsets are different. Do you know, @SaulLu? Is that expected? I reassure you, it is not obvious to me either why the offsets are different :smile: . In principle I think it's not a problem that the default value is different. But the problem is that for the moment, for many tokenizers it is not possible to change this value. Technically for 2 reasons: 1) for some we don't expose the argument at init and 2) for others, even if we did we couldn't change it some processors such as `Templateprocessor` don't allow to set it (I think it's the case for deberta). I'll ping you in particular Narsil: 1. do you think it's worth changing the heuristic for tokenizers that have `trim_offsets` set to False? I think you're the more knowledgeable for this :smile: 2. does it make sense in principle to allow this argument to be set for all tokenizers that use a rust component that allows you to choose whether you want to trim the offsets or not? I have the impression that the NER use case is one of the main use cases for offsets in general 3. If you agree with the previous point, we will be blocked by the fact that Deberta doesn't use the `bytelevel` processor but the `TemplateProcessor` (which doesn't allow to choose trim_offset). We can surely leverage the processor sequence feature that @mishig25 is doing inside the tokenizers library https://github.com/huggingface/tokenizers/pull/1005 to solve it. Note to @Narsil , I'm using "deberta-base" to reproduce the issue.<|||||>I agree that different defaults are not a problem, and it would be great if `trim_offsets` was configurable on more tokenizers. Maybe with a user warning if specifying it on an unsupported tokenizer? I'll make a case briefly for updating the heuristic: It would be good if more people tried non-Bert models out for their tasks. For new users, the pipelines and associated doc recipes with a fine-tuned hub model can be a "hello world" for trying out transformers. Having the word heuristic work with default settings (without having to know about `trim_offsets`) for most models will mean more users have early success when they venture away from Bertland.<|||||>> f you agree with the previous point, we will be blocked by the fact that Deberta doesn't use the bytelevel processor but the TemplateProcessor (which doesn't allow to choose trim_offset). We can surely leverage the processor sequence feature that @mishig25 is doing inside the tokenizers library This is the long term solution. As what's the good default for the tokenizer, it's not really up to me to decide, but the tokenizer's creator. I really like not trimming offets, and tokenizers like gpt2 which treat the *full string* as something that has to be fed to the model, meaning spaces have to be somewhere. This is a really great feature as it alleviates lots of headaches about "skipped" spaces, decoding issues and so on. 
But not all tokenizers are created equal and it's sometime more convenient to trim offsets (for whatever reason) or even it was just done that way in the original implem. > I have the impression that the NER use case is one of the main use cases for offsets in general NER, POS, question-answering, and even mask filling when you want to make a correct replacement and don't have access to the ids. ANY task, which has to treat the original string really. It's also super helpful to debug whenever ids are *incorrect* and you want to know why (like weird unicode looking like ascii but screwing your results). > I'll make a case briefly for updating the heuristi @davidbenton I really like the idea of updating the heuristic. As long as it's clear it's a heuristic and not a *real* solution (aggregation_mode="simple" is the only real solution IMHO, others are workaround to poorly performing models adding extra bias) The heuristic should be simple and elegant, your current solution is ! So I really like that fix. I think we can make it even simpler and commented directly on the diff . Now to make it become a PR, I think it's really important is to add the slow test which is exactly shown here above. Then add a fast test which deals only with the aggregation strategy (we can extract the actual entities from the slow test to have a real example), and add it as a fast test (so any regression is detected early on). All the other tests should help cover any regression this heuristic change might introduce (I hope it doesn't). Does that strategy seem viable ? Would you have time to set up such a PR ? Cheers !! And thanks for bringing that up, if there's room for improvement in the pipelines I am all up for it. (But I am relatively convinced, that it's impossible to be 100% correct as "words" are ill defined in some languages ;)) <|||||>Thanks a lot for your feedback @Narsil! Super good points! For the heuristics improvement, couldn't we test if the `trim_offset` argument is defined in the processor of the `backend_tokenizer`, and if: - yes and it is set to True keep the old logic - otherwise use your logic @davidbenton that tests the first character as you suggest? (I share the same conviction as you @Narsil that it's impossible to be 100% correct as "words" are ill-defined in some languages )<|||||>`trim_offsets` cannot be linked the heuristic IMO. The heuristic is just trying to determine if what we're looking at is a "word". currently it only looks like if the previous character before the offset ends with a space. But prefix space could also exist so checking that the first character (in the original string) corresponding this token is a space is also valid IMO. Again extremely biased towards space separated language, but working. I may have to dive and see really what the issue is, but this is my current understanding without exactly looking at the issue in detail.<|||||>> trim_offsets cannot be linked the heuristic IMO. I understand your point, I also realize that my proposal would not be adapted to tokenizers that have a pre-tokenizer that splits on spaces and removes them! <|||||>> I understand your point, I also realize that my proposal would not be adapted to tokenizers that have a pre-tokenizer that splits on spaces and removes them! Oh it's perfectly fine if that's the desired behavior we want. But I don't think we should bend backwards to make que QA pipeline work within a mode where it tries to recover because a model doesn't work properly ;). 
And the tokenizer cannot provide "words" boundaries (because it just wasn't made that way)<|||||>I'm not sure how to read your answer ahah. The tokenizer I have in mind is for example Bert's: Bert's tokenizer doesn't have trim_offset set to True, but the spaces are removed during the pre-tokenization step and the "words" boundaries are built the other way by adding "##" to the token that doesn't start a word.<|||||>Thanks for the comments and direction! I'm sorry to let this go stale, but I had a family emergency and this dropped off my radar. I should be available going forward, and I'll work on adding the tests mentioned above and set up a PR. @Narsil I answered your question about the heuristic in context on my commit. <|||||>@SaulLu I'll note, relating to your suggestion to branch on `trim_offset`, that as my suggested heuristic works now, the logic is unchanged for models that do not tokenize whitespace, as it only checks in the decoded token for a leading space.<|||||>@davidbenton Perfect. I think you can do the modifications, and we would really benefit if there was a test making sure that the new heuristic works. For instance here is a slow test to test the heuristics for spanish: https://github.com/huggingface/transformers/blob/main/tests/pipelines/test_pipelines_token_classification.py#L200-L215<|||||>I have a couple tests added in my local wip, but it looks like there might be a borken pipeline test prior to my changes. The "UN" start/end entity offsets don't seem to match the input sequence on [this line](https://github.com/huggingface/transformers/blob/51227e26ab8fe6d1a19804da697786649f9340e3/tests/pipelines/test_pipelines_token_classification.py#L289), along with a few other diffs. @Narsil Is this expected, or should I be looking for green (or skipped) tests before I create a PR? (FYI I'm running `RUN_SLOW=1 RUN_PIPELINE_TESTS=yes pytest tests/pipelines/test_pipelines_token_classification.py`; will also run the full suite once that's looking good.)<|||||>@davidbenton , If all **fast** tests pass, you should be fine. They are tested on every commit, so they should be green. For slow tests, we run them before releases and in controlled environments, they are sometimes affected by `torch` version or `python` version. Usually the differences are minor so we can decide how to deal with them on a case-by-case basis. But for a PR it shouldn't be blocking (that's why we try to have good fast tests, as they are run very often, the slow tests are more like integration tests, usually when we need an actual trained model output to showcase something)<|||||>Yeah, that all makes sense. Are we sure slow pipeline tests are ~~running~~ being run? That test I linked has start/end offsets that seem to be incorrect (past the end of the input). I just wanted to flag that, but I'll go ahead and get that PR up too.<|||||>Thanks for flagging, I am looking into it right now :)<|||||>@davidbenton what's your environement ? I can't seem to reproduce on my local env Do you mind creating a new issue for this ? Report it like a regular bug, there should be tools to print your exact env. https://github.com/huggingface/transformers/issues/new?assignees=&labels=bug&template=bug-report.yml As I said, slow tests can be sometimes a little more flaky that fast tests, but usually within acceptable bounds (pytorch will modify kernels which affects ever so slightly values, but it can pile up, Python version can break dictionary order etc..)
transformers
18,110
closed
TF: `unpack_inputs` decorator independent from `main_input_name`
# What does this PR do? As the title indicates -- the `unpack_inputs` decorator becomes independent from `main_input_name` in this PR. The old `input_processing` included some checks for `input_ids`, which somewhat transitioned into the `unpack_inputs` decorator (the `input_ids` input was obtained there from the argument under `main_input_name`). However, in practice, it is not needed -- what we want is to support the case where all model arguments come packed in the first input, which happens to be `main_input_name` for most use cases of `unpack_inputs`. Note that Keras often expects this packing behavior with input dictionaries, which `input_processing` maps back to our expected format. Fixes #18040
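For reference, this is the packing behavior being referred to, sketched with a small public checkpoint: Keras-style calls pass every model argument inside the first positional input, and the decorator unpacks them.

```python
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFAutoModel.from_pretrained("distilbert-base-uncased")

batch = tokenizer(["hello world"], return_tensors="tf")
# Equivalent to model(input_ids=..., attention_mask=...); the decorator unpacks the dict.
outputs = model(dict(batch))
```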
07-12-2022 15:25:27
07-12-2022 15:25:27
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,109
closed
"ValueError: initial_value must be specified." error when compiling bert for text classification
### System Info I'm following the hugging face tutorial for the sequence classification and while trying to fine-tune 'distilbert-base-multilingual-cased' I'm having the following error when running the model.compile() method. I'm having the same error when using bert-uncased. ``` ValueError Traceback (most recent call last) <ipython-input-22-68f5487774e6> in <module> ----> 1 model.compile(optimizer=optimizer) /tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/transformers/modeling_tf_utils.py in compile(self, optimizer, loss, metrics, loss_weights, weighted_metrics, run_eagerly, steps_per_execution, **kwargs) 1035 run_eagerly=run_eagerly, 1036 experimental_steps_per_execution=steps_per_execution, -> 1037 **kwargs, 1038 ) 1039 /tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in compile(self, optimizer, loss, metrics, loss_weights, weighted_metrics, run_eagerly, **kwargs) 547 experimental_steps_per_execution = kwargs.pop( 548 'experimental_steps_per_execution', 1) --> 549 self._configure_steps_per_execution(experimental_steps_per_execution) 550 551 # Initializes attrs that are reset each time `compile` is called. /tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs) 455 self._self_setattr_tracking = False # pylint: disable=protected-access 456 try: --> 457 result = method(self, *args, **kwargs) 458 finally: 459 self._self_setattr_tracking = previous_value # pylint: disable=protected-access /tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _configure_steps_per_execution(self, steps_per_execution) 581 steps_per_execution, 582 dtype='int64', --> 583 aggregation=variables.VariableAggregationV2.ONLY_FIRST_REPLICA) 584 585 @property /tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in __call__(cls, *args, **kwargs) 260 return cls._variable_v1_call(*args, **kwargs) 261 elif cls is Variable: --> 262 return cls._variable_v2_call(*args, **kwargs) 263 else: 264 return super(VariableMetaclass, cls).__call__(*args, **kwargs) /tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in _variable_v2_call(cls, initial_value, trainable, validate_shape, caching_device, name, variable_def, dtype, import_scope, constraint, synchronization, aggregation, shape) 254 synchronization=synchronization, 255 aggregation=aggregation, --> 256 shape=shape) 257 258 def __call__(cls, *args, **kwargs): /tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in getter(**kwargs) 65 66 def getter(**kwargs): ---> 67 return captured_getter(captured_previous, **kwargs) 68 69 return getter /tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py in creator(next_creator, **kwargs) 2855 def creator(next_creator, **kwargs): 2856 _require_strategy_scope_strategy(strategy) -> 2857 return next_creator(**kwargs) 2858 2859 self._var_creator_scope = variable_scope.variable_creator_scope(creator) /tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in <lambda>(**kws) 235 shape=None): 236 """Call on Variable class. 
Useful to force the signature.""" --> 237 previous_getter = lambda **kws: default_variable_creator_v2(None, **kws) 238 for _, getter in ops.get_default_graph()._variable_creator_stack: # pylint: disable=protected-access 239 previous_getter = _make_getter(getter, previous_getter) /tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py in default_variable_creator_v2(next_creator, **kwargs) 2644 synchronization=synchronization, 2645 aggregation=aggregation, -> 2646 shape=shape) 2647 2648 /tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in __call__(cls, *args, **kwargs) 262 return cls._variable_v2_call(*args, **kwargs) 263 else: --> 264 return super(VariableMetaclass, cls).__call__(*args, **kwargs) 265 266 /tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py in __init__(self, initial_value, trainable, collections, validate_shape, caching_device, name, dtype, variable_def, import_scope, constraint, distribute_strategy, synchronization, aggregation, shape) 1516 aggregation=aggregation, 1517 shape=shape, -> 1518 distribute_strategy=distribute_strategy) 1519 1520 def _init_from_args(self, /tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py in _init_from_args(self, initial_value, trainable, collections, caching_device, name, dtype, constraint, synchronization, aggregation, distribute_strategy, shape) 1594 synchronization, aggregation, trainable, name)) 1595 if initial_value is None: -> 1596 raise ValueError("initial_value must be specified.") 1597 init_from_fn = callable(initial_value) 1598 ValueError: initial_value must be specified. ``` tensorflow version: 2.3.0 transformers version: 4.20.1 Python version: 3.7 Any ideas? thanks in advance. ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python model_name = 'distilbert-base-multilingual-cased' model = TFAutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3) batch_size = 16 num_epochs = 5 batches_per_epoch = len(train) // batch_size total_train_steps = int(batches_per_epoch * num_epochs) optimizer, schedule = create_optimizer(init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) model.compile(optimizer=optimizer, loss=loss) ``` ### Expected behavior a compilation without an error.
07-12-2022 14:49:56
07-12-2022 14:49:56
cc @Rocketknight1 @gante <|||||>Hi @djellalmohamedaniss, your version of TF is quite old - our support for TF 2.3 is very shaky, and we prefer TF >= 2.4, and TF 2.8 or 2.9 are even better! Can you check with a more recent version of TF and let us know if the problem still exists?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,108
closed
CLI: reenable `pt_to_tf` test
# What does this PR do? Reenables the `pt_to_tf` CLI test that was disabled a few days ago. ⚠️ Before this PR is merged, [this](https://huggingface.co/hf-internal-testing/tiny-random-gptj/discussions/1) hub PR must be merged, and CI must be rerun. This test model is also used in `tests/deepspeed/test_model_zoo.py`, not sure if the change will have implications there. The problem was not due to code, but rather to problems in the config file of the test model. When `config.rotary_dim` is larger than `self.head_dim` (which is `config.hidden_size` divided by `config.num_attention_heads`), then we try to slice out of bounds in some tensors, causing downstream dimension-related exceptions -- e.g. [slicing here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gptj/modeling_tf_gptj.py#L229).
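For readers hitting the same failure, the invariant the broken test config violated can be written down as follows (a sanity-check sketch, not code from this PR):

```python
from transformers import GPTJConfig

config = GPTJConfig()  # or whichever GPT-J config is under inspection
head_dim = config.hidden_size // config.num_attention_heads
if config.rotary_dim is not None and config.rotary_dim > head_dim:
    raise ValueError(f"rotary_dim ({config.rotary_dim}) must not exceed head_dim ({head_dim})")
```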
07-12-2022 11:07:20
07-12-2022 11:07:20
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,107
closed
Help needed: error when loading a BERT model
### System Info OS: Windows-10-10.0.19041-SP0 Python: 3.8.3 PyTorch: 1.12.0+cpu TensorFlow: 2.6.0 HanLP: 2.1.0-beta.36 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I tried the 2 options: 1. recog = hanlp.load('MSRA_NER_BERT_BASE_ZH') OR 2. recog = hanlp.load(hanlp.pretrained.ner.MSRA_NER_BERT_BASE_ZH) Error Log: >>> import hanlp >>> recognizer = hanlp.load(hanlp.pretrained.ner.MSRA_NER_BERT_BASE_ZH) 2022-07-12 16:32:32.871385: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found 2022-07-12 16:32:32.871563: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. 2022-07-12 16:32:44.634072: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found 2022-07-12 16:32:44.634254: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303) 2022-07-12 16:32:44.654910: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: SZH-C-000XF 2022-07-12 16:32:44.655263: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: SZH-C-000XF 2022-07-12 16:32:51.161833: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. Failed to load https://file.hankcs.com/hanlp/ner/ner_bert_base_msra_20211227_114712.zip. If the problem still persists, please submit an issue to https://github.com/hankcs/HanLP/issues When reporting an issue, make sure to paste the FULL ERROR LOG below. 
================================ERROR LOG BEGINS================================ OS: Windows-10-10.0.19041-SP0 Python: 3.8.3 PyTorch: 1.12.0+cpu TensorFlow: 2.6.0 HanLP: 2.1.0-beta.36 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\hanlp\__init__.py", line 43, in load return load_from_meta_file(save_dir, 'meta.json', verbose=verbose, **kwargs) File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\hanlp\utils\component_util.py", line 175, in load_from_meta_file raise e from None File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\hanlp\utils\component_util.py", line 99, in load_from_meta_file obj.load(save_dir, verbose=verbose, **kwargs) File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\hanlp\common\keras_component.py", line 214, in load self.build(**merge_dict(self.config, training=False, logger=logger, **kwargs, overwrite=True, inplace=True)) File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\hanlp\common\keras_component.py", line 224, in build self.model = self.build_model(**merge_dict(self.config, training=kwargs.get('training', None), File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\hanlp\components\taggers\transformers\transformer_tagger_tf.py", line 34, in build_model model, tokenizer = build_transformer(transformer, max_seq_length, len(self.transform.tag_vocab), tagging=True) File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\hanlp\layers\transformers\loader_tf.py", line 11, in build_transformer tokenizer = AutoTokenizer_.from_pretrained(transformer) File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\hanlp\layers\transformers\pt_imports.py", line 68, in from_pretrained tokenizer = cls.from_pretrained(get_tokenizer_mirror(transformer), use_fast=use_fast, File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\transformers\models\auto\tokenization_auto.py", line 535, in from_pretrained config = AutoConfig.from_pretrained( File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\transformers\models\auto\configuration_auto.py", line 705, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\transformers\configuration_utils.py", line 553, in get_config_dict config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs) File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\transformers\configuration_utils.py", line 641, in _get_config_dict raise EnvironmentError( OSError: Can't load config for 'bert-base-chinese'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bert-base-chinese' is the correct path to a directory containing a config.json file ### Expected behavior fix the error
07-12-2022 09:02:37
07-12-2022 09:02:37
please help guys! thanks so much in advance!<|||||>what is your transformers version?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,106
closed
speed up Nezha model tests
# What does this PR do? This PR speeds up the Nezha tests, which @sgugger pointed out were slow. On my machine, this change speeds up the tests by about 80% (~160s -> ~20s). I think we should merge this instead of #18103. <!-- Remove if not applicable --> Fixes #18103 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger @LysandreJik
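The PR body does not spell out where the time goes, but the usual fix for slow common tests is to shrink the tester's model configuration; a sketch of what that typically looks like (values are illustrative, not the PR's exact numbers):

```python
class NezhaModelTesterSketch:
    """Illustrative only: tiny dimensions keep the common model tests fast."""

    def __init__(self):
        self.batch_size = 13
        self.seq_length = 7
        self.hidden_size = 32          # instead of a BERT-base-sized 768
        self.num_hidden_layers = 5
        self.num_attention_heads = 4
        self.intermediate_size = 37
```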
07-12-2022 03:27:00
07-12-2022 03:27:00
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @sijunhe, I can confirm that this does significantly speed up the tests: ``` slowest durations 5.00s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_torch_fx_output_loss 4.70s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_torch_fx 3.61s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_model_outputs_equivalence 1.71s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_attention_outputs 1.69s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_save_load_fast_init_to_base 1.68s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_save_load_fast_init_from_base 1.06s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_training 1.03s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_head_pruning_integration 0.95s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_save_load 0.90s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_head_pruning_save_load_from_pretrained 0.83s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_feed_forward_chunking 0.77s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_correct_missing_keys 0.74s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_hidden_states_output 0.56s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_load_with_mismatched_shapes 0.53s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_training_gradient_checkpointing 0.51s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_resize_tokens_embeddings 0.47s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_headmasking 0.46s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_tie_model_weights 0.45s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_head_pruning_save_load_from_config_init 0.45s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_forward_signature 0.43s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_head_pruning 0.43s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_save_load_keys_to_ignore_on_save 0.42s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_determinism 0.41s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_gradient_checkpointing_backward_compatibility 0.39s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_inputs_embeds 0.39s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_resize_embeddings_untied 0.38s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_gradient_checkpointing_enable_disable 0.36s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_initialization 0.35s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_model_common_attributes 0.18s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_problem_types 0.11s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_model_as_decoder 0.10s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_model_as_decoder_with_default_input_mask 0.06s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_model 0.06s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_for_masked_lm 0.05s call 
tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_for_sequence_classification 0.05s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_for_multiple_choice 0.05s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_retain_grad_hidden_states_attentions ``` <|||||>Thanks for fixing them!
transformers
18,105
closed
Add support for Sagemaker Model Parallel >= 1.10 new checkpoint API
# What does this PR do? This PR adds support for Sagemaker Model Parallel >= 1.10's new checkpoint API as well as keeping SMP < 1.10 functionality. * Support loading checkpoints saved with SMP < 1.10 in SMP < 1.10 and SMP >= 1.10 * Support loading checkpoints saved with SMP >= 1.10 in SMP >= 1.10 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-12-2022 00:26:43
07-12-2022 00:26:43
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for iterating on this PR! It looks like a lot of CI failures are fixed on the main branch. Could you do a quick rebase so we can make sure this PR does not break anything?<|||||>It looks like GitHub did not like this rebase as it now shows 290 files changed. Could you open a clean new PR after fixing the merge conflicts?<|||||>> It looks like GitHub did not like this rebase as it now shows 290 files changed. Could you open a clean new PR after fixing the merge conflicts? Yeah, will do!<|||||>New clean PR at #18221!
transformers
18,104
closed
gpt2 results with past_key_values not the same as when computed from scratch
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.0-89-generic-x86_64-with-glibc2.31 - Python version: 3.9.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu113 (False) - Tensorflow version (GPU?): 2.9.1 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @patil-suraj @patrickvonplaten @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Below is a minimal example that reproduces this unexpected behavior I encountered while tinkering with past_key_values. Essentially when I cache keys and values from a padded batch and then use past_key_values to run forward on an additional token for each example in the batch, I get somewhat different results than if I just compute the whole inputs from scratch and look at the last tokens. It seems that something is going wrong when past_key_values involves some padding, however I believe I am using attention_mask correctly by including the masking strategy that was used for past_key_values as specified in the docs. ``` python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model = AutoModelForCausalLM.from_pretrained('gpt2') tokenizer = AutoTokenizer.from_pretrained('gpt2') tokenizer.pad_token = tokenizer.eos_token s = ["a b c", "l m n o"] inputs1 = tokenizer(s, return_tensors='pt', padding=True) outputs1 = model(**inputs1) s = [" d", " p"] inputs2 = tokenizer(s, return_tensors='pt', padding=True) attention_mask = torch.cat((inputs1['attention_mask'], inputs2['attention_mask']), dim=1) outputs2 = model(input_ids=inputs2['input_ids'], attention_mask=attention_mask, past_key_values=outputs1.past_key_values) s = ["a b c d", "l m n o p"] inputs_full = tokenizer(s, return_tensors='pt', padding=True) outputs_full = model(**inputs_full) assert torch.allclose(outputs2.logits[1,0],outputs_full.logits[1,-1]) # are second example last token logits the same? -> passes assert torch.allclose(outputs2.logits[0,0], outputs_full.logits[0,-2]) # are first example last token logits the same? -> fails ``` ### Expected behavior The expected behavior would be for the logits of given tokens to be the same regardless of whether past_key_values is used for preceding tokens or if the full inputs are computed from scratch. Thanks so much for all your hard work on this great library!
07-12-2022 00:18:28
07-12-2022 00:18:28
On further inspection, I believe the source of the difference is the `position_ids`. When the batched and padded `past_key_values` are used, the default `position_ids` are computed by [this code](https://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/models/gpt2/modeling_gpt2.py#L791): ``` python if past_key_values is None: past_length = 0 past_key_values = tuple([None] * len(self.h)) else: past_length = past_key_values[0][0].size(-2) if position_ids is None: position_ids = torch.arange(past_length, input_shape[-1] + past_length, dtype=torch.long, device=device) position_ids = position_ids.unsqueeze(0).view(-1, input_shape[-1]) ``` Because the past_length includes the padded parts of past_key_values, this will cause the `position_ids` for the new tokens to be different than if everything is computed from scratch. I tested and if you modify my minimal example in the original post with `position_ids = torch.tensor([[3],[4]],dtype=torch.int64)` and pass that to the model forward pass, both asserts now pass. So just manually specifying the `position_ids` solves this problem.<|||||>I won't have time to look into this I'm afraid. @ArthurZucker could you give it a try? <|||||>Yep, I will have a look asap <|||||>So! Sorry for the late reply. My first answer would be that the `attention_mask` and the inputs are different. - In the first case, you are feeding `[ 64, 275, 269, 50256]` and then `[288]` with the combined attention mask : `[1, 1, 1, 0, 1]`. - In the second case, you are feeding `[ 64, 275, 269, 288, 50256]` with attention mask `[1, 1, 1, 1, 0]`. I thought that using `padding_side='left'` would fix it, let me investigate! <|||||>Okay this fixes it for me : ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model = AutoModelForCausalLM.from_pretrained('gpt2') tokenizer = AutoTokenizer.from_pretrained('gpt2', padding_side="left") tokenizer.pad_token = tokenizer.eos_token s = ["a b c", "l m n o p q r s t u v w x y"] inputs1 = tokenizer(s, return_tensors='pt', padding=True) # First sequence is indeed padded : [ 64, 275, 269, 50256] outputs1 = model(**inputs1) s = [" d", " z"] inputs2 = tokenizer(s, return_tensors='pt', padding=True) attention_mask = torch.cat((inputs1['attention_mask'], inputs2['attention_mask']), dim=-1) # outputs1.past_key_values[0][0].shape # torch.Size([2, 12, 4, 64]) outputs2 = model(input_ids=inputs2['input_ids'], attention_mask=attention_mask, past_key_values=outputs1.past_key_values) s = ["a b c d", "l m n o p q r s t u v w x y z"] inputs_full = tokenizer(s, return_tensors='pt', padding=True) outputs_full = model(**inputs_full) assert torch.allclose(outputs2.logits[1,0],outputs_full.logits[1,-1]) # are second example last token logits the same? -> passes assert torch.allclose(outputs2.logits[0,0], outputs_full.logits[0,-1]) # are first example last token logits the same? -> fails ``` <|||||>@ArthurZucker thanks for looking into this! Yes using `padding_side="left"` seems like a great solution to this issue! I'm curious what is the intended path for users to figure out this usage? I can see how the most common use case for `past_key_values` is sequential decoding, in which case batched generation will already mandate left padding. However there may be some other users like myself that are using past_key_values to compute likelihoods of a set of reference texts that all have some shared prefix that can be cached with past_key_values. 
In that case, the necessity of left padding wont emerge until one considers what will happen to the `position_ids` as we have here. I wonder if the [documentation](https://github.com/huggingface/transformers/blob/31d452c68b34c2567b62924ee0df40a83cbc52d5/src/transformers/models/gpt2/modeling_gpt2.py#L558) for the `past_key_values` and `attention_mask` parameters of `forward` could mention that left padding will preserve the `position_ids`. Below is a possibility with changes in bold. It's just a thought, in case it might be helpful. Thank you for your consideration! > past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. **However, the attention_mask of given past input_ids does need to be provided (see attention_mask).** > > attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: > 1 for tokens that are not masked, > 0 for tokens that are masked. > If past_key_values is used, attention_mask needs to contain the masking strategy that was used for past_key_values. In other words, the attention_mask always has to have the length: len(past_key_values) + len(input_ids). **For batching with past_key_values, left padding is required to make uninterrupted attention_masks that preserve position_ids.**<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@ArthurZucker what if the suffixes (`[" d", " z"]` in the example) have a different number of tokens? I changed the suffixes to `[" d e", " z"]` and don't get the expected result ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model = AutoModelForCausalLM.from_pretrained('gpt2') tokenizer = AutoTokenizer.from_pretrained('gpt2', padding_side="left") tokenizer.pad_token = tokenizer.eos_token s = ["a b c", "l m n o p q r s t u v w x y"] inputs1 = tokenizer(s, return_tensors='pt', padding=True) outputs1 = model(**inputs1) s = [" d e", # add e " z"] inputs2 = tokenizer(s, return_tensors='pt', padding=True) attention_mask = torch.cat((inputs1['attention_mask'], inputs2['attention_mask']), dim=-1) outputs2 = model(input_ids=inputs2['input_ids'], attention_mask=attention_mask, past_key_values=outputs1.past_key_values) s = ["a b c d e", # add e "l m n o p q r s t u v w x y z"] inputs_full = tokenizer(s, return_tensors='pt', padding=True) outputs_full = model(**inputs_full) assert torch.allclose(outputs2.logits[0,-1], outputs_full.logits[0,-1]) # are first example last token logits the same? -> fails assert torch.allclose(outputs2.logits[1,-1], outputs_full.logits[1,-1]) # are second example last token logits the same? -> fails ``` Edit: I think I have a general solution. 
Will add another comment<|||||>A general solution (general meaning: prefixes can have a different number of tokens, and suffixes can have a different number of tokens) is to create and supply `position_ids` as @IanMagnusson found [above](https://github.com/huggingface/transformers/issues/18104#issuecomment-1182489082). I also think right-padding is the more correct solution b/c prefix position ids are the same as they were if there was no padding. Demo ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained('gpt2') tokenizer = AutoTokenizer.from_pretrained('gpt2') tokenizer.pad_token = tokenizer.eos_token # allow batching if not tokenizer.padding_side == 'right': raise ValueError('Gotta use right padding to ensure position IDs are ' 'correct.') prefixes = ['a b c', 'l m n o p q r s t u v w x y'] # Make sure to start each suffix w/ a whitespace suffixes = [' d e', ' z'] # Batch inference prefixes prefixes_encoding = tokenizer(prefixes, return_tensors='pt', padding=True) with torch.no_grad(): prefixes_out = model(**prefixes_encoding) # Need offsets so that position_ids for future tokens are set correctly offsets = prefixes_encoding.attention_mask.sum(dim=1) # Batch inference suffixes suffixes_encoding = tokenizer(suffixes, return_tensors='pt', padding=True) num_completion_tokens = suffixes_encoding.input_ids.shape[1] # Set position_ids to what they were had we fed each prefix + suffix # together w/ right-padding (right-padding b/c GPT-2 uses absolute position ids) suffixes_position_ids = (torch.arange(0, num_completion_tokens) + offsets[:, None]) # broadcast # Need attention_mask to include the prefixes since it could have padding attention_mask = torch.cat((prefixes_encoding.attention_mask, suffixes_encoding.attention_mask), dim=1) # Everything should now be aligned 🤞 🙏 with torch.no_grad(): suffixes_out = model(input_ids=suffixes_encoding.input_ids, attention_mask=attention_mask, past_key_values=prefixes_out.past_key_values, position_ids=suffixes_position_ids) ``` Tests ```python # Expected output full = [prefix + suffix for prefix, suffix in zip(prefixes, suffixes)] full_encoding = tokenizer(full, return_tensors='pt', padding=True) with torch.no_grad(): full_out = model(**full_encoding) # Test shape assert suffixes_out.logits.shape[0] == full_out.logits.shape[0] assert suffixes_out.logits.shape[-1] == full_out.logits.shape[-1] # Test that every non-pad token's logits are close. # (in the comments, the token in parentheses is the one whose logits we're # acessing) assert torch.allclose(suffixes_out.logits[0, 0], # (d), e full_out.logits[0, 3]) # a, b, c, (d), e, rest are <PAD> assert torch.allclose(suffixes_out.logits[0, 1], # d, (e) full_out.logits[0, 4]) # a, b, c, d, (e), rest are <PAD> assert torch.allclose(suffixes_out.logits[1, 0], # (z), <PAD> full_out.logits[1, -1]) # l m n o p q r s t u v w x y (z) ```<|||||>Hey! Yes as mentioned before, the positional IDS in GPT2 are not created on the fly contrary to other of our models. A fix is in the makinf, see #21853, which should prevent you from having to pass the positional ids.
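As a rough illustration of what "created on the fly" would mean here, below is a minimal sketch (not necessarily what #21853 implements) of deriving position ids from a padded attention mask so that padding no longer shifts the real tokens:

```python
import torch

def position_ids_from_mask(attention_mask: torch.Tensor) -> torch.Tensor:
    # Count only non-padding tokens, so each real token gets its "unpadded" index.
    position_ids = attention_mask.long().cumsum(-1) - 1
    # Padded positions get a dummy value (they are masked out in attention anyway).
    position_ids.masked_fill_(attention_mask == 0, 1)
    return position_ids

mask = torch.tensor([[0, 1, 1, 1], [1, 1, 1, 1]])  # left-padded batch
print(position_ids_from_mask(mask))  # tensor([[1, 0, 1, 2], [0, 1, 2, 3]])
```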
transformers
18,103
closed
Make Nezha tests slow
# What does this PR do? The Nezha model tests are fairly slow (see below) so this PR marks them as such. @sijunhe if you have an idea on how to make them faster, it's more than welcome! Current times on main: ``` 46.62s call tests/models/longt5/test_modeling_longt5.py::LongT5TGlobalModelTest::test_export_to_onnx 38.86s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_attention_outputs 35.41s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_save_load_fast_init_to_base 35.37s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_save_load_fast_init_from_base 28.36s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_head_pruning_integration 26.86s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_hidden_states_output 26.62s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_feed_forward_chunking 26.54s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_head_pruning_save_load_from_pretrained 26.08s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_save_load 25.02s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_correct_missing_keys 22.04s call tests/models/detr/test_modeling_detr.py::DetrModelTest::test_attention_outputs 20.56s call tests/models/detr/test_modeling_detr.py::DetrModelTest::test_save_load_fast_init_from_base 19.49s call tests/models/longt5/test_modeling_longt5.py::LongT5ModelTest::test_export_to_onnx 18.63s call tests/models/detr/test_modeling_detr.py::DetrModelTest::test_save_load_fast_init_to_base 18.27s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_torch_fx_output_loss 18.20s call tests/models/flava/test_modeling_flava.py::FlavaImageCodebookTest::test_save_load 17.45s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_torch_fx 17.15s call tests/models/flava/test_modeling_flava.py::FlavaImageCodebookTest::test_feed_forward_chunking 16.16s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_tie_model_weights 16.05s call tests/models/detr/test_modeling_detr.py::DetrModelTest::test_save_load 15.34s call tests/models/flava/test_modeling_flava.py::FlavaImageCodebookTest::test_determinism 14.88s call tests/models/detr/test_modeling_detr.py::DetrModelTest::test_feed_forward_chunking 14.87s call tests/models/mobilevit/test_modeling_mobilevit.py::MobileViTModelTest::test_save_load_fast_init_from_base 14.62s call tests/models/detr/test_modeling_detr.py::DetrModelTest::test_hidden_states_output 14.45s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_headmasking 14.20s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_model_outputs_equivalence 14.03s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_initialization 13.81s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_inputs_embeds 13.80s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_head_pruning 13.58s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_determinism 13.36s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_head_pruning_save_load_from_config_init 13.16s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_forward_signature 13.05s call tests/models/data2vec/test_modeling_data2vec_audio.py::Data2VecAudioModelTest::test_mask_time_prob_ctc 13.00s call 
tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_gradient_checkpointing_backward_compatibility 12.99s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_gradient_checkpointing_enable_disable ```
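For reference, marking a test as slow is done with the decorator from `transformers.testing_utils` (the test name below is just an example); decorated tests only run when `RUN_SLOW=1`, i.e. in the scheduled slow CI jobs:

```python
from transformers.testing_utils import slow


class NezhaModelTest:
    @slow
    def test_attention_outputs(self):
        # Runs only when RUN_SLOW=1, so it no longer slows down the per-commit CI.
        ...
```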
07-11-2022 16:47:29
07-11-2022 16:47:29
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18103). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @sgugger. I am not sure why the Nezha tests are slow. The tests that you listed here are not the integration tests with the full model, but the model tests with a test config (which should be pretty small and the same size as the regular BERT test). I can try to decrease the test config size to see if it helps.<|||||>Found the issue and here is the fix: #18106
transformers
18,102
closed
TF: remove graph mode distinction when processing boolean options
# What does this PR do? Removes a very old `if` branch related to boolean options, as TF graph mode can handle both branches with no issues -- it passes core tests for the models I tried. It also unblocks @sayakpaul in the demo he's building for the TF SegFormer (#17910), which requires setting `output_hidden_states` in graph mode.
07-11-2022 16:40:41
07-11-2022 16:40:41
_The documentation is not available anymore as the PR was closed or merged._<|||||>LGTM! All code of this type should be removed imo - it's an artifact of TF 1.x. Modern TF code is compiled by tracing and recompiled if Python flags are changed and can handle all kinds of weird Python flow control as a result.
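A small illustration of that point (not taken from the PR): a Python boolean argument is part of the trace signature, so `tf.function` simply retraces when the flag changes instead of failing in graph mode.

```python
import tensorflow as tf

@tf.function
def forward(x, output_hidden_states=False):
    # Plain Python control flow on a Python bool is resolved at trace time.
    if output_hidden_states:
        return x, x * 2
    return x

x = tf.constant([1.0, 2.0])
forward(x)                                # traces a concrete function for False
forward(x, output_hidden_states=True)     # retraces a second one for True
```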
transformers
18,101
closed
OOM error when training with trainer
### System Info transformers: 4.19.2 When I use the Trainer to do MLM training on deberta-v3-large, there is an out-of-memory problem: GPU memory usage keeps growing over time until it eventually runs out of memory, and the OOM error is thrown at the same position every time. By searching forums and issues, I tried to modify the source code of the Trainer, including 1. in the huggingface transformers trainer code (function create_optimizer), add force_broadcast_object=True 2. rewrite the Trainer saving function to skip saving the optimizer weights 3. disable the optimizer saving by commenting out consolidate_state_dict as well as the optimizer saving part 4. remove useless intermediate variables and call empty_cache, but none of these worked. ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction training_args = TrainingArguments( output_dir=args.output_dir, evaluation_strategy="no", learning_rate=args.lr, weight_decay=0.01, save_strategy='steps', per_device_train_batch_size=args.batch_size, num_train_epochs=args.num_train_epochs, # report_to="wandb", run_name=f'output-mlm-{args.exp_num}', # logging_dir='./logs', lr_scheduler_type='cosine', warmup_ratio=0.2, fp16=True, logging_steps=500, gradient_accumulation_steps=args.gradient_accumulation_steps, save_steps=5000, prediction_loss_only=True, ) trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_datasets["train"], # eval_dataset=tokenized_datasets['valid'], data_collator=data_collator, # optimizers=(optimizer, scheduler) ) ### Expected behavior Hope it works without changing the batch_size
07-11-2022 16:39:03
07-11-2022 16:39:03
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
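The thread went stale without a resolution. For anyone hitting the same wall, a sketch of the usual memory levers in `TrainingArguments` (the values are illustrative; whether they help depends on the model and data):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output-mlm",
    per_device_train_batch_size=4,     # smaller micro-batch ...
    gradient_accumulation_steps=16,    # ... same effective batch size
    gradient_checkpointing=True,       # recompute activations to save memory
    fp16=True,
    optim="adafactor",                 # smaller optimizer state than AdamW
)
```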
transformers
18,100
closed
Fix image segmentation and object detection pipeline tests
# What does this PR do? The recent release of timm has broken two pipeline tests, this PR fixes them.
07-11-2022 16:25:36
07-11-2022 16:25:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,099
closed
Add filename to info displayed when downloading things in from_pretrained
# What does this PR do? The progress bar used in `http_get` has a description saying "Downloading". When we download multiple files (for instance for a sharded checkpoint), that's not necessarily super informative, so this PR adds the name of the file being downloaded to the description. cc @stas00
07-11-2022 16:11:53
07-11-2022 16:11:53
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you so much, Sylvain!
transformers
18,098
closed
[ create_a_model.mdx ] translate to pt
# What does this PR do? Creates a new file called create_a_model.mdx in docs/source/pt Translates all the content of the base create_a_model to pt-br Fixes issue #16824 ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests?
07-11-2022 15:57:27
07-11-2022 15:57:27
_The documentation is not available anymore as the PR was closed or merged._<|||||>Obrigado @Fellip15! Thank you for your translation, and sorry for my late review. The text looks good to me, but the tests seem to have a weird loop. I can not find the reason, WDYT @sgugger?
transformers
18,097
closed
TF: use the correct config with `(...)EncoderDecoder` models
# What does this PR do? Fixes #18071 Modifies `unpack_inputs` to ignore the config file for `(...)EncoderDecoder` models, mimicking the behavior in PT. If we don't ignore it, then unset options will get set with the config's default (`False` for most of them), causing the inner models to ignore their own config files. ⚠️ I've added a corresponding test for the `EncoderDecoder` models. I then noticed that other `(...)EncoderDecoder` tests have copy/pasted their own `EncoderDecoderMixin`, so I've left the other classes for a follow-up PR with the following question: should a common `EncoderDecoderMixin` be defined and shared across `(...)EncoderDecoder` tests, or should I add a similar test to all other classes individually?
07-11-2022 15:28:01
07-11-2022 15:28:01
_The documentation is not available anymore as the PR was closed or merged._<|||||>I believe they are -- going to give it a go afterwards if @ydshieh also agrees :)<|||||>I have limited connection at this moment in the mountains, so feel free to merge if you prefer. Regarding the common mixin, good for me. I see there are a few little things to address, like the input names (input_ids, pixel_values etc). It would be nice if you do this refactoring after merging my PR about the PT/TF equivalence tests, or incorporate the change into it 🙏 Thank you for the fix, @gante<|||||>@ydshieh can I have a review plz 🙏 <|||||>@ydshieh rebased with main and reran tests -- all working 👍
transformers
18,096
closed
TFWav2Vec2ForCTC breaks when not run eagerly
### System Info `transformers` version: 4.21.0.dev0 - Platform: Linux-5.13.0-48-generic-x86_64-with-glibc2.31 - Python version: 3.7.13 - PyTorch version (GPU?): 1.11.0 (False) - Tensorflow version (GPU?): 2.9.1 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @Rocketknight1 @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Colab link to reproduce: https://colab.research.google.com/drive/1GxQtnDLaDFooG8t2uAIR-2GnS2RFod_j?usp=sharing When model is compiled with: model.compile(optimizer=optimizer ,run_eagerly = False, metrics =[compute_wer]) the code errors out with: ``` ValueError: Label values must be <= vocab_size: 30 ``` However, it works perfectly when `run_eagerly = True`. ### Expected behavior Training should happen fine even when not run eagerly.
07-11-2022 15:15:14
07-11-2022 15:15:14
Hi @Sreyan88 👋 That is a big reproduction script 😅 To ensure we can provide quality support, a short reproduction script goes a long way. I haven't run the code, but my suspicion goes to the line where the model is defined (`model = TFWav2Vec2ForCTC.from_pretrained(MODEL_CHECKPOINT,vocab_size=len(processor.tokenizer), pad_token_id=processor.tokenizer.pad_token_id,apply_spec_augment=False, from_pt = True)`). Try removing the `pad_token_id` keyword argument here -- the data pipeline knows which token is the padding token from the tokenizer. It might be creating a new token with the argument, which causes the `vocab_size` error. Let me know if it works! If it does, it probably means that our `pad_token_id` is not working properly -- I've been seeing similar errors lately.<|||||>Hi @gante , The error persists even after removing it! I would please request you to run the script twice by toggling run_eagerly boolean in this line: model.compile(optimizer=optimizer ,run_eagerly = False, metrics =[compute_wer]) which is in the Building and Compiling the Model section. When `run_eagerly = True`, the training does not throw any error! Apologies for the script but I am about to push it to Keras examples soon so it's indeed a detailed one and I thought of explaining every step because I was not sure about the error. The script takes about 2 mins to run on colab!<|||||>Hi @Sreyan88 it is not about the run time, but about being able to pin the issue. We don't have the bandwidth to help the community with all requests and bugs, so we request some help from the community to create short scripts for bug reproducibility. Without it, I'm afraid this issue will not jump high on my priority list :)<|||||>No problem! I have modified the script and deleted comments/explanations. Currently, all code blocks are just the ones absolutely necessary. Hope you find some time to look at it! and please also suggest if I should clean more! Thank You!<|||||>Hi @gante , Good day! Any leads to this? I tried but couldn't figure out the exact issue. :( <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @gante , Good day! Could you please re-open this as it wasn't solved? Thank You!
transformers
18,095
closed
Report value for a step instead of epoch.
# What does this PR do? Report an objective function value for a step instead of epoch to optuna. ## I made this modification for the following reason: If "eval_steps" is less than the number of steps per epoch, there may be warnings: `optuna/trial/_trial.py:592: UserWarning: The reported value is ignored because this ‘step’ 0 is already reported.`. This is because the epoch granularity is too coarse. So "step" is more appropriate than "epoch" here. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. @sgugger @LysandreJik
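A minimal sketch of the idea (the helper below is illustrative): report the intermediate objective keyed by the global step rather than the epoch, so several evaluations inside one epoch don't collide on the same index.

```python
import optuna

def report_objective(trial: optuna.Trial, objective_value: float, global_step: int) -> None:
    # global_step is unique per evaluation; the epoch index is not when
    # eval_steps is smaller than the number of steps per epoch.
    trial.report(objective_value, step=global_step)
    if trial.should_prune():
        raise optuna.TrialPruned()
```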
07-11-2022 12:11:19
07-11-2022 12:11:19
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Looks good, thanks for fixing! > > Can you just run `make style` on your branch to make sure the formatting check passes? Done. @sgugger <|||||>Thanks a lot! (test failure is already fixed on main, so merging)
transformers
18,094
closed
Good difficult issue override for the stalebot
Ignores issues with the `Good difficult issue` label in the stalebot, as otherwise these get closed.
07-11-2022 10:00:56
07-11-2022 10:00:56
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,093
closed
[logging] Turn off loss logging, while keeping progress bar and logging to third-party application
### Feature request I would like to add a training argument to the `TrainingArguments` class to turn off the loss logging to stdout while keeping the progress bar and logging to a third-party application like Weights and Biases. ### Motivation I am working on a project that trains a model with the Trainer class. I need to log the losses at every epoch to Weights and Biases. Here is my code: ``` training_arguments = TrainingArguments( output_dir="./logging_dir", num_train_epochs=epochs, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, report_to="wandb", logging_strategy="epoch", ) trainer = Trainer( model=model, args=training_arguments, tokenizer=tokenizer, train_dataset=train_dataset, eval_dataset=eval_dataset, ) ``` This code is contained in a python file `train.py` and is launched in the terminal via the command `python train.py`. However, I don’t want to have the loss printed because it breaks my progress bar as you can see in the image below and I need the progress bar to give some feedback on the training time. <img width="1155" alt="cut_progress_bar" src="https://user-images.githubusercontent.com/48316195/178239258-ad61d97f-d851-4c31-a4da-c653aa17db3b.png"> ### Your contribution I think that I could add a `disable_on_log` argument to `TrainingArguments`. Then in the `on_log` of the `ProgressCallback`, a condition should be added like this: ``` def on_log(self, args, state, control, logs=None, **kwargs): if state.is_local_process_zero and self.training_bar is not None and args.disable_on_log: _ = logs.pop("total_flos", None) self.training_bar.write(str(logs)) ```
07-11-2022 10:00:27
07-11-2022 10:00:27
You should be able to disable logging while keeping progress bars active. Have you tried setting the log level manually? ```python from transformers.utils.logging import set_verbosity_error set_verbosity_error() ```<|||||>Yes, I've tried that, but this makes my progress bar disappear and keeps the loss logging. I want to do the opposite, keep my progress bar and remove the loss logging.<|||||>Ah, understood. Then in that case providing the training argument `logging_strategy` should do what you want: `logging_strategy="no"` will not output these logs. Did that solve your problem? <|||||>Thank you for your answer @LysandreJik. I also tried that, but I want to keep the logs in my third-party application (Weights and Biases here). If I use `logging_strategy="no"`, no more logs are reported to Weights and Biases.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
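One workaround that should achieve this today, sketched under the assumption that the stdout printing comes from `ProgressCallback.on_log` while the Weights & Biases reporting lives in its own callback (and reusing `model`, `training_arguments`, `tokenizer`, and the datasets from the snippet above): swap in a subclass that keeps the tqdm bar but skips writing the metrics dict.

```python
from transformers import Trainer
from transformers.trainer_callback import ProgressCallback


class QuietProgressCallback(ProgressCallback):
    def on_log(self, args, state, control, logs=None, **kwargs):
        # Keep the tqdm bar, just don't write the metrics dict to stdout.
        pass


trainer = Trainer(
    model=model,
    args=training_arguments,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.remove_callback(ProgressCallback)    # drop the default printer
trainer.add_callback(QuietProgressCallback)  # W&B reporting is untouched
```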
transformers
18,092
closed
Can't convert Flax T5 model to PyTorch
### System Info - `transformers` version: 4.18.0 - Platform: Linux-3.10.0-957.5.1.el7.x86_64-x86_64-with-centos-7.6.1810-Core - Python version: 3.6.13 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.10.2 (False) - Tensorflow version (GPU?): 2.6.2 (False) - Flax version (CPU?/GPU?/TPU?): 0.3.5 (cpu) - Jax version: 0.2.17 - JaxLib version: 0.1.69 - Using GPU in script?: NO - Using distributed or parallel set-up in script?: NO ### Who can help? @patrickvonplaten @patil-suraj ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Failed to convert a T5 model from Flax to PyTorch ```python import tempfile from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration, T5ForConditionalGeneration tmp = tempfile.mkdtemp() flax_model = FlaxT5ForConditionalGeneration.from_pretrained("t5-small") flax_model.save_pretrained(tmp) pt_model = T5ForConditionalGeneration.from_pretrained(tmp, from_flax=True) ``` ### Expected behavior Some weights of T5ForConditionalGeneration were not initialized from the Flax model: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight'] ``` WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.) /data/home/db72687/miniconda3/envs/decipher/lib/python3.6/site-packages/transformers/modeling_flax_pytorch_utils.py:240: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /opt/conda/conda-bld/pytorch_1640811805959/work/torch/csrc/utils/tensor_numpy.cpp:189.) pt_model_dict[flax_key] = torch.from_numpy(flax_tensor) All Flax model weights were used when initializing T5ForConditionalGeneration. Some weights of T5ForConditionalGeneration were not initialized from the Flax model and are newly initialized: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ```
07-11-2022 09:38:34
07-11-2022 09:38:34
Sorry, I made a mistake. Actually, the missing weights of the above PyTorch model converted from Flax can be correctly initialized by copying over the Flax model's weight, `fx_model.params['shared']['embedding']`.
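For completeness, a sketch of that weight transfer. This assumes the usual T5 layout, where `shared`, the encoder/decoder `embed_tokens`, and (when tied) `lm_head` all point at one embedding matrix, and it reuses `flax_model`/`pt_model` from the snippet above:

```python
import numpy as np
import torch

embedding = np.asarray(flax_model.params["shared"]["embedding"])
with torch.no_grad():
    pt_model.shared.weight.copy_(torch.from_numpy(embedding))
pt_model.tie_weights()  # re-tie embed_tokens / lm_head to the updated matrix
```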
transformers
18,091
closed
LayoutLMv2ForRelationExtraction is missing in transformers
### Feature request Microsoft's [unilm repository](https://github.com/microsoft/unilm/tree/db1095a693aa0d6d15bb9312cccb7f8af42b0aeb/layoutlmft/layoutlmft), which originally implements all `LayoutLM` models, contains an implementation of the model for relation extraction with a Biaffine Attention Classifier, namely [`LayoutLMv2ForRelationExtraction`](https://github.com/microsoft/unilm/blob/db1095a693aa0d6d15bb9312cccb7f8af42b0aeb/layoutlmft/layoutlmft/models/layoutlmv2/modeling_layoutlmv2.py#L895). However, this class wasn't included in `transformers`, for unknown reasons. Therefore, the implementation for relation extraction, `LayoutLMv2ForRelationExtraction`, should be included to extend the current LayoutLMv2 and LayoutXLM. ### Motivation This repository should implement all tasks included in the papers ([LayoutLMv2](https://arxiv.org/pdf/2012.14740.pdf), [LayoutXLM](https://arxiv.org/pdf/2104.08836.pdf)) and the [unilm repository](https://github.com/microsoft/unilm/tree/db1095a693aa0d6d15bb9312cccb7f8af42b0aeb/layoutlmft/layoutlmft), thus this missing part should be added to `transformers`. It would enable users to easily reproduce the entire papers as well as conveniently use relation extraction in their downstream applications. ### Your contribution If there are no obstacles unknown to me, I could try to move the implementation from unilm to transformers.
07-11-2022 09:03:31
07-11-2022 09:03:31
cc @NielsRogge <|||||>+1 to this<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,090
closed
TypeError: to_json_file() got an unexpected keyword argument 'use_diff'
### System Info transformers/modeling_utils.py model_to_save.config.save_pretrained(save_directory) transformers/configuration_utils.py self.to_json_file(output_config_file, use_diff=True) TypeError: to_json_file() got an unexpected keyword argument 'use_diff' ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction unwrapped_model = accelerator.unwrap_model(student_model) unwrapped_model.**save_pretrained**(args.output_dir, save_function=accelerator.save) ### Expected behavior Transformer 4.17.0
07-11-2022 09:01:27
07-11-2022 09:01:27
Maybe @muellerzr or @sgugger have an idea?<|||||>Very strange. Can you tell us a bit more about your system? Specifically the python version you are using and the accelerate version? (Might not be relevant, but so we can know everything)<|||||>This is not linked to Accelerate at all, just the internals of `save_pretrained`. Without the whole traceback and the code executed, we can't really though.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I get the same error when I run `run_summarization_no_trainer.py` script from [DQ-Bart](https://github.com/amazon-science/dq-bart) repo: ``` python3 run_summarization_no_trainer.py \ --model_name_or_path google/flan-t5-base \ --dataset_name samsum \ --pred_distill \ --num_train_epochs 1 \ --weight_bits 8 \ --do_train \ --do_test \ --distill_encoder 6 \ --distill_decoder 6 \ --learning_rate 5e-5 \ --source_prefix summarize: \ --seed 7 \ ``` The error happens in [line 822](https://github.com/amazon-science/dq-bart/blob/main/run_summarization_no_trainer.py#L822) when trying to save the trained student model. This is the whole traceback: ``` File "run_summarization_no_trainer.py", line 896, in <module> main() File "run_summarization_no_trainer.py", line 822, in main unwrapped_model.save_pretrained(args.output_dir, save_function=accelerator.save) File "/opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1068, in save_pretrained model_to_save.config.save_pretrained(save_directory) File "/opt/conda/lib/python3.8/site-packages/transformers/configuration_utils.py", line 438, in save_pretrained self.to_json_file(output_config_file, use_diff=True) TypeError: to_json_file() got an unexpected keyword argument 'use_diff' ``` These are the dependencies: ``` transformers==4.17.0 datasets==1.18.4 sacrebleu==2.0 wandb nltk accelerate==0.5.1 tensorboard setuptools<50 rouge_score py7zr ``` I have tried to remove `use_diff=True` from `self.to_json_file(output_config_file, use_diff=True)` but I still get the same error. Any help is appreciated @sgugger @muellerzr <|||||>That error should be raised on that repo @jmdu99 as they use a custom configuration for their model that doesn't implement the same APIs as the Transformers configurations.<|||||>Just out of curiosity, what was the problem in your case @ADaBenxiong? Did you find a solution?
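For context on the follow-up error: the usual way to avoid this class of problem is to have the custom config inherit from `PretrainedConfig`, which already provides `to_json_file(..., use_diff=...)` and the rest of the save/load API. A hypothetical sketch (the class and attribute names are made up; the DQ-BART code may differ):

```python
from transformers import PretrainedConfig


class DistilledBartConfig(PretrainedConfig):  # hypothetical name, for illustration only
    model_type = "bart"

    def __init__(self, distill_encoder=6, distill_decoder=6, **kwargs):
        super().__init__(**kwargs)
        self.distill_encoder = distill_encoder
        self.distill_decoder = distill_decoder
```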
transformers
18,089
closed
support no gpt_j_residual for gpt-neox
# What does this PR do? Support "gpt_j_residual == False" for gpt-neox, which implement "else" branch in https://github.com/EleutherAI/gpt-neox/blob/main/megatron/model/transformer.py#L627 ## Who can review? Anyone in the community is free to review the PR. @sgugger @patrickvonplaten
07-11-2022 07:07:29
07-11-2022 07:07:29
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18089). All of your documentation changes will be reflected on that endpoint.<|||||>@TopIdiot, would existing checkpoints (i.e., https://huggingface.co/EleutherAI/gpt-neox-20b) be usable with this configuration option, or would they output gibberish?<|||||>> @TopIdiot, would existing checkpoints (i.e., https://huggingface.co/EleutherAI/gpt-neox-20b) be usable with this configuration option, or would they output gibberish? @LysandreJik Yes, they can be used. The default value for gpt_j_residual is "True", so the existing checkpoints would enter the "if self.gpt_j_residual" branch. btw, I tested gpt-neox-20b and the result looks good to me.<|||||>The question is whether the model would give sensible results if you set it to `False`. Transformers is not a modular toolbox, we don't add options to models that make no sense with the pretrained checkpoints of that model, we add new models instead.<|||||>> The question is whether the model would give sensible results if you set it to `False`. Transformers is not a modular toolbox, we don't add options to models that make no sense with the pretrained checkpoints of that model, we add new models instead. @sgugger gpt_j_residual is an option which already exists in gpt-neox (i.e. https://github.com/EleutherAI/gpt-neox/blob/main/configs/20B.yml#L29). Recently, we trained EleutherAI/gpt-neox with the option gpt_j_residual == False and successfully converted the checkpoint to torch. However, we found that the current version of huggingface's gpt-neox doesn't implement this part. We added it ourselves and got the expected result. <|||||>You can share the custom code of your model using the [code in the Hub](https://huggingface.co/docs/transformers/custom_models) API. This is typically the kind of change we don't accept in existing model files (and the reason there is one for GPT-2, GPT-J, GPT-Neo and GPT-Neo-X, which are all very similar).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,088
closed
RuntimeError - invalid multinomial distribution (with replacement=False, not enough non-negative category to sample)
### System Info whenever i set `do_sample=True`, i get the error: `RuntimeError('invalid multinomial distribution (with replacement=False, not enough non-negative category to sample)')`, i don't know why it is happening, but i want to use `do_sample=True` because it gives out more relevant results. Im using the Bart-Large-CNN for text summarization, with huggingface transformers version 4.16.2, any help would be greatly appreciated. ### Who can help? @LysandreJik ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ` import torch from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, AutoConfig tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForSeq2SeqLM.from_pretrained(model_path) max_chunk = 1024 current_chunk = 0 chunks = [] fulltext = ''' long text here''' fulltext = fulltext.replace('.', '.<eos>') fulltext = fulltext.replace('?', '?<eos>') fulltext = fulltext.replace('!', '!<eos>') sentences = fulltext.split('<eos>') for sentence in sentences: if len(chunks) == current_chunk + 1: if len(chunks[current_chunk]) + len(sentence.split(' ')) <= max_chunk: chunks[current_chunk].extend(sentence.split(' ')) else: current_chunk += 1 chunks.append(sentence.split(' ')) else: chunks.append(sentence.split(' ')) for chunk_id in range(len(chunks)): chunks[chunk_id] = ' '.join(chunks[chunk_id]) chink_list = [] for chinks in chunks: inputs = tokenizer(str(chinks), return_tensors="pt", truncation=True) outputs = model.generate(inputs["input_ids"], do_sample=True) chunk_summary = tokenizer.decode(outputs[0]) chunk_summary = str(chunk_summary) chunk_summary = chunk_summary[:-4] chunk_summary = chunk_summary[7:] chink_list.append(chunk_summary) summary = ' '.join(chink_list) print( summary) ` ### Expected behavior no errors and for it to run normally.
07-11-2022 01:12:24
07-11-2022 01:12:24
I also get this warning: `UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). next_indices = next_tokens // vocab_size`<|||||>@zeke-john, thanks for your issue! Please use tags responsibly; tagging everyone involved with GitHub won't guarantee you an answer. We try to keep the GitHub issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? We'll be unable to help you without you providing a complete (ideally small) reproducer, so without the long text in question it will be tough to find the issue for you. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,087
closed
[bloom] fix alibi device placement
This PR fixes alibi device placement - currently it's the default device which breaks things at times - it has to be set explicitly to the correct device. The problem emerged when trying to get DeepSpeed-Inference working. Kudos to @RezaYazdaniAminabadi for discovering the problem. cc: @younesbelkada
07-10-2022 16:01:36
07-10-2022 16:01:36
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot for the fix !<|||||>btw, we also discussed to change the alibi creation logic. it should create it once in init with a largish length (say 1k) and not change it again unless the input is longer. then the device and dtype will be automatically handled correctly. this is the logic that all `transformers` positional embeddings use.<|||||>Yes agreed ! We are already addressing these issues in this PR together with refactoring the whole attention block which looks too complicated : https://github.com/huggingface/transformers/pull/17866 Here alibi (including the shifting) and the attention mask is created only once
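A rough sketch of that idea: build the bias once, register it as a buffer so `model.to(device, dtype)` carries it along, and only slice it per forward pass. Padding-aware offsets, which the real implementation derives from the attention mask, are left out for brevity, and `num_heads` is assumed to be a power of two:

```python
import torch

class AlibiBias(torch.nn.Module):
    """Sketch: build the ALiBi bias once, as a buffer, so device/dtype follow the module."""

    def __init__(self, num_heads: int, max_len: int = 1024):
        super().__init__()
        # Slopes 2^(-8/n), 2^(-16/n), ... as in the ALiBi paper.
        slopes = torch.tensor([2 ** (-8 * (i + 1) / num_heads) for i in range(num_heads)])
        alibi = slopes[:, None] * torch.arange(max_len)[None, :]   # (num_heads, max_len)
        self.register_buffer("alibi", alibi, persistent=False)

    def forward(self, seq_len: int) -> torch.Tensor:
        # Rebuilding when seq_len exceeds max_len is omitted here.
        return self.alibi[:, :seq_len]
```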
transformers
18,086
closed
AttributeError: 'TrainingArguments' object has no attribute 'generation_max_length'
### System Info ```shell transformers :4.20.1 platform: Colab python : 3.7 ``` ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) "ade_corpus_v2" in Huggingface RAFT ### Reproduction ``` def compute_metrics(p): pred, labels = p pred = np.argmax(pred, axis=1) accuracy = accuracy_score(y_true=labels, y_pred=pred) recall = recall_score(y_true=labels, y_pred=pred) precision = precision_score(y_true=labels, y_pred=pred) f1 = f1_score(y_true=labels, y_pred=pred) return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1} # Define Trainer args = TrainingArguments( output_dir="output", evaluation_strategy="steps", eval_steps=500, per_device_train_batch_size=8, per_device_eval_batch_size=8, num_train_epochs=100, seed=0, load_best_model_at_end=True, ) from transformers import Seq2SeqTrainer trainer = Seq2SeqTrainer( # model=delta_model3, model=model, args=args, train_dataset=train_dataset, eval_dataset=val_dataset, compute_metrics=compute_metrics, callbacks=[EarlyStoppingCallback(early_stopping_patience=3)], ) # Train pre-trained model trainer.train() ``` ### Expected behavior ```shell It went wrong. > /usr/local/lib/python3.7/dist-packages/transformers/optimization.py:310: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning FutureWarning, ***** Running training ***** Num examples = 40 Num Epochs = 100 Instantaneous batch size per device = 8 Total train batch size (w. parallel, distributed & accumulation) = 8 Gradient Accumulation steps = 1 Total optimization steps = 500 [500/500 06:18, Epoch 100/100] Step Training Loss Validation Loss --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-11-52043b3bb24a> in <module>() 33 34 # Train pre-trained model ---> 35 trainer.train() 3 frames /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1411 resume_from_checkpoint=resume_from_checkpoint, 1412 trial=trial, -> 1413 ignore_keys_for_eval=ignore_keys_for_eval, 1414 ) 1415 /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 1726 self.control = self.callback_handler.on_step_end(args, self.state, self.control) 1727 -> 1728 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) 1729 else: 1730 self.control = self.callback_handler.on_substep_end(args, self.state, self.control) /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch, ignore_keys_for_eval) 1910 metrics = None 1911 if self.control.should_evaluate: -> 1912 metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) 1913 self._report_to_hp_search(trial, epoch, metrics) 1914 /usr/local/lib/python3.7/dist-packages/transformers/trainer_seq2seq.py in evaluate(self, eval_dataset, ignore_keys, metric_key_prefix, max_length, num_beams) 66 dictionary also contains the epoch number which comes from the training state. 
67 """ ---> 68 self._max_length = max_length if max_length is not None else self.args.generation_max_length 69 self._num_beams = num_beams if num_beams is not None else self.args.generation_num_beams 70 return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) AttributeError: 'TrainingArguments' object has no attribute 'generation_max_length' ``` @sgugger ### Checklist - [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [X] I checked if a related official extension example runs on my machine.
07-10-2022 12:47:09
07-10-2022 12:47:09
You need to use `Seq2SeqTrainingArguments` to go with `Seq2SeqTrainer`.<|||||>> You need to use `Seq2SeqTrainingArguments` to go with `Seq2SeqTrainer`. Well, the thing is, when I use `Seq2SeqTrainingArguments` and the dataset values are tensors, it can train. However, when I use `TrainingArguments` and the dataset values are not tensors, it can also train. But whichever way I code it, the training loss is missing until the last epoch. What is the trick here? Thanks! @sgugger ``` ### dataset change import torch class Dataset(torch.utils.data.Dataset): def __init__(self, encodings, labels=None): self.encodings = encodings self.labels = labels def __getitem__(self, idx): item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} if self.labels: item["labels"] = torch.tensor(self.labels[idx]-1) return item def __len__(self): return len(self.encodings["input_ids"]) import torch class Dataset(torch.utils.data.Dataset): def __init__(self, encodings, labels=None): self.encodings = encodings self.labels = labels def __getitem__(self, idx): item = {key: val[idx] for key, val in self.encodings.items()} if self.labels: item["labels"] = self.labels[idx]-1 return item def __len__(self): return len(self.encodings["input_ids"]) train_dataset = Dataset(X_train_tokenized, y_train) val_dataset = Dataset(X_val_tokenized, y_val) import numpy as np from sklearn.metrics import accuracy_score, recall_score, precision_score, f1_score def compute_metrics(pred): print(pred) predict_res = torch.Tensor(pred.predictions[0]) # size: [num validation samples, label token length, vocab size] pred_ids = predict_res.argmax(dim=2) ## 2. process pred.label_ids labels_actual = torch.LongTensor(pred.label_ids) ## 3. compute accuracy total_num = labels_actual.shape[0] acc = torch.sum(torch.all(torch.eq(pred_ids, labels_actual), dim=1))/total_num return {'accuracy': acc} # Define Trainer args = Seq2SeqTrainingArguments( output_dir="output", evaluation_strategy="steps", eval_steps=25, per_device_train_batch_size=8, per_device_eval_batch_size=8, num_train_epochs=100, seed=0, load_best_model_at_end=True, ) trainer = Trainer( # model=delta_model3, model=model, args=args, train_dataset=train_dataset, eval_dataset=val_dataset, compute_metrics=compute_metrics, callbacks=[EarlyStoppingCallback(early_stopping_patience=3)], ) # Train pre-trained model trainer.train() ``` > The training log is as below <img width="450" alt="image" src="https://user-images.githubusercontent.com/84232793/178263827-c967b616-6279-484a-a265-5353b7241687.png"> <|||||>Please use the [forums](https://discuss.huggingface.co/) to debug your code as we keep issues for bugs and feature requests only. You didn't indicate you want to log the training loss every 25 steps in your training arguments, so it uses the default of 500.<|||||>> Please use the [forums](https://discuss.huggingface.co/) to debug your code as we keep issues for bugs and feature requests only. You didn't indicate you want to log the training loss every 25 steps in your training arguments, so it uses the default of 500. Sorry about that! I will use the forums in the future! Thanks<|||||>I have the same problem here. How did you resolve this issue?
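To make the last point concrete, these are the arguments that control how often the training loss is logged (the values are illustrative; everything else as in the snippet above):

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="output",
    evaluation_strategy="steps",
    eval_steps=25,
    logging_strategy="steps",
    logging_steps=25,   # report the training loss as often as you evaluate
)
```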
transformers
18,085
open
Adding TF Implementation of BEiT
### Feature request Addition of TF implementation of BEiT ### Motivation I have always seen that there is a discrepancy in the availability of models for PyTorch and the models available in TensorFlow, and want to have models for usage in both backends. ### Your contribution I will add the implementation of BEiT in TF :) cc - @gante
07-10-2022 11:52:05
07-10-2022 11:52:05
cc @NielsRogge @amyeroberts <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @MadElf1337 do you have any updates? Are you still planning on contributing this model? <|||||>Yep I’m still working on the model, had to keep it aside for a bit due to my uni exam schedule, but will start again the day my exams are over Regarding the updates, I am done with the architecture, have to write the functions for specific purposes(like segmentation) and the tests<|||||>Great - glad to hear you're still interested :) As @NielsRogge pointed out, data2vec vision is an extension of BEiT. This means the porting should be a lot simpler! In our [pytorch BEiT implementation](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_vision.py#L310), you can see this from the `#Copied from` statements. Ideally the TF implementation would reflect this and be the same as our pytorch implementation, however TF data2vec vision is already implemented. So, we need to move the data2vec code to beit, and then add the necessary `#Copied from` statement in data2vec. Does this make sense? Could you open a draft PR for the model please so that the code is visible? Good luck with the last of your exams!<|||||>Yes I’ll open a draft PR to show the code that’s been done till date And thanks!
transformers
18,084
closed
Making Roformer models compatible with pre-trained Roformer v2 models
# What does this PR do? RoFormer v2 is a more recent and lightweight version of RoFormer. The differences between the two models are: - RoFormer v2 removed the bias term from all the attention modules - RoFormer v2 used a simple RMS norm instead of LayerNorm Currently, loading a pre-trained RoFormer v2 model such as [this one](https://huggingface.co/junnyu/roformer_v2_chinese_char_base) with RoFormer will raise a lot of "newly initialized but not found in checkpoint" warning. This PR ensures that the redundant weights are not created when a RoFormer V2 config is provided. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @JunnYu who contributed RoFormer to HF @patrickvonplaten who reviewed RoFormer before
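To make the second difference concrete, here is a minimal sketch of an RMS-style norm; the exact epsilon, dtype handling, and parameter naming in RoFormerV2 may differ:

```python
import torch

class RMSNorm(torch.nn.Module):
    # Scale by the root mean square of the hidden states, with a learnable gain
    # and no bias or mean subtraction (unlike LayerNorm).
    def __init__(self, hidden_size: int, eps: float = 1e-12):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.ones(hidden_size))
        self.eps = eps

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        variance = hidden_states.pow(2).mean(-1, keepdim=True)
        return self.weight * hidden_states * torch.rsqrt(variance + self.eps)
```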
07-09-2022 16:15:16
07-09-2022 16:15:16
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18084). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks a lot for your PR, @sijunhe! In this situation, we would favor adding a new model architecture instead of editing the existing one to add support for a newer model. The gist of it is that: - If the changes are not part of the original codebase, original paper, or original pretrained weights - If the initial checkpoints cannot be loaded in the architecture that will be enabled then a new model architecture is warranted. You can read about this aspect of our philosophy [here](https://huggingface.co/blog/transformers-design-philosophy). We have a tool to allow you to generate a model exactly as RoFormer to add your contribution here: [add-new-model-like](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command). In this situation, the arguments that you add here likely don't need to exist: I suppose the rotary operations will always be there in v2, and the `rms_norm` will always be used as well. Is that correct? Thanks!<|||||>Hi @LysandreJik. Thanks for sending over HF's philosophy. It was a good read! I'd agree with you. But in this case, the difference between RoFormer and RoFormer V2 is so small and I am not sure if it's worth a whole new model and tests. Technically, it's possible to load RoFormer V2 weights with the current RoFormer class and we would just have some redundant weights that would be randomly initialized. I think a counter example here is the BERT model and its 3 different kinds of position embeddings (absolute, relative_key and relative_key_query). And in this case, the architectural difference between RoFormer and RoFormer V2 is much smaller than BERT.<|||||>Good point regarding BERT, but that's actually a mistake from our part when the philosophy was still evolving :sweat_smile:. Same with GPT-2 and some arguments for it to scale better.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,083
closed
dictionary update sequence element #5 has length 1; 2 is required
### System Info - `transformers` version: 4.20.0.dev0 - Platform: Linux-5.4.0-66-generic-x86_64-with-glibc2.31 - Python version: 3.9.12 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.11.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? transformers/examples/pytorch/language-modeling/run_mlm.py @LysandreJik @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I want to pre-train RoBERTa from scratch on my own dataset using `transformers/examples/pytorch/language-modeling/run_mlm.py`. 1. I run the command: ``` python run_mlm.py \ --model_type roberta \ --tokenizer_name /CodeSearchNet/code_txt/tokenizer \ --config_overrides vocab_size=52_000,max_position_embeddings=514,num_attention_heads=12,num_hidden_layers=12,type_vocab_size=1 \ --train_file /data_for_train_tokenizer/CodeSearchNet/train_codes.txt \ --validation_file /data_for_train_tokenizer/CodeSearchNet/valid_codes.txt \ --per_device_train_batch_size 64 \ --per_device_eval_batch_size 64 \ --num_train_epochs 100 \ --overwrite_output_dir \ --line_by_line \ --save_steps 5000 \ --do_train \ --do_eval \ --output_dir /CodeSearchNet/code_txt/model/pretrain_Roberta_from_scratch/CSN/single_file \ --logging_dir /CodeSearchNet/code_txt/log/pretrain_Roberta_from_scratch_CSN_single_file ``` There is an error: ``` 07/09/2022 02:00:22 - WARNING - __main__ - You are instantiating a new config instance from scratch. 07/09/2022 02:00:22 - INFO - __main__ - Overriding config: vocab_size=52_000,max_position_embeddings=514,num_attention_heads=12,num_hidden_layers=12,type_vocab_size=1, Traceback (most recent call last): File "/transformers/examples/pytorch/language-modeling/run_mlm.py", line 612, in <module> main() File "/transformers/examples/pytorch/language-modeling/run_mlm.py", line 359, in main config.update_from_string(model_args.config_overrides) File "/transformers/src/transformers/configuration_utils.py", line 850, in update_from_string d = dict(x.split("=") for x in update_str.split(",")) ValueError: dictionary update sequence element #5 has length 1; 2 is required ``` **How to set `config_overrides` in `run_mlm.py`?** 2. When I set `per_device_eval_batch_size 64`, there is an error: ``` RuntimeError: CUDA out of memory. Tried to allocate 21.48 GiB (GPU 0; 39.59 GiB total capacity; 26.26 GiB already allocated; 11.40 GiB free; 26.32 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF 0%| | 0/175900 [00:27<?, ?it/s] ``` Load imbalance caused by data parallelism. **How to set up distributed data parallelism in trainer?** ### Expected behavior Be able to train Roberta from scratch in DDP mode using large batch size.
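A hedged note on the traceback above: `update_from_string` builds a dict with `dict(x.split("=") for x in update_str.split(","))`, so any comma-separated element without an `=` (for example the stray trailing comma visible in the logged override string) produces exactly this error. A minimal, standalone reproduction with an invented override string:

```python
# Hypothetical value mirroring the logged override string above (note the trailing comma).
update_str = "vocab_size=52_000,max_position_embeddings=514,type_vocab_size=1,"

try:
    d = dict(x.split("=") for x in update_str.split(","))
except ValueError as e:
    print(e)  # dictionary update sequence element #3 has length 1; 2 is required

# Without the trailing comma the same parsing succeeds.
d = dict(x.split("=") for x in update_str.rstrip(",").split(","))
print(d["vocab_size"])  # '52_000'
```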
07-09-2022 13:03:20
07-09-2022 13:03:20
Are you sure you pasted the exact command you ran? I have no error when trying it on my side and the config is successfully updated. To use distributed training, just use the pytorch launcher instead of `python` to run your script, see [here](https://huggingface.co/docs/transformers/run_scripts#distributed-training-and-mixed-precision).<|||||>> Are you sure you pasted the exact command you ran? I have no error when trying it on my side and the config is successfully updated. To use distributed training, just use the pytorch launcher instead of `python` to run your script, see [here](https://huggingface.co/docs/transformers/run_scripts#distributed-training-and-mixed-precision). Yes. I'm sure. Maybe I should change `--config_overrides vocab_size=52_000,max_position_embeddings=514,num_attention_heads=12,num_hidden_layers=12,type_vocab_size=1` to `--config_overrides "vocab_size=52_000,max_position_embeddings=514,num_attention_heads=12,num_hidden_layers=12,type_vocab_size=1"`? In other words, should quotes be added to the config_overrides parameter?<|||||>> Are you sure you pasted the exact command you ran? I have no error when trying it on my side and the config is successfully updated. To use distributed training, just use the pytorch launcher instead of `python` to run your script, see [here](https://huggingface.co/docs/transformers/run_scripts#distributed-training-and-mixed-precision). Thanks. I successfully ran distributed training when continue pre-train, but `--per_device_train_batch_size` can only be set to a maximum of 8, increasing to 16 will report an error `CUDA out of memory`. But I use the LineByLineTextDataset to write the following script: ``` tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base") model = RobertaForMaskedLM.from_pretrained("roberta-base") print(model.num_parameters()) train_dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path=f"{data_dir}/train_codes.txt", block_size=128, ) test_dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path=f"{data_dir}/valid_codes.txt", block_size=128, ) data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15 ) training_args = TrainingArguments( output_dir=model_dir, overwrite_output_dir=True, num_train_epochs=50, per_gpu_train_batch_size=64, save_steps=5000, do_eval=True, logging_dir=log_dir, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset = test_dataset ) trainer.train() trainer.save_model(model_dir) tokenizer.save_pretrained(tokenizer_dir) ``` Using the same training data, my script can handle up to 64 batches per GPU, while RUN_mlm.py can handle only 8 batches per GPU. Why? Can pyTorch Launcher be used to run distributed training using `LineByLineTextDataset`?<|||||>"Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred." `per_device_train_batch_size` specifies the batch size to be processed by each GPU, right?<|||||>@sgugger I used the 'LineByLineTextDataset' script as above to continue pre-train Roberta on multiple cards in a single machine. It seemed to be an unbalanced load. ![image](https://user-images.githubusercontent.com/41561936/178887267-f4a6c4d9-d408-45cc-b557-3daff1de0cd9.png) Is the single-machine multi-card of LineByLineTextDataset implemented with `DataParallel`? Is there an implementation of `DistributedDataParallel`? 
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
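For the distributed-training question above, the usual way to get `DistributedDataParallel` (one process per GPU) rather than `DataParallel` with `Trainer`-based scripts is to start them with the PyTorch distributed launcher; the sketch below is generic and not taken from this thread, and the GPU count and script arguments are placeholders:

```bash
# Hypothetical single-machine launch on 4 GPUs; keep the usual run_mlm.py arguments unchanged.
python -m torch.distributed.launch --nproc_per_node 4 run_mlm.py \
    --model_type roberta \
    --per_device_train_batch_size 8 \
    --do_train \
    --output_dir /tmp/test-mlm
```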
transformers
18,082
closed
Export bert to onnx failed
### System Info - `transformers` version: 4.17.0 - Platform: Linux-4.15.0-167-generic-x86_64-with-debian-buster-sid - Python version: 3.7.6 - Onnx version: 1.12.0 - PyTorch version (GPU?): 1.10.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <True> - Using distributed or parallel set-up in script?: <No> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Code: ``` import torch from transformers import AutoModel device = torch.device('cuda') model = AutoModel.from_pretrained('bert-base-chinese') model.to(device) model.eval() batch_size = 32 size = (batch_size, 256) export_onnx_file = 'save/bert.onnx' input_ids = torch.zeros(size=size, device=device, dtype=torch.long) attention_mask = torch.ones(size=size, device=device, dtype=torch.float) token_type_ids = torch.zeros(size=size, device=device, dtype=torch.long) inputs = (input_ids, attention_mask, token_type_ids) torch.onnx.export(model=model, args=inputs, f=export_onnx_file, verbose=False, opset_version=12, do_constant_folding=True, output_names = ['last_hidden_state', 'pooler_output'], input_names=["input_ids", "attention_mask", "token_type_ids"]) ``` ### Expected behavior Error info: ``` /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/onnx/symbolic_helper.py:325: UserWarning: Type cannot be inferred, which might cause exported graph to produce incorrect results. warnings.warn("Type cannot be inferred, which might cause exported graph to produce incorrect results.") [W shape_type_inference.cpp:434] Warning: Constant folding in symbolic shape inference fails: Index is supposed to be an empty tensor or a vector Exception raised from index_select_out_cuda_impl at /pytorch/aten/src/ATen/native/cuda/Indexing.cu:742 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7ff9245c7d62 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libc10.so) frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0x5f (0x7ff9245c475f in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libc10.so) frame #2: void at::native::(anonymous namespace)::index_select_out_cuda_impl<float>(at::Tensor&, at::Tensor const&, long, at::Tensor const&) + 0x190d (0x7ff7a4e601bd in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cu.so) frame #3: at::native::index_select_out_cuda(at::Tensor const&, long, at::Tensor const&, at::Tensor&) + 0x3d3 (0x7ff7a4dce0e3 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cu.so) frame #4: at::native::index_select_cuda(at::Tensor const&, long, at::Tensor const&) + 0xd0 (0x7ff7a4dce610 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cu.so) frame #5: <unknown function> + 0x25756d6 (0x7ff7a5d296d6 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cu.so) frame #6: <unknown function> + 0x2575722 (0x7ff7a5d29722 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cu.so) frame #7: at::_ops::index_select::redispatch(c10::DispatchKeySet, at::Tensor const&, long, at::Tensor const&) + 0xb9 
(0x7ff7f5617649 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #8: <unknown function> + 0x3253be3 (0x7ff7f6f95be3 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #9: <unknown function> + 0x3254215 (0x7ff7f6f96215 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #10: at::_ops::index_select::call(at::Tensor const&, long, at::Tensor const&) + 0x166 (0x7ff7f5697296 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #11: torch::jit::onnx_constant_fold::runTorchBackendForOnnx(torch::jit::Node const*, std::vector<at::Tensor, std::allocator<at::Tensor> >&, int) + 0x1b5f (0x7ff8d8cf023f in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #12: <unknown function> + 0xbcea6a (0x7ff8d8d37a6a in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #13: torch::jit::ONNXShapeTypeInference(torch::jit::Node*, std::map<std::string, c10::IValue, std::less<std::string>, std::allocator<std::pair<std::string const, c10::IValue> > > const&, int) + 0xa8e (0x7ff8d8d3d30e in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #14: <unknown function> + 0xbd5e12 (0x7ff8d8d3ee12 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #15: <unknown function> + 0xb414c0 (0x7ff8d8caa4c0 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #16: <unknown function> + 0x2a5aa8 (0x7ff8d840eaa8 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so) <omitting python frames> frame #45: __libc_start_main + 0xe7 (0x7ff93fb25c87 in /lib/x86_64-linux-gnu/libc.so.6) (function ComputeConstantFolding) Traceback (most recent call last): File "onnx_tensorrt.py", line 425, in <module> test_bert() File "onnx_tensorrt.py", line 316, in test_bert input_names=["input_ids", "attention_mask", "token_type_ids"]) File "/home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/onnx/__init__.py", line 320, in export custom_opsets, enable_onnx_checker, use_external_data_format) File "/home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py", line 111, in export custom_opsets=custom_opsets, use_external_data_format=use_external_data_format) File "/home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py", line 729, in _export dynamic_axes=dynamic_axes) File "/home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py", line 545, in _model_to_graph _export_onnx_opset_version) RuntimeError: Index is supposed to be an empty tensor or a vector ``` However, if I set the dynamic_axes, there is no problem: ``` torch.onnx.export(model=model, args=inputs, f=export_onnx_file, verbose=False, opset_version=12, do_constant_folding=True, output_names = ['last_hidden_state', 'pooler_output'], input_names=["input_ids", "attention_mask", "token_type_ids"], dynamic_axes={"input_ids": {0: "batch_size"}, "attention_mask": {0: "batch_size"}, "token_type_ids": {0: "batch_size"}, }) ``` Because I need to further convert onnx to tensorrt and my tensorrt version only supports fixed input shape, I don't want to set the dynamic_axes. So how to fix this problem when not setting the dynamic_axes?
07-09-2022 10:23:33
07-09-2022 10:23:33
@[nonstopfor](https://github.com/nonstopfor), you can change dynamic_axes to fixed shape with onnx python API like the following: ``` import onnx model = onnx.load("input.onnx") for tensor in model.graph.input: for dim_proto in tensor.type.tensor_type.shape.dim: if dim_proto.HasField("dim_param"): # and dim_proto.dim_param == 'batch_size': dim_proto.Clear() dim_proto.dim_value = 32 # fixed batch size for tensor in model.graph.output: for dim_proto in tensor.type.tensor_type.shape.dim: if dim_proto.HasField("dim_param"): dim_proto.Clear() dim_proto.dim_value = 32 # fixed batch size onnx.save(model, "output.onnx") ``` <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I am facing the same issue converting a custom implementation of DETR (transformer). @nonstopfor were you able to fix this?
transformers
18,081
closed
Added the timeout variable in training args to avoid socket timeouts in DDP calls
This PR overrides the default timeout used by PyTorch in **torch.distributed.init_process_group** calls by introducing a timeout argument, which prevents socket timeouts. It adds a custom timeout argument to **src/transformers/training_args.py** that can be used to override the timeout passed to the **init_process_group** call, avoiding socket timeouts when mapping or tokenizing huge datasets takes a long time. The timeout argument is an **int** and its default value is **1800(s)**, which is the default used by torch.distributed.init_process_group. Since torch.distributed.init_process_group expects a **datetime.timedelta** object for its timeout parameter, the integer value is converted accordingly. # What does this PR do? Fixes #18054 #17106
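A rough sketch of the int-to-timedelta conversion the description refers to (illustrative only; the backend, rank and world size below are placeholders, not the exact code in this PR):

```python
import os
from datetime import timedelta

import torch.distributed as dist

ddp_timeout = 1800  # integer number of seconds, as exposed by the new training argument

# Minimal single-process setup so the call below can actually run (gloo backend, localhost).
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

# init_process_group takes a datetime.timedelta, so the int seconds are wrapped before the call.
dist.init_process_group(
    backend="gloo",
    rank=0,
    world_size=1,
    timeout=timedelta(seconds=ddp_timeout),
)
dist.destroy_process_group()
```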
07-09-2022 09:34:47
07-09-2022 09:34:47
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18081). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @dvlshah, thank you for your PR! Would this parameter need to be used on https://github.com/huggingface/transformers/blob/ac98a88fbc6377f93e8b7fbd244b0c3331bb82a0/src/transformers/training_args.py#L1310? cc @sgugger <|||||>> Hey @dvlshah, thank you for your PR! Would this parameter need to be used on > > https://github.com/huggingface/transformers/blob/ac98a88fbc6377f93e8b7fbd244b0c3331bb82a0/src/transformers/training_args.py#L1310 > > ? > cc @sgugger The idea is to provide the option to use this parameter, if they want, in the **torch.distributed.init_process_group(backend=self.xpu_backend, rank=rank, world_size=size)** call.<|||||>Thanks, but I'm not sure I follow the PR: why add a new `TrainingArguments` if it's not used anywhere?<|||||>@sgugger I need to add the timeout var in the torch.distributed.init_process_group call. Forgot to push the change in the PR.<|||||>Hey @dvlshah, did you push the changes?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,080
closed
attention_mask bug when training Wav2Vec2ForCTC with DeepSpeed
### System Info - `transformers` version: 4.19.2 - Platform: Linux-4.15.0-144-generic-x86_64-with-glibc2.27 - Python version: 3.8.13 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.7.1+cu110 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @patrickvonplaten @stas00 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I experienced a problem when training Wav2Vec2ForCTC: if I preprocess the data to create an attention_mask, its dtype is int32. Here is a simple example: ``` import torch from transformers import Wav2Vec2FeatureExtractor feature_extractor = Wav2Vec2FeatureExtractor(return_attention_mask=True) data = [{'input_values':[0.1,0.1,0.1]},{'input_values':[0.2,0.2,0.2,0.2,0.2]}] attn_mask = feature_extractor.pad(data,padding = "longest",return_tensors="pt")['attention_mask'] print(attn_mask.dtype) -> torch.int32 ``` This causes a problem when training Wav2Vec2ForCTC with DeepSpeed: the _prepare_input method in trainer.py changes int32 to float16 (if training with fp16) ``` def _prepare_input(self, data: Union[torch.Tensor, Any]) -> Union[torch.Tensor, Any]: """ Prepares one `data` before feeding it to the model, be it a tensor or a nested list/dictionary of tensors. """ if isinstance(data, Mapping): return type(data)({k: self._prepare_input(v) for k, v in data.items()}) elif isinstance(data, (tuple, list)): return type(data)(self._prepare_input(v) for v in data) elif isinstance(data, torch.Tensor): kwargs = dict(device=self.args.device) if self.deepspeed and data.dtype != torch.int64: # NLP models inputs are int64 and those get adjusted to the right dtype of the # embedding. Other models such as wav2vec2's inputs are already float and thus # may need special handling to match the dtypes of the model kwargs.update(dict(dtype=self.args.hf_deepspeed_config.dtype())) return data.to(**kwargs) return data ``` and the forward of Wav2Vec2ForCTC uses the sum of the attention_mask values ``` loss = None if labels is not None: if labels.max() >= self.config.vocab_size: raise ValueError(f"Label values must be <= vocab_size: {self.config.vocab_size}") # retrieve loss input_lengths from attention_mask attention_mask = ( attention_mask if attention_mask is not None else torch.ones_like(input_values, dtype=torch.long) ) input_lengths = self._get_feat_extract_output_lengths(attention_mask.sum(-1)).to(torch.long) # Here! ``` Because the attention_mask's dtype is now float16 (DeepSpeed) and audio length vectors are long, attention_mask.sum(-1) contains many 'inf' values and this can break training. Is this a known bug? I solved this problem by editing DataCollatorCTCWithPadding in the [example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L265) like this ``` batch['attention_mask'] = batch['attention_mask'].to(torch.long) ``` but I would like to know if there is another solution. ### Expected behavior Maybe change the attention_mask's dtype in the FeatureExtractor, or adjust the _prepare_input method's logic.
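To make the 'inf' claim above concrete, here is a small standalone illustration (not from the original report) of why summing a long fp16 attention mask overflows: float16 cannot represent values above 65504, so the summed lengths saturate to infinity.

```python
import torch

# A "mask" of 100k ones, roughly the attention_mask of a batch padded to 100k audio samples.
mask_fp16 = torch.ones(100_000, dtype=torch.float16)
mask_long = torch.ones(100_000, dtype=torch.long)

print(mask_fp16.sum())  # tensor(inf, dtype=torch.float16) -> 100000 exceeds the float16 max of 65504
print(mask_long.sum())  # tensor(100000)
```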
07-09-2022 05:47:55
07-09-2022 05:47:55
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry for being so slow / late here @ddobokki ! I think your solution sounds reasonable: ``` batch['attention_mask'] = batch['attention_mask'].to(torch.long) ``` => `attention_mask` should be in `long` so this is a welcome change. Do you mind opening a PR for this? BTW, we do the same (casting to `long`) for similar inputs for pre-training: https://github.com/huggingface/transformers/blob/6268694e27f1fc0192ba24e4bec181061b4a9bf8/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L335<|||||>@patrickvonplaten Thank you for the comments! It's a small change but I'm glad to contribute! I'll open a PR.
transformers
18,079
closed
Custom pipeline
# What does this PR do? This PR adds the ability to support custom pipelines on the Hub and share it with everyone else. Like the code in the Hub feature for models, tokenizers etc., the user has to add `trust_remote_code=True` when they want to use it. Apart from this, the best way to get familiar with the feature is to look at the [added documentation](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18079/en/add_new_pipeline#adding-it-to-the-list-of-supported-tasks). Note: this PR changes the newly added `PIPELINE_REGISTRY.register_pipeline` API to accept all the arguments one by one instead of inside a big dictionary. This makes the API easier to use in my opinion.
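As a rough sketch of what the argument-by-argument registration API described above could look like in practice (the task name and toy pipeline class here are invented for illustration; see the linked documentation for the exact signature):

```python
from transformers import AutoModelForSequenceClassification, Pipeline
from transformers.pipelines import PIPELINE_REGISTRY


class PairClassificationPipeline(Pipeline):
    """Toy pipeline used only to illustrate registration."""

    def _sanitize_parameters(self, **kwargs):
        return {}, {}, {}

    def preprocess(self, inputs):
        return self.tokenizer(inputs, return_tensors=self.framework)

    def _forward(self, model_inputs):
        return self.model(**model_inputs)

    def postprocess(self, model_outputs):
        return model_outputs.logits.argmax(-1).item()


# Arguments are passed one by one instead of inside a big dictionary.
PIPELINE_REGISTRY.register_pipeline(
    "pair-classification",
    pipeline_class=PairClassificationPipeline,
    pt_model=AutoModelForSequenceClassification,
)
```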
07-08-2022 21:37:12
07-08-2022 21:37:12
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger what is your opinion on using attrs for validation with the representation of the pipeline task? I think it would be nice if it has a validation step here. (this means introducing `attrs` as a transformers' dependency).<|||||>Yes, the doc clearly states that you need to save your custom pipeline in a module and import it from there. I could add support from writing something from `__main__.py` but maybe in a followup PR that also deals with custom models/tokenziers/configs etc?<|||||>[Line 655](https://github.com/huggingface/transformers/blob/f4e172716b91b477ce3cddc9a253094b7121a4b8/src/transformers/pipelines/__init__.py#L655) in https://github.com/huggingface/transformers/blob/f4e172716b91b477ce3cddc9a253094b7121a4b8/src/transformers/pipelines/__init__.py#L649-L657 will call https://github.com/huggingface/transformers/blob/f4e172716b91b477ce3cddc9a253094b7121a4b8/src/transformers/pipelines/base.py#L257 In the current version, we have `model_class` not being an `Auto` class for `(TF) ImageClassificationPipelineTests`, and we get test failure `TypeError: ('Keyword argument not understood:', 'trust_remote_code')` https://github.com/huggingface/transformers/runs/7421505300?check_suite_focus=true Adding `TFAutoModelForImageClassification` in `src/transformers/pipelines/__init__.py` will fix the issue.
transformers
18,078
closed
Make predict() close progress bars after finishing (#17952)
Fixes #17952 by adding an `on_predict` callback. ## Who can review? @sgugger
07-08-2022 20:00:42
07-08-2022 20:00:42
_The documentation is not available anymore as the PR was closed or merged._<|||||>Not sure about the tests; should they be there for notebooks too? I'll go with this for now.<|||||>I just triggered all of them by pushing a copy of your branch to the main fork of the repo; CircleCI is very finicky. Let's check everything is green!<|||||>The failure is flaky, so this is good to merge. Thanks again!
transformers
18,077
closed
Fix slow CI by pinning resampy
# What does this PR do? The recent release of resampy (0.3.1) seems to suddenly make a lot of things (even unrelated to speech) very slow and the CI has several jobs timing out. This PR fixes that by pinning resampy to any previous version.
07-08-2022 14:30:27
07-08-2022 14:30:27
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,076
closed
[Do not merge] debug Circleci
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
07-08-2022 13:48:53
07-08-2022 13:48:53
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,075
closed
Have 'Random Crop' option for truncation_side for Tokenizer
### Feature request Currently the tokenizer only has two options for truncation_side, 'right' and 'left'. I would like a 'random_crop' option so that it takes a window of up to max length from anywhere in the sequence. ### Motivation As a form of data augmentation, some people might want a random crop rather than consistently getting a crop from one side or the other. This varies the inputs to the model from the same data source. ### Your contribution I'm not quite sure. I don't know Rust, which I believe is what the tokenizer is based on.
07-08-2022 11:48:54
07-08-2022 11:48:54
@SantoshGuptaML Where would you want this to happen? One option you have is to write a custom slow (non-fast) tokenizer. You can follow this example where the tokenizer is able to make random variations in output tokens: https://github.com/huggingface/transformers/pull/11149/files AFAIK Rust is used for fast tokenizers, but the slow tokenizers are written in Python. Another option is you can subclass the DataCollator. This allows you to augment samples during training using the Trainer API instead of creating an augmented dataset ahead of time. Here's a working (albeit unoptimized) example of a custom data collator that does what you want. ``` from transformers import DataCollatorWithPadding from dataclasses import dataclass import random @dataclass class RandomCropDataCollator(DataCollatorWithPadding): random_truncation_token_length = 10 def __call__(self, features): for f in features: original_token_length = len(f['input_ids']) start_truncation = random.randint(0, original_token_length-self.random_truncation_token_length) f['input_ids'] = f['input_ids'][:start_truncation] + f['input_ids'][start_truncation+self.random_truncation_token_length:] f['attention_mask'] = f['attention_mask'][:start_truncation] + f['attention_mask'][start_truncation+self.random_truncation_token_length:] end_shape = len(f['input_ids']) #print(original_token_length, "-------->", end_shape) return super().__call__(features) ``` `data_collator = RandomCropDataCollator(tokenizer)` [colab](https://colab.research.google.com/drive/1HRTDjuKw1TRTRlIT08MhZOXCbViGlpht?usp=sharing)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,074
closed
Cannot successfully convert convnext models to onnx using transformers.onnx
### System Info transformers==4.20.1 onnxruntime==1.11.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction `python -m transformers.onnx --model="facebook/convnext-base-224-22k" convnext-base-224-22k ` Used the above command to convert. The conversion to onnx format is not successful. Error : ``` 2022-07-08 15:21:40.258366: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2022-07-08 15:21:40.258414: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. 2022-07-08 15:22:35.205167: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-07-08 15:22:35.210358: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2022-07-08 15:22:35.210594: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublas.so.11'; dlerror: libcublas.so.11: cannot open shared object file: No such file or directory 2022-07-08 15:22:35.210711: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublasLt.so.11'; dlerror: libcublasLt.so.11: cannot open shared object file: No such file or directory 2022-07-08 15:22:35.210841: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcufft.so.10'; dlerror: libcufft.so.10: cannot open shared object file: No such file or directory 2022-07-08 15:22:35.211142: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcurand.so.10'; dlerror: libcurand.so.10: cannot open shared object file: No such file or directory 2022-07-08 15:22:35.211293: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusolver.so.11'; dlerror: libcusolver.so.11: cannot open shared object file: No such file or directory 2022-07-08 15:22:35.211417: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusparse.so.11'; dlerror: libcusparse.so.11: cannot open shared object file: No such file or directory 2022-07-08 15:22:35.211569: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudnn.so.8'; dlerror: libcudnn.so.8: cannot open shared object file: No such file or directory 2022-07-08 15:22:35.211603: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1850] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices... 
2022-07-08 15:22:35.247214: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. Some weights of the model checkpoint at facebook/convnext-base-224-22k were not used when initializing ConvNextModel: ['classifier.bias', 'classifier.weight'] - This IS expected if you are initializing ConvNextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing ConvNextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Using framework PyTorch: 1.11.0+cu102 Validating ONNX model... -[✓] ONNX model output names match reference model ({'last_hidden_state'}) - Validating ONNX Model output "last_hidden_state": -[✓] (2, 1024, 7, 7) matches (2, 1024, 7, 7) -[x] values not close enough (atol: 1e-05) Traceback (most recent call last): File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/media/dingusagar/Data/python-envs/onnx-env/lib/python3.7/site-packages/transformers/onnx/__main__.py", line 107, in <module> main() File "/media/dingusagar/Data/python-envs/onnx-env/lib/python3.7/site-packages/transformers/onnx/__main__.py", line 100, in main validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outputs, args.atol) File "/media/dingusagar/Data/python-envs/onnx-env/lib/python3.7/site-packages/transformers/onnx/convert.py", line 441, in validate_model_outputs "Outputs values doesn't match between reference model and ONNX exported model: " ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.000457763671875 ``` ### Expected behavior Expected the validation step to be successful.
07-08-2022 09:54:31
07-08-2022 09:54:31
You need to provide the `--feature` argument<|||||>@NielsRogge is right, if you want to perform image classification you need to provide the `--feature image-classification` argument. Besides, it is also possible that the default absolute tolerance used for validating the model is not adapted to different sizes/checkpoints of the same architecture. We use `"facebook/convnext-tiny-224"` in our internal tests, but, according to the error you got, `"facebook/convnext-base-224-22k"` may require the additional argument `--atol 1e-03`, which is still acceptable.<|||||>Thanks for the reply @NielsRogge @regisss . `--atol 1e-03` fixed the issue.
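Putting the two suggestions above together, the full export command would presumably look something like this (feature name and tolerance as discussed in the thread; the output directory name is arbitrary):

```bash
python -m transformers.onnx --model=facebook/convnext-base-224-22k --feature=image-classification --atol=1e-03 convnext-onnx/
```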
transformers
18,073
closed
Update TF(Vision)EncoderDecoderModel PT/TF equivalence tests
# What does this PR do? Make PT/TF equivalence tests for TF(Vision)EncoderDecoderModel aligned with the ones defined in `test_modeling_tf_common.py`: - test all hidden states, attention outputs - (and a minor bonus: consistent test names)
07-08-2022 09:47:26
07-08-2022 09:47:26
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Would it be possible to move in the opposite direction, i.e. make the TFEncoderDecoder tests (or TFEncoderDecoderMixin itself) inherit from TFModelTesterMixin? I also asked myself, and think it would be nice. Having something like `TFGenericEncoderDecoderModelTesterMixin` (or whatever better names) and inheriting from there is good enough (see below). Regarding inheriting from `TFModelTesterMixin`, we have to skip some (or a lot of) tests that are not for encoder-decoder architectures, and requires some extra changes in `TFModelTesterMixin` to take into account encoder-decoder architectures. There is a previous related discussion: - one from @patrickvonplaten (and more related to this discussion here): https://github.com/huggingface/transformers/pull/16280#issuecomment-1077449865 - from @sgugger https://github.com/huggingface/transformers/pull/16280#issuecomment-1077603150 - a thread: https://github.com/huggingface/transformers/pull/16280#discussion_r832036543 remark: the discussion regarding `is_encoder_decoder` there is (originally) more about Bart / T5 etc, not the `(Vision/Speech)EncoderDecoderModel`.<|||||>Okay I see the argument and I agree with it -- several composable mixins would be the way! We may be able to carve out some tests from `TFModelTesterMixin` that are truly global, but streamlining `EncoderDecoder` tests seems more important 👍 I will be reviewing this PR soon with that in mind<|||||>@gante I also fixed `closed enough` in `tests/test_modeling_common.py` (thinking it is tiny that could be fixed in this PR too)
transformers
18,072
closed
Enhance IPEX integration in Trainer
# What does this PR do? Fixes [# (issue)](https://github.com/huggingface/transformers/issues/17962) - Adds a check to is_ipex_available that the installed IPEX version matches the PyTorch version; - Trainer unit tests are skipped, or users are informed, when the IPEX version is not aligned with the PyTorch version. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
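A hedged sketch of the kind of version-matching check the description refers to; the exact comparison in the PR may differ, this only illustrates comparing the installed IPEX and PyTorch major.minor versions:

```python
import importlib.util

from packaging import version


def ipex_matches_torch() -> bool:
    """Return True only if intel_extension_for_pytorch is installed and matches torch's major.minor version."""
    if importlib.util.find_spec("intel_extension_for_pytorch") is None:
        return False
    import intel_extension_for_pytorch as ipex
    import torch

    torch_major_minor = version.parse(torch.__version__).release[:2]
    ipex_major_minor = version.parse(ipex.__version__).release[:2]
    return torch_major_minor == ipex_major_minor
```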
07-08-2022 09:36:46
07-08-2022 09:36:46
_The documentation is not available anymore as the PR was closed or merged._<|||||>so you probably want to re-enable the tests then in this PR, right? as flagged here they were disabled as they were failing: https://github.com/huggingface/transformers/pull/17138#issuecomment-1171348748 so in this case you'd probably want to add a skip decorator that requires not just availability of ipex but also that it matches torch version.<|||||>> so you probably want to re-enable the tests then in this PR, right? > > as flagged here they were disabled as they were failing: > > [#17138 (comment)](https://github.com/huggingface/transformers/pull/17138#issuecomment-1171348748) > > so in this case you'd probably want to add a skip decorator that requires not just availability of ipex but also that it matches torch version. Yes! And here we enhance the `is_ipex_available `to check both the installation and version matching; And it is used in the `require_intel_extension_for_pytorch`
transformers
18,071
closed
Composite models (encoder-decoder) behave differently across PyTorch and TensorFlow
### System Info - `transformers` version: 4.21.0.dev0 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.9.5 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.10.1+cpu (False) - Tensorflow version (GPU?): 2.7.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @gante @patrick ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction For composite models (Encoder-Decoder like), the behavior of some config attributes (e.g. `output_hidden_states`) is not consistent. - PT Encoder-Decoder leaves the components to use their own configs - TF Encoder-Decoder uses `unpack_inputs`, which uses its own config attributes (and some attributes of its component configs have no effect) ### Case 1 ```python import torch import tensorflow as tf from transformers import BertConfig from transformers import EncoderDecoderConfig, EncoderDecoderModel, TFEncoderDecoderModel bert_config = BertConfig() config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config=bert_config, decoder_config=bert_config) # Set `output_hidden_states` in top models config.output_hidden_states = True config.output_hidden_states = True pt_model = EncoderDecoderModel(config) tf_model = TFEncoderDecoderModel(config) input_ids = torch.tensor([[1, 1]], dtype=torch.int32) pt_output = pt_model(input_ids, decoder_input_ids=input_ids) input_ids = tf.constant([[1, 1]], dtype=tf.int32) tf_output = tf_model(input_ids, decoder_input_ids=input_ids) print(pt_output.keys()) print(tf_output.keys()) ``` ### Output (case 1) ```bash odict_keys(['logits', 'past_key_values', 'encoder_last_hidden_state']) odict_keys(['logits', 'decoder_hidden_states', 'encoder_last_hidden_state', 'encoder_hidden_states']) ``` ### Case 2 ```python import torch import tensorflow as tf from transformers import BertConfig from transformers import EncoderDecoderConfig, EncoderDecoderModel, TFEncoderDecoderModel bert_config = BertConfig() config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config=bert_config, decoder_config=bert_config) # Set `output_hidden_states` in sub models config.encoder.output_hidden_states = True config.decoder.output_hidden_states = True pt_model = EncoderDecoderModel(config) tf_model = TFEncoderDecoderModel(config) input_ids = torch.tensor([[1, 1]], dtype=torch.int32) pt_output = pt_model(input_ids, decoder_input_ids=input_ids) input_ids = tf.constant([[1, 1]], dtype=tf.int32) tf_output = tf_model(input_ids, decoder_input_ids=input_ids) print(pt_output.keys()) print(tf_output.keys()) ``` ### Output (case 2) ```bash odict_keys(['logits', 'past_key_values', 'decoder_hidden_states', 'encoder_last_hidden_state', 'encoder_hidden_states']) odict_keys(['logits', 'encoder_last_hidden_state']) ``` ### Expected behavior Ideally, all of them should return hidden states. Or at the least, PT/TF should return the same results.
07-08-2022 08:07:55
07-08-2022 08:07:55
Seems related to the `@unpack_inputs` decorator - I remember it being much trickier on composite models when I first added it. Will be taking a look today 👍
transformers
18,070
closed
Warnings for unexpected parameters when resuming training.
### Feature request The `Trainer` class should report warning messages if the training arguments provided for a resumed run don't match the configuration stored in the given checkpoint. ### Motivation I started an MLM training run with 9 epochs and then tried to resume the training until 15 epochs. I didn't realise at the time that the learning rate had already decayed to zero in the previous run. In the new run, the LR schedule was recomputed and no warning was given, making me think the training was going on as expected. What would have helped is a warning message saying that the previous run was for 9 epochs and that this cannot change in this run. ### Your contribution N/A
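One possible shape for such a check, as a sketch only: the `training_args.bin` file name is what `Trainer` checkpoints contain today, but the helper and the list of compared fields below are hypothetical.

```python
import logging
import os

import torch

logger = logging.getLogger(__name__)


def warn_on_changed_args(checkpoint_dir: str, new_args) -> None:
    """Compare a few resumed TrainingArguments against the ones stored in the checkpoint and warn on mismatch."""
    path = os.path.join(checkpoint_dir, "training_args.bin")
    if not os.path.isfile(path):
        return
    old_args = torch.load(path)
    for name in ("num_train_epochs", "learning_rate", "per_device_train_batch_size"):
        old, new = getattr(old_args, name, None), getattr(new_args, name, None)
        if old != new:
            logger.warning(f"Resuming with {name}={new}, but the checkpoint was trained with {name}={old}.")
```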
07-08-2022 07:54:42
07-08-2022 07:54:42
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,069
closed
Fix RESOURCE_EXHAUSTED error when dealing with large datasets in Flax example scripts
# What does this PR do? Fixes #15411. All Flax example scripts using `generate_batch_splits` raise a RESOURCE_EXHAUSTED error when training with large datasets on TPU. ## Fix Simply transfer the jnp array `samples_idx` back to the host before putting it in `np.split`. ## Who can review? cc potential reviewers: @sgugger, @patil-suraj, @patrickvonplaten
07-08-2022 07:16:50
07-08-2022 07:16:50
_The documentation is not available anymore as the PR was closed or merged._<|||||>@duongna21 Do you know the reason why we get `RESOURCE_EXHAUSTED` without the fix? Also, it would be nice to add a comment before the line to explain it a bit 🙏 Also, does this issue occur only when we run on TPU? ~~I see this error before when running on TPU, but never see it on CPU.~~ Thank you!<|||||>@ydshieh I dug a little bit into the memory profiling when running the original code: ``` >>> import jax.numpy as jnp >>> import numpy as np >>> import time >>> def generate_batch_splits(samples_idx: jnp.ndarray, batch_size: int) -> jnp.ndarray: ... num_samples = len(samples_idx) ... samples_to_remove = num_samples % batch_size ... if samples_to_remove != 0: ... samples_idx = samples_idx[:-samples_to_remove] ... sections_split = num_samples // batch_size ... batch_idx = np.split(samples_idx, sections_split) ... jax.profiler.save_device_memory_profile("generate_batch_splits_tpu_bs64.prof") ... return batch_idx ... >>> input_rng = jax.random.PRNGKey(0) >>> samples_idx = jax.random.permutation(input_rng, jnp.arange(50758958)) # 50758958 is num of samples in my case >>> tic = time.time() >>> train_batch_idx = generate_batch_splits(samples_idx, batch_size=64) # per device batch size = 8 >>> toc = time.time() >>> print('elapsed time: ', toc-tic) elapsed time: 578.8218853473663 ``` <img src="https://i.imgur.com/8tns1X2.png" width="400" height="300" /> => **968.15MB** (keep increasing when we use larger dataset) of TPU memory ~~wasted~~ used while generating batch splits. When I transfered the jnp array `samples_idx` back to the CPU before putting it in `np.split`: ``` ... def generate_batch_splits(samples_idx: jnp.ndarray, batch_size: int) -> jnp.ndarray: samples_idx = jax.device_get(samples_idx) num_samples = len(samples_idx) ... >>> print('elapsed time: ', toc-tic) elapsed time: 3.2475574016571045 ``` <img src="https://i.imgur.com/MCDDYHv.png" width="220" height="300" /> => We just saved **968.15MB** (and 575 seconds). But we have an even better solution to save another **193.63MB** (and 2 seconds) by generating `samples_idx` using CPU in the first place: ``` ... samples_idx = np.random.permutation(np.arange(50758958)) # 50758958 is num of samples in my case ... >>> print('elapsed time: ', toc-tic) elapsed time: 1.0959625244140625 ``` Anyway, 1GB of wasted TPU memory is not the decisive factor in causing RESOURCE_EXHAUTED, but this fix saves us from the weird RESOURCE_EXHAUTED error when using the same batch size and only increasing the dataset size. Of course, this issue can occur on CPU when we run out of RAM. <|||||>Thanks a lot for the PR @duongna21 ! You're right we should better use numpy there - the PR looks good to me :-)
transformers
18,068
closed
StoppingCriteria "scores" is always None
### System Info I've written a custom StoppingCriteria subclass and I'm trying to utilize the `scores` in my decision logic, but I'm finding that `scores` is always `None`. Is that intentional? ### Who can help? @patrickvonplaten, @Narsil, @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` class TopPredictionOutsideTargetSetStoppingCriteria(StoppingCriteria): def __init__(self, priority_tokens_ids: list): self.priority_token_ids = priority_tokens_ids def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: print(f"TopPred SCORES? {scores}, input_ids: {input_ids}") # <--- "scores" is None but "input_ids" is correct top = torch.topk(scores, 1, dim=1).indices[0] if not top in self.priority_token_ids: return True return False ``` ### Expected behavior Since the function indicates `scores` as an input, I'd expect it to be a non-null value.
07-08-2022 06:34:04
07-08-2022 06:34:04
Hey @jbmaxwell, sadly we don't have the time to dive into problems related to customized code. For this, could you maybe try to use the forum instead? https://discuss.huggingface.co/ Looking at this line: https://github.com/huggingface/transformers/blob/ac98a88fbc6377f93e8b7fbd244b0c3331bb82a0/src/transformers/generation_utils.py#L1738 `scores` should not be `None` though.<|||||>Thanks for the reply. I understand about custom code. I just wanted to report that `scores` is `None`, and to confirm that this _shouldn't_ be the case. I'll see if I can figure out how that's happening. <|||||>> Thanks for the reply. I understand about custom code. I just wanted to report that `scores` is `None`, and to confirm that this _shouldn't_ be the case. I'll see if I can figure out how that's happening. Hi, I have the same issue. How did you solve this problem?
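For anyone hitting the same thing, one guess based on reading the generation loop (not a confirmed fix): the `scores` tuple handed to the stopping criteria is only accumulated when score output is requested, so it stays `None` unless both flags below are passed. If that reading is right, the criteria then receives a tuple of per-step score tensors, so a custom `__call__` would want to look at `scores[-1]`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
input_ids = tokenizer("Hello", return_tensors="pt").input_ids

# With both flags set, the loop accumulates per-step scores instead of passing None around.
outputs = model.generate(
    input_ids,
    max_new_tokens=5,
    return_dict_in_generate=True,
    output_scores=True,
)
print(len(outputs.scores))  # one score tensor per generated step
```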
transformers
18,067
closed
[ create_a_model.mdx ] Finished
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
07-07-2022 22:23:24
07-07-2022 22:23:24
cc @omarespejel
transformers
18,066
closed
An example for finetuning FLAVA or any VLP multimodel using trainer (for example for classification)
### Feature request There is no example of fine-tuning any VLP model using the Trainer. I would appreciate an example. ### Motivation The way to use the Trainer with any vision-and-language pretrained model is not clear. ### Your contribution None.
07-07-2022 21:31:51
07-07-2022 21:31:51
Hi, Notebooks for FLAVA will soon be available in https://github.com/NielsRogge/Transformers-Tutorials. You can find already some tutorials here: https://github.com/apsdehal/flava-tutorials. cc @apsdehal <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>The issue was auto marked as closed, but there aren't yet any resources on how to fine-tune FLAVA. Neither of the links posted above by @NielsRogge have instructions on fine-tuning. I'm posting to also express my interest on this.
transformers
18,065
closed
2d embed
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-07-2022 20:52:51
07-07-2022 20:52:51
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,064
closed
add dataset split and config to model-index in TrainingSummary.from_trainer
# What does this PR do? Adds the dataset split and config to the model-index in `TrainingSummary` when it is generated with `from_trainer` on a HF Hub dataset. Example: https://huggingface.co/loicmagne/pr_dataset_metadata/blob/main/README.md ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @lewtun
07-07-2022 19:46:46
07-07-2022 19:46:46
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for working on this @loicmagne 🔥 ! For the failing CI, I think you first need to run: ``` make style && make quality ``` to format the code. For the failing unit tests, it seems there's a problem with: ``` FAILED examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_speech_recognition_seq2seq FAILED examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_swag ``` In both cases, the source of the error is a missing key in the dataset tag logic: ``` self = TrainingSummary(model_name='tmpbcyb71hu', language='en', license='apache-2.0', tags=['generated_from_trainer'], finetu...epsilon=1e-08', 'lr_scheduler_type': 'linear', 'lr_scheduler_warmup_steps': 2, 'training_steps': 20}, source='trainer') metric_mapping = {'accuracy': 'Accuracy'} def create_model_index(self, metric_mapping): model_index = {"name": self.model_name} # Dataset mapping tag -> name dataset_names = _listify(self.dataset) dataset_tags = _listify(self.dataset_tags) dataset_args = _listify(self.dataset_args) dataset_metadata = _listify(self.dataset_metadata) if len(dataset_args) < len(dataset_tags): dataset_args = dataset_args + [None] * (len(dataset_tags) - len(dataset_args)) dataset_mapping = {tag: name for tag, name in zip(dataset_tags, dataset_names)} dataset_arg_mapping = {tag: arg for tag, arg in zip(dataset_tags, dataset_args)} dataset_metadata_mapping = {tag: metadata for tag, metadata in zip(dataset_tags, dataset_metadata)} task_mapping = { task: TASK_TAG_TO_NAME_MAPPING[task] for task in _listify(self.tasks) if task in TASK_TAG_TO_NAME_MAPPING } model_index["results"] = [] if len(task_mapping) == 0 and len(dataset_mapping) == 0: return [model_index] if len(task_mapping) == 0: task_mapping = {None: None} if len(dataset_mapping) == 0: dataset_mapping = {None: None} # One entry per dataset and per task all_possibilities = [(task_tag, ds_tag) for task_tag in task_mapping for ds_tag in dataset_mapping] for task_tag, ds_tag in all_possibilities: result = {} if task_tag is not None: result["task"] = {"name": task_mapping[task_tag], "type": task_tag} if ds_tag is not None: > metadata = dataset_metadata_mapping[ds_tag] or {} E KeyError: 'swag' ``` I suggest trying to reproduce the error locally with ``` $ pip install -r examples/pytorch/_tests_requirements.txt # only needed the first time $ python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_swag ```<|||||>@loicmagne I see the two same examples test are still failing: ``` FAILED examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_speech_recognition_seq2seq FAILED examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_swag ``` Could you see if you can reproduce this locally and fix them if needed? 
For reference, here's the stack trace coming from the speech test: ``` self = TrainingSummary(model_name='tmp0e49tqpd', language=None, license=None, tags=['generated_from_trainer'], finetuned_from...Adam with betas=(0.9,0.999) and epsilon=1e-08', 'lr_scheduler_type': 'linear', 'training_steps': 10}, source='trainer') metric_mapping = {} def create_model_index(self, metric_mapping): model_index = {"name": self.model_name} # Dataset mapping tag -> name dataset_names = _listify(self.dataset) dataset_tags = _listify(self.dataset_tags) dataset_args = _listify(self.dataset_args) dataset_metadata = _listify(self.dataset_metadata) if len(dataset_args) < len(dataset_tags): dataset_args = dataset_args + [None] * (len(dataset_tags) - len(dataset_args)) dataset_mapping = {tag: name for tag, name in zip(dataset_tags, dataset_names)} dataset_arg_mapping = {tag: arg for tag, arg in zip(dataset_tags, dataset_args)} dataset_metadata_mapping = {tag: metadata for tag, metadata in zip(dataset_tags, dataset_metadata)} task_mapping = { task: TASK_TAG_TO_NAME_MAPPING[task] for task in _listify(self.tasks) if task in TASK_TAG_TO_NAME_MAPPING } model_index["results"] = [] if len(task_mapping) == 0 and len(dataset_mapping) == 0: return [model_index] if len(task_mapping) == 0: task_mapping = {None: None} if len(dataset_mapping) == 0: dataset_mapping = {None: None} # One entry per dataset and per task all_possibilities = [(task_tag, ds_tag) for task_tag in task_mapping for ds_tag in dataset_mapping] for task_tag, ds_tag in all_possibilities: result = {} if task_tag is not None: result["task"] = {"name": task_mapping[task_tag], "type": task_tag} if ds_tag is not None: > metadata = dataset_metadata_mapping[ds_tag] or {} E KeyError: 'hf-internal-testing/librispeech_asr_dummy' ```<|||||>To make sure all tests are now green, could you rebase on main? The examples test takes forever because of a new release of a dependency, we've pinned it on main.<|||||>CI is now green, so merging this!
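For readers following the `KeyError` traces in the record above, here is a minimal, standalone illustration of the kind of defensive lookup that would avoid them. This is only a sketch: whether the PR ultimately fixed the tests this way or by threading the missing metadata through is not shown in this thread, and the sample mapping below is invented for illustration.

```python
# Illustrative only: fall back to an empty dict when a dataset tag has no
# recorded metadata, instead of indexing directly and raising a KeyError.
dataset_metadata_mapping = {"glue": {"config": "mrpc", "split": "validation"}}

for ds_tag in ("glue", "swag"):  # "swag" mirrors one of the failing tests above
    metadata = dataset_metadata_mapping.get(ds_tag) or {}
    print(ds_tag, metadata)
```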
transformers
18,063
open
Add TF implementation of LongT5 model
### Feature request Add TF implementation of LongT5 model ### Motivation Add support for TF backend to allow using LongT5 with TF. ### Your contribution I will add this :] cc: @gante @patrickvonplaten
07-07-2022 19:02:04
07-07-2022 19:02:04
Gonna start working on this today, sorry for delay :]
transformers
18,062
closed
Update localized READMES when template is filled.
# What does this PR do? When adding a new model, `make fix-copies` will automatically create a new entry for the model in the README if the user forgot, with some `<FILL XXX>` placeholders to fill in. The same entry is also added to the localized READMEs. Once this has been done, however, the localized READMEs are not updated when the user fills in the template in the main README, which is a bit annoying. We can see an example in #17821. This PR fixes that.
07-07-2022 18:03:24
07-07-2022 18:03:24
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,061
closed
fix loading from pretrained for sharded models with `torch_dtype="auto"`
Fixes the following script, which failed because `resolved_archive_file` is a list for sharded models while `load_state_dict` expects a path to a single file: ```python model = AutoModelForCausalLM.from_pretrained("bigscience/bloom", torch_dtype="auto") ```
07-07-2022 17:15:13
07-07-2022 17:15:13
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @NouamaneTazi, do you have a code example that failed before and that doesn't fail anymore with your PR?<|||||>Yes @LysandreJik, the script I provided did fail for me when I tried it: ```python model = AutoModelForCausalLM.from_pretrained("bigscience/bloom", torch_dtype="auto") # this should fail for any sharded models ``` The issue was that `load_state_dict` expects a `str` or a `Pathlike` while `resolved_archive_file` is a list for sharded models.<|||||>Understood! It's a bit hard to play with such a large model, so I'm reproducing with `lysandre/test-bert-sharded`. However, it seems that it doesn't entirely fix the issue: ```py >>> from transformers import AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained("lysandre/test-bert-sharded", torch_dtype="auto") File ~/Workspaces/Python/transformers/src/transformers/models/auto/auto_factory.py:446, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 444 elif type(config) in cls._model_mapping.keys(): 445 model_class = _get_model_class(config, cls._model_mapping) --> 446 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) 447 raise ValueError( 448 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n" 449 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}." 450 ) File ~/Workspaces/Python/transformers/src/transformers/modeling_utils.py:2040, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 2038 torch_dtype = get_state_dict_dtype(state_dict) 2039 else: -> 2040 one_state_dict = load_state_dict(resolved_archive_file) 2041 torch_dtype = get_state_dict_dtype(one_state_dict) 2042 del one_state_dict # free CPU memory File ~/Workspaces/Python/transformers/src/transformers/modeling_utils.py:359, in load_state_dict(checkpoint_file) 357 except Exception as e: 358 try: --> 359 with open(checkpoint_file) as f: 360 if f.read().startswith("version"): 361 raise OSError( 362 "You seem to have cloned a repository without having git-lfs installed. Please install " 363 "git-lfs and run `git lfs install` followed by `git lfs pull` in the folder " 364 "you cloned." 365 ) TypeError: expected str, bytes or os.PathLike object, not list ```<|||||>This is exactly the error I got before the fix. And from your traceback it seems that the patch wasn't applied You have ```python -> 2040 one_state_dict = load_state_dict(resolved_archive_file) ``` When it should be ```python -> 2040 one_state_dict = load_state_dict(resolved_archive_file[0]) ``` From my side, testing with the patch did succeed in loading your model<|||||>Ah, great catch; I had too many `patch-1` branches locally. Your patch seems to work, pinging @sgugger for additional verification.<|||||>Should I raise a warning when this method is used @sgugger?<|||||>I don't think Stas will like the extra warning, so I'd say no ;-)
transformers
18,060
closed
LED Model returns AlgorithmError when using SageMaker SMP training #16890
### System Info cc @philschmid , cc @ydshieh , cc @sgugger Hello, this is a follow-up on a related post (with the below link) with the same title: https://github.com/huggingface/transformers/issues/16890 We made a bit more progress but are still facing some issues and are trying to fix them, after trying out several fixes including matching the Python, transformers, and PyTorch versions to the recommendations (3.8, 4.16.2, and 1.10.2, respectively): - ValueError: not enough values to unpack (expected 2, got 1) The error is raised in `modeling_led` within the transformers module, which expects a different `input_ids` shape. As a new update, we tried the following to unsqueeze the input tensors passed to `modeling_led` to solve the above error: def unsqueeze_col(example): return {"input_ids": torch.unsqueeze(example["input_ids"], 0)} pubmed_train = pubmed_train.map(unsqueeze_col) It helped us move forward in the process, but we got another error, shown below, a little further down in the code: UnexpectedStatusException: Error for Training job huggingface-pytorch-training-2022-06-29-04-04-58-606: Failed. Reason: AlgorithmError: ExecuteUserScriptError: ExitCode 1 ErrorMessage ":RuntimeError: Tensors must have same number of dimensions: got 4 and 3 :Environment variable SAGEMAKER_INSTANCE_TYPE is not set :Environment variable SAGEMAKER_INSTANCE_TYPE is not set :Environment variable SAGEMAKER_INSTANCE_TYPE is not set :Environment variable SAGEMAKER_INSTANCE_TYPE is not set :Environment variable SAGEMAKER_INSTANCE_TYPE is not set :Environment variable SAGEMAKER_INSTANCE_TYPE is not set :Environment variable SAGEMAKER_INSTANCE_TYPE is not set :Environment variable SAGEMAKER_INSTANCE_TYPE is not set -------------------------------------------------------------------------- Primary job terminated normally, but 1 process returned a non-zero exit code. Per user-direction, the job has been aborted. mpirun.real detected that one or more processes exited with non-zero status, thus causing the job to be terminated. The first process to do so was: Process name: [[41154,1],0] Exit code: 1" Command "mpirun --host algo-1:8 I'd greatly appreciate your feedback. Please let me know if you need any further information about the project. ### Who can help? [SageMakerAprilTraining.zip](https://github.com/huggingface/transformers/files/9065968/SageMakerAprilTraining.zip) ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Running the attached file with the training Python file ### Expected behavior I have shared the notebook and the error raised in it for clarification
07-07-2022 17:10:37
07-07-2022 17:10:37
@omid0001 @kanwari3, would it be possible for you to reproduce this issue (`not enough values to unpack`) without using SageMaker, i.e. just with a Python script? ```bash [1,0]: bsz, seq_len = input_ids_shape[:2] [1,0]:ValueError: not enough values to unpack (expected 2, got 1) ``` It would be a good idea to verify what data is received by the model first. Usually the batches of `input_ids` should already have the shape `(batch_size, sequence_length)`; if you see the above error, the data or its processing pipeline likely has an issue. Using `torch.unsqueeze` is not really a good idea, as it implies a `batch_size` of only 1. My suggestion: - Try to run your training without SageMaker (and without the `torch.unsqueeze` fix) - Check what is received by the model, and check whether the data pipeline prepares the correct input format - If you still get the issue and can't figure it out: - I could try to help if you could provide the training script + data processing script + a tiny portion of your data - If the issue only occurs when you wrap the training in SageMaker, I don't have the competence to help in this case, sorry.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
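To make the shape advice in the comment above concrete, here is a minimal, self-contained sanity check that could be run outside SageMaker. It uses the public LED base checkpoint, and the sample texts are placeholders rather than the reporter's actual data.

```python
from transformers import AutoTokenizer

# Minimal shape sanity check along the lines suggested above.
# The checkpoint is the public LED base model; the texts are placeholders.
tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")

batch = tokenizer(
    ["first placeholder article ...", "second placeholder article ..."],
    padding=True,
    truncation=True,
    return_tensors="pt",
)

# LED, like other models, expects `input_ids` of shape
# (batch_size, sequence_length); a 1-D tensor here means the data
# pipeline is dropping the batch dimension.
print(batch["input_ids"].shape)  # e.g. torch.Size([2, 8])
```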
transformers
18,059
closed
fixed other examples
null
07-07-2022 16:13:51
07-07-2022 16:13:51
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,058
closed
[bloom] Add alibi cache for fixed inputs
### Feature request Add an alibi cache for BLOOM ### Motivation Many training scenarios involve a fixed input length, in which case re-creating the deterministic alibi tensor, which is the same on every forward pass, is a waste. A similar scenario is when inputs are grouped into bins and padded to the fixed length of each bin. I propose we add a small cache, even of just 1 value, to speed up these scenarios. A 1-value cache would mean saving the tensor on the `BloomModel` object and rebuilding it only when the tensor's length needs to change. @younesbelkada
07-07-2022 15:06:30
07-07-2022 15:06:30
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
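To make the proposal in the issue above concrete, here is a rough sketch of such a 1-value cache. It assumes the alibi tensor is fully determined by the sequence length, head count, dtype, and device, as in the fixed-length scenarios described, and the class and attribute names are illustrative rather than an actual implementation; the builder callable is supplied by the caller, e.g. the alibi helper used by the BLOOM modeling code.

```python
class AlibiCache:
    """Rough 1-value cache for the alibi tensor, as proposed above.

    Assumes the tensor is fully determined by (seq_len, num_heads, dtype,
    device); the builder function is supplied by the caller.
    """

    def __init__(self, build_alibi_tensor):
        self._build = build_alibi_tensor
        self._key = None
        self._cached = None

    def get(self, attention_mask, num_heads, dtype):
        key = (attention_mask.shape[-1], num_heads, dtype, attention_mask.device)
        if key != self._key:
            # Rebuild only when the sequence length (or dtype/device) changes.
            self._cached = self._build(attention_mask, num_heads, dtype)
            self._key = key
        return self._cached
```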