| Column | Dtype | Values / lengths |
| --- | --- | --- |
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
18,057
closed
XGLM - Fix Softmax NaNs when using FP16
# What does this PR do? Fixes #18049 following the exact same procedure used in #17437. Besides the added test, I also evaluated the fix on my personal use-case and found the behavior of the fixed model to be consistent when performing single or batched generation. ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you write any new necessary tests? ## Who can review? @patil-suraj @ydshieh @patrickvonplaten
07-07-2022 14:51:23
07-07-2022 14:51:23
_The documentation is not available anymore as the PR was closed or merged._<|||||>@patil-suraj I think only your check is missing!<|||||>Sorry for being so late here @gsarti! Merged master into it to ping circle ci here<|||||>Hey @gsarti - it seems like a test is failing now: ``` tests/models/xglm/test_modeling_xglm.py::XGLMModelTest::test_xglm_model_past ``` with ``` UnboundLocalError: local variable 'dtype_attn_weights' referenced before assignment ```<|||||>> Hey @gsarti - it seems like a test is failing now: > > ``` > tests/models/xglm/test_modeling_xglm.py::XGLMModelTest::test_xglm_model_past > ``` > > with > > ``` > UnboundLocalError: local variable 'dtype_attn_weights' referenced before assignment > ``` I noticed this when running the code. My understanding is that setting `dtype_attn_weights` as `torch.float32` as default beforehand would fix the issue and maintain the expected behavior, could you double-check?<|||||>Hi @gsarti Sorry for being late for this PR. I re-opened it and give some suggestion for a fix to the failing test. Would you like to update this PR after rebasing your working branch on an updated `main` branch?<|||||>Hi @gsarti I made the necessary change to pass the tests, and pushed to your branch directly. The remaining failing test is irrelevant to this PR, but I will wait until tomorrow to check again, then I will merge. cc @patrickvonplaten and @younesbelkada <|||||>Thanks a lot for the fix @ydshieh !! I think for consistency we should apply the same changes on OPT too, I will take care of that first thing in the morning tomorrow 💪
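The fix pattern discussed in this thread (upcasting the softmax to fp32 when the attention weights are fp16, with `dtype_attn_weights` bound before the branch so the fp32 test path no longer hits the `UnboundLocalError`) looks roughly like the sketch below. This illustrates the pattern from #17437 and is not the exact merged diff.

```python
import torch
import torch.nn as nn

def fp16_safe_softmax(attn_weights: torch.Tensor) -> torch.Tensor:
    # Bind a default first so the variable always exists, even on the fp32 path.
    dtype_attn_weights = attn_weights.dtype
    if dtype_attn_weights == torch.float16:
        # Upcast to fp32 for the softmax, then cast back: the large negative
        # values used to mask padding otherwise overflow to NaN in fp16.
        attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(dtype_attn_weights)
    else:
        attn_weights = nn.functional.softmax(attn_weights, dim=-1)
    return attn_weights
```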
transformers
18,056
closed
Update state.best_metric after model evaluation; track metric for model checkpointing separately in new state.best_metric_checkpoint variable
# What does this PR do? Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
07-07-2022 14:19:17
07-07-2022 14:19:17
As suggested here: https://github.com/huggingface/transformers/issues/16620<|||||>TODO: appropriate tests<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18056). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,055
closed
NLP HydraNet
### Feature request I think it would be awesome to be able to easily train a Tesla-style HydraNet, but using a transformer backbone. The model would take a model_id and a series of tasks. The dataset would need to supply labels for each of the task heads, and the per-task losses would be weighted via coefficients in the configuration. ### Motivation This would allow people to train multi-task learners using the same neural backbone and co-learn a number of related tasks. ### Your contribution I have some code working for this. Unsure if there is already a pattern for this type of thing or if this is novel enough to contribute back to OSS.
07-07-2022 14:12:34
07-07-2022 14:12:34
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
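As a rough illustration of the multi-task setup described in the feature request above: a shared transformer backbone with one linear head per task and configurable loss coefficients could look like the sketch below. All names here are hypothetical; this is not an existing `transformers` API.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class HydraNet(nn.Module):
    """Hypothetical multi-task model: one shared backbone, one head per task."""

    def __init__(self, model_id: str, task_num_labels: dict, loss_weights: dict):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(model_id)
        hidden = self.backbone.config.hidden_size
        self.heads = nn.ModuleDict({task: nn.Linear(hidden, n) for task, n in task_num_labels.items()})
        self.loss_weights = loss_weights  # per-task coefficients from the configuration

    def forward(self, input_ids, attention_mask=None, labels: dict = None):
        # Use the first token's hidden state as a pooled representation.
        pooled = self.backbone(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        logits = {task: head(pooled) for task, head in self.heads.items()}
        loss = None
        if labels is not None:
            # Weighted sum of per-task cross-entropy losses.
            loss = sum(
                self.loss_weights[task] * nn.functional.cross_entropy(logits[task], labels[task])
                for task in labels
            )
        return {"loss": loss, "logits": logits}
```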
transformers
18,054
closed
Include `timeout` attribute (related to DDP) to TrainingArguments
### Feature request Would it be possible to include a `timeout` attribute in the `TrainingArguments` dataclass, such that it is used as an argument of the `torch.distributed.init_process_group` calls? Reference: https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group ### Motivation Essentially, if a process uses DDP and performs a set of operations prior to using the GPUs, such as tokenization/mapping, for more than `timeout` seconds (defaults to 30 minutes), the process will stop and get killed due to a `Socket Timeout` (issue #17106). By adding a `timeout` argument to the `TrainingArguments` class, we can let users override the default timeout defined by PyTorch and hopefully prevent `Socket Timeouts` when mapping large datasets. ### Your contribution I could definitely submit a PR; it seems pretty straightforward to add a new attribute to the `TrainingArguments` class.
07-07-2022 12:59:15
07-07-2022 12:59:15
Hi @gugarosa I have started the work on it. Will create PR for the same by tomorrow.<|||||>That's amazing @dvlshah! Thank you so much for doing it!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
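A minimal sketch of what the requested knob could look like, assuming a plain integer field that is forwarded to `torch.distributed.init_process_group` as a `timedelta`. The field and function names below are illustrative, not the final API.

```python
from dataclasses import dataclass, field
from datetime import timedelta

import torch.distributed as dist

@dataclass
class MyTrainingArguments:
    # Hypothetical field; PyTorch's own default is 30 minutes (1800 seconds).
    ddp_timeout: int = field(default=1800, metadata={"help": "Timeout (seconds) for torch.distributed calls."})

def init_distributed(args: MyTrainingArguments, rank: int, world_size: int):
    # The timeout is forwarded as a timedelta, so slow pre-processing
    # (e.g. tokenization/mapping) no longer kills the process group.
    dist.init_process_group(
        backend="nccl",
        rank=rank,
        world_size=world_size,
        timeout=timedelta(seconds=args.ddp_timeout),
    )
```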
transformers
18,053
closed
[Generate Tests] Make sure no tokens are force-generated
# What does this PR do? Bart-like models often have the default of forcing certain tokens to be generated (see https://github.com/huggingface/transformers/blob/91c4a3ab1a7f6651bbcd27ccd98d8f3e69189911/src/transformers/models/bart/configuration_bart.py#L141), which could be the reason for the flaky behavior of certain generation tests - e.g. if the `eos_token_id` logit is already `-inf` and all other tokens are then set to `-inf` because EOS should be generated at `max_length`, the test will fail. Would like to give this a try before tweaking the general test architecture (cc @ydshieh @sgugger). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
07-07-2022 12:45:24
07-07-2022 12:45:24
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,052
closed
Configurations should support id2label as a list
### Feature request This is a rather minor/trivial feature request. Currently `id2label` is of type `Dict[int, str]` in `PretrainedConfig`. Since this map is used to map class ids to labels, the keys are set to `range(0, num_labels)`. Essentially, a list would also work here. I can even argue that a list is better in this case, because it avoids the potential error of out-of-range keys and the awkward serialization as ``` "id2label": { "0": "O", "1": "B-address", "2": "I-address", ... } ``` Given that this attribute is mostly used for the lookup `id2label[i]`, it should be pretty easy to support this field as both a `list` and a `map`. We just need to fix a few places where we assumed the `.items()` API exists. I'll be happy to contribute a small PR for this. ### Motivation Described above. ### Your contribution Happy to contribute a PR for this.
07-07-2022 08:03:27
07-07-2022 08:03:27
I would personally advocate for a single way of doing things, rather than many different ways of doing the same thing. In this situation, while the dict is more complex, it is also more complete. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
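A small sketch of the normalization the request implies: accept `id2label` either as a list indexed by class id or as a (possibly string-keyed, as in the JSON serialization shown above) dict, and keep downstream lookups unchanged. The helper name is hypothetical.

```python
from typing import Dict, List, Union

def normalize_id2label(id2label: Union[List[str], Dict[Union[int, str], str]]) -> Dict[int, str]:
    """Return an int-keyed dict so existing `.items()` and `id2label[i]` usage keeps working."""
    if isinstance(id2label, (list, tuple)):
        # A list is implicitly keyed by range(0, num_labels).
        return dict(enumerate(id2label))
    # JSON round-trips turn int keys into strings, so cast them back.
    return {int(k): v for k, v in id2label.items()}

# Lookup stays a simple `id2label[i]` in both cases:
id2label = normalize_id2label(["O", "B-address", "I-address"])
assert id2label[1] == "B-address"
```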
transformers
18,051
closed
Fix type issue in using bucketing with Trainer
# What does this PR do? - Fix type issues in `LengthGroupedSampler`, `DistributedLengthGroupedSampler` Fixes #18003 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
07-07-2022 04:45:33
07-07-2022 04:45:33
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Oh and sorry I didn't catch it at the first review, but we can group the else if in an elif. I totally agree this. @sgugger <|||||>Thanks a lot!
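The PR body above does not show the actual change, but the kind of fix #18003 calls for is converting length inputs to plain Python ints before the group-by-length samplers slice and sort them. A hedged sketch of that idea, not the merged diff:

```python
import torch

def ensure_python_lengths(lengths):
    # Hypothetical defensive conversion: the samplers index and sort lengths as
    # plain ints, so a torch.Tensor of lengths is converted to a list first.
    if isinstance(lengths, torch.Tensor):
        lengths = lengths.tolist()
    return [int(length) for length in lengths]
```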
transformers
18,050
closed
A basic NLP question regarding NER task
Hi @patrickvonplaten, I have one basic conceptual NLP question regarding the evaluation for NER. According to [run_ner.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner.py), the ground truth label is truncated to max_seq_length during prediction. However, this means the ground truth label will be changed. My question is: is the prediction still valid? For example, if the ground truth has 150 tokens, when max_seq_length = 128, both the prediction and label are truncated to 128, isn't the prediction required to contain 150 tokens for consistency of evaluation? Thank you very much in advance for your help, apologize if this is the wrong place for posting.
07-06-2022 23:53:47
07-06-2022 23:53:47
Hello @liususan091219. I believe the logic you are looking for is the [tokenize_and_align_labels](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner.py#L420) function. The logic is as follows: - if `len(labels) > len(tokens)`, then the extra labels are thrown out to ensure the same length - if `len(labels) < len(tokens)` (due to padding), then extra labels with the value `-100` are added to ensure they have the same length<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
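A simplified sketch of the alignment behavior described in the comment above (not the exact `run_ner.py` code): positions without a word (special tokens, padding) get `-100` so the loss ignores them, and because `word_ids()` only covers the tokenized, possibly truncated sequence, labels beyond `max_seq_length` are dropped implicitly.

```python
def align_labels_with_tokens(word_ids, word_labels, label_all_tokens=False):
    """Map word-level labels onto (possibly truncated) subword tokens."""
    aligned, previous_word_id = [], None
    for word_id in word_ids:  # word_ids comes from tokenizer(..., is_split_into_words=True).word_ids()
        if word_id is None:                 # special token or padding
            aligned.append(-100)
        elif word_id != previous_word_id:   # first subword of a word keeps its label
            aligned.append(word_labels[word_id])
        else:                               # later subwords: label or ignore, depending on the flag
            aligned.append(word_labels[word_id] if label_all_tokens else -100)
        previous_word_id = word_id
    return aligned
```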
transformers
18,049
closed
NaN in XGLM Softmax with FP16
### System Info - `transformers` version: 4.21.0.dev0 - Platform: Linux-5.3.0-1017-x86_64-with-glibc2.27 - Python version: 3.9.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes, 4x V100 ### Who can help? @ydshieh @patrickvonplaten ### Reproduction Related to the fixes in #17437 most likely. I am using an example similar to `test_batched_nan_fp16` in `test_modeling_opt.py`, but for an XGLM model. The only difference with that test is the `torch.cuda.amp.autocast` usage, which I found necessary to perform inference (otherwise I would get an error saying "expected scalar type Float but found Half" coming from the forward of XGLM) ```python import torch from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer # Tested with xglm-564M and 7.5B (the second using `infer_auto_device_map` and # `load_checkpoint_and_dispatch` from `accelerate`. tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M", padding_side="left") model = AutoModelForCausalLM.from_pretrained("facebook/xglm-564M", torch_dtype=torch.float16, use_cache=True).cuda() batch = tokenizer(["Who are you?", "Joe Biden is the president of"], padding=True, return_tensors="pt") input_ids = batch["input_ids"].cuda() attention_mask = batch["attention_mask"].cuda() with torch.no_grad(): with torch.cuda.amp.autocast(): outputs = model(input_ids, attention_mask=attention_mask) assert not torch.isnan(outputs.logits[0]).any().item() # Raises an AssertionError ``` ### Expected behavior I would expect the model to have normal logits when using FP16. The spotting of this bug was prompted by an issue of garbage generation when doing batching, despite the left padding and a valid attention mask.
07-06-2022 23:10:02
07-06-2022 23:10:02
cc @patil-suraj for XLGM<|||||>From `padding=left` in the code snippet, I would guess this is similar to issue #17433 and the fix in #17437 should fix it. @gsarti Would you like to try it and maybe also open a PR? P.S I actually thought yesterday if I should apply #17437 to all models, and tried a few models like GPT2, Bart which are fine. Maybe it is indeed better to do it for all models. <|||||>Opened PR #18057 with the suggested fix @ydshieh, should be good to go!<|||||>Can we close this one? cc @ydshieh ? <|||||>I re-opened that PR #18057. Let's see if @gsarti would like to continue the work, otherwise I can take it. The necessary fix is minimal.
transformers
18,048
open
[WIP][Generate] Allow passing pos ids
# What does this PR do? This PR improves generate by allowing the position ids to be passed to the generate method. This gives researchers and practitioners more control over the method. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
07-06-2022 22:06:58
07-06-2022 22:06:58
Having started the PR, I noticed that it's actually much more work than I thought. We actually need to touch every generate model and also make the test more general. @gante I don't have a lot of bandwidth at the moment, but I think I could finish the PR in ~1-2 weeks. In case you're interested in taking it over, please feel free to do so! (I think we need to apply the above changes to all decoder-only generate models and also move the test to the general generate testing file.)<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18048). All of your documentation changes will be reflected on that endpoint.<|||||>@patrickvonplaten oops, for some reason I missed this notification -- I will take over :)
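For context on what generate currently does when no position ids are supplied: decoder-only models typically derive them from the attention mask inside `prepare_inputs_for_generation`, roughly as in the sketch below (a simplified version of the common pattern, not this PR's diff). The PR would let callers provide their own tensor instead.

```python
import torch

def default_position_ids(attention_mask: torch.Tensor, past_length: int = 0) -> torch.Tensor:
    # Positions are the cumulative sum of the attention mask, so left-padding
    # does not shift the positions of the real tokens.
    position_ids = attention_mask.long().cumsum(-1) - 1
    position_ids.masked_fill_(attention_mask == 0, 1)
    if past_length > 0:  # during generation only the newest token's position is needed
        position_ids = position_ids[:, -1:]
    return position_ids
```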
transformers
18,047
closed
Different sentiment class probabilities for sequential processing vs batch processing
### System Info platform: windows python: 3.7 transformers: latest Model: finetuned BERT (from cardiffnlp/twitter-roberta-base-sentiment) I have posted the issue [here](https://discuss.huggingface.co/t/different-sentiments-when-texts-processed-in-batches-vs-singles/19462), but didn't receive any answer. The behavior is not really explainable and might look like a bug. Cheers @LysandreJik ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction see description under the above link. ### Expected behavior I would expect the class probabilities to be equal for a given text, no matter if the classification is done sequentially or in batches.
07-06-2022 18:03:29
07-06-2022 18:03:29
Hi @Kayne88 -- could you please share a complete script for reproducibility? In your [original script](https://discuss.huggingface.co/t/different-sentiments-when-texts-processed-in-batches-vs-singles/19462) you were missing the definition of `tokenizer` and `model` :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,046
closed
Place inputs on device when include_inputs_for_metrics is True
# What does this PR do? As pointed out in #18038, the inputs accumulated in the Trainer when `include_inputs_for_metrics=True` are not on the proper device, which then fails in distributed setups. This PR fixes that. Fixes #18038
07-06-2022 17:36:09
07-06-2022 17:36:09
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,045
closed
Trainer.train always saving model every 500 steps, regardless of input training args [fixed, user error]
### System Info Google Colab, GPU enabled ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Use the Trainer.train with either the epochs saving strategy or adjusting the number of steps for saving, either way, the model is saved every 500 steps, which is a huge problem for training situations with a large number of steps! ### Expected behavior The training args should change how often the model is saved to disk, but it is not working.
07-06-2022 16:15:34
07-06-2022 16:15:34
Nevermind, I was using the wrong training args - oops! All working now. <|||||>Closing this issue then :-)
transformers
18,044
closed
Protect `TFGenerationMixin.seed_generator` so it's not created at import
With thanks to @atakey for pointing this out and suggesting a solution, credit for this one should go to him! Fixes #17804
07-06-2022 13:51:16
07-06-2022 13:51:16
_The documentation is not available anymore as the PR was closed or merged._
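The title describes protecting the attribute so it is not created at import time; a lazy-initialization sketch of that idea follows (illustrative only, not necessarily the merged code).

```python
import tensorflow as tf

class TFGenerationMixin:
    _seed_generator = None

    @property
    def seed_generator(self):
        # Created lazily on first use instead of at import time, so merely
        # importing the library does not initialize TF's RNG state.
        if self._seed_generator is None:
            self._seed_generator = tf.random.Generator.from_non_deterministic_state()
        return self._seed_generator
```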
transformers
18,043
closed
Add Support for "No Language Left Behind" (NLLB)
### Model description Hi, Meta recently released another cool project called "No Language Left Behind" (NLLB): > No Language Left Behind (NLLB) is a first-of-its-kind, AI breakthrough project that open-sources models capable of delivering high-quality translations directly between any pair of 200+ languages — including low-resource languages like Asturian, Luganda, Urdu and more. It aims to help people communicate with anyone, anywhere, regardless of their language preferences. The project itself is integrated into `fairseq` library and available on the `nllb` branch: https://github.com/facebookresearch/fairseq/tree/nllb It includes code release as well as released checkpoints. A detailed 190 page paper is also available from [here](https://research.facebook.com/publications/no-language-left-behind). We should really add support for these amazing project by adding support for NLLB. ### Open source status - [x] The model implementation is available - [x] The model weights are available ### Provide useful links for the implementation Models checkpoint are available here: | Model Name | Model Type | #params | checkpoint | metrics | | - | - | - | - | - | | NLLB-200 | MoE | 54.5B |[model](https://tinyurl.com/nllb200moe54bmodel) | [metrics](https://tinyurl.com/nllb200moe54bmetrics) | | NLLB-200 | Dense | 3.3B |[model](https://tinyurl.com/nllb200dense3bcheckpoint) | [metrics](https://tinyurl.com/nllb200dense3bmetrics) | | NLLB-200 | Dense | 1.3B |[model](https://tinyurl.com/nllb200dense1bcheckpoint) | [metrics](https://tinyurl.com/nllb200dense1bmetrics) | | NLLB-200-Distilled | Dense | 1.3B | [model](https://tinyurl.com/nllb200densedst1bcheckpoint) | [metrics](https://tinyurl.com/nllb200densedst1bmetrics) | | NLLB-200-Distilled | Dense | 600M | [model](https://tinyurl.com/nllb200densedst600mcheckpoint) | [metrics](https://tinyurl.com/nllb200densedst600mmetrics) | Maintainers are: @vedanuj, @shruti-bh, @annasun28, @elbayadm, @jeanm, @jhcross, @kauterry and @huihuifan. Implementation is available in the `fairseq` repo: https://github.com/facebookresearch/fairseq/tree/nllb
07-06-2022 13:50:44
07-06-2022 13:50:44
For the tokenization part, SPM model is provided and can be downloaded from [here](https://tinyurl.com/flores200sacrebleuspm). It is a "real" SPM model, that can e.g. be loaded like this: ```python import sentencepiece as spm model_file = "flores200sacrebleuspm" sp_model = spm.SentencePieceProcessor() sp_model.Load(model_file) ``` Let's investigate this model a bit more: ```bash In [5]: sp_model.vocab_size() Out[5]: 256000 In [6]: for index in range(0,10): ...: print(index, "->", sp_model.IdToPiece(index)) ...: 0 -> <unk> 1 -> <s> 2 -> </s> 3 -> an 4 -> ▁n 5 -> ▁m 6 -> ▁t 7 -> ▁k 8 -> ▁a 9 -> ▁s ``` The overall vocab size is 256,000 and the output shows the first 10 "pieces" in the SPM model. <|||||>Hi, I'm one of the Meta engineers who worked on NLLB, and I'm happy to support this from our side. That's indeed the correct (real) SPM model for the vocabulary used for input/output, but internally the model's vocabulary (and embedding table) size is supplemented at the end by a token for each language, which happens here: https://github.com/facebookresearch/fairseq/blob/26d62ae8fbf3deccf01a138d704be1e5c346ca9a/fairseq/data/multilingual/multilingual_utils.py#L64 This list of languages come from an input arg which reads them from a string or file. For these particular models that value is: https://github.com/facebookresearch/fairseq/blob/26d62ae8fbf3deccf01a138d704be1e5c346ca9a/examples/nllb/modeling/scripts/flores200/langs.txt#L1 Please let me know if you have any questions about this or if I can be of any further help.<|||||>i see their demo, for chinese translate, very low quality, i think they need do more hard working on improve the AI <|||||>I made a mistake above, as there is another way our internal vocabulary differs from the "standard" SPM model: The 3 special tokens shown at the beginning of your output above are replaced by the following 4 tokens (at indices 0, 1, 2, and 3, respectively): "<s>", "<pad>", </s>", "<unk>". This can be seen where the internal Fairseq dictionary is constructed in the code from the plaintext vocabulary file (before the language tokens are added): https://github.com/fairinternal/fairseq-py/blob/3506ddfb3585aa470f59902ea44625e39287e37c/fairseq/data/dictionary.py#L35-L38<|||||>@jhcross thanks for the explanation. I think we need to perform some fairseq-mapping, as e.g. done in the XLM-R or BART tokenizer: https://github.com/huggingface/transformers/blob/3f936df66287f557c6528912a9a68d7850913b9b/src/transformers/models/mbart/tokenization_mbart.py#L129-L136<|||||>@stefan-it that makes sense, and I would assume that that code could be reused verbatim. The only additional thing would be add the language tokens to the end of the vocabulary. Note that the language list can also be extracted from the checkpoint data as follows: ``` checkpoint = torch.load(path_to_file) langs_list = checkpoint["cfg"]["model"].langs ```<|||||>Thanks for opening an issue! We've managed to convert the models to the M2M_100 architecture and the tokenizers to a new NLLB tokenizer very closely resembling that of the mBART tokenizer. We're in the process of testing all models for generation and performance and I'll likely open a PR in a few hours.<|||||>> Hi, I'm one of the Meta engineers who worked on NLLB, and I'm happy to support this from our side. 
That's indeed the correct (real) SPM model for the vocabulary used for input/output, but internally the model's vocabulary (and embedding table) size is supplemented at the end by a token for each language, which happens here: > > https://github.com/facebookresearch/fairseq/blob/26d62ae8fbf3deccf01a138d704be1e5c346ca9a/fairseq/data/multilingual/multilingual_utils.py#L64 > > This list of languages come from an input arg which reads them from a string or file. For these particular models that value is: > > https://github.com/facebookresearch/fairseq/blob/26d62ae8fbf3deccf01a138d704be1e5c346ca9a/examples/nllb/modeling/scripts/flores200/langs.txt#L1 > > Please let me know if you have any questions about this or if I can be of any further help. Hello, First of all Thank you so much. nllb is super power translation! There are 208 in the world, and I think it's amazing to translate 200 of them into languages. Also, thank you so much for updating the hugging face 5 days ago so that it can be easily used. I 'm trying to using huggingface-nllb. But, tokenizer is not working... ``` ----> 1 tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M") [/usr/local/lib/python3.7/dist-packages/transformers/models/auto/tokenization_auto.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 575 if tokenizer_class is None: 576 raise ValueError( --> 577 f"Tokenizer class {tokenizer_class_candidate} does not exist or is not currently imported." 578 ) 579 return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) ValueError: Tokenizer class NllbTokenizer does not exist or is not currently imported. ``` ![image](https://user-images.githubusercontent.com/73736988/179475738-a2cf0271-4b61-49fe-bc2f-5ec580266756.png) - Question1 As an nllb beginer I don't know how to fix this. - Question2 And if the tokenizer works, can I use it in the same way as the M2M100 model?<|||||>Hello, #18126 Really, Really Tokenizer is not working. <|||||>Hey @daje0601, it was just merged to the main branch 15 minutes ago, I just tried it, and it seems to be working, make sure you are installing the main branch. <|||||>> Hey @daje0601, it was just merged to the main branch 15 minutes ago, I just tried it, and it seems to be working, make sure you are installing the main branch. Hey @AhmedIdr, I'm not a liar. I also tried running it in colab a minute ago. But it didn't work. So I asked this question. I've been thinking about this all day today. I knew it was a really simple question, so I searched and searched more and asked the question. Here is a link to a colab I tested.[link](https://colab.research.google.com/drive/1ngXBBQOmUSebtkhbqsh2W9JTI_MnevpJ?usp=sharing)<|||||>Hey @daje0601, you are installing transformers from pip and not installing the latest branch on Github. Try installing transformers like this `!pip install git+https://github.com/huggingface/transformers.git` and see if it does work afterwards.<|||||>> Hey @daje0601, you are installing transformers from pip and not installing the latest branch on Github. Try installing transformers like this `!pip install git+https://github.com/huggingface/transformers.git` and see if it does work afterwards. Oh..!!!!!!!!!! It's working..!!!! So So So Thank you ♥︎<|||||>@AhmedIdr Hi, except the NLLB models itself, authors have also published their language identification model. 
Is there a chance for having it incorporated to the hf as well?<|||||>@ArturPrzybysz Hi, I am not a part of the hf team, I am just a community member and wanted to help with the issue :)<|||||>@ArturPrzybysz You can use the LID (Language IDentification) model with [fastText](https://github.com/facebookresearch/fastText) https://github.com/huggingface/transformers/issues/18294#issuecomment-1207374838<|||||>First and foremost, thank you to everyone that has been working on this, both from the original team at Meta and then on porting it to huggingface. I was checking the [model's page](https://huggingface.co/docs/transformers/model_doc/nllb) in the huggingface website. Unlike previous translation models (like mBART) there are no details regarding how to train a model with the NLLB architecture in new languages. I am especially interested on the details regarding the Load Balancing Loss function: how to compute it, combine it with the standard Cross Entropy Loss, and back propagate it properly. I would be very thankful if anyone can point me in the right direction concerning this topic.
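The "fairseq mapping" mentioned above, as done in the mBART tokenizer, boils down to reserving the first four ids for fairseq's special tokens and shifting the raw SentencePiece ids by an offset. A rough sketch; the offset value follows the mBART convention and would need to be verified for NLLB.

```python
# fairseq reserves ids 0-3 for <s>, <pad>, </s>, <unk>, while the raw SPM model
# puts <unk>, <s>, </s> at 0-2, so regular pieces are shifted by an offset.
fairseq_tokens_to_ids = {"<s>": 0, "<pad>": 1, "</s>": 2, "<unk>": 3}
fairseq_offset = 1  # one extra slot for the <pad> token the SPM model lacks

def spm_id_to_model_id(spm_id: int) -> int:
    return spm_id + fairseq_offset

# Language codes are then appended after the SPM vocabulary, matching how
# fairseq supplements the embedding table with one token per language.
```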
transformers
18,042
closed
checkpoints_to_be_deleted = checkpoints_sorted[:number_of_checkpoints_to_delete] TypeError: slice indices must be integers or None or have an __index__ method
### System Info transformers.__version__==4.9.2 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction 1. use **trainer** to train your model 2. specify argument `resume_from_checkpoint` ### Expected behavior if you use **trainer** to train your model and specify argument `resume_from_checkpoint` `trainer.train(resume_from_checkpoint=checkpoint)` you may encounter an error like this: checkpoints_to_be_deleted = checkpoints_sorted[:number_of_checkpoints_to_delete] TypeError: slice indices must be integers or None or have an __index__ method this results from this line in **trainer.py** ``` number_of_checkpoints_to_delete = max(0, len(checkpoints_sorted) - save_total_limit) checkpoints_to_be_deleted = checkpoints_sorted[:number_of_checkpoints_to_delete] ``` resolution: add a conversion to int type for variable `number_of_checkpoints_to_delete` ``` number_of_checkpoints_to_delete = int(max(0, len(checkpoints_sorted) - save_total_limit)) checkpoints_to_be_deleted = checkpoints_sorted[:number_of_checkpoints_to_delete] ```
07-06-2022 09:26:33
07-06-2022 09:26:33
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>cc @sgugger <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,041
closed
EfficientFormer
### Model description [EfficientFormer: Vision Transformers at MobileNet Speed](https://arxiv.org/abs/2206.01191) Submitted on 2 Jun 2022, last revised 5 Jul 2022. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://github.com/snap-research/EfficientFormer Happy to support the model contribution!
07-06-2022 06:20:17
07-06-2022 06:20:17
cc @hollance
transformers
18,040
closed
ValueError: You have to specify pixel_values in CLIP for ver >= 4.18.0
### System Info - `transformers` version: 4.20.1 - Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-debian-10.12 - Python version: 3.7.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @patil-suraj I try to run TFCLIPModel.get_image_features example [here](https://huggingface.co/docs/transformers/model_doc/clip#transformers.TFCLIPModel.get_image_features.example), which is also pasted in Reproduction section. When I use `transformers >= 4.18.0`, it throws an error "ValueError: You have to specify pixel_values" (details pasted below). Is there any way to fix this? ``` Traceback (most recent call last): File "dummy.py", line 13, in <module> image_features = model.get_image_features(**inputs) File "/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 383, in run_call_with_unpacked_inputs return func(self, **unpacked_inputs) File "/opt/conda/lib/python3.7/site-packages/transformers/models/clip/modeling_tf_clip.py", line 1318, in get_image_features return_dict=return_dict, File "/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 383, in run_call_with_unpacked_inputs return func(self, **unpacked_inputs) File "/opt/conda/lib/python3.7/site-packages/transformers/models/clip/modeling_tf_clip.py", line 796, in get_image_features raise ValueError("You have to specify pixel_values") ValueError: You have to specify pixel_values ``` ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from PIL import Image import requests from transformers import CLIPProcessor, TFCLIPModel model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32") processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(images=image, return_tensors="tf") image_features = model.get_image_features(**inputs) ``` ### Expected behavior Finish without throwing an error described above.
07-06-2022 05:54:05
07-06-2022 05:54:05
cc @NielsRogge, or @amyeroberts @alaradirik if you have any pointers<|||||>Thanks for raising @naoto0804 ! Doing a bit of digging, this is because of the behaviour of the `unpack_inputs` decorator and the fact `TFCLIPModel` is being used. `unpack_inputs` tries to get the name of the `main_input_name` to the function (see [here](https://github.com/huggingface/transformers/blob/d4ebd4e112034b4a429ab7f813d7e168e7bb63c3/src/transformers/modeling_tf_utils.py#L429)) `TFCLIPModel` inherits from `TFPreTrainedModel` which has `main_input_name` set to `input_ids`. However, the `main_input_name` needed for this function `pixel_values`. @naoto0804 If all you want are the image features, the fastest and cleanest way you'll be able to get them is by using `TFCLIPVisionModel`: ``` from PIL import Image import requests from transformers import CLIPProcessor, TFCLIPVisionModel model = TFCLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32") processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(images=image, return_tensors="tf") image_features = model.get_image_features(**inputs) ``` However, this still leaves unexpected behaviour in the code. I can see the `unpack_inputs` decorator was added in https://github.com/huggingface/transformers/pull/16128 as part of https://github.com/huggingface/transformers/pull/15907. One thing I'm unsure of is the logic for `input_ids` in `input_processing`. [It seems there's lots of processing to handle the different possible formats for `input_ids`](https://github.com/huggingface/transformers/blob/981714efe12c5fc481ad38632ca0db88cd85004c/src/transformers/modeling_tf_utils.py#L508). `unpack_inputs` [can pass in any argument as the main_input](https://github.com/huggingface/transformers/blob/981714efe12c5fc481ad38632ca0db88cd85004c/src/transformers/modeling_tf_utils.py#L431), including e.g. `pixel_values`. In the `call` method for `TFCLIPModel` [both `input_ids` and `pixel_values` can be passed](https://github.com/huggingface/transformers/blob/981714efe12c5fc481ad38632ca0db88cd85004c/src/transformers/models/clip/modeling_tf_clip.py#L1344) i.e. it seems the processing logic in `input_processing` isn't necessary for the `pixel_values` input, even if it's set as the `main_input_name`, as in `TFCLIPVisionModel`. Would a reasonable solution be to move this logic, such the signature for `input_processing` becomes `input_processing(func, config, **kwargs)` and then apply the processing logic to the input ids if `input_ids` is in `parameter_names`? @gante <|||||>@amyeroberts good finding! Looking in hindsight, my design of `unpack_inputs` was sub-optimal here -- #18110 simplifies it and makes it independent from `main_input_name`, which solves the issue raised by @naoto0804 As for `input_processing`, it is an old function that I intend to rewrite and simplify soon 🙏 It is the cause of many bugs that are not easy to identify unless you're familiar with the function.<|||||>@amyeroberts thank you for the answer, which saved my day!
transformers
18,039
closed
[Wav2Vec2ForCTC] multi-GPU training's backward hangs
### System Info ubuntu@gpu-1:~$ transformers-cli env Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.16.2 - Platform: Linux-4.15.0-143-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.11.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Execution Code : `$ python train.py task=wav2vec2_train task.available_gpu_ids=\'4,5,6,7\' task.project_name=wav2vec2_asr hydra.job.chdir=False` ``` ### train.py from utils.task_manager import TaskManager @hydra.main(version_base=None, config_path=os.path.join("configs"), config_name="train", ) def hydra_main(configs: DictConfig) -> None: number_of_available_gpus = len(eval(configs.task.available_gpu_ids)) if not isinstance(eval(configs.task.available_gpu_ids), int) else 1 if number_of_available_gpus > 1: os.environ['MASTER_ADDR'] = configs.task.master_addr os.environ['MASTER_PORT'] = configs.task.master_port os.environ['CUDA_LAUNCH_BLOCKING'] = "1" os.environ["CUDA_VISIBLE_DEVICES"] = configs.task.available_gpu_ids torch.manual_seed(int(configs.task.seed)) configs.task.world_size = number_of_available_gpus task_manager = TaskManager.train(configs.task) if __name__ == '__main__': hydra_train_init() hydra_main() ``` ``` ### utils.taskmanager.py class TaskManager: def __init__(self, config, device=0): self.config = config self.device = device @classmethod def train(cls, config): cls = TaskManager(config) if config.world_size > 1: mp.spawn( cls._train, args=( config, ), nprocs=min(torch.cuda.device_count(), config.world_size), join=True, ) else: cls._train(cls.device, config) return cls def _train(self, rank, config): if self.config.world_size > 1: torch.cuda.set_device(rank) dist.init_process_group("nccl", rank=rank, world_size=self.config.world_size) model, optimizer, scheduler, scaler, processor, train_dataloader, eval_dataloader = self._prepare_training(rank) step = 1 print('Start Training') for epoch in range(config.num_train_epochs): train_dataloader.batch_sampler.set_epoch(epoch) for i, inputs in enumerate(train_dataloader): inputs = {k: v.cuda(rank) if torch.is_tensor(v) else v for k, v in inputs.items()} with autocast(enabled=config.use_fp16): outputs = model(**inputs) scaler.scale(outputs.loss).backward() if (i+1) % config.gradient_accumulation_step == 0 or i == len(train_dataloader): scaler.unscale_(optimizer) scheduler.step() scaler.step(optimizer) scaler.update() optimizer.zero_grad() model.zero_grad() step += 1 if rank == 0: self._eval(rank, config, model, eval_dataloader, step) self._log_train_metrics({'Train loss' : outputs.loss}, step) self._save_model(model, step) def _log_train_metrics(self, metrics, step): if step % self.config.logging_steps == 0: self._log_metrics(metrics, step) def _eval(self, rank, config, model, eval_dataloader, step): if step % self.config.eval_steps == 0: model.eval() cers = 0. loss = 0. 
with torch.no_grad(): for inputs in eval_dataloader: inputs = {k: v.cuda(rank) if torch.is_tensor(v) else v for k, v in inputs.items()} outputs = model(**inputs) cers += self.compute_metrics(outputs.logits, inputs['labels'])['wer'] loss += outputs.loss avg_cer = cers / len(eval_dataloader) avg_loss = loss / len(eval_dataloader) metrics = {'Eval avg_loss' : avg_loss, 'Eval avg_cer' : avg_cer} self._log_metrics(metrics, step) model.train() def _log_metrics(self, metrics, step): wandb.log(data=metrics, step=step, commit=False) def _save_model(self, model, step): if step % self.config.save_steps == 0: model.save_pretrained(os.path.join(self.config.saved_model_path, self.config.project_name , '_' + str(step)), save_config=True) def _prepare_training(self, rank): if rank == 0: self._get_vocab_dict() processor = self._get_processor() self.processor = processor train_dataloader, eval_dataloader = self._get_dataloader(processor, rank) model = self._get_model(processor) model.cuda(rank) if self.config.world_size > 1: model = DDP(model, device_ids=[rank], output_device=rank, find_unused_parameters=True) model.train() optimizer = transformers.AdamW( params=model.parameters(), lr=self.config.learning_rate, betas=eval(self.config.betas), eps=self.config.eps, weight_decay=self.config.weight_decay, ) scheduler = transformers.get_scheduler( name='constant_with_warmup', optimizer=optimizer, num_warmup_steps=self.config.warmup_steps, num_training_steps=len(train_dataloader) * self.config.num_train_epochs ) scaler = GradScaler(enabled=self.config.use_fp16) if rank == 0: self._set_wandb(model) self._get_wer_metric() return model, optimizer, scheduler, scaler, processor, train_dataloader, eval_dataloader def _set_wandb(self, model): wandb.init(project=self.config.project_name, entity="xinapse") wandb.config = model.config def _get_vocab_dict(self): text_manager = TextManager(self.config) if not os.path.exists(self.config.vocab_dict_path): print(f'Vocab file not found in {self.config.vocab_dict_path}. 
make vocab dictionary from texts in {os.path.join(self.config.dataset_path, self.config.manifest_file_name)}') text_manager.make_vocab() def _get_processor(self): tokenizer = Wav2Vec2CTCTokenizer( self.config.vocab_dict_path, unk_token=self.config.unk_token, pad_token=self.config.pad_token, word_delimiter_token=self.config.word_delimiter_token ) feature_extractor = Wav2Vec2FeatureExtractor( feature_size=1, sampling_rate=self.config.sampling_rate, padding_value=0.0, do_normalize=True, return_attention_mask=False ) processor = Wav2Vec2Processor( feature_extractor=feature_extractor, tokenizer=tokenizer ) return processor def _load_data(self): df = pd.read_pickle( os.path.join(self.config.dataset_path, self.config.manifest_file_name) ) train_df, eval_df = train_test_split( df, test_size=self.config.num_eval_data ) train_df.reset_index(drop=True, inplace=True) eval_df.reset_index(drop=True, inplace=True) return train_df, eval_df def _get_dataset(self, processor): train_df, eval_df = self._load_data() train_dataset = STTDataset(self.config, processor, train_df) eval_dataset = STTDataset(self.config, processor, eval_df) return train_dataset, eval_dataset def _get_collator(self, processor): data_collator = DataCollatorCTCWithPadding(processor=processor, padding=True) return data_collator def _get_dataloader(self, processor, rank): train_dataset, eval_dataset = self._get_dataset(processor) data_collator = self._get_collator(processor) train_sampler = DistributedBucketSampler( train_dataset, self.config.batch_size, eval(self.config.bucket_audio_lengths), num_replicas=self.config.world_size, rank=rank, shuffle=True ) train_dataloader = torch.utils.data.DataLoader( train_dataset, shuffle=(train_sampler is None), num_workers=self.config.num_worker, collate_fn=data_collator, pin_memory=True, batch_sampler=train_sampler ) if rank == 0: eval_dataloader = torch.utils.data.DataLoader( eval_dataset, shuffle=False, num_workers=self.config.num_worker, batch_size=self.config.batch_size, drop_last=False, collate_fn=data_collator ) else: eval_dataloader = None return train_dataloader, eval_dataloader def _get_model(self, processor): model = Wav2Vec2ForCTC.from_pretrained( self.config.pretrained_model_path, local_files_only=True, attention_dropout=0.1, hidden_dropout=0.1, feat_proj_dropout=0.0, mask_time_prob=0.05, layerdrop=0.1, ctc_loss_reduction="mean", pad_token_id=processor.tokenizer.pad_token_id, vocab_size=len(processor.tokenizer) ) model.freeze_feature_encoder() model.config.ctc_zero_infinity = True model.config.apply_spec_augment = True return model ``` ### Expected behavior I wrote my own training script to train wav2vec2 asr model with my custom dataset. Executing train script in single-GPU (setting number of CUDA_VISIBLE_DEVICES=0) works fine. But When I use multi-GPU with DDP, training progress hangs at `scaler.scale(outputs.loss).backward()` and GPU usage freezes like below Thanks in advance, Have a great day. ![image](https://user-images.githubusercontent.com/44384060/177467387-f899b125-9c63-4fcc-a3fa-73671c1a07dc.png)
07-06-2022 04:25:58
07-06-2022 04:25:58
I don't think we support training with `hydra` in transformers - could you maybe try to use our official examples for Wav2Vec2: https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py instead.<|||||>Thank you for your reply. The problem was in wandb's system monitoring process. When I set wandb.init not to monitor the system and set layer_drop=0.0, training works fine. But the training loss freezes(?) after the lr scheduler's warmup steps (500). Thank you.<|||||>Ah yeah, the reason might be that you're using `gradient_checkpointing` and `layer_drop` at the same time. I actually don't recommend using `layer_drop` for Wav2Vec2 training
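As a concrete illustration of that last suggestion, here is a minimal sketch (the checkpoint name is just a placeholder) of loading `Wav2Vec2ForCTC` with `layerdrop` disabled so every layer runs on every step under DDP:

```python
# Minimal sketch, assuming you otherwise keep the training script above unchanged.
from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-base",   # placeholder checkpoint, substitute your own
    layerdrop=0.0,              # keep all layers active so DDP sees no "unused" parameters
    ctc_loss_reduction="mean",
)
model.freeze_feature_encoder()
```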
transformers
18,038
closed
Error occurs when using DDP and include_inputs_for_metrics
### System Info when in DDP settings, with `args.include_inputs_for_metrics = True`, an error will occur in evaluation step. Seems that the `inputs_decode` haven't been placed to device before calling `_pad_across_processes`. https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2963 I've tried that calling something like `inputs_decode.to(device)` would help. > Traceback (most recent call last): File "run_trainer.py", line 152, in <module> Traceback (most recent call last): File "run_trainer.py", line 152, in <module> main() File "run_trainer.py", line 101, in main main() File "run_trainer.py", line 101, in main train_result = task.train(resume_from_checkpoint=checkpoint) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1413, in train train_result = task.train(resume_from_checkpoint=checkpoint) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1413, in train ignore_keys_for_eval=ignore_keys_for_eval, File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1728, in _inner_training_loop ignore_keys_for_eval=ignore_keys_for_eval, File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1728, in _inner_training_loop self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1912, in _maybe_log_save_evaluate self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1912, in _maybe_log_save_evaluate metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2628, in evaluate metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2628, in evaluate metric_key_prefix=metric_key_prefix, File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2826, in evaluation_loop metric_key_prefix=metric_key_prefix, File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2826, in evaluation_loop inputs_decode = self._pad_across_processes(inputs_decode) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2969, in _pad_across_processes inputs_decode = self._pad_across_processes(inputs_decode) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2969, in _pad_across_processes sizes = self._nested_gather(size).cpu() File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2947, in _nested_gather sizes = self._nested_gather(size).cpu() File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2947, in _nested_gather tensors = distributed_concat(tensors) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer_pt_utils.py", line 181, in distributed_concat tensors = distributed_concat(tensors) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer_pt_utils.py", line 181, in distributed_concat dist.all_gather(output_tensors, tensor) File "/usr/local/lib/python3.7/dist-packages/torch/distributed/distributed_c10d.py", line 2003, in all_gather dist.all_gather(output_tensors, tensor) File "/usr/local/lib/python3.7/dist-packages/torch/distributed/distributed_c10d.py", line 2003, in all_gather work = default_pg.allgather([tensor_list], [tensor]) RuntimeError: Tensors must be CUDA and dense work = 
default_pg.allgather([tensor_list], [tensor]) RuntimeError: Tensors must be CUDA and dense ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 127) of binary: /usr/bin/python3 Traceback (most recent call last): File "/usr/local/bin/torchrun", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.7/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper return f(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/torch/distributed/run.py", line 719, in main run(args) File "/usr/local/lib/python3.7/dist-packages/torch/distributed/run.py", line 713, in run )(*cmd_args) File "/usr/local/lib/python3.7/dist-packages/torch/distributed/launcher/api.py", line 131, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/usr/local/lib/python3.7/dist-packages/torch/distributed/launcher/api.py", line 261, in launch_agent failures=result.failures, torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ============================================================ run_trainer.py FAILED ------------------------------------------------------------ > ### Who can help? @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction just run script on a DDP settings and make sure `include_inputs_for_metrics = True` ### Expected behavior RuntimeError: Tensors must be CUDA and dense
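A minimal sketch of the workaround described above, assuming the Trainer's private helpers keep the names shown in the traceback (this is only an illustration, not the patch that actually closed the issue):

```python
# Hypothetical helper: move `inputs_decode` onto the evaluation device before it
# is padded and gathered across processes, so all_gather receives a CUDA tensor.
import torch

def gather_inputs_decode(trainer, inputs_decode: torch.Tensor) -> torch.Tensor:
    inputs_decode = inputs_decode.to(trainer.args.device)          # ensure tensor lives on the right device
    inputs_decode = trainer._pad_across_processes(inputs_decode)   # pad to a common length
    return trainer._nested_gather(inputs_decode)                   # gather across all ranks
```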
07-06-2022 03:29:15
07-06-2022 03:29:15
Thanks for flagging! This should be fixed by the PR mentioned above.
transformers
18,037
closed
Doc to dataset
# What does this PR do? This PR leverages the feature recently added to the doc-builder that allows links between Hugging Face libraries, to show an example with Datasets.
07-06-2022 00:50:10
07-06-2022 00:50:10
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,036
closed
Flax models should allow `inputs_embeds`
### Feature request Currently, non-Flax models allow `inputs_embeds` instead of `input_ids` (e.g., GPT2):
```python
def forward(
    self,
    input_ids: Optional[torch.LongTensor] = None,
    ...
    inputs_embeds: Optional[torch.FloatTensor] = None,
    ...
) -> Union[Tuple, CausalLMOutputWithCrossAttentions]:
    ...
```
However, Flax models have no such option (`input_ids` only). It would be great if Flax models also had this option so that, > Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. ### Motivation > This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. (from the docs) Additionally, this can be useful for things like tuning "soft-prompts" (e.g., https://aclanthology.org/2021.emnlp-main.243/) ### Your contribution I will try to implement this myself, but I haven't yet found a solution.
07-05-2022 20:44:11
07-05-2022 20:44:11
WDYT @patil-suraj ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>WDYT, @sanchit-gandhi? :)<|||||>Hey @mattf1n! It'll be quite straightforward to modify the `__call__` methods in the `FlaxGPT2For...` classes to handle `input_embeds`. Here, we can borrow the logic used in the PyTorch counterparts, for example: https://github.com/huggingface/transformers/blob/ab2006e3d6db88654526a4169e65d4bfc52da2e3/src/transformers/models/gpt2/modeling_gpt2.py#L783 and https://github.com/huggingface/transformers/blob/ab2006e3d6db88654526a4169e65d4bfc52da2e3/src/transformers/models/gpt2/modeling_gpt2.py#L848 We'll then have to update the `init_weights` method to reflect these changes: https://github.com/huggingface/transformers/blob/ab2006e3d6db88654526a4169e65d4bfc52da2e3/src/transformers/models/gpt2/modeling_flax_gpt2.py#L404 and also the `__call__` method for the `FlaxGPTPreTrainedModel`: https://github.com/huggingface/transformers/blob/ab2006e3d6db88654526a4169e65d4bfc52da2e3/src/transformers/models/gpt2/modeling_flax_gpt2.py#L459 The first two changes are pretty straightforward! It's the latter two that I envision being more involved and introducing a little bit more extra code. Do you want to have a go at adding this in a PR @mattf1n? Happy to work with you on adding this feature!<|||||>I would like to second @mattf1n 's feature request. This would be super useful for vision-language modeling where we often want to feed a concatenation of image and text features into a sequence-to-sequence model. This approach has become quite popular recently. See for example - [VL-T5](https://arxiv.org/abs/2102.02779), [GPV-1](https://arxiv.org/abs/2104.00743), [GPV-2](https://arxiv.org/abs/2202.02317), [UnifiedIO](https://arxiv.org/abs/2206.08916). And given that non-Flax models already support this, would be great to have this implemented for Flax models as well for consistency!<|||||>Thanks for weighing in @BigRedT! Cool to see so much interest for this feature! Would you or @mattf1n be interested in opening a PR to implement this for a language model (GPT2)? As mentioned, the first two changes should be pretty trivial. More than happy to help with hacking around the `init_weights` and `__call__` method to get the last two bits working!<|||||>@sanchit-gandhi I am traveling currently but gave it a quick try yesterday. Specifically, I was trying to update `FlaxT5ForConditionalGeneration` to accept `input_embed`. It looks like `FlaxGenerationMixin` in `generation_flax_utils.py` would also need to be updated as it assumes `input_ids` are always provided (not `Optional`). This would be needed to use the `generate()` method to generate free form text via beam search, greedy decoding etc. I can share my partial attempt next week when I am back at my desk but overall this seems a bit involved. Would be great if someone like yourself who is more familiar with the huggingface code base takes a stab at it! P.S - Apologies for any typos; I am writing this on my phone.<|||||>Hey @BigRedT! Thanks for jumping on this so quickly. That's a good catch - we will indeed need to handle the case where `input_embeds` are used for generation. Here, we'll need to do three things: 1. 
Allows both `input_ids` and `input_embeds` to be passed to the `generate()` method https://github.com/huggingface/transformers/blob/1ccd2515ed6d7da4ec46fe94aedbd8a86a2cde8e/src/transformers/generation_flax_utils.py#L163 2. Pass both `input_ids` and `input_embeds` to the `prepare_inputs_for_generation()` method: https://github.com/huggingface/transformers/blob/1ccd2515ed6d7da4ec46fe94aedbd8a86a2cde8e/src/transformers/generation_flax_utils.py#L483 3. Modify the `prepare_inputs_for_generation` method to handle both `input_ids` and `input_embeds`: https://github.com/huggingface/transformers/blob/1ccd2515ed6d7da4ec46fe94aedbd8a86a2cde8e/src/transformers/models/t5/modeling_flax_t5.py#L1683 There'll then be some cleaning up to make sure batch-size and sequence length terms are set correctly (from either the `input_ids` or `input_embeds` accordingly). I've given examples for greedy search and GPT2, but the same logic holds for beam search or sampling and other Causal LMs. As you say @BigRedT, this is already getting quite involved, both in-terms of the amount of code involved and complexity of the problem. It's going to be a challenging task, but feel free to open a PR with what you've currently got, happy to help with the integration and guide you through! I'm out-of-office next week, but can take a look when I'm back :)<|||||>@sanchit-gandhi here's the [PR](https://github.com/huggingface/transformers/pull/18613) you requested. I was actually able to get it to work with minimal modifications to `generation_flax_utils.py`. @mattf1n a similar solution might work for GPT-2 as well?<|||||>Awesome! Replied on the PR 🙂<|||||>Hi! I was away for a bit. I'm happy to see so much activity this past week or so! I have implemented a working version for GPT-Neo. I'll try making a pull request soon<|||||>Here is the PR ^<|||||>This issue is still open with a WIP PR at https://github.com/huggingface/transformers/pull/18613 The PR is near completion - if anyone wants to work with @BigRedT to finish this one off feel free to have a go on the PR! More than happy to answer any questions and provide a review 🤗<|||||>The PR is still open if anyone would like to see it to completion! Happy to lend a hand with questions / queries!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Going to leave this one closed for now since interest seems to have dwindled. If you're interested in picking this back up, feel free to reopen the issue and tag me 🤗 We can go from there!
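To make the discussion above concrete, here is a rough, hedged sketch of the `input_ids` / `inputs_embeds` branch on the Flax side; the function name and the explicit embedding-layer argument are assumptions for illustration, not the final API:

```python
# Sketch only: resolve embeddings from either token ids or precomputed embeddings.
import flax.linen as nn

def resolve_embeddings(embed_layer: nn.Embed, input_ids=None, inputs_embeds=None):
    """Return token embeddings from `input_ids` or pass through `inputs_embeds`."""
    if input_ids is not None and inputs_embeds is not None:
        raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
    if inputs_embeds is None:
        if input_ids is None:
            raise ValueError("You have to specify either input_ids or inputs_embeds")
        inputs_embeds = embed_layer(input_ids.astype("i4"))  # embedding lookup, as in the PyTorch code
    return inputs_embeds
```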
transformers
18,035
closed
Distributed training for streaming dataset
### Feature request Is there any documentation on using `load_dataset(streaming=True)` for (multi-node, multi-GPU) DDP training? ### Motivation Given a bunch of data files, they are expected to be split across the different GPUs. Is there a guide or documentation? ### Your contribution Does it require manually splitting the data files for each worker in `DatasetBuilder._split_generator()`?
07-05-2022 18:27:57
07-05-2022 18:27:57
Hi @cyk1337 -- this seems like a question for the [datasets repo](https://github.com/huggingface/datasets) :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
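For later readers: recent versions of 🤗 Datasets expose `split_dataset_by_node`, which did not exist when this issue was opened. A hedged sketch of how it can be used with a streaming dataset:

```python
# Assumes torch.distributed has already been initialized by your DDP launcher,
# and uses a public dataset purely as an example.
import torch.distributed as dist
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

stream = load_dataset("c4", "en", split="train", streaming=True)
stream = split_dataset_by_node(stream, rank=dist.get_rank(), world_size=dist.get_world_size())
```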
transformers
18,034
closed
Sort doc toc
# What does this PR do? This PR adds a script to automatically sort the model doc part of the documentation ToC, so we never need things like #18011 ever again. The script runs in the traditional `make style/fixup/quality` setup meaning that: - `make style` or `make fixup` fix the ToC by auto-sorting it and removing duplicates - `make quality` just checks whether or not the ToC is properly sorted.
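For illustration only (this is not the actual script added by the PR), a minimal sketch of what such a sortedness check could look like, assuming a flat list of `{"local", "title"}` entries:

```python
# Hypothetical checker: returns True if the model-doc entries are alphabetical and unique.
import yaml

def model_toc_is_sorted(toc_path: str) -> bool:
    with open(toc_path, encoding="utf-8") as f:
        entries = yaml.safe_load(f)  # assumed: a flat list of {"local": ..., "title": ...} dicts
    titles = [entry["title"].lower() for entry in entries]
    return titles == sorted(titles) and len(titles) == len(set(titles))
```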
07-05-2022 18:11:46
07-05-2022 18:11:46
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,033
closed
Model LayoutLMv3 - feature_extractor error: pytesseract not found
### System Info transformers 4.20.1 tesseract 5.1.0-32-gf36c0 pytesseract 0.3.9 ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Colab Install librires ``` !apt -qq update !add-apt-repository ppa:alex-p/tesseract-ocr-devel -y !apt -qq install -y tesseract-ocr !tesseract --version | grep tesseract !tesseract --list-langs ``` ``` >>> tesseract 5.1.0-32-gf36c0 >>> List of available languages in "/usr/share/tesseract-ocr/5/tessdata/" (3): eng osd ``` ``` from transformers import LayoutLMv3FeatureExtractor from PIL import Image image = Image.open("/content/3875.jpg").convert("RGB") ``` error code ``` feature_extractor = LayoutLMv3FeatureExtractor() encoding = feature_extractor(image, return_tensors="pt") print(encoding.keys()) dict_keys(['pixel_values', 'words', 'boxes']) ``` Error message ``` NameError Traceback (most recent call last) [<ipython-input-29-5693feae799a>](https://localhost:8080/#) in <module>() 1 # option 1: with apply_ocr=True (default) 2 feature_extractor = LayoutLMv3FeatureExtractor() ----> 3 encoding = feature_extractor(image, return_tensors="pt") 4 print(encoding.keys()) 5 # dict_keys(['pixel_values', 'words', 'boxes']) 1 frames [/usr/local/lib/python3.7/dist-packages/transformers/models/layoutlmv3/feature_extraction_layoutlmv3.py](https://localhost:8080/#) in apply_tesseract(image, lang) 51 52 # apply OCR ---> 53 data = pytesseract.image_to_data(image, lang=lang, output_type="dict") 54 words, left, top, width, height = data["text"], data["left"], data["top"], data["width"], data["height"] 55 NameError: name 'pytesseract' is not defined ``` ### Expected behavior not error, OCR boxes
07-05-2022 18:05:32
07-05-2022 18:05:32
Hi, Can you verify that you can do `import pytesseract`? It may be that you need to restart the runtime in Colab.<|||||>> Hi, > > Can you verify that you can do `import pytesseract`? It may be that you need to restart the runtime in Colab. restart solved the problem
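A quick sanity check that can help in such cases (illustrative only): verify that the freshly installed pytesseract is visible to the current runtime before constructing the feature extractor:

```python
# If this raises or prints nothing useful, restart the runtime after installing tesseract/pytesseract.
import pytesseract

print(pytesseract.get_tesseract_version())
```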
transformers
18,032
closed
Improve the torch version check for bf16 availability
I'm using torch version "1.10.0a0+0aef44c", the current `is_torch_bf16_cpu_available()` returns false. ```python >>> version.parse("1.10.0a0+0aef44c") < version.parse("1.10") >>> True >>> version.parse("1.10.0a0+0aef44c") < version.parse("1.10.*") >>> False ``` Add. check: ``` $ python -c "import torch; print(torch.__version__); print(torch.tensor(1).cuda().bfloat16().type())" 1.10.0a0+0aef44c torch.cuda.BFloat16Tensor ``` # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
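A hedged sketch of the comparison this PR is after, keeping only the base release of a dev build such as `1.10.0a0+0aef44c` (note that, as discussed below, the maintainers intentionally keep the strict check, so treat this as an illustration only):

```python
# Compare against the base release so local/dev builds are not rejected outright.
from packaging import version
import torch

parsed = version.parse(version.parse(torch.__version__).base_version)  # "1.10.0a0+0aef44c" -> "1.10.0"
bf16_capable = parsed >= version.parse("1.10")
```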
07-05-2022 16:28:29
07-05-2022 16:28:29
_The documentation is not available anymore as the PR was closed or merged._<|||||>In this situation, `1.10.0a0+0aef44c` points to a dev version of PyTorch 1.10, or a dev version of PyTorch 1.11? We use `.devX` for transformers versions, but from what I'm seeing PyTorch seems to use `aX.<commit_sha>` for their development versions. cc @sgugger <|||||>The test was written this way specifically to avoid development versions that do not have all required functionality, so we won't update it. If you can't change the version you are using, I suggest building Transformers from your fork with the changes.<|||||>Alright, thanks, so let's close and discard this. :-)
transformers
18,031
closed
Cannot compile T5 for inferentia
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.18.7-arch1-1-x86_64-with-glibc2.34 - Python version: 3.8.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0+cu102 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @patrickvonplaten ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The inferentia for Marian (Seq2Seq) is [here](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/transformers-marianmt.html) Here is the snippet of code I'm using : <details> <summary>Python Snippet for compiling T5 model</summary> <br> ```python import os import numpy as np import torch import torch.neuron from torch.nn import functional as F from transformers.generation_utils import GenerationMixin from transformers.modeling_outputs import BaseModelOutput, Seq2SeqLMOutput from transformers.modeling_utils import PreTrainedModel from transformers.models.t5.configuration_t5 import T5Config from transformers.models.t5.modeling_t5 import T5Model from transformers.models.t5.tokenization_t5 import T5Tokenizer model_id = "mrm8488/t5-base-finetuned-question-generation-ap" num_texts = 1 # Number of input texts to decode num_beams = 4 # Number of beams per input text max_encoder_length = 32 # Maximum input token length max_decoder_length = 32 def infer(model, tokenizer, text): # Truncate and pad the max length to ensure that the token size is compatible with fixed-sized encoder (Not necessary for pure CPU execution) batch = tokenizer( text, max_length=max_decoder_length, truncation=True, padding="max_length", return_tensors="pt", ) output = model.generate( **batch, max_length=max_decoder_length, num_beams=num_beams, num_return_sequences=num_beams, ) results = [tokenizer.decode(t, skip_special_tokens=True) for t in output] print("Texts:") for i, summary in enumerate(results): print(i + 1, summary) def reduce(hidden, index): _, n_length, _ = hidden.shape # Create selection mask mask = torch.arange(n_length, dtype=torch.float32) == index mask = mask.view(1, -1, 1) # Broadcast mask masked = torch.multiply(hidden, mask) # Reduce along 1st dimension summed = torch.sum(masked, 1) return torch.unsqueeze(summed, 1) class NeuronEncoder(torch.nn.Module): def __init__(self, model): super().__init__() self.encoder = model.encoder def forward(self, input_ids, attention_mask): return self.encoder(input_ids, attention_mask=attention_mask, return_dict=False) class NeuronDecoder(torch.nn.Module): def __init__(self, model, max_length): super().__init__() self.weight = model.shared.weight.clone().detach() self.bias = model.final_logits_bias.clone().detach() self.decoder = model.decoder self.max_length = max_length def forward(self, input_ids, attention_mask, encoder_outputs, index): # Build a fixed sized causal mask for the padded decoder input ids mask = np.triu(np.ones((self.max_length, self.max_length)), 1) mask[mask == 1] = -np.inf causal_mask = torch.tensor(mask, dtype=torch.float) # Invoke the decoder (hidden,) = self.decoder( input_ids=input_ids, encoder_hidden_states=encoder_outputs, encoder_padding_mask=attention_mask, decoder_padding_mask=None, 
decoder_causal_mask=causal_mask, return_dict=False, use_cache=False, ) # Reduce decoder outputs to the specified index (current iteration) hidden = reduce(hidden, index) # Compute final linear layer for token probabilities logits = F.linear(hidden, self.weight, bias=self.bias) return logits class NeuronGeneration(PreTrainedModel, GenerationMixin): def trace( self, model, num_texts, num_beams, max_encoder_length, max_decoder_length ): """ Traces the encoder and decoder modules for use on Neuron. This function fixes the network to the given sizes. Once the model has been compiled to a given size, the inputs to these networks must always be of fixed size. Args: model (GenerationMixin): The transformer-type generator model to trace num_texts (int): The number of input texts to translate at once num_beams (int): The number of beams to computer per text max_encoder_length (int): The maximum number of encoder tokens max_encoder_length (int): The maximum number of decoder tokens """ self.config.max_decoder_length = max_decoder_length # Trace the encoder inputs = ( torch.ones((num_texts, max_encoder_length), dtype=torch.long), torch.ones((num_texts, max_encoder_length), dtype=torch.long), ) encoder = NeuronEncoder(model) self.encoder = torch.neuron.trace(encoder, inputs) # Trace the decoder (with expanded inputs) batch_size = num_texts * num_beams inputs = ( torch.ones((batch_size, max_decoder_length), dtype=torch.long), torch.ones((batch_size, max_encoder_length), dtype=torch.long), torch.ones( (batch_size, max_encoder_length, model.config.d_model), dtype=torch.float, ), torch.tensor(0), ) decoder = NeuronDecoder(model, max_decoder_length) self.decoder = torch.neuron.trace(decoder, inputs) # ------------------------------------------------------------------------ # Beam Search Methods (Copied directly from transformers) # ------------------------------------------------------------------------ def adjust_logits_during_generation(self, logits, cur_len, max_length): if cur_len == 1 and self.config.force_bos_token_to_be_generated: self._force_token_id_to_be_generated(logits, self.config.bos_token_id) elif cur_len == max_length - 1 and self.config.eos_token_id is not None: self._force_token_id_to_be_generated(logits, self.config.eos_token_id) return logits @staticmethod def _force_token_id_to_be_generated(scores, token_id) -> None: scores[:, [x for x in range(scores.shape[1]) if x != token_id]] = -float("inf") # ------------------------------------------------------------------------ # Encoder/Decoder Invocation # ------------------------------------------------------------------------ def prepare_inputs_for_generation( self, decoder_input_ids, encoder_outputs=None, attention_mask=None, **model_kwargs, ): # Pad the inputs for Neuron current_length = decoder_input_ids.shape[1] pad_size = self.config.max_decoder_length - current_length return dict( input_ids=F.pad(decoder_input_ids, (0, pad_size)), attention_mask=attention_mask, encoder_outputs=encoder_outputs.last_hidden_state, current_length=torch.tensor(current_length - 1), ) def get_encoder(self): """Helper to invoke the encoder and wrap the results in the expected structure""" def encode(input_ids, attention_mask, **kwargs): (output,) = self.encoder(input_ids, attention_mask) return BaseModelOutput( last_hidden_state=output, ) return encode def __call__( self, input_ids, attention_mask, encoder_outputs, current_length, **kwargs ): """Helper to invoke the decoder and wrap the results in the expected structure""" logits = self.decoder( input_ids, 
attention_mask, encoder_outputs, current_length ) return Seq2SeqLMOutput(logits=logits) # ------------------------------------------------------------------------ # Serialization # ------------------------------------------------------------------------ def save_pretrained(self, directory): if os.path.isfile(directory): print(f"Provided path ({directory}) should be a directory, not a file") return os.makedirs(directory, exist_ok=True) torch.jit.save(self.encoder, os.path.join(directory, "encoder.pt")) torch.jit.save(self.decoder, os.path.join(directory, "decoder.pt")) self.config.save_pretrained(directory) @classmethod def from_pretrained(cls, directory): config = T5Config.from_pretrained(directory) obj = cls(config) obj.encoder = torch.jit.load(os.path.join(directory, "encoder.pt")) obj.decoder = torch.jit.load(os.path.join(directory, "decoder.pt")) return obj @property def device(self): return torch.device("cpu") model_cpu = T5Model.from_pretrained(model_id) tokenizer_cpu = T5Tokenizer.from_pretrained(model_id) model_neuron = NeuronGeneration(model_cpu.config) # 1. Compile the model # Note: This may take a couple of minutes since both the encoder/decoder will be compiled model_neuron.trace( model=model_cpu, num_texts=num_texts, num_beams=num_beams, max_encoder_length=max_encoder_length, max_decoder_length=max_decoder_length, ) # 2. Serialize an artifact # After this call you will have an `encoder.pt`, `decoder.pt` and `config.json` in the neuron_name folder model_neuron.save_pretrained(neuron_name) tokenizer.save_pretrained(neuron_name) model_neuron = NeuronGeneration.from_pretrained(neuron_name) infer(model_neuron, tokenizer_cpu, sample_text) ``` </details> To setup the environment : 1. Create a venv (3.8 seems good for neuron-cc, 3.9 seems not) 2. install neuron-cc `pip install neuron-cc --extra-index-url=https://pip.repos.neuron.amazonaws.com` 3. Install the rest : `pip install pip install torch-neuron sagemaker transformers sentencepiece` 4. run the code above 5. Get a : <details> <summary>TraceBack of the compilation</summary> <br> ``` The `xla_device` argument has been deprecated in v4.4.0 of Transformers. It is ignored and you can safely remove it from your `config.json` file. Some weights of the model checkpoint at mrm8488/t5-base-finetuned-question-generation-ap were not used when initializing T5Model: ['lm_head.weight'] - This IS expected if you are initializing T5Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing T5Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). The `xla_device` argument has been deprecated in v4.4.0 of Transformers. It is ignored and you can safely remove it from your `config.json` file. 
INFO:Neuron:There are 2 ops of 1 different types in the TorchScript that are not compiled by neuron-cc: aten::embedding, (For more information see https://github.com/aws/aws-neuron-sdk/blob/master/release-notes/neuron-cc-ops/neuron-cc-ops-pytorch.md) INFO:Neuron:Number of arithmetic operators (pre-compilation) before = 627, fused = 587, percent fused = 93.62% WARNING:Neuron:torch.neuron.trace failed on _NeuronGraph$641; falling back to native python function call ERROR:Neuron:No module named 'tensorflow' Traceback (most recent call last): File "/mnt/Documents/test/sagemaker_test/.venv/lib/python3.8/site-packages/torch_neuron/convert.py", line 381, in op_converter neuron_function = self.subgraph_compiler( File "/mnt/Documents/test/sagemaker_test/.venv/lib/python3.8/site-packages/torch_neuron/decorators.py", line 67, in trace import tensorflow as tf ModuleNotFoundError: No module named 'tensorflow' INFO:Neuron:Number of arithmetic operators (post-compilation) before = 627, compiled = 0, percent compiled = 0.0% INFO:Neuron:The neuron partitioner created 1 sub-graphs INFO:Neuron:Neuron successfully compiled 0 sub-graphs, Total fused subgraphs = 1, Percent of model sub-graphs successfully compiled = 0.0% INFO:Neuron:Compiled these operators (and operator counts) to Neuron: INFO:Neuron:Not compiled operators (and operator counts) to Neuron: INFO:Neuron: => aten::Int: 49 [supported] INFO:Neuron: => aten::ScalarImplicit: 2 [supported] INFO:Neuron: => aten::abs: 1 [supported] INFO:Neuron: => aten::add: 65 [supported] INFO:Neuron: => aten::arange: 2 [supported] INFO:Neuron: => aten::contiguous: 12 [supported] INFO:Neuron: => aten::div: 2 [supported] INFO:Neuron: => aten::dropout: 50 [supported] INFO:Neuron: => aten::embedding: 2 [not supported] INFO:Neuron: => aten::full_like: 1 [supported] INFO:Neuron: => aten::gt: 1 [supported] INFO:Neuron: => aten::linear: 72 [supported] INFO:Neuron: => aten::log: 1 [supported] INFO:Neuron: => aten::lt: 1 [supported] INFO:Neuron: => aten::matmul: 24 [supported] INFO:Neuron: => aten::mean: 25 [supported] INFO:Neuron: => aten::min: 1 [supported] INFO:Neuron: => aten::mul: 53 [supported] INFO:Neuron: => aten::permute: 1 [supported] INFO:Neuron: => aten::pow: 25 [supported] INFO:Neuron: => aten::relu: 12 [supported] INFO:Neuron: => aten::rsqrt: 25 [supported] INFO:Neuron: => aten::rsub: 1 [supported] INFO:Neuron: => aten::size: 14 [supported] INFO:Neuron: => aten::slice: 4 [supported] INFO:Neuron: => aten::softmax: 12 [supported] INFO:Neuron: => aten::sub: 1 [supported] INFO:Neuron: => aten::to: 41 [supported] INFO:Neuron: => aten::transpose: 60 [supported] INFO:Neuron: => aten::type_as: 12 [supported] INFO:Neuron: => aten::unsqueeze: 5 [supported] INFO:Neuron: => aten::view: 49 [supported] INFO:Neuron: => aten::where: 1 [not supported] Traceback (most recent call last): File "compile_model.py", line 232, in <module> model_neuron.trace( File "compile_model.py", line 128, in trace self.encoder = torch.neuron.trace(encoder, inputs) File "/mnt/Documents/test/sagemaker_test/.venv/lib/python3.8/site-packages/torch_neuron/convert.py", line 184, in trace cu.stats_post_compiler(neuron_graph) File "/mnt/Documents/test/sagemaker_test/.venv/lib/python3.8/site-packages/torch_neuron/convert.py", line 492, in stats_post_compiler raise RuntimeError( RuntimeError: No operations were successfully partitioned and compiled to neuron for this model - aborting trace! 
``` </details> This seems to say that `.where` is not supported by neuron-cc (and thus Inferentia), which means that we cannot run T5 models fast and cheaply... Could we replace the `.where` with something supported, and thus be able to compile models for Inferentia? Otherwise, should I raise the issue with `neuron-cc` so they can support it? Thanks in advance, have a great day. ### Expected behavior Be able to compile a T5 model for Inferentia (`.where` being supported). Note: I also created an issue in the `aws-neuron-sdk` repo: https://github.com/aws/aws-neuron-sdk/issues/440
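A common workaround when a tracing backend lacks a select/where kernel is to rewrite `torch.where` arithmetically; a minimal sketch follows (not verified against neuron-cc in this thread, just the usual trick):

```python
# torch.where(mask, a, b) expressed with elementwise arithmetic only.
import torch

def where_as_arithmetic(mask: torch.Tensor, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    mask = mask.to(a.dtype)          # 1.0 where the condition holds, 0.0 elsewhere
    return mask * a + (1.0 - mask) * b
```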
07-05-2022 16:00:56
07-05-2022 16:00:56
Gently pinging @philschmid here - do you have any ideas maybe?<|||||>@Ierezell could you please try to set up your environment as described [here](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-intro/pytorch-setup/pytorch-install.html#id2). It feels like that you are missing packages to successfully compile the models <|||||>Hello @patrickvonplaten, @philschmid. Thanks for the fast reply and help :) It's my bad I haven't pasted the good traceback. Here is the new one with still the same code, and the `pip freeze` on my local machine, I reinstalled all the dependencies to be sure. Do I need to be on an inferentia ec2 machine to compile my model (here I'm on my local laptop : Linux Lenovo X1 with 1650Gpu ) ? <details> <summary>Pip freeze</summary> <br> ``` absl-py==1.1.0 astunparse==1.6.3 attrs==21.4.0 cachetools==5.2.0 certifi==2022.6.15 charset-normalizer==2.1.0 decorator==5.1.1 dmlc-nnvm==1.11.0.0+0 dmlc-topi==1.11.0.0+0 dmlc-tvm==1.11.0.0+0 filelock==3.7.1 flatbuffers==2.0 gast==0.5.3 google-auth==2.9.0 google-auth-oauthlib==0.4.6 google-pasta==0.2.0 grpcio==1.47.0 h5py==3.7.0 huggingface-hub==0.8.1 idna==3.3 importlib-metadata==4.12.0 inferentia-hwm==1.11.0.0+0 iniconfig==1.1.1 islpy==2021.1+aws2021.x.16.0.bld0 keras==2.8.0 Keras-Preprocessing==1.1.2 libclang==14.0.1 Markdown==3.3.7 networkx==2.4 neuron-cc==1.11.4.0+97f99abe4 numpy==1.20.0 oauthlib==3.2.0 opt-einsum==3.3.0 packaging==21.3 pluggy==1.0.0 protobuf==3.20.1 py==1.11.0 pyasn1==0.4.8 pyasn1-modules==0.2.8 pyparsing==3.0.9 pytest==7.1.2 PyYAML==6.0 regex==2022.6.2 requests==2.28.1 requests-oauthlib==1.3.1 rsa==4.8 scipy==1.4.1 sentencepiece==0.1.96 six==1.16.0 tensorboard==2.8.0 tensorboard-data-server==0.6.1 tensorboard-plugin-neuron==2.4.0.0 tensorboard-plugin-wit==1.8.1 tensorflow==2.8.0 tensorflow-io-gcs-filesystem==0.26.0 tensorflow-neuron==2.8.0.2.3.0.0 termcolor==1.1.0 tf-estimator-nightly==2.8.0.dev2021122109 tokenizers==0.12.1 tomli==2.0.1 torch==1.11.0 torch-neuron==1.11.0.2.3.0.0 tqdm==4.64.0 transformers==4.20.1 typing_extensions==4.3.0 urllib3==1.26.9 Werkzeug==2.1.2 wrapt==1.14.1 zipp==3.8.0 ``` </details> <details> <summary>Python compile model traceback</summary> <br> ``` The `xla_device` argument has been deprecated in v4.4.0 of Transformers. It is ignored and you can safely remove it from your `config.json` file. Some weights of the model checkpoint at mrm8488/t5-base-finetuned-question-generation-ap were not used when initializing T5Model: ['lm_head.weight'] - This IS expected if you are initializing T5Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing T5Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). The `xla_device` argument has been deprecated in v4.4.0 of Transformers. It is ignored and you can safely remove it from your `config.json` file. 
INFO:Neuron:There are 2 ops of 1 different types in the TorchScript that are not compiled by neuron-cc: aten::embedding, (For more information see https://github.com/aws/aws-neuron-sdk/blob/master/release-notes/neuron-cc-ops/neuron-cc-ops-pytorch.md) INFO:Neuron:Number of arithmetic operators (pre-compilation) before = 627, fused = 587, percent fused = 93.62% 2022-07-06 15:58:27.564554: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-07-06 15:58:27.622777: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-07-06 15:58:27.622940: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero INFO:Neuron:Number of neuron graph operations 1637 did not match traced graph 1569 - using heuristic matching of hierarchical information INFO:Neuron:Compiling function _NeuronGraph$641 with neuron-cc INFO:Neuron:Compiling with command line: '/mnt/Documents/test/sagemaker_test/.venv/bin/neuron-cc compile /tmp/tmp428tqio_/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /tmp/tmp428tqio_/graph_def.neff --io-config {"inputs": {"tensor.1:0": [[1, 32], "int64"], "1:0": [[1, 32, 768], "float32"], "2:0": [[32, 32, 12], "float32"], "3:0": [[1, 32, 768], "float32"]}, "outputs": ["T5Stack_1/T5LayerNorm_79/aten_mul_1/mul:0"]} --verbose 35' .2022-07-06 15:58:50.575361: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-07-06 15:58:50.605627: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-07-06 15:58:50.605816: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 07/06/2022 03:58:52 PM ERROR 2484 [neuron-cc]: Failed to parse model /tmp/tmp428tqio_/graph_def.pb: The following operators are not implemented: {'SelectV2'} (NotImplementedError) Compiler status ERROR INFO:Neuron:Compile command returned: 1 WARNING:Neuron:torch.neuron.trace failed on _NeuronGraph$641; falling back to native python function call ERROR:Neuron:neuron-cc failed with the following command line call: /mnt/Documents/test/sagemaker_test/.venv/bin/neuron-cc compile /tmp/tmp428tqio_/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /tmp/tmp428tqio_/graph_def.neff --io-config '{"inputs": {"tensor.1:0": [[1, 32], "int64"], "1:0": [[1, 32, 768], "float32"], "2:0": [[32, 32, 12], "float32"], "3:0": [[1, 32, 768], "float32"]}, "outputs": ["T5Stack_1/T5LayerNorm_79/aten_mul_1/mul:0"]}' --verbose 35 Traceback (most recent call last): File "/mnt/Documents/test/sagemaker_test/.venv/lib/python3.8/site-packages/torch_neuron/convert.py", line 381, in op_converter neuron_function = self.subgraph_compiler( File "/mnt/Documents/test/sagemaker_test/.venv/lib/python3.8/site-packages/torch_neuron/decorators.py", line 219, in trace raise subprocess.SubprocessError( 
subprocess.SubprocessError: neuron-cc failed with the following command line call: /mnt/Documents/test/sagemaker_test/.venv/bin/neuron-cc compile /tmp/tmp428tqio_/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /tmp/tmp428tqio_/graph_def.neff --io-config '{"inputs": {"tensor.1:0": [[1, 32], "int64"], "1:0": [[1, 32, 768], "float32"], "2:0": [[32, 32, 12], "float32"], "3:0": [[1, 32, 768], "float32"]}, "outputs": ["T5Stack_1/T5LayerNorm_79/aten_mul_1/mul:0"]}' --verbose 35 INFO:Neuron:Number of arithmetic operators (post-compilation) before = 627, compiled = 0, percent compiled = 0.0% INFO:Neuron:The neuron partitioner created 1 sub-graphs INFO:Neuron:Neuron successfully compiled 0 sub-graphs, Total fused subgraphs = 1, Percent of model sub-graphs successfully compiled = 0.0% INFO:Neuron:Compiled these operators (and operator counts) to Neuron: INFO:Neuron:Not compiled operators (and operator counts) to Neuron: INFO:Neuron: => aten::Int: 49 [supported] INFO:Neuron: => aten::ScalarImplicit: 2 [supported] INFO:Neuron: => aten::abs: 1 [supported] INFO:Neuron: => aten::add: 65 [supported] INFO:Neuron: => aten::arange: 2 [supported] INFO:Neuron: => aten::contiguous: 12 [supported] INFO:Neuron: => aten::div: 2 [supported] INFO:Neuron: => aten::dropout: 50 [supported] INFO:Neuron: => aten::embedding: 2 [not supported] INFO:Neuron: => aten::full_like: 1 [supported] INFO:Neuron: => aten::gt: 1 [supported] INFO:Neuron: => aten::linear: 72 [supported] INFO:Neuron: => aten::log: 1 [supported] INFO:Neuron: => aten::lt: 1 [supported] INFO:Neuron: => aten::matmul: 24 [supported] INFO:Neuron: => aten::mean: 25 [supported] INFO:Neuron: => aten::min: 1 [supported] INFO:Neuron: => aten::mul: 53 [supported] INFO:Neuron: => aten::permute: 1 [supported] INFO:Neuron: => aten::pow: 25 [supported] INFO:Neuron: => aten::relu: 12 [supported] INFO:Neuron: => aten::rsqrt: 25 [supported] INFO:Neuron: => aten::rsub: 1 [supported] INFO:Neuron: => aten::size: 14 [supported] INFO:Neuron: => aten::slice: 4 [supported] INFO:Neuron: => aten::softmax: 12 [supported] INFO:Neuron: => aten::sub: 1 [supported] INFO:Neuron: => aten::to: 41 [supported] INFO:Neuron: => aten::transpose: 60 [supported] INFO:Neuron: => aten::type_as: 12 [supported] INFO:Neuron: => aten::unsqueeze: 5 [supported] INFO:Neuron: => aten::view: 49 [supported] INFO:Neuron: => aten::where: 1 [not supported] Traceback (most recent call last): File "compile_model.py", line 232, in <module> model_neuron.trace( File "compile_model.py", line 128, in trace self.encoder = torch.neuron.trace(encoder, inputs) File "/mnt/Documents/test/sagemaker_test/.venv/lib/python3.8/site-packages/torch_neuron/convert.py", line 184, in trace cu.stats_post_compiler(neuron_graph) File "/mnt/Documents/test/sagemaker_test/.venv/lib/python3.8/site-packages/torch_neuron/convert.py", line 492, in stats_post_compiler raise RuntimeError( RuntimeError: No operations were successfully partitioned and compiled to neuron for this model - aborting trace! 
``` </details> Note that with this simpler snippet (from you blog @philschmid ;) ): <details> <summary>Simpler Python compile model</summary> <br> ```python import os import tensorflow # to workaround a protobuf version conflict issue import torch import torch.neuron from transformers import AutoTokenizer, AutoModel # load tokenizer and model tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModel.from_pretrained(model_id, torchscript=True) # create dummy input for max length 128 dummy_input = "dummy input which will be padded later" max_length = 128 embeddings = tokenizer(dummy_input, max_length=max_length, padding="max_length",return_tensors="pt") neuron_inputs = tuple(embeddings.values()) # compile model with torch.neuron.trace and update config model_neuron = torch.neuron.trace(model, neuron_inputs) model.config.update({"traced_sequence_length": max_length}) # save tokenizer, neuron model and config for later use save_dir="tmp" os.makedirs("tmp",exist_ok=True) model_neuron.save(os.path.join(save_dir,"neuron_model.pt")) tokenizer.save_pretrained(save_dir) model.config.save_pretrained(save_dir) ``` </details> I got the same traceback with the same SelectV2 error. Thanks again for your help. <|||||>Hello @patrickvonplaten, @philschmid. Thanks for the fast reply and help :) It's my bad I haven't pasted the good traceback. Here is the new one with still the same code, and the `pip freeze` on my local machine, I reinstalled all the dependencies to be sure. Do I need to be on an inferentia ec2 machine to compile my model (here I'm on my local laptop : Linux Lenovo X1 with 1650Gpu ) ? I know I will need to be for inference, but here I just want to compile the model first. <details> <summary>Pip freeze</summary> <br> ``` absl-py==1.1.0 astunparse==1.6.3 attrs==21.4.0 cachetools==5.2.0 certifi==2022.6.15 charset-normalizer==2.1.0 decorator==5.1.1 dmlc-nnvm==1.11.0.0+0 dmlc-topi==1.11.0.0+0 dmlc-tvm==1.11.0.0+0 filelock==3.7.1 flatbuffers==2.0 gast==0.5.3 google-auth==2.9.0 google-auth-oauthlib==0.4.6 google-pasta==0.2.0 grpcio==1.47.0 h5py==3.7.0 huggingface-hub==0.8.1 idna==3.3 importlib-metadata==4.12.0 inferentia-hwm==1.11.0.0+0 iniconfig==1.1.1 islpy==2021.1+aws2021.x.16.0.bld0 keras==2.8.0 Keras-Preprocessing==1.1.2 libclang==14.0.1 Markdown==3.3.7 networkx==2.4 neuron-cc==1.11.4.0+97f99abe4 numpy==1.20.0 oauthlib==3.2.0 opt-einsum==3.3.0 packaging==21.3 pluggy==1.0.0 protobuf==3.20.1 py==1.11.0 pyasn1==0.4.8 pyasn1-modules==0.2.8 pyparsing==3.0.9 pytest==7.1.2 PyYAML==6.0 regex==2022.6.2 requests==2.28.1 requests-oauthlib==1.3.1 rsa==4.8 scipy==1.4.1 sentencepiece==0.1.96 six==1.16.0 tensorboard==2.8.0 tensorboard-data-server==0.6.1 tensorboard-plugin-neuron==2.4.0.0 tensorboard-plugin-wit==1.8.1 tensorflow==2.8.0 tensorflow-io-gcs-filesystem==0.26.0 tensorflow-neuron==2.8.0.2.3.0.0 termcolor==1.1.0 tf-estimator-nightly==2.8.0.dev2021122109 tokenizers==0.12.1 tomli==2.0.1 torch==1.11.0 torch-neuron==1.11.0.2.3.0.0 tqdm==4.64.0 transformers==4.20.1 typing_extensions==4.3.0 urllib3==1.26.9 Werkzeug==2.1.2 wrapt==1.14.1 zipp==3.8.0 ``` </details> <details> <summary>Python compile model traceback</summary> <br> ``` The `xla_device` argument has been deprecated in v4.4.0 of Transformers. It is ignored and you can safely remove it from your `config.json` file. 
Some weights of the model checkpoint at mrm8488/t5-base-finetuned-question-generation-ap were not used when initializing T5Model: ['lm_head.weight'] - This IS expected if you are initializing T5Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing T5Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). The `xla_device` argument has been deprecated in v4.4.0 of Transformers. It is ignored and you can safely remove it from your `config.json` file. INFO:Neuron:There are 2 ops of 1 different types in the TorchScript that are not compiled by neuron-cc: aten::embedding, (For more information see https://github.com/aws/aws-neuron-sdk/blob/master/release-notes/neuron-cc-ops/neuron-cc-ops-pytorch.md) INFO:Neuron:Number of arithmetic operators (pre-compilation) before = 627, fused = 587, percent fused = 93.62% 2022-07-06 15:58:27.564554: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-07-06 15:58:27.622777: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-07-06 15:58:27.622940: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero INFO:Neuron:Number of neuron graph operations 1637 did not match traced graph 1569 - using heuristic matching of hierarchical information INFO:Neuron:Compiling function _NeuronGraph$641 with neuron-cc INFO:Neuron:Compiling with command line: '/mnt/Documents/test/sagemaker_test/.venv/bin/neuron-cc compile /tmp/tmp428tqio_/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /tmp/tmp428tqio_/graph_def.neff --io-config {"inputs": {"tensor.1:0": [[1, 32], "int64"], "1:0": [[1, 32, 768], "float32"], "2:0": [[32, 32, 12], "float32"], "3:0": [[1, 32, 768], "float32"]}, "outputs": ["T5Stack_1/T5LayerNorm_79/aten_mul_1/mul:0"]} --verbose 35' .2022-07-06 15:58:50.575361: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-07-06 15:58:50.605627: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-07-06 15:58:50.605816: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 07/06/2022 03:58:52 PM ERROR 2484 [neuron-cc]: Failed to parse model /tmp/tmp428tqio_/graph_def.pb: The following operators are not implemented: {'SelectV2'} (NotImplementedError) Compiler status ERROR INFO:Neuron:Compile command returned: 1 WARNING:Neuron:torch.neuron.trace failed on _NeuronGraph$641; falling back to native python function call ERROR:Neuron:neuron-cc failed with the following command line call: 
/mnt/Documents/test/sagemaker_test/.venv/bin/neuron-cc compile /tmp/tmp428tqio_/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /tmp/tmp428tqio_/graph_def.neff --io-config '{"inputs": {"tensor.1:0": [[1, 32], "int64"], "1:0": [[1, 32, 768], "float32"], "2:0": [[32, 32, 12], "float32"], "3:0": [[1, 32, 768], "float32"]}, "outputs": ["T5Stack_1/T5LayerNorm_79/aten_mul_1/mul:0"]}' --verbose 35 Traceback (most recent call last): File "/mnt/Documents/test/sagemaker_test/.venv/lib/python3.8/site-packages/torch_neuron/convert.py", line 381, in op_converter neuron_function = self.subgraph_compiler( File "/mnt/Documents/test/sagemaker_test/.venv/lib/python3.8/site-packages/torch_neuron/decorators.py", line 219, in trace raise subprocess.SubprocessError( subprocess.SubprocessError: neuron-cc failed with the following command line call: /mnt/Documents/test/sagemaker_test/.venv/bin/neuron-cc compile /tmp/tmp428tqio_/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /tmp/tmp428tqio_/graph_def.neff --io-config '{"inputs": {"tensor.1:0": [[1, 32], "int64"], "1:0": [[1, 32, 768], "float32"], "2:0": [[32, 32, 12], "float32"], "3:0": [[1, 32, 768], "float32"]}, "outputs": ["T5Stack_1/T5LayerNorm_79/aten_mul_1/mul:0"]}' --verbose 35 INFO:Neuron:Number of arithmetic operators (post-compilation) before = 627, compiled = 0, percent compiled = 0.0% INFO:Neuron:The neuron partitioner created 1 sub-graphs INFO:Neuron:Neuron successfully compiled 0 sub-graphs, Total fused subgraphs = 1, Percent of model sub-graphs successfully compiled = 0.0% INFO:Neuron:Compiled these operators (and operator counts) to Neuron: INFO:Neuron:Not compiled operators (and operator counts) to Neuron: INFO:Neuron: => aten::Int: 49 [supported] INFO:Neuron: => aten::ScalarImplicit: 2 [supported] INFO:Neuron: => aten::abs: 1 [supported] INFO:Neuron: => aten::add: 65 [supported] INFO:Neuron: => aten::arange: 2 [supported] INFO:Neuron: => aten::contiguous: 12 [supported] INFO:Neuron: => aten::div: 2 [supported] INFO:Neuron: => aten::dropout: 50 [supported] INFO:Neuron: => aten::embedding: 2 [not supported] INFO:Neuron: => aten::full_like: 1 [supported] INFO:Neuron: => aten::gt: 1 [supported] INFO:Neuron: => aten::linear: 72 [supported] INFO:Neuron: => aten::log: 1 [supported] INFO:Neuron: => aten::lt: 1 [supported] INFO:Neuron: => aten::matmul: 24 [supported] INFO:Neuron: => aten::mean: 25 [supported] INFO:Neuron: => aten::min: 1 [supported] INFO:Neuron: => aten::mul: 53 [supported] INFO:Neuron: => aten::permute: 1 [supported] INFO:Neuron: => aten::pow: 25 [supported] INFO:Neuron: => aten::relu: 12 [supported] INFO:Neuron: => aten::rsqrt: 25 [supported] INFO:Neuron: => aten::rsub: 1 [supported] INFO:Neuron: => aten::size: 14 [supported] INFO:Neuron: => aten::slice: 4 [supported] INFO:Neuron: => aten::softmax: 12 [supported] INFO:Neuron: => aten::sub: 1 [supported] INFO:Neuron: => aten::to: 41 [supported] INFO:Neuron: => aten::transpose: 60 [supported] INFO:Neuron: => aten::type_as: 12 [supported] INFO:Neuron: => aten::unsqueeze: 5 [supported] INFO:Neuron: => aten::view: 49 [supported] INFO:Neuron: => aten::where: 1 [not supported] Traceback (most recent call last): File "compile_model.py", line 232, in <module> model_neuron.trace( File "compile_model.py", line 128, in trace self.encoder = torch.neuron.trace(encoder, inputs) File "/mnt/Documents/test/sagemaker_test/.venv/lib/python3.8/site-packages/torch_neuron/convert.py", line 184, in trace cu.stats_post_compiler(neuron_graph) File 
"/mnt/Documents/test/sagemaker_test/.venv/lib/python3.8/site-packages/torch_neuron/convert.py", line 492, in stats_post_compiler raise RuntimeError( RuntimeError: No operations were successfully partitioned and compiled to neuron for this model - aborting trace! ``` </details> Note that with this simpler snippet (from you blog @philschmid ;) ): <details> <summary>Simpler Python compile model</summary> <br> ```python import os import tensorflow # to workaround a protobuf version conflict issue import torch import torch.neuron from transformers import AutoTokenizer, AutoModel # load tokenizer and model tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModel.from_pretrained(model_id, torchscript=True) # create dummy input for max length 128 dummy_input = "dummy input which will be padded later" max_length = 128 embeddings = tokenizer(dummy_input, max_length=max_length, padding="max_length",return_tensors="pt") neuron_inputs = tuple(embeddings.values()) # compile model with torch.neuron.trace and update config model_neuron = torch.neuron.trace(model, neuron_inputs) model.config.update({"traced_sequence_length": max_length}) # save tokenizer, neuron model and config for later use save_dir="tmp" os.makedirs("tmp",exist_ok=True) model_neuron.save(os.path.join(save_dir,"neuron_model.pt")) tokenizer.save_pretrained(save_dir) model.config.save_pretrained(save_dir) ``` </details> I got the same traceback with the same SelectV2 error. Thanks again for your help. <|||||>@Ierezell i see you have `transformers 4.20.1` and torch 1.11 installed could you try to use the same versions as in the [guide](https://www.philschmid.de/huggingface-sentence-transformers-aws-inferentia) we created?<|||||>@philschmid Here is the dependencies with `transformers 4.12.3` ```python python -c "import transformers;print(transformers.__version__)" # 4.12.3 python -c "import torch_neuron;print(torch_neuron.__version__)" #1.9.1.2.3.0.0 ``` <details> <summary>pip freeze</summary> <br> ``` absl-py==1.1.0 astunparse==1.6.3 attrs==20.3.0 boto3==1.24.24 botocore==1.27.24 cachetools==5.2.0 certifi==2022.6.15 charset-normalizer==2.1.0 click==8.1.3 decorator==5.1.1 dill==0.3.5.1 dmlc-nnvm==1.11.0.0+0 dmlc-topi==1.11.0.0+0 dmlc-tvm==1.11.0.0+0 filelock==3.7.1 flatbuffers==2.0 gast==0.5.3 google-auth==2.9.0 google-auth-oauthlib==0.4.6 google-pasta==0.2.0 grpcio==1.47.0 h5py==3.7.0 huggingface-hub==0.8.1 idna==3.3 importlib-metadata==4.12.0 inferentia-hwm==1.11.0.0+0 iniconfig==1.1.1 islpy==2021.1+aws2021.x.16.0.bld0 jmespath==1.0.1 joblib==1.1.0 keras==2.8.0 Keras-Preprocessing==1.1.2 libclang==14.0.1 Markdown==3.3.7 multiprocess==0.70.13 networkx==2.4 neuron-cc==1.11.4.0+97f99abe4 numpy==1.20.0 oauthlib==3.2.0 opt-einsum==3.3.0 packaging==21.3 pandas==1.4.3 pathos==0.2.9 pluggy==1.0.0 pox==0.3.1 ppft==1.7.6.5 protobuf==3.20.1 protobuf3-to-dict==0.1.5 py==1.11.0 pyasn1==0.4.8 pyasn1-modules==0.2.8 pyparsing==3.0.9 pytest==7.1.2 python-dateutil==2.8.2 pytz==2022.1 PyYAML==6.0 regex==2022.6.2 requests==2.28.1 requests-oauthlib==1.3.1 rsa==4.8 s3transfer==0.6.0 sacremoses==0.0.53 sagemaker==2.98.0 scipy==1.4.1 six==1.16.0 smdebug-rulesconfig==1.0.1 tensorboard==2.8.0 tensorboard-data-server==0.6.1 tensorboard-plugin-neuron==2.4.0.0 tensorboard-plugin-wit==1.8.1 tensorflow==2.8.0 tensorflow-io-gcs-filesystem==0.26.0 tensorflow-neuron==2.8.0.2.3.0.0 termcolor==1.1.0 tf-estimator-nightly==2.8.0.dev2021122109 tokenizers==0.10.3 tomli==2.0.1 torch==1.9.1 torch-neuron==1.9.1.2.3.0.0 tqdm==4.64.0 transformers==4.12.3 
typing_extensions==4.3.0 urllib3==1.26.9 Werkzeug==2.1.2 wrapt==1.14.1 zipp==3.8.0 ``` </details> which leads to the same traceback (with the "small" snipet above, the same as in your guide): <details> <summary>Traceback</summary> <br> ``` 2022-07-07 11:13:09.200830: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-07-07 11:13:09.258510: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-07-07 11:13:09.258664: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero INFO:Neuron:There are 3 ops of 1 different types in the TorchScript that are not compiled by neuron-cc: aten::embedding, (For more information see https://github.com/aws/aws-neuron-sdk/blob/master/release-notes/neuron-cc-ops/neuron-cc-ops-pytorch.md) INFO:Neuron:Number of arithmetic operators (pre-compilation) before = 293, fused = 285, percent fused = 97.27% INFO:Neuron:Number of neuron graph operations 837 did not match traced graph 723 - using heuristic matching of hierarchical information INFO:Neuron:Compiling function _NeuronGraph$347 with neuron-cc INFO:Neuron:Compiling with command line: '/mnt/Documents/test/sagemaker_test/.venv/bin/neuron-cc compile /tmp/tmp94190olp/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /tmp/tmp94190olp/graph_def.neff --io-config {"inputs": {"tensor.1:0": [[1, 128], "int64"], "1:0": [[1, 128, 384], "float32"], "2:0": [[1, 128, 384], "float32"], "3:0": [[1, 128, 384], "float32"]}, "outputs": ["BertEncoder_28/BertLayer_17/BertOutput_5/LayerNorm_7/aten_layer_norm/batchnorm/add_1:0", "BertPooler_29/Tanh_11/aten_tanh/Tanh:0"]} --verbose 35' huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... 
To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) .2022-07-07 11:13:25.708020: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-07-07 11:13:25.744793: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-07-07 11:13:25.744998: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 07/07/2022 11:13:26 AM ERROR 18562 [neuron-cc]: Failed to parse model /tmp/tmp94190olp/graph_def.pb: The following operators are not implemented: {'SelectV2'} (NotImplementedError) Compiler status ERROR INFO:Neuron:Compile command returned: 1 WARNING:Neuron:torch.neuron.trace failed on _NeuronGraph$347; falling back to native python function call ERROR:Neuron:neuron-cc failed with the following command line call: /mnt/Documents/test/sagemaker_test/.venv/bin/neuron-cc compile /tmp/tmp94190olp/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /tmp/tmp94190olp/graph_def.neff --io-config '{"inputs": {"tensor.1:0": [[1, 128], "int64"], "1:0": [[1, 128, 384], "float32"], "2:0": [[1, 128, 384], "float32"], "3:0": [[1, 128, 384], "float32"]}, "outputs": ["BertEncoder_28/BertLayer_17/BertOutput_5/LayerNorm_7/aten_layer_norm/batchnorm/add_1:0", "BertPooler_29/Tanh_11/aten_tanh/Tanh:0"]}' --verbose 35 Traceback (most recent call last): File "/mnt/Documents/test/sagemaker_test/.venv/lib/python3.8/site-packages/torch_neuron/convert.py", line 381, in op_converter neuron_function = self.subgraph_compiler( File "/mnt/Documents/test/sagemaker_test/.venv/lib/python3.8/site-packages/torch_neuron/decorators.py", line 219, in trace raise subprocess.SubprocessError( subprocess.SubprocessError: neuron-cc failed with the following command line call: /mnt/Documents/test/sagemaker_test/.venv/bin/neuron-cc compile /tmp/tmp94190olp/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /tmp/tmp94190olp/graph_def.neff --io-config '{"inputs": {"tensor.1:0": [[1, 128], "int64"], "1:0": [[1, 128, 384], "float32"], "2:0": [[1, 128, 384], "float32"], "3:0": [[1, 128, 384], "float32"]}, "outputs": ["BertEncoder_28/BertLayer_17/BertOutput_5/LayerNorm_7/aten_layer_norm/batchnorm/add_1:0", "BertPooler_29/Tanh_11/aten_tanh/Tanh:0"]}' --verbose 35 INFO:Neuron:Number of arithmetic operators (post-compilation) before = 293, compiled = 0, percent compiled = 0.0% INFO:Neuron:The neuron partitioner created 1 sub-graphs INFO:Neuron:Neuron successfully compiled 0 sub-graphs, Total fused subgraphs = 1, Percent of model sub-graphs successfully compiled = 0.0% INFO:Neuron:Compiled these operators (and operator counts) to Neuron: INFO:Neuron:Not compiled operators (and operator counts) to Neuron: INFO:Neuron: => aten::Int: 49 [supported] INFO:Neuron: => aten::add: 21 [supported] INFO:Neuron: => aten::contiguous: 6 [supported] INFO:Neuron: => aten::div: 6 [supported] INFO:Neuron: => aten::dropout: 19 [supported] INFO:Neuron: => aten::embedding: 3 [not supported] INFO:Neuron: => aten::gelu: 6 [supported] INFO:Neuron: => aten::layer_norm: 13 [supported] INFO:Neuron: => 
aten::linear: 37 [supported] INFO:Neuron: => aten::matmul: 12 [supported] INFO:Neuron: => aten::mul: 1 [supported] INFO:Neuron: => aten::permute: 24 [supported] INFO:Neuron: => aten::rsub: 1 [supported] INFO:Neuron: => aten::select: 1 [supported] INFO:Neuron: => aten::size: 49 [supported] INFO:Neuron: => aten::slice: 5 [supported] INFO:Neuron: => aten::softmax: 6 [supported] INFO:Neuron: => aten::tanh: 1 [supported] INFO:Neuron: => aten::to: 1 [supported] INFO:Neuron: => aten::transpose: 6 [supported] INFO:Neuron: => aten::unsqueeze: 2 [supported] INFO:Neuron: => aten::view: 24 [supported] Traceback (most recent call last): File "compile_model_2.py", line 20, in <module> model_neuron = torch.neuron.trace(model, neuron_inputs) File "/mnt/Documents/test/sagemaker_test/.venv/lib/python3.8/site-packages/torch_neuron/convert.py", line 184, in trace cu.stats_post_compiler(neuron_graph) File "/mnt/Documents/test/sagemaker_test/.venv/lib/python3.8/site-packages/torch_neuron/convert.py", line 492, in stats_post_compiler raise RuntimeError( RuntimeError: No operations were successfully partitioned and compiled to neuron for this model - aborting trace! ``` </details><|||||>Closing: The problem was on my side, thanks anyway for your help and time! :) All the "neuron packages" are installable and wheels for 3.8 exists with the repo: `https://pip.repos.neuron.amazonaws.com` However, the installation leads to the wrong version of TensorFlow (neuron needs <2) that does not exists anymore. I cleaned everything, reinstalled with python3.7, and ran out of the box the two scripts above.
transformers
18,030
closed
Pretraining BART language model
### Feature request Hi, I'm looking into the BART [docs](https://huggingface.co/docs/transformers/model_doc/bart). It seems that the provided examples are on fine-tuning BART on Seq2Seq summarization tasks. I'm wondering if there is any example on pre-training BART's "Language Model" itself, with the pre-training objectives (Token infilling, Token Masking, etc.) that are mentioned in the original paper. I was looking into this a couple of months ago and found this thread: https://github.com/huggingface/transformers/issues/4151 and a relevant issue in fairseq: https://github.com/facebookresearch/fairseq/issues/1899. I've now decided to ask it directly here to see if there has been any update so far. Thanks, @patrickvonplaten @patil-suraj ### Motivation Enabling further pre-training of BART's language model.
07-05-2022 15:59:08
07-05-2022 15:59:08
I don't think we have plans currently to add code for BART pretraining as it's quite a time-costly endeavor . Maybe we should revisit this decision though at some point for some models (BART, T5, Wav2Vec2) as the community is asking more and more. Wdyt @sgugger @LysandreJik ? <|||||>It's not something anyone on the team has the time to focus on right now, but we can welcome a contribution for such an example.<|||||>I have attempted to reproduce BART pretraining for Swedish using fairseq [here](https://github.com/kb-labb/kb_bart) with some instructions in the README. We trained a custom tokenizer with Huggingface `tokenizers` package and figured we'd later retrofit the `dict.txt` generated by fairseq preprocessing back to a Huggingface compatible `tokenizer.json`. This worked, but we later discovered our chosen method of doing it was unnecessarily complicated, as we could have just copy pasted the vocabulary from our original `tokenizer.json` to a tab separated `dict.txt` where the 1st column is the tokens, and the 2nd column can be dummy frequency counts of how often the tokens occur in your tokenized data (You can set the frequencies to 0 or any integer). We were unfamiliar with fairseq prior to doing this and based the entire reproduction off of reading the paper closely and trying to match the reported experimental settings in the paper against fairseq args by reading the documentation and the source code. The finished model was uploaded to [Huggingface](https://huggingface.co/KBLab/bart-base-swedish-cased). However, we learned quite a few lessons along the way that aren't documented there. The first one being the easiest process of using a custom Huggingface tokenizer with fairseq that I described above. A second important one was that you should generate several epochs worth of sharded and shuffled data and not just a shards for one single epoch. Fairseq pretraining will crash once you reach the end of your listed shards, and won't allow you to reuse or relist the same shard filename. I guess this sort of stuff maybe is obvious if you have the chance to ask your Facebook engineer colleagues questions over a lunch, but to us it wasn't obvious what best practices were, since it's not described in the docs. Anyway. Maybe the pretraining task along with the training args I researched in trying to replicate it may be of interest to you: https://github.com/kb-labb/kb_bart/blob/main/train_bart_args.sh . I cannot guarantee that it is identical to actual BART pretraining settings, but it was my best attempt based on reading the paper and source code. Edit: Seems BART author replied in the issue thread with the actual [args they used in pretraining](https://github.com/facebookresearch/fairseq/issues/1899#issuecomment-1069429320) . <|||||> @sajastu You can use this [repo](https://github.com/morganmcg1/rotobart/blob/main/data_collator.py) and in the HF [run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py) script you have to: * replace `AutoModelForMaskedLM` by `AutoModelForSeq2SeqLM` * replace `data_collator` by `DataCollatorForDenoisingTasks` (token infilling + sentence permutation) * or replace `data_collator` by `DataCollatorForTextInfilling` (you need to process decoder_input_ids for this one) I'm not the author of this repo but @patrickvonplaten is one of them. You need to convert numpy outputs from the collator to torch ones if you dont want to rewrite it. 
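Something along these lines (a rough sketch only - the collator comes from the linked rotobart repo, and its import path and constructor arguments here are assumptions, so double-check them against that repo):

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from data_collator import DataCollatorForDenoisingTasks  # from the rotobart repo linked above (path assumed)

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")  # instead of AutoModelForMaskedLM

np_collator = DataCollatorForDenoisingTasks(tokenizer=tokenizer)  # constructor args are a guess

def collate_fn(features):
    # the rotobart collators return numpy arrays, so convert them to torch tensors for the Trainer
    batch = np_collator(features)
    return {k: torch.as_tensor(v) for k, v in batch.items()}
```

Then pass `model` and `data_collator=collate_fn` to the `Trainer` in `run_mlm.py` instead of the masked-LM defaults.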
You can also check [#12370](https://github.com/huggingface/transformers/pull/12370) for the pytorch implementation of the `DataCollatorForTextInfilling` which is very similar. That's it! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>We've released [nanoT5](https://github.com/PiotrNawrot/nanoT5) that reproduces T5-model (similar to BART) pre-training. You can take a look! Any suggestions are more than welcome.
transformers
18,029
closed
Fix T5/mT5 tests for TensorFlow
Some tests used `tf.reduce_sum()` on a per-token loss. Since we now return a scalar loss to enable XLA compilation and match PyTorch, the expected loss in these tests is wrong. If we change the tests to use `tf.reduce_mean()` and divide the expected value by the number of tokens, then the tests work with both the modern and legacy losses.
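To make the change concrete, a tiny illustration of the adjustment (dummy numbers, not the actual expected values from the tests):

```python
import tensorflow as tf

per_token_loss = tf.constant([2.0, 4.0, 6.0])  # placeholder per-token losses

# legacy expectation: compare tf.reduce_sum(per_token_loss) against EXPECTED_SUM
# updated expectation: the model returns a scalar mean, so scale the target accordingly
loss = tf.reduce_mean(per_token_loss)
expected = 12.0 / int(tf.size(per_token_loss))  # EXPECTED_SUM divided by the number of tokens
assert abs(float(loss) - expected) < 1e-6
```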
07-05-2022 14:39:51
07-05-2022 14:39:51
_The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh I ran them all locally and they passed!
transformers
18,028
closed
Evaluation results of `run_qa.py` are abnormally low
### System Info - `transformers` version: 4.21.0.dev0 - Platform: Linux-5.15.0-39-generic-x86_64-with-glibc2.35 - Python version: 3.9.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0+cu102 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Do * `conda create -n tempoenv python=3.9` * `git clone https://github.com/huggingface/transformers.git` * `cd transformers` * `pip install .[torch]` * `pip install datasets` and ``` python examples/pytorch/question-answering/run_qa.py --model_name_or_path distilbert-base-uncased-distilled-squad \ --dataset_name squad --do_eval --max_seq_length 384 \ --doc_stride 128 --output_dir /tmp/debug_squad/ ``` giving ``` ***** eval metrics ***** eval_exact_match = 0.369 eval_f1 = 0.4673 eval_samples = 100 ``` which is much lower than what we used to see (about 0.85 f1), see e.g. https://huggingface.co/distilbert-base-uncased-distilled-squad or https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=squad . It's not clear why the eval is only on 100 samples. ### Expected behavior Higher f1 score
07-05-2022 14:26:25
07-05-2022 14:26:25
Removing the cached arrow processed dataset solved the issue.
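For anyone else hitting this, forcing the preprocessing to be redone should have the same effect as deleting the cache by hand; for example, assuming the script's `--overwrite_cache` flag (which ignores previously cached preprocessed sets):

```bash
python examples/pytorch/question-answering/run_qa.py \
  --model_name_or_path distilbert-base-uncased-distilled-squad \
  --dataset_name squad --do_eval --max_seq_length 384 \
  --doc_stride 128 --overwrite_cache --output_dir /tmp/debug_squad/
```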
transformers
18,027
closed
Fix tensor device mismatch in OPT model
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> It's a straightforward fix on opt model when using cuda device ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @younesbelkada
07-05-2022 13:49:15
07-05-2022 13:49:15
_The documentation is not available anymore as the PR was closed or merged._<|||||>Error msg before this PR: ![image](https://user-images.githubusercontent.com/11887940/177463291-8b181f91-7c12-43bf-9628-b604d200eaaf.png) <|||||>Hi @mzmssg ! thanks for proposing this fix ;) ! I am wondering why the gpu tests are not failing in our case, could you share with us your PyTorch + cudatoolkit version? Sorry fo getting back late on this<|||||>> Hi @mzmssg ! thanks for proposing this fix ;) ! I am wondering why the gpu tests are not failing in our case, could you share with us your PyTorch + cudatoolkit version? Sorry fo getting back late on this Hi @younesbelkada, I tried pytorch 1.6 and 1.12 on cuda10, 1.12 works fine while 1.6 gives errors. I think that's why it pass the test on your end. And I found that even on pytorch 1.12 the behavior is a bit strange, it allows max operations on cuda tensor and cpu **scalar** but not on cuda tensor and cpu 1-d tensor. So I think it's better to set the device explicitly, your opinion? Below are my test: ``` torch==1.6 >>> aa = torch.rand([2, 2]).cuda() >>> bb = torch.rand([]) >>> torch.max(aa, bb) Traceback (most recent call last): File "<stdin>", line 1, in <module> RuntimeError: iter.device(arg).is_cuda() INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/cuda/Loops.cuh":61, please report a bug to PyTorch. >>> cc = torch.rand([1]) >>> torch.max(aa, cc) Traceback (most recent call last): File "<stdin>", line 1, in <module> RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! torch==1.12 >>> aa = torch.rand([2, 2]).cuda() >>> bb = torch.rand([]) >>> torch.max(aa, bb) tensor([[0.9149, 0.9149], [0.9149, 0.9149]], device='cuda:0') >>> cc = torch.rand([1]) >>> torch.max(aa, cc) Traceback (most recent call last): File "<stdin>", line 1, in <module> RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! ```
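For context, the device-aware version of the pattern discussed above looks roughly like this (a simplified sketch, not the exact OPT attention code):

```python
import torch

attn_weights = torch.rand(2, 4, 4).cuda()  # assumes a CUDA device is available

# build the comparison value on the same device (and dtype) as attn_weights,
# so older torch versions don't hit a cuda/cpu mismatch inside torch.max
min_value = torch.tensor(torch.finfo(attn_weights.dtype).min, device=attn_weights.device)
attn_weights = torch.max(attn_weights, min_value)
```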
transformers
18,026
closed
Load sharded PyTorch model to Flax (from the Hub)
# What does this PR do? Implements the automatic conversion from `pt` to `flax` if the model is sharded but does not already have Flax weights. This could be useful for automatic conversion. ## Who can review? A few tests still need to be implemented; for now this works well with `hf-internal-testing/tiny-random-bert-sharded`.
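A usage sketch of what this enables (the checkpoint name is the testing repo mentioned above; `from_pt=True` triggers the on-the-fly conversion):

```python
from transformers import FlaxBertModel

# the repo only has sharded PyTorch weights; the shards are loaded and converted on the fly
model = FlaxBertModel.from_pretrained(
    "hf-internal-testing/tiny-random-bert-sharded", from_pt=True
)
```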
07-05-2022 13:09:18
07-05-2022 13:09:18
_The documentation is not available anymore as the PR was closed or merged._<|||||>Follows #17537
transformers
18,025
closed
`log_metrics` method of `Trainer` is using `print` instead of `logging`
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.0-86-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.13 - Huggingface_hub version: 0.1.2 - PyTorch version (GPU?): 1.10.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @sgugger @stas00 (related to #12276) ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction run the following code snippet ``` python3 from transformers import BertConfig, Trainer, BertForSequenceClassification trainer = Trainer(BertForSequenceClassification(BertConfig())) trainer.log_metrics('eval', {'accuracy': 0.9}) ``` ### Expected behavior I expect the output should be something like the following, which comes from `logging` library, and can be easily redirected to somewhere else with proper configuration. ``` [INFO|trainer_pt_utils.py:908] 2022-07-05 12:32:08,846 >> ***** eval metrics ***** [INFO|trainer_pt_utils.py:913] 2022-07-05 12:32:08,846 >> accuracy = 0.9 ``` The actual output comes from a `print()`, which is inconsistent with other part of this library, and cannot be easily redirected unless we just redirect `stdout` itself. https://github.com/huggingface/transformers/blob/f0982682bd6fd0b438dda79ec45f3a8fac83a985/src/transformers/trainer_pt_utils.py#L948-L953 **BEFORE** #12276, the code looked like the following, which behaves as expected. https://github.com/huggingface/transformers/blob/a4ed074d4b8c1ab0a1045bd86963ef209c3c467f/src/transformers/trainer_pt_utils.py#L908-L913
07-05-2022 12:45:59
07-05-2022 12:45:59
In short, I want to redirect logging info (including metrics) with the `logging` library, but a bare `print` does not allow me to do that.<|||||>Why are you not using `save_metrics` then? All our examples use both the prints of `log_metrics` and `save_metrics` to have them accessible in a file.<|||||>After digging into the code, I realized what I want is to log the metrics of the evaluation loop, which should be handled with callbacks and is not related to `log_metrics` or `save_metrics`. Sorry for the wrong report.<|||||>No problem, glad you found a solution!
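For anyone landing here with the same goal, a minimal sketch of the callback route mentioned above (illustrative only - the callback name is made up, but `TrainerCallback.on_evaluate` does receive the metrics dict):

```python
import logging

from transformers import TrainerCallback

logger = logging.getLogger(__name__)


class LogEvalMetricsCallback(TrainerCallback):
    """Route evaluation metrics through `logging` instead of a bare `print`."""

    def on_evaluate(self, args, state, control, metrics=None, **kwargs):
        if metrics:
            for name, value in metrics.items():
                logger.info("eval %s = %s", name, value)


# usage: Trainer(..., callbacks=[LogEvalMetricsCallback()])
```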
transformers
18,024
closed
trainer.push_to_hub() doesn't work
### System Info after I trained via Trainer.train(), I found I can't push my model to hub even though there is no conflict or so. I could save my model to local by trainer.save_model("path/to/local/model"). ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ` def compute_metrics(pred): pred_logits = pred.predictions pred_ids = np.argmax(pred_logits, axis=-1) pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id pred_str = processor.batch_decode(pred_ids) # we do not want to group tokens when computing the metrics label_str = processor.batch_decode(pred.label_ids, group_tokens=False) cer = cer_metric.compute(predictions=pred_str, references=label_str) return {"cer": cer} from transformers import Wav2Vec2ForCTC cer_metric = load_metric("cer", revision="master") model = Wav2Vec2ForCTC.from_pretrained( "facebook/wav2vec2-base", ctc_loss_reduction="mean", pad_token_id=processor.tokenizer.pad_token_id, vocab_size=48 ) model.to('cuda') from transformers import TrainingArguments training_args = TrainingArguments( output_dir= repo_name, group_by_length=True, per_device_train_batch_size=32, evaluation_strategy="steps", num_train_epochs=30, fp16=True, gradient_checkpointing=True, save_steps=500, eval_steps=500, logging_steps=500, learning_rate=1e-4, weight_decay=0.005, warmup_steps=1000, save_total_limit=2, push_to_hub=True, ) from transformers import Trainer trainer = Trainer( model=model, data_collator=data_collator, args=training_args, compute_metrics=compute_metrics, train_dataset=rwcp_ssd_ono["train"], eval_dataset=rwcp_ssd_ono["eval"], tokenizer=processor.feature_extractor, ) trainer.train() trainer.push_to_hub() ` ### Expected behavior ``` --------------------------------------------------------------------------- CalledProcessError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/huggingface_hub/repository.py in git_commit(self, commit_message) 903 try: --> 904 lfs_config = "git config lfs.customtransfer.multipart" 905 run_subprocess(f"{lfs_config}.path huggingface-cli".split(), self.local_dir) /opt/conda/lib/python3.7/subprocess.py in run(input, capture_output, timeout, check, *popenargs, **kwargs) 511 raise CalledProcessError(retcode, process.args, --> 512 output=stdout, stderr=stderr) 513 return CompletedProcess(process.args, retcode, stdout, stderr) CalledProcessError: Command '['git', 'commit', '-m', 'End of training', '-v']' returned non-zero exit status 1. 
During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) /tmp/ipykernel_2655812/1405518398.py in <module> ----> 1 trainer.push_to_hub() /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in push_to_hub(self, commit_message, blocking, **kwargs) 2675 return 2676 -> 2677 git_head_commit_url = self.repo.push_to_hub(commit_message=commit_message, blocking=blocking) 2678 # push separately the model card to be independant from the rest of the model 2679 if self.args.should_save: /opt/conda/lib/python3.7/site-packages/huggingface_hub/repository.py in push_to_hub(self, commit_message, blocking, clean_ok) 1191 return self.git_head_commit_url() 1192 -> 1193 def git_checkout(self, revision: str, create_branch_ok: Optional[bool] = False): 1194 """ 1195 git checkout a given revision /opt/conda/lib/python3.7/site-packages/huggingface_hub/repository.py in git_commit(self, commit_message) 909 ) 910 except subprocess.CalledProcessError as exc: --> 911 raise EnvironmentError(exc.stderr) 912 913 def auto_track_binary_files(self, pattern: Optional[str] = ".") -> List[str]: OSError: On branch main Your branch is ahead of 'origin/main' by 2 commits. (use "git push" to publish your local commits) nothing to commit, working tree clean ```
07-05-2022 11:47:01
07-05-2022 11:47:01
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Facing a similar issue as above, has anyone found a solution to it?<|||||>Having the same issue.
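In case it helps, one possible workaround (untested against this exact failure) is to bypass the Trainer's git-based repo and push the saved artifacts directly with their own `push_to_hub` methods:

```python
trainer.save_model("path/to/local/model")

# hypothetical repo id - replace with your own
model.push_to_hub("your-username/your-model-name")
processor.feature_extractor.push_to_hub("your-username/your-model-name")
```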
transformers
18,023
closed
Fix codeparrot deduplication - ignore whitespaces
The current exact deduplication doesn't ignore whitespaces before computing the hash.
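That is, something along these lines (an illustrative sketch of the intended behaviour, not the actual script code):

```python
import hashlib
import re


def content_hash(code: str) -> str:
    # collapse all whitespace before hashing, so files that differ only in
    # spacing/newlines get the same hash and count as exact duplicates
    normalized = re.sub(r"\s+", "", code)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()
```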
07-05-2022 10:19:29
07-05-2022 10:19:29
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,022
closed
BLOOM Flax
# What does this PR do? An attempt at adding a Flax implementation of BLOOM - original PR from @haileyschoelkopf #17761 ## TODOs: - [x] alibi shifting for batched generation - [ ] change mask fill value cc @patil-suraj - [x] optimize code (alibi creation + mask creation)
07-05-2022 09:30:20
07-05-2022 09:30:20
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18022). All of your documentation changes will be reflected on that endpoint.<|||||>Is this ready to merge @younesbelkada ? @sanchit-gandhi @patil-suraj could you take a final look?<|||||>Yeah ready to merge for me, for partitioning compatibility I think that we can address another PR for that<|||||>Also we need to add the credits to @haileyschoelkopf for starting the contribution<|||||>Modelling file and tests LGTM. Am in agreement that the the named axes should come as a follow-up PR - better to modify a script that's already in-place and working rather than merging a complete overhaul of how we structure our code in one go.<|||||>Good to merge for me once @patil-suraj and @sanchit-gandhi give the green light here :-) <|||||>Thanks all for your comments! I had to fix a small test since the main modeling code has changed a bit in #18344 ! I think this PR is ready to be merged 💪 cc @patil-suraj <|||||>@patil-suraj @sanchit-gandhi - should we merge this one? <|||||>Good for me! Awaiting the green light from @patil-suraj who requested a look over the modelling code prior to merging. Regarding `setup`, I previously asked in the G Chat with the JAX/Flax guys how to make this work with `scan`. Think it went under the radar, so will bring this issue back up!<|||||>Hi @sanchit-gandhi @patil-suraj @patrickvonplaten I think that the PR is ready for a final round of review! 💪 I proposed a potential solution to re-write the model definition with `setup` but I am unsure how this fits the HF+Jax approach regarding `scan` feature<|||||>@sanchit-gandhi @patil-suraj could you check here? I won't have time do review the PR this week I think<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Should we merge this one ? cc @patrickvonplaten @patil-suraj @sanchit-gandhi <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18022). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Both https://github.com/huggingface/transformers/pull/17761 and this PR show a lot of work. Why isn't it merged? <|||||>Adding this to my TODOs
transformers
18,021
closed
Trainer dataloader_drop_last=True does not seem to work in a distributed setting
### System Info dataloader_drop_last=True works fine in the single-GPU setting, but fails when using distributed launch `python -m torch.distributed.launch --nproc_per_node $NUM_GPU --master_port $PORT_ID train_model.py`: it fails when computing metrics, and the shapes I checked are not as expected. Thank you team, @sgugger ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Define a custom compute_metrics function, set the trainer with dataloader_drop_last=True, and use distributed launch on a single machine with multiple GPUs: `python -m torch.distributed.launch --nproc_per_node $NUM_GPU --master_port $PORT_ID train_model.py`. You can check the shape of the pred input in the compute_metrics function and see that it is not correct (the last batch isn't dropped). ### Expected behavior dataloader_drop_last=True should work fine when using `python -m torch.distributed.launch --nproc_per_node $NUM_GPU --master_port $PORT_ID train_model.py`
07-05-2022 07:55:01
07-05-2022 07:55:01
I have trouble understanding the bug you are reporting. If you set `dataloader_drop_last=True`, the dataloader you use for evaluation will have its last batch truncated and yes, you won't have the right number of examples.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,020
closed
[TensorFlow] Adding GroupViT
# Adding TensorFlow version of GroupViT This PR adds the TensorFlow version of [GroupViT](https://github.com/NVlabs/GroupViT). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? CC: @LysandreJik @Rocketknight1
07-05-2022 05:53:23
07-05-2022 05:53:23
_The documentation is not available anymore as the PR was closed or merged._<|||||>Tagging @NielsRogge @Rocketknight1 @gante for the PR review!<|||||>@ariG23498 The parsing issue seen in the `check_repository_consistency` tests is arising because of `@keras_serializable` decorators being below `# Copied from` statements. If I check out your branch, I can run `make fix-copies` successfully if I move the decorator to be above the comment. <|||||>Hey @ariG23498 👋 Regarding failing tests: - The tests in `run_tests_torch_and_tf` seem to need a slightly higher tolerance. Let's leave those as their are for now (failing), as we might be able to fix them throughout the PR review process :) - The other errors is because the attribute `base_model_prefix` does not match the name of the attribute that is used to hold the models. I.e. it should be changed to `base_model_prefix = "group_vit"` (notice the `_` :D ) [`base_model_prefix` wrong -> [this line](https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_tf_utils.py#L1084) does not get matched with the correct class -> raises the exception in the function] <|||||>Hey @gante thanks for the insights. > The other errors is because the attribute base_model_prefix does not match the name of the attribute that is used to hold the models. I.e. it should be changed to base_model_prefix = "group_vit" (notice the _ :D ) [base_model_prefix wrong -> [this line](https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_tf_utils.py#L1084) does not get matched with the correct class -> raises the exception in the function] The [PyTorch](https://github.com/huggingface/transformers/blob/8cb5ecd912e09301be126c6ce6e9a22ca7153da4/src/transformers/models/groupvit/modeling_groupvit.py#L769) implementation has `groupvit` as the `base_model_prefix`. Would this be a problem if I changed the name for the TensorFlow port? <|||||>@ariG23498 Good point -- `groupvit` (no underscore) should be used everywhere then!<|||||>Fixes #18543 CC: @NielsRogge <|||||>References - hard_softmax: https://gist.github.com/ariG23498/08cdae21637b8b61bdd6d21d11719fb3 - resize_attention_map: https://gist.github.com/ariG23498/3777f8d9be25de8ae782256f5aacb2c5<|||||>@amyeroberts I have added the `shape_list` back to places which are copied from other parts of the repository. This was done to pass the `make fixup`.<|||||>Hey @amyeroberts and @gante I had to swap back in the `shape_list` and also the `axis=range(len(shape_list(logits)))[dim]` to pass the tests.<|||||>Perhaps relevant to the TF-PT mismatch: https://github.com/huggingface/transformers/pull/18555#issuecomment-1230066514<|||||>> Perhaps relevant to the TF-PT mismatch: [#18555 (comment)](https://github.com/huggingface/transformers/pull/18555#issuecomment-1230066514) Hey @gante I do not think this might be the source of the problem. `nn.functional.interpolate` has been used twice in the torch code: - [`resize_attention_map`](https://github.com/ariG23498/transformers/blob/feat/groupvit-tf/src/transformers/models/groupvit/modeling_groupvit.py#L135) - [`interpolate_pos_encoding`](https://github.com/ariG23498/transformers/blob/feat/groupvit-tf/src/transformers/models/groupvit/modeling_groupvit.py#L416) I have a [colab notebook](https://gist.github.com/ariG23498/3777f8d9be25de8ae782256f5aacb2c5) that takes care of the tf and pt equivalence for the `resize_attention_map`. 
As for the `interpolate_pos_encoding`, it has been taken from the ViT implementation (which is assumed to have no equivalence problem?).<|||||>@ariG23498 With the selected seeds, it passes now on CircleCI. This depends also on hardware, so it may or may not work when running on different machines, sadly.<|||||>It passes on GCP CPU-only VM too. For GPU, we might need different seeds, but we can do that later.<|||||>`tests/models/groupvit/test_modeling_tf_groupvit.py::TFGroupViTModelTest::test_keras_fit` failed however. I will leave @ariG23498 to check this one 🙏 <|||||>Tagging @sgugger for a review (as core maintainer) 🎉 After this PR has 3 approvals (including Sylvain's), these are the instructions to finalize the PR: 1. Add the TF weights for the model(s). Assuming you are logged in to the Hub in your terminal (if not, run `huggingface-cli login`) and are on this branch, run `transformers-cli pt-to-tf --model-name foo/bar`, where `foo/bar` is the target model repo. Assuming it passes the CLI's checks, it will open a PR on the model hub. Please tag `@_nielsr, @_sgugger, @_joaogante` (⚠️ removing the `_`) in the opened Hub PRs :) 2. After the TF model weights are merged, make sure there are no `from_pt=True` in the PR. Then, re-run the test suite for the model locally and confirm that all tests pass (command: `NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 py.test -vv tests/models/groupvit/modeling_tf_groupvit.py`)<|||||>The [`run_tests_hub`](https://app.circleci.com/pipelines/github/huggingface/transformers/47547/workflows/26d9127c-5aac-43e9-99ff-8a42a25aa0e1/jobs/563161) fails for the time being. @ydshieh thinks it is irrelevant to this PR. All the relevant tests pass. Pinging @Rocketknight1 for the serving outputs bit (as mentioned by @sgugger [here](https://github.com/huggingface/transformers/pull/18020#pullrequestreview-1112320572)).<|||||>@Rocketknight1 The changes you have suggested are taken care of! @ydshieh the tests pass as well! @gante @amyeroberts should we talk about the deprecated code used [here](https://github.com/huggingface/transformers/pull/18020#discussion_r974312323), or should we have a different PR to handle this? @gante over to you for saving the TF model to hub 🫂<|||||>FYI: The failing tests are not related to the `GroupViT` model.<|||||>The failure is unrelated. As long as all models have been moved this PR can be merged.<|||||>@gante over to you now! 🤗 <|||||>@ariG23498 adding TF weights as PRs to the hub is now available to everyone -- you can have a go at it, following [the instructions above](https://github.com/huggingface/transformers/pull/18020#issuecomment-1250738894)! I can also do it. Let me know how you want to proceed :)<|||||>Thanks @gante I will give it a try myself!<|||||>@gante while running `transformers-cli pt-to-tf --model-name nvidia/groupvit-gcc-yfcc` I get the following error. 
``` List of maximum hidden layer differences above the threshold (5e-05): text_model_outputhidden_states[1]: 9.155e-05 text_model_outputhidden_states[2]: 9.155e-05 text_model_outputhidden_states[3]: 9.155e-05 text_model_outputhidden_states[4]: 9.155e-05 text_model_outputhidden_states[5]: 9.155e-05 text_model_outputhidden_states[6]: 9.155e-05 text_model_outputhidden_states[7]: 9.155e-05 text_model_outputhidden_states[8]: 9.155e-05 text_model_outputhidden_states[9]: 9.155e-05 text_model_outputhidden_states[10]: 9.155e-05 text_model_outputhidden_states[11]: 9.155e-05 text_model_outputhidden_states[12]: 9.155e-05 vision_model_outputhidden_states[1]: 2.441e-04 vision_model_outputhidden_states[2]: 7.629e-05 vision_model_outputhidden_states[3]: 7.629e-05 ``` I think this is because of the seed issue that @ydshieh had pointed out. How would you want me to proceed in this case?<|||||>@gante thanks for the help. The PR for the TF weights on HF Hub are [here](https://huggingface.co/nvidia/groupvit-gcc-yfcc/discussions/1).<|||||>It seems ready to be merged, except for an inconsistency that arises from a `# Copied from` statement -- lmk if you need a hand!<|||||>@gante I have taken care of the `copied from` inconsistency. I think we are good to go.<|||||>Merged! Thank you for making the TF users happier, @ariG23498 🧡
transformers
18,019
closed
LongT5 Models Are Not Initialized With Pretrained Weights
### System Info `transformers` version: 4.20.1 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): 2.8.2 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @LysandreJik @stancld @patrickvonplaten ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I have tried using LongT5 fine-tuning on a long range summarization task on a custom dataset (consider it like CNN/DM in that it is highly extractive). While long-t5-tglobal-base works well (I am able to converge on a validation loss of ~1.25 and ROUGE-2 of ~21), the long-t5-local-base, long-t5-local-large, and long-t5-tglobal-large all end up getting me training/validation losses of 200+ with ROUGE scores of exactly 0, making me believe that these models haven't actually been initialized with Google's weights. Here are the json outputs associated with trainer.evaluate() after 1 epoch of training: **google/long-t5-local-base** {'epoch': 1.0, 'eval_gen_len': 1023.0, 'eval_loss': 366.21673583984375, 'eval_rouge1': 0.0, 'eval_rouge2': 0.0, 'eval_rougeL': 0.0, 'eval_rougeLsum': 0.0, 'eval_runtime': 37.9896, 'eval_samples_per_second': 0.132, 'eval_steps_per_second': 0.053} **google/long-t5-tglobal-base (This one works correctly)** {'epoch': 1.0, 'eval_gen_len': 708.2, 'eval_loss': 1.6017440557479858, 'eval_rouge1': 35.7791, 'eval_rouge2': 11.5732, 'eval_rougeL': 19.1541, 'eval_rougeLsum': 31.8491, 'eval_runtime': 34.8695, 'eval_samples_per_second': 0.143, 'eval_steps_per_second': 0.057} **google/long-t5-local-large** {'epoch': 0.77, 'eval_gen_len': 1023.0, 'eval_loss': 252.44662475585938, 'eval_rouge1': 0.0, 'eval_rouge2': 0.0, 'eval_rougeL': 0.0, 'eval_rougeLsum': 0.0, 'eval_runtime': 89.2506, 'eval_samples_per_second': 0.056, 'eval_steps_per_second': 0.034} **google/long-t5-tglobal-large** {'epoch': 0.77, 'eval_gen_len': 1023.0, 'eval_loss': 241.6276397705078, 'eval_rouge1': 0.0, 'eval_rouge2': 0.0, 'eval_rougeL': 0.0, 'eval_rougeLsum': 0.0, 'eval_runtime': 89.9801, 'eval_samples_per_second': 0.056, 'eval_steps_per_second': 0.033} For reproduction, just run the standard Huggingface PyTorch training script for summarization on any official dataset (CNN/DM, XSum, etc.). Note that I haven't tried the 3B parameter versions so cannot speak to whether this problem affects them as well. ### Expected behavior All four models should have a low validation loss when fine tuning on summarization (as opposed to three of them having 300+ validation losses as if they are randomly initialized).
07-04-2022 22:49:24
07-04-2022 22:49:24
Hey @reelmath, hmm - we've verified that `google/long-t5-tglobal-large` can be correctly fine-tuned - see: https://huggingface.co/Stancld/longt5-tglobal-large-16384-pubmed-3k_steps where @stancld fine-tuned the model on the PubMed summarization dataset. Could you maybe copy-paste the command that you used to fine-tune those models here? <|||||>Also cc @patil-suraj - can you double check quickly that the uploaded weights are the correct ones?<|||||>from transformers import AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer import torch model_checkpoint = google/long-t5-tglobal-base # substitute with alternate model checkpoints model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint) from datasets import load_metric metric = load_metric("rouge") batch_size = 4 model_name = model_checkpoint.split("/")[-1] args = Seq2SeqTrainingArguments( f"{model_name}-summaries", evaluation_strategy = "steps", save_strategy = "steps", logging_strategy = "epoch", learning_rate=5e-4, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, weight_decay=0.01, gradient_accumulation_steps=4, warmup_steps=3000, num_train_epochs=1, eval_steps=100, save_steps=100, load_best_model_at_end=True, predict_with_generate=True, gradient_checkpointing=True, adafactor=True, fp16=False ) data_collator = DataCollatorForSeq2Seq(tokenizer, model=model) import nltk import numpy as np def compute_metrics(eval_pred): predictions, labels = eval_pred decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True) # Replace -100 in the labels as we can't decode them. labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) # Rouge expects a newline after each sentence decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds] decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels] result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True) # Extract a few results result = {key: value.mid.fmeasure * 100 for key, value in result.items()} # Add mean generated length prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions] result["gen_len"] = np.mean(prediction_lens) result["eval_rougeLsum"] = result["rougeLsum"] return {k: round(v, 4) for k, v in result.items()} trainer = Seq2SeqTrainer( model, args, train_dataset=training_dataset, eval_dataset=validation_dataset, data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics ) import nltk nltk.download('punkt') trainer.train() trainer.evaluate() @patrickvonplaten<|||||>Hello @reelmath, don't you have please available any loss logs? Not sure if `lr=5e-4` with a batch size of 16 is not too high, so there might be a danger of the model diverging once the training starts?<|||||>@stancld Here are the results I got from fine tuning LongT5 TGlobal Large. 
I dropped the learning rate to 1e-5 and ran bsz=2 with 8 accumulation steps (so effective bsz of 16)

| Step | Training Loss | Validation Loss | Rougelsum | Rouge1 | Rouge2 | Rougel | Gen Len |
|---|---|---|---|---|---|---|---|
| 20 | 246.385800 | 241.613937 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 2047.000000 |
| 40 | 244.759000 | 241.159866 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 2047.000000 |
| 60 | 245.665300 | 240.319992 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 2047.000000 |
| 80 | 246.063400 | 239.445801 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 2047.000000 |

Logs attached below: [runs-20220708T155211Z-001.zip](https://github.com/huggingface/transformers/files/9073383/runs-20220708T155211Z-001.zip)
Then, you need to download the checkpoints from GDrive and subsequently convert them with the script attached. Please let me know if this works for you :] Also, definitely stay in touch if you have any issues with converting those checkpoints.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
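For reference, the workaround reported earlier in this thread boils down to loading the Flax weights and converting them locally (requires flax/jax to be installed):

```python
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("google/long-t5-tglobal-large", from_flax=True)
model.save_pretrained("long-t5-tglobal-large-pt")  # reuse the converted PyTorch weights later
```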
transformers
18,018
closed
Generate: deprecate generation relying on default `max_length`
# What does this PR do? (EDITED) This PR applies the outcome of two discussions: 1. [Favor the use of `max_new_tokens`, as opposed to `max_length`](https://github.com/huggingface/transformers/issues/17414#issuecomment-1148836312). This introduces `max_new_tokens` to TF and Flax. 2. [Update the docstring about `scores`](https://github.com/huggingface/transformers/issues/17868#issuecomment-1171148445), to make clear that its length depends on `max_new_tokens` and other factors. Review suggestion: review PT first. TF and Flax are mostly copy/paste. Closes #17868
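A small usage sketch of the preferred argument (illustrative; gpt2 is just an example checkpoint):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hello, my name is", return_tensors="pt")

# preferred: generate exactly 20 new tokens, regardless of the prompt length
out = model.generate(**inputs, max_new_tokens=20)

# still possible: total length (prompt + new tokens) for decoder-only models
out = model.generate(**inputs, max_length=inputs["input_ids"].shape[1] + 20)
```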
07-04-2022 19:34:34
07-04-2022 19:34:34
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot for opening this very-much needed PR @gante! Before moving forward, I'd like to discuss a bit if we should remove `max_length` in a v5 version or not because I'm a bit torn in-between, but don't really think we can or should remove `max_length` ever. First, I want to lay out my opinion about `generate` and the future of `generate` a bit in general and second want to give my opinion of `max_new_tokens` vs `max_length`. **First:** My general opinion regarding `generate` and how it should develop further: 1) `generate` is very important as it's used by all GPT-like and all encoder-decoder models. The are three methods that are used in 99% of the cases (being `greedy_search`, `sample` and `beam_search`) - the other generation methods are somewhat special. 2) It was a mistake to put the generation parameters in the `PreTrainedConfig` and have the function arguments default to them. This makes it extremely difficult to deal with breaking changes now and is also too much "black magic" for the user 3) We have to be extremely careful with adding new features to `generate` since the function quickly explodes (see [this PR](https://github.com/huggingface/transformers/pull/17920#issuecomment-1177306711)) 4) `generate` != generation pipeline . The `generate` function should limit the amount of "black magic" *a.k.a.* falling back to certain defaults as much as possible. E.g. in PyTorch's `generate` we create the `attention_mask` automatically from the padding id. I've not copied these things to Flax generate as I think the should not have been done as it makes the method hard to understand and now limits us severely in adding good error messages, keeping the function understandable. Now from this perspective, if somehow possible, I'd like to long-term remove the "fall-back" mechanism to the config generation arguments and delete the generation parameters from the config fully (I'd be happy with an optional generation config instead though). This would help us enormously to: - a) Change this in the future since it generate won't rely on thousands of saved configs - b) Very much help the user to better understand what's going on / remove the black magic - c) Keep `generate` slim by removing many edge cases. What happens if `max_length` is defined before it falls back to `config.max_length` / what after. How to deal with conflicting config generation params and config generation params, etc... **Second:** Now from that perspective, I think we should not remove `max_length` since it has some clear advantages over `max_new_tokens`, but rather move towards removing the default of `max_length=20` and changing all docs to use `max_new_tokens`, but allow the user to use both. More specifically: **IMO `max_new_tokens` is definitely better than `max_length` because of the following:** - `max_length` has a different meaning for "decoder-only" models such as GPT and "encoder-decoder" models such as T5. For "decoder-only" models the `max_length` is understood as the current length of the input (`input_ids.shape` + `max_new_tokens`) whereas for "encoder-decoder" models `max_length` is understood as `max_new_tokens -1` (-1 because there is already the dummy decoder token id), which is essentially the same as `max_new_tokens` which is regardless of what the users has put as `input_ids` (since those are processed only once by the encoder). 
=> Switching to `max_new_tokens` here reduces this knowledge burden as both "decoder-only" and "encoder-decoder" behave the same - `max_new_tokens` is more "natural" as users don't consider the input to the model as part of the generation & it avoids weird errors like `max_length` < `len(input_ids)` **However** - `max_length` has some very useful use cases. E.g. if one always want to generate to the maximum allowed length of the model, it's very easy to do this with `max_length` but harder with `max_new_tokens` as `max_new_tokens` needs to be derived dynamically for every input - `max_length` is simply used too much IMO So moving forward, I would propose to start with a warning if the user passes neither `max_length` nor `max_new_tokens` and state that this won't be allowed anymore in v5. Currently the model would just fall back to a somewhat arbitrary default of 20 which is not great IMO. At the same time we can strongly advice the user to use `max_new_tokens` instead of `max_length`. Would love to hear your (general) opinions here regarding `generate` @gante @sgugger @LysandreJik @patil-suraj @Narsil @sanchit-gandhi @thomwolf <|||||>I completely agree with the point of moving away from the config things that are not related to instantiating the model (like I've been doing for training parameters like gradient accumulation) and in general we might need some sort of "pipeline config" for the use of the model by default in a pipeline/widget, which could also be re-used for the parameters used by generate. Good for me to have the two arguments forever and ever while encouraging `max_new_tokens`, your arguments make sense.<|||||>I also agree with the reasoning and the suggested path forward 👍 <|||||>@patrickvonplaten @sgugger I've implemented the changes from the comments above 👍 Related to the `max_length`/`max_new_tokens`, the behavior is now the following: - Neither are set ➡️ raise a deprecation warning (will be an exception in v5), nudge towards `max_new_tokens`, and use the default for `max_length`; - Both are set ➡️ raise an exception; - `max_length` is set ➡️ nothing happens - `max_new_tokens` is set ➡️ updates `max_length` according to the prompt length<|||||>Your comment @patrickvonplaten is **exactly** what I think ! Also, not sure about v5 but using **neither** `max_length` nor `max_new_tokens` also make sense to me as a user. Using neither would mean just keep on generating while I ask you to stop, eventually eternally (if we had such a transformers + no max length and infinite compute power). Overriding the StoppingCriteria currently allows that. Another way would be expressing it as a real python generator for instance. ```python for new_token in model.generate(...): #do something ``` Again this is pretty theoretical, doesn't map really well to TF or Jax, but I think it's been proposed in other Issues, and is something to have in mind (probably to at least address this in docs and explain why you probably don't want to generate eternally :)) <|||||>@Narsil that seems like an interesting use case 🤔 What would be the default arguments, if setting neither was a valid option? Perhaps a Stopping condition that would target this use case? From what I've seen, the default arguments are one of the biggest pain points for new users (and even some experienced users), so whatever decisions we make in this PR should also cover that!
transformers
18,017
closed
Fix ORTTrainer failure on gpt2 fp16 training
# What does this PR do? Fixes [#11279 of onnxruntime](https://github.com/microsoft/onnxruntime/issues/11279) ## Context Optimum users reported that mixed-precision training on gpt2 with `optimum.onnxruntime.ORTTrainer` is broken since transformers>4.16.0. After investigation, the break comes from the [removal of `float()`](https://github.com/huggingface/transformers/pull/14321/files#r912791693) in gpt2 modeling from PR #14321. __Reproduction__ Run the optimum onnxruntime training example [run_glue.py](https://github.com/huggingface/optimum/blob/main/examples/onnxruntime/training/text-classification/run_glue.py) with: ```bash python run_glue.py \ --model_name_or_path gpt2 \ --task_name sst2 \ --do_train \ --do_eval \ --fp16 \ --output_dir /tmp/ort-gpt2-sst2/ ``` __Error Message__ ``` RuntimeError: /onnxruntime_src/orttraining/orttraining/python/orttraining_pybind_state.cc:752 onnxruntime::python::addObjectMethodsForTraining(pybind11::module&, onnxruntime::python::ExecutionProviderRegistrationFn)::<lambda(onnxruntime::training::OrtModuleGraphBuilder*, const pybind11::bytes&, const onnxruntime::training::OrtModuleGraphBuilderConfiguration&)> [ONNXRuntimeError] : 1 : FAIL : Type Error: Type parameter (T) of Optype (Where) bound to different types (tensor(float) and tensor(float16) in node (Where_201). ``` As mentioned in the error message, the forward pass with the onnxruntime InferenceSession fails on a __Where__ node in the graph, which corresponds to the [Where op](https://github.com/michaelbenayoun/transformers/blob/9ef5813d73b2af238e8820e117dd32415ce4c173/src/transformers/models/gpt2/modeling_gpt2.py#L206) in gpt2 modeling. The problem comes from the fact that after removing `float()`, during fp16 training, the inputs of Where have different dtypes (one in fp32 and one in fp16), which violates the op's definition in ONNX and leads to the failure. ## Who can review? @michaelbenayoun @patrickvonplaten, @LysandreJik ## Fix Ensure `attn_weights` and `value` have the same type in the exported ONNX IR.
07-04-2022 17:48:01
07-04-2022 17:48:01
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hello @JingyaHuang By looking at these 2 lines https://github.com/huggingface/transformers/blob/04ffba9a9a4d452b328eb7e57f7c87f2ed78bd10/src/transformers/models/decision_transformer/modeling_decision_transformer.py#L196-L197 `mask_value` is already of type `attn_weights.dtype`. Does the issue only occur when we use ONNX? (i.e. if we run the model in Python with FP16, does it work?). This issue seems strange. Do you happen to know which argument gets fp32 and which one gets fp16? <|||||>Hi @ydshieh, yes this issue only occurs with ONNX. When exporting the ONNX IR, `mask_value` is exported as a constant initializer (min of fp32) with dtype float32. Thus during mixed-precision training with onnxruntime, `attn_weights` will be in dtype fp16 while `mask_value` as a constant stays fp32 -> two inputs with different dtypes -> training fails. Here is the ONNX IR which illustrates what happened with the Where op: ![image](https://user-images.githubusercontent.com/44135271/177803009-ca35b67b-6614-4d83-a34a-33252e6e305d.png) __[EDIT]__ Here I made a mistake; according to the training graph, `mask_value` was actually successfully cast to fp16, but `attn_weights` was not. Check the local exported IR below.<|||||>And if we run the model with the PyTorch backend, there is no problem with the tricky tracing or op definition; it should work fine.<|||||>@JingyaHuang Thank you!<|||||>Hi @ydshieh, I've just double-checked the debug exported training onnx graph. Actually the `mask_value` has been cast to fp16 before the `Where` node, and it was `attn_weights` which was fp32, so the fix inserts another cast op to cast it from fp32 to fp16. The IR corresponds to this line https://github.com/huggingface/transformers/blob/2544c1434f8d831daff3fe6a925dced67bc70c64/src/transformers/models/decision_transformer/modeling_decision_transformer.py#L181 The IR before the fix: <img width="180" alt="image" src="https://user-images.githubusercontent.com/44135271/177832731-570a6ae4-7e84-4b57-ba11-b2cd29194ed9.png"> The IR after the fix: <img width="180" alt="image" src="https://user-images.githubusercontent.com/44135271/177832878-cc9480a8-fffb-4e24-9648-12a9dd28fb7a.png"> So this is exactly what we want for fp16 training.<|||||>Gently pinging @patrickvonplaten and @LysandreJik for a review. <|||||>I got the same error, but I use torch.fx and amp to train the GPT2 model. I fixed this error by adding `attn_weights.to(attn_weights.dtype)` in `torch.where`. I don't know why this fixes it, but it does.<|||||>Hi @TXacs , which version did you try? Could you try to install the latest version from `main`: ```bash pip install git+https://github.com/huggingface/accelerate ``` and see if you still have the issue (without your fix). Thanks!
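A hedged sketch of the dtype-alignment idea behind the fix (function and variable names are illustrative, not the exact merged diff): give both branches of `torch.where` the dtype of `value`, so the exported ONNX `Where` node only ever sees one type under fp16 mixed-precision training.

```python
import torch

def mask_attn_weights(attn_weights, causal_mask, value):
    # Both the "true" branch and the fill value share value.dtype before torch.where runs.
    mask_value = torch.full(
        [], torch.finfo(value.dtype).min, dtype=value.dtype, device=attn_weights.device
    )
    return torch.where(causal_mask, attn_weights.to(value.dtype), mask_value)

attn_weights = torch.randn(1, 1, 4, 4)                      # fp32, as in the reported failure
value = torch.randn(1, 1, 4, 8).half()                      # fp16 under mixed precision
causal_mask = torch.tril(torch.ones(4, 4, dtype=torch.bool))
print(mask_attn_weights(attn_weights, causal_mask, value).dtype)  # torch.float16
```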
transformers
18,016
closed
Update expected values in DecisionTransformerModelIntegrationTest
# What does this PR do? As the checkpoint was updated a few days ago.
07-04-2022 16:19:40
07-04-2022 16:19:40
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,015
closed
Constraint decoding using long constraints, model keeps resetting the constraint
### System Info transformers: 4.21.0 torch: 1.11.0+cpu python: 3.8.10 Linux-5.13.0-1033-gcp ### Who can help? @patrickvonplaten, @Narsil @patil-suraj ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Hi, I am currently working with the constrained decoding methods of the transformers. I have found that when the constraints get very long the model will at some point reset the constraint. Once the constraint is reset the model is no longer able to generate a sensible sequence and will thus keep resetting the constraint every couple tokens. I have posted a small example below, in this example I feed the entire text as the constraint and simply want the model to output the input: ```ruby from transformers import BartTokenizer, BartForConditionalGeneration, PhrasalConstraint tokenizer = BartTokenizer.from_pretrained("facebook/bart-base", extra_ids=0) model = BartForConditionalGeneration.from_pretrained("facebook/bart-base") long_text = """Amsterdam is the capital and most populous city of the Netherlands; with a population of 907,976 within the city proper, 1,558,755 in the urban area and 2,480,394 in the metropolitan area. Found within the Dutch province of North Holland, Amsterdam is colloquially referred to as the "Venice of the North", due to the large number of canals which form a UNESCO World Heritage Site. Amsterdam was founded at the Amstel, that was dammed to control flooding; the city's name derives from the Amstel dam. Originating as a small fishing village in the late 12th century, Amsterdam became one of the most important ports in the world during the Dutch Golden Age of the 17th century, and became the leading centre for the finance and trade sectors. In the 19th and 20th centuries, the city expanded and many new neighborhoods and suburbs were planned and built. The 17th-century canals of Amsterdam and the 19–20th century Defence Line of Amsterdam are on the UNESCO World Heritage List. Sloten, annexed in 1921 by the municipality of Amsterdam, is the oldest part of the city, dating to the 9th century.""" input_ids = tokenizer(long_text, return_tensors="pt", padding=True, truncation=True, max_length=1024).input_ids print(input_ids.shape) force_article_ids = input_ids[0].tolist() constraints = [PhrasalConstraint(force_article_ids)] outputs = model.generate(input_ids[0].unsqueeze(0), max_length=512, num_beams=2, constraints=constraints) tokenizer.decode(outputs[0], skip_special_tokens=True) ``` ### Expected behavior The expected output here should be the same as the input, however, the output I get is: ` ` ` Amsterdam is the capital and most populous city of the Netherlands; with a population of 907,976 within the city proper, 1,558,755 in the urban area and 2,480,394 in the metropolitan area. Found within the Dutch province of North Holland, Amsterdam is colloquially referred to as the "Venice of the North", due to the large number of canals which form a UNESCO World Heritage Site. 
Amsterdam was founded at the Amstel, that was dammed to control flooding; the city\'s name derives from the AmsterelAmstAmsterAmstadAmestAmstrAm stAm StAmStAmstedAm AmsterdamAmstadtAmstaAmstenAmsteAmstraAmtAmetAmthAmothAmstonAmlandAmastAmselAmandAmantAmatAmartAmAmAmottAmostAmotAmachtAmersonAmottenAmoorAmoldAmasterAm.Am ·AmnAmenAmethAmiensAmensAmienAmäAmeterAmogenAmetersAm SterAmmsAmmarkAm OstAmertAmendAmentAmoterAmtsAmisAmemAmusAmokAméAmoisAmierAmmAmereAmamAmæAmēAmosAmischAmorAmmetAmersAmesAmnesAm AmAmseeAmseAmaterAmmarAmmaAm 18AmmatAm18AmstatAmareAm 16AmambAm16AmstyAmameAmalAmasAmperAmplAmlemAmollAmolAmoleAmolandAmolenAmolloAmopAmopenAmelandAmoyAmophAmateurAmomAmodAmoudAmoAm oAm�AmmanAmhofAmofAmafAmoffAmåAmoustAmachAmoutAmairAmbornAmandaAmáAmasteAmfoAm ` ` ` Is there a way I can prevent the model from resetting the constraint or stop the generation on reset? Thanks in advance!
07-04-2022 14:07:16
07-04-2022 14:07:16
@cwkeam - any ideas on this one by any chance?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
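Not a fix for the reset behaviour reported above, but for contrast, a hedged sketch of the short-phrase usage that constrained beam search is typically demonstrated with (the phrase and prompt are illustrative):

```python
from transformers import AutoTokenizer, BartForConditionalGeneration, PhrasalConstraint

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# A PhrasalConstraint over a short token sequence, rather than an entire document.
constraint = PhrasalConstraint(tokenizer(" Amsterdam", add_special_tokens=False).input_ids)

inputs = tokenizer("The capital of the Netherlands is a city with many canals.", return_tensors="pt")
outputs = model.generate(**inputs, constraints=[constraint], num_beams=4, max_length=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```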
transformers
18,014
closed
Fix loss computation in TFWav2Vec2ForCTC
# What does this PR do? The `TFWav2Vec2ForCTC` implementation was incorrect: the CTC loss was not computed properly. The root of the problem was that the CTC target labels never reached the loss computation, so the labels were `None` there. Adding `@unpack_inputs` now unpacks the inputs properly, so the loss is computed as intended. Additionally, the loss needed to be reshaped for backpropagation. Fixes #18009
07-04-2022 13:43:15
07-04-2022 13:43:15
_The documentation is not available anymore as the PR was closed or merged._<|||||>This looks good to me! We just made the same change to several other losses (reshaping the output from a scalar to a tensor with shape `(1,)`). My only concern is that I have no idea how the lack of `@unpack_inputs` was missed by tests, but I'm happy to merge this for now and think about how to expand test coverage afterwards!<|||||>@Sreyan88 I'm happy with this and I think it's ready to merge now - if you want to make any other changes, now's the time. If not, ping me and I'll merge it!<|||||>> @Sreyan88 I'm happy with this and I think it's ready to merge now - if you want to make any other changes, now's the time. If not, ping me and I'll merge it! @Rocketknight1 I'm happy you can merge!
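A small runnable illustration (added here, not taken from the PR) of why the scalar loss had to be reshaped: Keras' dummy-loss path effectively slices the returned loss, which fails on a 0-d tensor.

```python
import tensorflow as tf

loss = tf.constant(1.23)           # 0-d scalar, what the model used to return
try:
    loss[0]                        # roughly what the compiled dummy loss does with it
except Exception as e:
    print("slicing a scalar fails:", type(e).__name__)

loss = tf.reshape(loss, (1,))      # the fix: a rank-1 tensor of shape (1,)
print(loss[0].numpy())             # 1.23
```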
transformers
18,013
closed
Return scalar losses instead of per-sample means
This updates the TF XLA-compatible losses to return scalars instead of per-sample means. As @ydshieh pointed out, per-sample means give too much weight to samples with fewer masked positions. The new approach should match PyTorch losses exactly (up to floating-point error). TODO: - [x] Update expected sizes in tests
07-04-2022 13:28:24
07-04-2022 13:28:24
_The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh will investigate RAG and make sure tests pass before merging!<|||||>@ydshieh I reverted the RAG loss function to the pre-XLA version, so hopefully those tests pass now. All other tests are passing! Do you think there's anything else I'm missing before I merge?<|||||>@gante We do, but it [takes the mean of the TF loss in order to make the two losses comparable](https://github.com/huggingface/transformers/blob/main/tests/test_modeling_tf_common.py#L560-L567).<|||||>> Do you think there's anything else I'm missing before I merge? Nothing I can think of as the tests pass now 🎉 . Thank you!
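A self-contained illustration (not from the PR) of the weighting problem: with per-sample means, a sample containing a single active label position counts as much as one with many, while a single global mean matches PyTorch's reduction over all unmasked tokens.

```python
import tensorflow as tf

# Two samples: four active label positions in the first, one in the second.
per_token_loss = tf.constant([[1.0, 1.0, 1.0, 1.0], [4.0, 0.0, 0.0, 0.0]])
active = tf.constant([[1.0, 1.0, 1.0, 1.0], [1.0, 0.0, 0.0, 0.0]])

per_sample_mean = tf.reduce_sum(per_token_loss * active, axis=-1) / tf.reduce_sum(active, axis=-1)
print(tf.reduce_mean(per_sample_mean).numpy())  # 2.5 -> the lone token is weighted 4x too heavily

scalar_mean = tf.reduce_sum(per_token_loss * active) / tf.reduce_sum(active)
print(scalar_mean.numpy())  # 1.6 -> one mean over all active tokens, as PyTorch computes it
```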
transformers
18,012
closed
Fix torchscript tests for GPT-NeoX
# What does this PR do? Fix torchscript tests for GPT-NeoX. The main issue comes from the fact that the current `RotaryEmbedding` changes the model structure in `forward`. This PR creates the necessary embeddings in `__init__`, which basically makes the (embedding) cache mechanism useless. Furthermore, the attribute names seem a bit confusing now. We could probably add some attribute (ex. `init_sin_cos_cache_seq_len`) in the config with a value `<= max_position_embeddings`, but I think it's way too much. Not certain if it is worth it. However, with a PR opened, we have a reference. The current failing test is https://github.com/huggingface/transformers/runs/7216768053?check_suite_focus=true
07-04-2022 13:10:27
07-04-2022 13:10:27
_The documentation is not available anymore as the PR was closed or merged._<|||||>> LGTM! > > However, could we add the failing test for reference or do we need to add a new test here? I updated the PR description to include the current failing test. Regarding new tests, I don't think it's necessary, as we just build the necessary tensors in `__init__` instead of in `forward`, and the current set of tests should be enough :-) (however, let me know if you have some idea of new necessary test cases!)<|||||>> > LGTM! > > However, could we add the failing test for reference or do we need to add a new test here? > > I updated the PR description to include the current failing test. Regarding new tests, I don't think it's necessary, as we just build the necessary tensors in `__init__` instead of in `forward`, and the current set of tests should be enough :-) > > (however, let me know if you have some idea of new necessary test cases!) Perfect thanks!
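A hedged, stripped-down sketch of the idea (buffer names are illustrative, and this is not the merged code): build the sin/cos tables once in `__init__` so `forward` no longer creates new tensors, which is what breaks torchscript tracing.

```python
import torch

class RotaryEmbeddingSketch(torch.nn.Module):
    def __init__(self, dim, max_position_embeddings, base=10000):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        positions = torch.arange(max_position_embeddings).float()
        freqs = torch.einsum("i,j->ij", positions, inv_freq)
        emb = torch.cat((freqs, freqs), dim=-1)
        # Precomputed once here instead of lazily inside forward().
        self.register_buffer("cos_cached", emb.cos(), persistent=False)
        self.register_buffer("sin_cached", emb.sin(), persistent=False)

    def forward(self, seq_len):
        return self.cos_cached[:seq_len], self.sin_cached[:seq_len]

rotary = RotaryEmbeddingSketch(dim=64, max_position_embeddings=2048)
cos, sin = rotary(seq_len=128)
print(cos.shape, sin.shape)  # torch.Size([128, 64]) torch.Size([128, 64])
```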
transformers
18,011
closed
sort list of models in docs table-of-contents
# What does this PR do? The list of models in the table of contents for the Transformers documentation had some duplicate entries, and was not entirely sorted in alphabetical order. This PR fixes these issues. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
07-04-2022 12:52:39
07-04-2022 12:52:39
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,010
closed
Adding a new `align_to_words` param to qa pipeline.
# What does this PR do? Adding a new `align_to_words` param to the qa pipeline. The problem is that the "word" alignment really depends on the tokenizer's "pre_tokenizer" property, which might well not exist, especially for Japanese and other non-space-separated languages. This new parameter allows disabling the "word" alignment, for better/simpler output with these kinds of tokenizers. Fixes #17706 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
07-04-2022 12:42:29
07-04-2022 12:42:29
_The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
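A hedged usage sketch, assuming the new parameter is exposed at call time as the PR describes (the checkpoint name is just a common QA model used for illustration):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

result = qa(
    question="Where do I live?",
    context="My name is Wolfgang and I live in Berlin.",
    align_to_words=False,  # skip snapping the answer span to pre-tokenizer "words"
)
print(result)
```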
transformers
18,009
closed
Negative CTC loss while training TFWav2Vec2ForCTC model
### System Info `transformers` version: 4.21.0.dev0 - Platform: Linux-5.13.0-48-generic-x86_64-with-glibc2.31 - Python version: 3.7.13 - PyTorch version (GPU?): 1.11.0 (False) - Tensorflow version (GPU?): 2.9.1 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @Rocketknight1 @gante ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Colab link to reproduce: https://colab.research.google.com/drive/1HXOdDhaIWcLF_4xF-zKZ_gYRf-sMfHkL?usp=sharing ``` Epoch 1/5 28/3859 [..............................] - ETA: 47:03 - loss: -0.5141 ``` ### Expected behavior The model should train with positive CTC loss. I have been able to figure out the source of the error which is that the target sequence never reaches the model at the forward pass and CTC loss is calculated over empty targets (None). I have also figured out the solution which is: add `@unpack_inputs` here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1583 However, with this, the CTC loss now gets the targets and calculates the loss but it raises another error: ``` InvalidArgumentError Traceback (most recent call last) /tmp/ipykernel_33658/3396866883.py in <module> 3 tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR) 4 ----> 5 model.fit(train, validation_data = validation, epochs=5) ~/anaconda3/envs/gsoc-2/lib/python3.7/site-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs) 65 except Exception as e: # pylint: disable=broad-except 66 filtered_tb = _process_traceback_frames(e.__traceback__) ---> 67 raise e.with_traceback(filtered_tb) from None 68 finally: 69 del filtered_tb ~/anaconda3/envs/gsoc-2/lib/python3.7/site-packages/transformers/modeling_tf_utils.py in train_step(self, data) 1024 1025 if self._using_dummy_loss: -> 1026 loss = self.compiled_loss(y_pred.loss, y_pred.loss, sample_weight, regularization_losses=self.losses) 1027 else: 1028 loss = None InvalidArgumentError: slice index 0 of dimension 0 out of bounds. [Op:StridedSlice] name: strided_slice/ ``` To solve this I added `loss = tf.reshape(loss, (1,))` after CTC loss calculation here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1707 These solve the error and I can train my model now. I am hoping the changes get pushed to the main branch. The issue was previously mentioned here: https://github.com/huggingface/transformers/issues/15114 But since @Rocketknight1 mentioned that he is working with loss calculation across HF TF models, I thought I would open a new issue.
07-04-2022 10:08:13
07-04-2022 10:08:13
Hi @Sreyan88, this looks like a real bug and a good solution - would you be willing to file a PR with your changes, and we'll review it?<|||||>Hi @Rocketknight1 , The pull request is up! You can review it and assign reviewers too!<|||||>Hi @Rocketknight1 , Though this works perfectly now, I notice that the proper functioning of this code requires me to: `tf.config.run_functions_eagerly(True)` otherwise, I get an error: `raise ValueError(f"Label values must be <= vocab_size: 30")` Though everything is perfect and just adding the above line works perfectly. any reason for this? Why would this not work without eager execution? You can refer to the notebook above too to reproduce. Thank You!<|||||>Can this warning speak anything about this? ``` Epoch 1/2 WARNING: AutoGraph could not transform <bound method ArrowWriter._build_writer of <datasets.arrow_writer.ArrowWriter object at 0x7f3033179b10>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: annotated name 'schema' can't be nonlocal (__autograph_generated_file8pfv1pvr.py, line 70) To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert WARNING: AutoGraph could not transform <bound method Dataset.set_format of Dataset({ features: ['predictions', 'references'], num_rows: 2 })> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: annotated name 'self' can't be nonlocal (__autograph_generated_file7oqoxv4_.py, line 29) To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert 839/901 [==========================>...] - ETA: 1:51 - loss: 447.1289 - compute_wer: 1.0000 ```<|||||>Hi @Sreyan88 - I can't figure out where that error is coming from. In your example scripts above, you're running everything eagerly, which means that AutoGraph should not be doing anything. I think this is probably related to the issues in #18096, but let me know if you resolve those and this issue is still occurring!<|||||>Hi @Rocketknight1 , Yes, I am, trying to figure out #18096 though it's a bit difficult for me as I am a bit new to Keras/Tensorflow. @gante 's suggestion did not work so I am still investigating! Thank You for the reply!
transformers
18,008
closed
Added Command for windows VENV activation in installation docs
- Added a command for activating the virtual environment on Windows machines in the Installation docs ( @sgugger )
07-04-2022 09:30:43
07-04-2022 09:30:43
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you @sgugger and @LysandreJik ! I have made the changes as per your comments<|||||>Thanks a lot!
transformers
18,007
closed
fix inplace modification error by adding clone()
# What does this PR do? When using `DistributedDataParallel`, `token_type_embedding` may raise a RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: <img width="1791" alt="image" src="https://user-images.githubusercontent.com/19511788/177119562-1c341b15-da7b-474e-a1dd-f5f68f843142.png"> This is because of the inplace operation at `buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length]`, so I just added `.clone()` to solve it, as follows. From (original) ```python if token_type_ids is None: if hasattr(self.embeddings, "token_type_ids"): buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length] buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length) token_type_ids = buffered_token_type_ids_expanded ``` to ```python if token_type_ids is None: if hasattr(self.embeddings, "token_type_ids"): buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length] buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length) token_type_ids = buffered_token_type_ids_expanded.clone() ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik
07-04-2022 08:57:44
07-04-2022 08:57:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>Do you have a reproducible code example that raises this error? cc @sgugger @patrickvonplaten <|||||>@LysandreJik I found the problem is in DistributedDataParallel not transformers. I fixed the problem. Sorry to take your time!
transformers
18,006
closed
Add SegFormer ONNX support
# What does this PR do? This PR adds support for the first "semantic-segmentation" feature, SegFormer. Related to #16308
07-04-2022 08:27:46
07-04-2022 08:27:46
_The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@lewtun I can confirm that all tests pass!<|||||>@lewtun ok for me to merge?
transformers
18,005
closed
Replace BloomTokenizer by BloomTokenizerFast in doc
# What does this PR do? The current documentation examples for BLOOM propose importing its tokenizer as follows: `from transformers import BloomTokenizer`. This returns the following error with version 4.20.1: ```shell Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name 'BloomTokenizer' from 'transformers' (/usr/local/lib/python3.8/dist-packages/transformers/__init__.py) ``` because `BloomTokenizer` is actually not defined anywhere. This PR replaces it in the doc with `BloomTokenizerFast`. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
07-04-2022 08:25:37
07-04-2022 08:25:37
_The documentation is not available anymore as the PR was closed or merged._<|||||>CI passed, pinging @sgugger for final approval
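For reference, a tiny sketch of the working imports (the checkpoint name is illustrative):

```python
from transformers import AutoTokenizer, BloomTokenizerFast

tok = BloomTokenizerFast.from_pretrained("bigscience/bloom")   # the explicit fast class
tok_auto = AutoTokenizer.from_pretrained("bigscience/bloom")   # resolves to the same fast tokenizer
print(tok("Hello BLOOM!")["input_ids"])
```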
transformers
18,004
open
Add DFFT
### Model description DFFT is a new fully Transformer-based object detector. The model doesn't require a decoder, unlike DETR. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Paper: https://arxiv.org/abs/2206.06829 Github repo (and weights): https://github.com/Pealing/DFFT
07-04-2022 08:19:10
07-04-2022 08:19:10
Hello, is this issue still open? I would like to contribute to it.<|||||>@NielsRogge Anyone working on this? Thank you<|||||>Feel free to start working on this!<|||||>@NielsRogge if no one is working on this then I would like to start working on it<|||||>Cool, feel free to open a draft PR<|||||>Hi @NielsRogge, @soma2000-lang, are there any updates on this issue? I would like to contribute.<|||||>@atharvakavitkar working to open a draft pr
transformers
18,003
closed
Optional type of lengths causes slow speed in LengthGroupedSampler
### System Info - `transformers` version: 4.20.1 - Platform: macOS-12.4-arm64-arm-64bit - Python version: 3.8.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: no but this bug doesn't depend on environment. ### Who can help? @sgugger ### Information - [x] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python import time import torch import random megabatches = [[random.randint(1, 512) for _ in range(51200)] for _ in range(2)] # test lengths_torch_tensor lengths_torch_tensor = torch.tensor([random.randint(1, 512) for _ in range(102400)]) start = time.time() megabatches_ = [list(sorted(megabatch, key=lambda i: lengths_torch_tensor[i], reverse=True)) for megabatch in megabatches] end = time.time() print(end-start) # test lengths_list_int lengths_torch_tensor = torch.tensor([random.randint(1, 512) for _ in range(102400)]) lengths_list_int = lengths_torch_tensor.tolist() start = time.time() megabatches_ = [list(sorted(megabatch, key=lambda i: lengths_list_int[i], reverse=True)) for megabatch in megabatches] end = time.time() print(end - start) ``` ```bash 1.0904111862182617 0.013269901275634766 ``` ### Expected behavior When using `group_by_length` and `length_column_name` in `TrainingArguments`, the `get_length_grouped_indices` function in `LengthGroupedSampler` is very slow if `Dataset[length_column_name]` is `torch.Tensor(List[int])` (e.g. `torch.Tensor([200,100,..])`). So I think that either `lengths: Optional[List[int]]` should become `lengths: List[int]` in the `__init__` method of `LengthGroupedSampler`, or a warning message should be printed and the value cast to `List[int]`. https://github.com/huggingface/transformers/blob/49c8c67fb815a277405f84dea4a66353e19fb347/src/transformers/trainer_pt_utils.py#L532-L569 https://github.com/huggingface/transformers/blob/49c8c67fb815a277405f84dea4a66353e19fb347/src/transformers/trainer_pt_utils.py#L520
07-04-2022 07:31:23
07-04-2022 07:31:23
I don't understand why changing the type annotation would change anything. Those type annotations are completely ignored at execution, they are just here as documentation of the code. Python is not a typed language. The problem is just that if a tensor is passed for the lengths, it should be converted back to a list. Would you like to make a PR with that? <|||||>> I don't understand why changing the type annotation would change anything. Those type annotations are completely ignored at execution, they are just here as documentation of the code. Python is not a typed language. > > The problem is just that if a tensor is passed for the lengths, it should be converted back to a list. Would you like to make a PR with that? Ok, sure. I'll pr this. @sgugger
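A hedged sketch of the guard the reporter proposes (the helper name is hypothetical; this follows the spirit of `trainer_pt_utils.py` but is not the merged change):

```python
import torch

def normalize_lengths(lengths):
    # Indexing a plain Python list inside sorted() is far cheaper than indexing a tensor
    # element by element, which is what makes get_length_grouped_indices slow above.
    if isinstance(lengths, torch.Tensor):
        lengths = lengths.tolist()
    return lengths

lengths = normalize_lengths(torch.randint(1, 512, (102400,)))
print(type(lengths), len(lengths))  # <class 'list'> 102400
```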
transformers
18,002
closed
Fix T5 incorrect weight decay in Trainer and official summarization example
# What does this PR do? The official summarization examples use T5 as the pretrained model, but the name of the T5 layer norm is `layer_norm`, not `LayerNorm`: ``` from transformers import T5Tokenizer, T5ForConditionalGeneration model = T5ForConditionalGeneration.from_pretrained("t5-small") for n, p in model.named_parameters(): print(n) ``` Output: ``` ... encoder.block.0.layer.0.layer_norm.weight encoder.block.0.layer.1.DenseReluDense.wi.weight encoder.block.0.layer.1.DenseReluDense.wo.weight encoder.block.0.layer.1.layer_norm.weight ... ``` In the [official example of summarization](https://github.com/huggingface/transformers/blob/49c8c67fb815a277405f84dea4a66353e19fb347/examples/pytorch/summarization/run_summarization_no_trainer.py#L529), `layer_norm` is not included: ``` no_decay = ["bias", "LayerNorm.weight"] ``` A similar problem occurs in [trainer.py](https://github.com/huggingface/transformers/blob/49c8c67fb815a277405f84dea4a66353e19fb347/src/transformers/trainer.py#L970), which may cause `Seq2SeqTrainer` to train T5 layer norms with weight decay: ``` if self.optimizer is None: decay_parameters = get_parameter_names(opt_model, [nn.LayerNorm]) decay_parameters = [name for name in decay_parameters if "bias" not in name] ``` This is because T5 uses its own [T5LayerNorm](https://github.com/huggingface/transformers/blob/49c8c67fb815a277405f84dea4a66353e19fb347/src/transformers/models/t5/modeling_t5.py#L239) layer norm, not `nn.LayerNorm`: ``` class T5LayerNorm(nn.Module): def __init__(self, hidden_size, eps=1e-6): """ Construct a layernorm module in the T5 style. No bias and no subtraction of mean. """ super().__init__() self.weight = nn.Parameter(torch.ones(hidden_size)) self.variance_epsilon = eps ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
07-03-2022 17:09:08
07-03-2022 17:09:08
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot for iterating with us!
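A hedged sketch of the idea behind the fix (not necessarily the merged diff): treat T5's custom layer norm like `nn.LayerNorm` when building the decay / no-decay parameter groups.

```python
from torch import nn
from transformers import T5ForConditionalGeneration
from transformers.models.t5.modeling_t5 import T5LayerNorm
from transformers.trainer_pt_utils import get_parameter_names

model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Exclude both bias terms and T5LayerNorm weights from weight decay.
decay_parameters = get_parameter_names(model, [nn.LayerNorm, T5LayerNorm])
decay_parameters = [name for name in decay_parameters if "bias" not in name]

optimizer_grouped_parameters = [
    {"params": [p for n, p in model.named_parameters() if n in decay_parameters], "weight_decay": 0.01},
    {"params": [p for n, p in model.named_parameters() if n not in decay_parameters], "weight_decay": 0.0},
]
```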
transformers
18,001
closed
Data collator not working with Masking language model of Bart type
### System Info **System Info** - transformers version : 4.20.1 - Ubuntu : "20.04.3 LTS - Python: 3.7 - PyTorch + GPU: torch:1.11+ cuda:11.4 ### Who can help? @patil-suraj @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Trying to train a BART model using masking. The model type is BartForConditionalGeneration. Before trying it on a custom dataset, I wanted to try it on the given example [here](https://huggingface.co/course/chapter7/3?fw=pt#the-dataset). (which is in fact similar [to here](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) My code is an exact match to the provided example except the choice of a model relevant to my task. So instead of ``` model_checkpoint = "distilbert-base-uncased" model = AutoModelForMaskedLM.from_pretrained(model_checkpoint) ``` We have ``` model_checkpoint = "memray/bart-wikikp" model = AutoModelForMaskedLM.from_pretrained(model_checkpoint) ``` Based on the documentation this unsupervised approach is viable if one wants to fine-tune the model for a specific domain. Therefore, before fine-tuning, _Masked language modelling helps acquaint the model with the new corpus first._ Also, in the documentation _there is no mention of specific-tasks against using masking training_ e.g. text classification or question answering vs my text generation task. When running the trainer.train I get the following flavors of CUDA error (if I use the accelerator the error becomes RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.) > “***** Running training ***** Num examples = 10000 Num Epochs = 3 Instantaneous batch size per device = 32 Total train batch size (w. parallel, distributed & accumulation) = 32 Gradient Accumulation steps = 1 Total optimization steps = 939 0%| | 0/939 [00:00<?, ?it/s]/opt/conda/conda-bld/pytorch_1646755953518/work/aten/src/ATen/native/cuda/Indexing.cu:703: indexSelectLargeIndex: block: [42,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1646755953518/work/aten/src/ATen/native/cuda/Indexing.cu:703: indexSelectLargeIndex: block: [42,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1646755953518/work/aten/src/ATen/native/cuda/Indexing.cu:703: indexSelectLargeIndex: block: [42,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
>Traceback (most recent call last): File "/home/haddad/.conda/envs/hugg/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3457, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-13-7c826244a095>", line 1, in <module> trainer.train() File "/home/haddad/.conda/envs/hugg/lib/python3.7/site-packages/transformers/trainer.py", line 1413, in train ignore_keys_for_eval=ignore_keys_for_eval, File "/home/haddad/.conda/envs/hugg/lib/python3.7/site-packages/transformers/trainer.py", line 1651, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/home/haddad/.conda/envs/hugg/lib/python3.7/site-packages/transformers/trainer.py", line 2345, in training_step loss = self.compute_loss(model, inputs) File "/home/haddad/.conda/envs/hugg/lib/python3.7/site-packages/transformers/trainer.py", line 2377, in compute_loss outputs = model(**inputs) File "/home/haddad/.conda/envs/hugg/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/home/haddad/.conda/envs/hugg/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 1368, in forward return_dict=return_dict, File "/home/haddad/.conda/envs/hugg/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/home/haddad/.conda/envs/hugg/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 1229, in forward return_dict=return_dict, File "/home/haddad/.conda/envs/hugg/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/home/haddad/.conda/envs/hugg/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 850, in forward output_attentions=output_attentions, File "/home/haddad/.conda/envs/hugg/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/home/haddad/.conda/envs/hugg/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 327, in forward output_attentions=output_attentions, File "/home/haddad/.conda/envs/hugg/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/home/haddad/.conda/envs/hugg/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 191, in forward query_states = self.q_proj(hidden_states) * self.scaling File "/home/haddad/.conda/envs/hugg/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/home/haddad/.conda/envs/hugg/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 103, in forward return F.linear(input, self.weight, self.bias) RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)` ### Expected behavior IF i remove the standard datacollator from the training args , it works(it runs trainer.train till the end) ! That is the only difference i saw between the two official examples (links above) , but this means we are losing that functionality! I didn't try with my own custom collator.
07-03-2022 16:16:58
07-03-2022 16:16:58
So I commented out the collator argument; otherwise CUDA faces lots of issues, whether with the accelerator or not. ``` trainer = Trainer( model=model, args=training_args, train_dataset=downsampled_dataset["train"], eval_dataset=downsampled_dataset["test"], # data_collator=data_collator, ) ```<|||||>This means there is an indexing problem somewhere. Please use the [forums](https://discuss.huggingface.co/) to help debug your code. You can also look at [this video](https://youtu.be/L-WSwUWde1U) which explains how to debug a training loop step by step to get more information about where the error comes from.<|||||>Solution [here](https://discuss.huggingface.co/t/masked-language-model-for-bart-not-bert/19945)
transformers
18,000
closed
Fix typo in error message in generation_utils
# What does this PR do? This PR fixes a typo in an error message in `src/transformers/generation_utils.py`. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
07-03-2022 10:57:35
07-03-2022 10:57:35
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,999
closed
can not covert LayoutLMv2ForRelationExtraction model to onnx
null
07-03-2022 03:43:05
07-03-2022 03:43:05
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,998
closed
input_ids greater than the vocab_size in Question answering example
### System Info - `transformers` version: 4.21.0.dev0 - Platform: Linux-3.10.0-693.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu102 (False) - Tensorflow version (GPU?): 2.9.1 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in ### Who can help? @SaulLu, @sgugger, @patil-suraj ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi, I got some errors when I run the [Question answering example](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering). My commands: ``` cd /transformers/transformers/examples/tensorflow/question-answering python run_qa.py --model_name_or_path distilbert-base-cased --output_dir output --dataset_name squad --do_train --do_eval ``` Error msg: ``` XXX: W tensorflow/core/framework/dataset.cc:768] Input of GeneratorDatasetOp::Dataset will not be optimized because the dataset does not implement the AsGraphDefInternal() method needed to apply optimizations. Epoch 1/3 Traceback (most recent call last): File "/path1/xxxx/workdir/traner_env/transformers/examples/tensorflow/question-answering/run_qa.py", line 703, in <module> main() File "/path1/xxxx/workdir/traner_env/transformers/examples/tensorflow/question-answering/run_qa.py", line 487, in main processed_datasets["train"] = train_dataset File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler raise e.with_traceback(filtered_tb) from None File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/tensorflow/python/eager/execute.py", line 54, in quick_execute tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error: Detected at node 'tf_distil_bert_for_question_answering/distilbert/embeddings/Gather' defined at (most recent call last): File "/path1/xxxx/workdir/traner_env/transformers/examples/tensorflow/question-answering/run_qa.py", line 703, in <module> main() File "/path1/xxxx/workdir/traner_env/transformers/examples/tensorflow/question-answering/run_qa.py", line 487, in main processed_datasets["train"] = train_dataset File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/keras/engine/training.py", line 1409, in fit tmp_logs = self.train_function(iterator) File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/keras/engine/training.py", line 1051, in train_function return step_function(self, iterator) File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/keras/engine/training.py", line 1040, in step_function outputs = model.distribute_strategy.run(run_step, args=(data,)) File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/keras/engine/training.py", line 1030, in run_step outputs = model.train_step(data) File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/transformers/modeling_tf_utils.py", line 1409, in train_step y_pred = 
self(x, training=True) File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/keras/engine/training.py", line 490, in __call__ return super().__call__(*args, **kwargs) File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/keras/engine/base_layer.py", line 1014, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/transformers/modeling_tf_utils.py", line 1012, in run_call_with_unpacked_inputs _auto_class = None File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/transformers/models/distilbert/modeling_tf_distilbert.py", line 1029, in call distilbert_output = self.distilbert( File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/keras/engine/base_layer.py", line 1014, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/transformers/modeling_tf_utils.py", line 1012, in run_call_with_unpacked_inputs _auto_class = None File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/transformers/models/distilbert/modeling_tf_distilbert.py", line 400, in call embedding_output = self.embeddings(input_ids, inputs_embeds=inputs_embeds) # (bs, seq_length, dim) File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/keras/engine/base_layer.py", line 1014, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/transformers/models/distilbert/modeling_tf_distilbert.py", line 112, in call if input_ids is not None: File "/path1/xxxx/anaconda3/envs/traner_env/lib/python3.9/site-packages/transformers/models/distilbert/modeling_tf_distilbert.py", line 113, in call inputs_embeds = tf.gather(params=self.weight, indices=input_ids) Node: 'tf_distil_bert_for_question_answering/distilbert/embeddings/Gather' indices[4,22] = 29261 is not in [0, 28996) [[{{node tf_distil_bert_for_question_answering/distilbert/embeddings/Gather}}]] [Op:__inference_train_function_12652] ``` It seem that the `vocab_size` or the tokenizer has some issue? ### Expected behavior I want to run the demo by the [descried command](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering#example-command). If you need other information to reproduce the error, pls let me know. Thanks :)
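As a quick sanity check (a hedged sketch on my side, not part of the example script), comparing the tokenizer and model vocabulary sizes shows whether out-of-range ids are expected: ```python from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert-base-cased")

# Every input id must be strictly smaller than the embedding table size.
print(len(tokenizer))           # tokenizer vocabulary size
print(model.config.vocab_size)  # embedding table size (28996 for this checkpoint)
```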
07-02-2022 16:59:58
07-02-2022 16:59:58
The max of `input_ids` is 30511.<|||||>This is a TensorFlow example, so you tagged the wrong persons :-) cc @Rocketknight1 @gante <|||||>Hi @yiliu30 I have just tried your command with the latest version of `transformers` installed from `main` and TensorFlow 2.9.1 and could not reproduce the issue. Can you install the latest versions of `transformers` and `datasets` from main/master and confirm that the issue is still occurring? <|||||>Hi @Rocketknight1, this issue was solved by installing `datasets` from main instead of from `pip`. BTW, may I know the possible underlying reason for this issue? Thanks.<|||||>@yiliu30 I'm not sure, I'm afraid! It's possible that there was some incompatibility between newer versions of `transformers` and your older version of `datasets`.<|||||>okay, thanks a lot.
transformers
17,997
closed
Added Doctest for Deberta Pytorch
# What does this PR do? Adds Doctest for DeBerta [Pytorch version] Issue: #16292 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @ydshieh @patrickvonplaten
07-02-2022 15:11:50
07-02-2022 15:11:50
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17997). All of your documentation changes will be reflected on that endpoint.<|||||>The doctest pass locally. I will look the changes in this PR :-) <|||||>Hi @Tegzes, Thank you for this PR! - For `DebertaForMaskedLM`, is there any reason to use `lsanochkin/deberta-large-feedback` instead of `_CHECKPOINT_FOR_DOC = "microsoft/deberta-base"`? - In order to fix the tests (see the end), could you try to use variable names like ``` _CHECKPOINT_FOR_SEQUENCE_CLASSIFICATION _CHECKPOINT_FOR_TOKEN_CLASSIFICATION _CHECKPOINT_FOR_QA ``` See, for example, the doctest for `mobilebert` https://github.com/huggingface/transformers/blob/f681437203baa7671de3174b0fa583c349d9d5e1/src/transformers/models/mobilebert/modeling_mobilebert.py#L59-L78 - You will need something new like `_MASKED_LM_EXPECTED_OUTPUT`, `_MASKED_LM_EXPECTED_LOSS` etc. - You will also have to work on the corresponding places in `DebertaV2` - as it has some code copied from `Deberta` Let us know if you need more information. ### The errors in `ci/circleci: check_repository_consistency` ``` - src/transformers/models/deberta_v2/modeling_deberta_v2.py: copy does not match models.deberta.modeling_deberta.DebertaForMaskedLM at line 1094 - src/transformers/models/deberta_v2/modeling_deberta_v2.py: copy does not match models.deberta.modeling_deberta.DebertaForSequenceClassification at line 1230 - src/transformers/models/deberta_v2/modeling_deberta_v2.py: copy does not match models.deberta.modeling_deberta.DebertaForTokenClassification at line 1350 - src/transformers/models/deberta_v2/modeling_deberta_v2.py: copy does not match models.deberta.modeling_deberta.DebertaForQuestionAnswering at line 1427 ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||> > * For `DebertaForMaskedLM`, is there any reason to use `lsanochkin/deberta-large-feedback` instead of `_CHECKPOINT_FOR_DOC = "microsoft/deberta-base"`? Hi @ydshieh, I used this model because it was the only one returning the correct output (from what I've tested) <|||||> > * In order to fix the tests (see the end), could you try to use variable names like > ``` > _CHECKPOINT_FOR_SEQUENCE_CLASSIFICATION > _CHECKPOINT_FOR_TOKEN_CLASSIFICATION > _CHECKPOINT_FOR_QA > ``` All right! Thanks! I will make the changes and open another pull request
transformers
17,996
closed
Result of T5 tokenizer doesn't match
### System Info transformers: 4.16.2 python: 3.7.11 pytorch: 1.10.0 ### Who can help? @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ``` from transformers import T5Tokenizer, T5ForConditionalGeneration text = '(CNN)Then-President Donald Trump angrily demanded to go to the US Capitol on January 6, 2021, and berated his protective detail when he didnt get his way.' tokenizer = T5Tokenizer.from_pretrained('t5-base') tokenize_result = tokenizer.tokenize(text) ids_1 = tokenizer(text, max_length=128, truncation=True, padding='max_length', return_tensors='np')['input_ids'] ids_2 = tokenizer(tokenize_result, is_split_into_words=True, max_length=128, truncation=True, padding='max_length', return_tensors='np')['input_ids'] ids_3 = tokenizer.convert_tokens_to_ids(tokenize_result) print(tokenizer.batch_decode(ids_1, skip_special_tokens=True)[0]) # >>> (CNN)Then-President Donald Trump angrily demanded to go to the US Capitol on January 6, 2021, and berated his protective detail when he didnt get his way. print(tokenizer.batch_decode(ids_2, skip_special_tokens=True)[0]) # >>> ( C NN ) The n - P resident Donald Trump an gri ly demande d to go to the US Capitol on January 6, 20 21, and be rated his protective detail when he didn t get his way. print(tokenizer.decode(ids_3, skip_special_tokens=True)) # >>> (CNN)Then-President Donald Trump angrily demanded to go to the US Capitol on January 6, 2021, and berated his protective detail when he didnt get his way. ``` ### Expected behavior Three print results should be the same, but when we pre-tokenize with same tokenizer and convert to ids with flag is_split_into_words=True, the tokenizer gives different result.
07-02-2022 10:12:50
07-02-2022 10:12:50
I think your issue was already resolved here: https://github.com/huggingface/transformers/issues/8217 In short: `is_split_into_words=True` assumes that the input is a list of words, not tokens. When you pass `tokenize_result` to the tokenizer with `is_split_into_words=True`, the tokenizer assumes that the tokens are distinct words (very short words). During decoding these tokens are interpreted as words, which is why there are so many additional spaces in the second result. Try replacing ``` ids_2 = tokenizer(tokenize_result, is_split_into_words=True, max_length=128, truncation=True, padding='max_length', return_tensors='np')['input_ids'] ``` with ``` ids_2 = tokenizer(text.split(' '), is_split_into_words=True, max_length=128, truncation=True, padding='max_length', return_tensors='np')['input_ids'] ``` You'll see that all results will be the same.
transformers
17,995
closed
Fix BLOOM dtype
Issues: a) The bloom attribute map does not follow the convention of ```json common_arg: model_specific_arg ``` (see the sketch below) & the argument n_embed is there for no apparent reason. b) Using attributes with defaults in the attribute_map leads to them being overwritten in [setattr](https://github.com/huggingface/transformers/blob/7498db06a1724773ad7185887743afcb8406ee6b/src/transformers/configuration_utils.py#L249) when the respective kwarg is popped [here](https://github.com/huggingface/transformers/blob/7498db06a1724773ad7185887743afcb8406ee6b/src/transformers/configuration_utils.py#L265). For now I just replaced the `dtype` argument with `torch_dtype`. I think the best solution for this would be to check the attribute map before popping. I can also implement that instead if someone gives me a heads up. c) Weights end up in fp32 & not the torch_dtype when loading them in via load_state_dict. I haven't looked into this in more detail, so I just fixed it via casting, which is sufficient for the conversion script I think. Notes: - If we merge this, we need to update all BLOOM configs in the Hub by swapping `dtype` for `torch_dtype` & swapping `n_embed` for `hidden_size`
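For reference, a minimal sketch of that convention as other configs implement it (the attribute names below are illustrative, mirroring the GPT-2-style configs rather than the exact BLOOM code): ```python from transformers import PretrainedConfig


class SomeConfig(PretrainedConfig):
    # Maps a common argument name to the model-specific attribute that stores it.
    attribute_map = {
        "hidden_size": "n_embed",
        "num_hidden_layers": "n_layer",
    }
```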
07-02-2022 08:05:25
07-02-2022 08:05:25
_The documentation is not available anymore as the PR was closed or merged._<|||||>You should just set the `dtype` in the config file<|||||>Hmm maybe I'm doing sth wrong. I use `python /gpfsssd/worksf/projects/rech/six/commun/conda/muennighoffmodelconv/lib/python3.8/site-packages/transformers/models/bloom/convert_bloom_original_checkpoint_to_pytorch.py --bloom_checkpoint_path /gpfsscratch/rech/six/commun/checkpoints/tr11e-350M-ml/checkpoints/main/global_step659500 --pytorch_dump_folder_path bloomfp16 --bloom_config_file /gpfsscratch/rech/six/commun/commun/experiments/muennighoff/bloomckpt/350M/bloom-350m/config.json --pretraining_tp 1` where the config is the below cloned from [here](https://huggingface.co/bigscience/bloom-350m) ```json { "apply_residual_connection_post_layernorm": false, "attention_dropout": 0.0, "attention_softmax_in_fp32": true, "bias_dropout_fusion": true, "bos_token_id": 1, "dtype": "float16", "eos_token_id": 2, "pad_token_id": 3, "unk_token_id": 0, "hidden_dropout": 0.0, "initializer_range": 0.02, "layer_norm_epsilon": 1e-05, "masked_softmax_fusion": true, "model_type": "bloom", "n_embed": 1024, "n_inner": null, "n_layer": 24, "num_attention_heads": 16, "offset_alibi": 100, "pretraining_tp": 1, "seq_length": 2048, "skip_bias_add": true, "skip_bias_add_qkv": false, "transformers_version": "4.20.0.dev0", "use_cache": true, "vocab_size": 250880 } ```<|||||>See updated description cc @thomasw21 , @younesbelkada <|||||>Thanks for the fix! Indeed there seems to be a mistake there, and since the native weights are already in `fp16` / `bfloat16` we did not observed it since the `torch_dtype` can be automatically retrieved here when doing `torch_dtype="auto"`: https://github.com/huggingface/transformers/blob/7498db06a1724773ad7185887743afcb8406ee6b/src/transformers/modeling_utils.py#L196 I will ask @sgugger to review this since it may break backward compatibility but I think if we change all config files it should be fine <|||||>So in case we want to load models in half precision we always have to add `torch_dtype=auto` when loading a model using `from_pretrained` ?<|||||>Sure, I will remove `torch_dtype` then entirely from the Bloom config. Like @younesbelkada said in order to get BLOOM in the correct dtype one will always have to specify `torch_dtype=auto`. The below returns a model in fp32 not the desired bf16: ```python from transformers import AutoModel, BloomModel model = BloomModel.from_pretrained("bigscience/bloom-350m") ```<|||||>Yes, but that was already the case before this PR.<|||||>Thanks for iterating! I just looked and all the configs online define `n_embed` instead of `hidden_size`. So we will need some backward compatible line in the init to pop `n_embed` from the kwargs and set it as hidden_size to not break everything. We can't just replace the existing configs as it would break BLOOM with the current version of Transformers. The `dtype` field is different since it wasn't used anywhere. Also, you will need a rebase on main to have the tests (example torch test specifically) finish faster :-)<|||||>Tests are passing & it is backwards compatible with `n_embed`. Let's merge? I can remove `dtype` from the bloom configs under bigscience; Even if it's still set somewhere, it shouldn't be a problem as it hasn't been used before and would still just be an unused kwarg<|||||>Yes, we can merge, thanks! 
We can remove the `dtype` attributes in the configurations but the `n_embed` ones will need to stay for compatibility with older versions of Transformers.<|||||>I'm late to chime in. We absolutely need `torch_dtype` in all configs, since some models are fp16 and others are bf16. The user needs to know even if the core doesn't yet use `torch_dtype` directly. It can be used as: `AutoModel.from_pretrained(..., torch_dtype=config.torch_dtype)` model configs should ideally be created with `save_from_pretrained`, which will do the right thing, but given the size on some of the models, like 176B, we might have to find a workaround for that one. And some models are missing crucial configs - hidden size. Thank you!<|||||>> I'm late to chime in. We absolutely need `torch_dtype` in all configs, since some models are fp16 and others are bf16. The user needs to know even if the core doesn't yet use `torch_dtype` directly. It can be used as: > > `AutoModel.from_pretrained(..., torch_dtype=config.torch_dtype)` > > model configs should ideally be created with `save_from_pretrained`, which will do the right thing, but given the size on some of the models, like 176B, we might have to find a workaround for that one. > > And some models are missing crucial configs - hidden size. > > Thank you! Good point. I think we can add the `torch_dtype` kwarg to the hub configs for information purposes, but it does not change the loading of the models. I.e. ```python from transformers import AutoModel, AutoConfig model = AutoModel.from_pretrained("Muennighoff/bloom-tiny-random") model.dtype ``` yields `torch.float32` even though `torch_dtype` is [set](https://huggingface.co/Muennighoff/bloom-tiny-random/blob/main/config.json) to `float16`. I'm not sure if this is a feature or a bug? So models always have to be loaded with ```python from transformers import AutoModel, AutoConfig model = AutoModel.from_pretrained("Muennighoff/bloom-tiny-random", torch_dtype="auto") model.dtype # torch.bfloat16 ``` <|||||>@Muennighoff the behaviour that you are exposing is expected, if you want to load a model with its native dtype you always have to pass `torch_dtype=auto`. I imagine the reason behind this choice is to let some operations (eg LayerNorm) that cannot run in half precision on CPU, be able to run them by default on CPU.<|||||>not **always**, sometimes one needs to instantiate a model w/o having access to weights, so one has to do: ``` AutoModel.from_config(config, torch_dtype=config.torch_dtype) ``` "auto" won't work here. we had to use it for DeepSpeed-Inference just recently.<|||||>The main reason for `torch_dtype` not being used automatically on load is that most model configs don't have it. So we decided to start populating this meta data and eventually switch to using it. I think we added it a year ago or so. Otherwise there would be an inconsistent behavior where some models would load in a correct dtype and other won't. So for now it's an explicit way.
transformers
17,994
closed
Flax Remat for LongT5
# What does this PR do? Following on from the discussion in https://github.com/huggingface/transformers/issues/17399 this PR adds remat (flax's gradient checkpointing) to LongT5. Most of the code is copied from https://github.com/huggingface/transformers/pull/17843/files and then altered to be applicable for LongT5. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @sanchit-gandhi or @patrickvonplaten
07-02-2022 06:48:08
07-02-2022 06:48:08
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @KMFODA, looks like a great start on this PR! I would advocate for adding gradient checkpointing to Flax T5 first. Once we're confident that the implementation is correct, we can run the command `make fix-copies` to have the changes reflected across all Flax T5 based models. What remains to do is wrapping the Flax T5 Block Collection in the `remat` transformation when `gradient_checkpointing` is passed as `True`. This is analogous to wrapping the Flax Bert Layer Collection in the `remat` operation in the reference PR https://github.com/huggingface/transformers/pull/17843. We just need to update the following lines: https://github.com/huggingface/transformers/blob/f0982682bd6fd0b438dda79ec45f3a8fac83a985/src/transformers/models/t5/modeling_flax_t5.py#L648-L651 To reflect what is done in Flax Bert: https://github.com/huggingface/transformers/blob/f0982682bd6fd0b438dda79ec45f3a8fac83a985/src/transformers/models/bert/modeling_flax_bert.py#L553-L558 As a pointer, the `static_argnums` kwarg is used to denote any boolean arguments passed to the `remat` layer. Inspecting the `__call__` method of FlaxT5LayerCollection, we see that the final four arguments are booleans: https://github.com/huggingface/transformers/blob/f0982682bd6fd0b438dda79ec45f3a8fac83a985/src/transformers/models/t5/modeling_flax_t5.py#L616-L628 Thus, we have `static_argnums=(6,7,8,9)` (corresponding to `output_attentions`, `return_dict`, `deterministic` and `init_cache`).<|||||>Hey @sanchit-gandhi. Sorry this is taking so long, adding your changes was relatively easy but I'm a bit stuck trying to pass a few failing tests which I believe are caused by my addition of `key_value_states`, `encoder_hidden_states`, `encoder_attention_mask`, `use_cache` and `init_cache` into most calls as I remember in your PR you mentioned that remat doesn't take kwargs. I believe somewhere in the code `key_value_states` is being incorrectly set to a non `None` figure which is causing some of these errors but I'm not sure where. <|||||>of course that makes sense. Apologies for the misunderstanding. I'll work on the gradient checkpointing part using your suggestions and remove the `key_value_states` for this PR.<|||||>hey @sanchit-gandhi. I beleive this is now ready for review. The PR passes all the tests except ones related to inconsistencies between t5 and long_t5. If you're happy with this let me know and then I think running `make fix-copies` should fix the remaining errors?<|||||>Thanks @sanchit-gandhi for all the helpful comments. Addressed them all and ran `make fix-copies`. Hopefully these changes should be reflected properly for `LongT5 `as well now.<|||||>Thanks for this PR @KMFODA! Awesome stuff, excited to hear how you get on with improved memory usage with the aid of checkpointing! <|||||>Amazing, will keep you posted. Thanks for all the help getting this merged!
transformers
17,993
closed
BART with multiple encoders.
Hi @patrickvonplaten , I am trying to evaluate a model (a custom BART model with 2 inputs) that has 2 encoders and 1 decoder. For evaluation in the prediction_step method, it uses model.generate (from generation_utils.py) for generating outputs. But the problem is that it only accepts inputs: Optional[torch.Tensor] = None, while in my case I need to pass 2 sets of inputs (one for each encoder). So is there a way to pass 2 sets of inputs to the generate method?
07-01-2022 22:43:11
07-01-2022 22:43:11
Hi @prakamya-mishra -- if your model is designed to take two inputs, then you may be able to pass the second through `model_kwargs`. E.g. if the model signature is `model(input_ids, foo)`, then you may be able to run `model.generate(input_ids, foo=foo)` (extra keyword arguments are forwarded to the model). Although you have to make sure your `prepare_inputs_for_generation` [method](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_bart.py#L1393) is expecting this argument. Please note that as per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we do not provide support for custom code. We also reserve these issues for bugs and feature requests -- for other questions, we'd like to invite you to use the [forums](https://discuss.huggingface.co/) :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,992
closed
Spanish translation of token_classification.mdx
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # 15947 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-01-2022 22:27:26
07-01-2022 22:27:26
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17992). All of your documentation changes will be reflected on that endpoint.<|||||>Hola @gpalomeque! Thank you for your translations. I am sorry for the late review on my side. I left you some comments on the review. Also, please update your local version of the repository with `git pull upstream main` (you might have to fix some conflicts, please let me know if I can help). This I believe would let your PR pass all tests. Fixes #15947<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,991
closed
issue installing SentencePiece
### System Info The error I get when I run "pip3 install SentencePiece": ./build_bundled.sh: line 15: cmake: command not found ./build_bundled.sh: line 16: nproc: command not found make: *** No targets specified and no makefile found. Stop. make: *** No rule to make target `install'. Stop. env: pkg-config: No such file or directory Failed to find sentencepiece pkg-config [end of output] ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction pip3 install SentencePiece ### Expected behavior The package should install successfully; instead the build fails with the output above.
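(For what it's worth, the log above only complains about missing build tools, so on a Debian/Ubuntu-style system something like the following would typically be needed before retrying — the package names are an assumption and may differ on other platforms:) ```bash sudo apt-get update
sudo apt-get install -y cmake pkg-config build-essential
pip3 install sentencepiece
```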
07-01-2022 20:53:20
07-01-2022 20:53:20
Hi @SamitM1 👋 The issue seems to be related to the package you mentioned, not `transformers` -- I'd suggest opening an issue there (https://github.com/google/sentencepiece)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,990
closed
--save_on_each_node doesn't save pytorch_model.bin when deepspeed zero3 is used
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.0-1083-azure-x86_64-with-glibc2.10 - Python version: 3.8.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.10.0a0+0aef44c (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> I am finetuning GPT-J with DeepSpeed zero3 and 2 GPU nodes, each with 8 GPUs. I turned on --save_on_each_node but I found pytorch_model.bin is only saved by process world_rank=0 (i.e. process 0), not world_rank=8 (i.e. process 8). Below are the relevant part of logs from process 0 and process 8. Process 0: ``` [INFO|trainer.py:2503] 2022-07-01 10:56:15,212 >> Saving model checkpoint to /tmp/azureml/cr/j/b531489bec744033bf0a29a63689aa44/cap/data-capability/wd/output_dir/checkpoint-10 [INFO|configuration_utils.py:446] 2022-07-01 10:56:15,213 >> Configuration saved in /tmp/azureml/cr/j/b531489bec744033bf0a29a63689aa44/cap/data-capability/wd/output_dir/checkpoint-10/config.json [INFO|modeling_utils.py:1660] 2022-07-01 10:56:15,353 >> Model weights saved in /tmp/azureml/cr/j/b531489bec744033bf0a29a63689aa44/cap/data-capability/wd/output_dir/checkpoint-10/pytorch_model.bin [INFO|tokenization_utils_base.py:2123] 2022-07-01 10:56:15,354 >> tokenizer config file saved in /tmp/azureml/cr/j/b531489bec744033bf0a29a63689aa44/cap/data-capability/wd/output_dir/checkpoint-10/tokenizer_config.json [INFO|tokenization_utils_base.py:2130] 2022-07-01 10:56:15,354 >> Special tokens file saved in /tmp/azureml/cr/j/b531489bec744033bf0a29a63689aa44/cap/data-capability/wd/output_dir/checkpoint-10/special_tokens_map.json [2022-07-01 10:56:20,868] [INFO] [engine.py:3229:save_16bit_model] Saving model weights to /tmp/azureml/cr/j/b531489bec744033bf0a29a63689aa44/cap/data-capability/wd/output_dir/checkpoint-10/pytorch_model.bin [2022-07-01 10:56:43,320] [INFO] [logging.py:69:log_dist] [Rank 0] Saving model checkpoint: /tmp/azureml/cr/j/b531489bec744033bf0a29a63689aa44/cap/data-capability/wd/output_dir/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt [2022-07-01 10:56:52,826] [INFO] [engine.py:3115:_save_zero_checkpoint] zero checkpoint saved /tmp/azureml/cr/j/b531489bec744033bf0a29a63689aa44/cap/data-capability/wd/output_dir/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_optim_states.pt [INFO|trainer.py:1761] 2022-07-01 10:56:52,858 >> ``` Process 8: ``` [INFO|trainer.py:2503] 2022-07-01 10:56:15,211 >> Saving model checkpoint to /tmp/azureml/cr/j/b531489bec744033bf0a29a63689aa44/cap/data-capability/wd/output_dir/checkpoint-10 [INFO|configuration_utils.py:446] 2022-07-01 10:56:15,213 >> Configuration saved in /tmp/azureml/cr/j/b531489bec744033bf0a29a63689aa44/cap/data-capability/wd/output_dir/checkpoint-10/config.json [INFO|modeling_utils.py:1660] 2022-07-01 10:56:15,347 >> Model weights saved in /tmp/azureml/cr/j/b531489bec744033bf0a29a63689aa44/cap/data-capability/wd/output_dir/checkpoint-10/pytorch_model.bin [INFO|tokenization_utils_base.py:2123] 2022-07-01 10:56:15,348 >> tokenizer config file saved in /tmp/azureml/cr/j/b531489bec744033bf0a29a63689aa44/cap/data-capability/wd/output_dir/checkpoint-10/tokenizer_config.json [INFO|tokenization_utils_base.py:2130] 2022-07-01 10:56:15,348 >> Special tokens file saved in 
/tmp/azureml/cr/j/b531489bec744033bf0a29a63689aa44/cap/data-capability/wd/output_dir/checkpoint-10/special_tokens_map.json [2022-07-01 10:56:52,831] [INFO] [engine.py:3115:_save_zero_checkpoint] zero checkpoint saved /tmp/azureml/cr/j/b531489bec744033bf0a29a63689aa44/cap/data-capability/wd/output_dir/checkpoint-10/global_step10/zero_pp_rank_8_mp_rank_00_optim_states.pt [INFO|trainer.py:1761] 2022-07-01 10:56:52,858 >> ``` The difference is that process 8 is missing these lines: ``` [2022-07-01 10:56:20,868] [INFO] [engine.py:3229:save_16bit_model] Saving model weights to /tmp/azureml/cr/j/b531489bec744033bf0a29a63689aa44/cap/data-capability/wd/output_dir/checkpoint-10/pytorch_model.bin [2022-07-01 10:56:43,320] [INFO] [logging.py:69:log_dist] [Rank 0] Saving model checkpoint: /tmp/azureml/cr/j/b531489bec744033bf0a29a63689aa44/cap/data-capability/wd/output_dir/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt ``` My guess is that engine.py->save_16bit_model did not run on process 8. Looking at the [source code from DeepSpeed](https://github.com/microsoft/DeepSpeed/blob/9b70ce56e7af89d5226f9b06ebe1137407f371dc/deepspeed/runtime/engine.py#L3165), I suspect that these lines were not run because dist.get_rank() is not equal to 0 on process 8. ``` if dist.get_rank() == 0: os.makedirs(save_dir, exist_ok=True) logger.info(f"Saving model weights to {path}") torch.save(state_dict, path) ``` ### Who can help? @stas00 @jeffra ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Not sure what to put here as the issue wouldn't happen unless more than one GPU node is used. ### Expected behavior The main process on each node, or process 0 and process 8 in my examples, both save model file pytorch_model.bin
07-01-2022 19:21:05
07-01-2022 19:21:05
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,989
closed
Added option for users to modify config parameter when calling pytess…
# What does this PR do? This is a feature addition to LayoutLMv3, a follow-up to the same feature added to [LayoutLMv2's feature extractor ](https://github.com/huggingface/transformers/pull/17733). It gives users the option to set the config parameter used by Tesseract when performing feature extraction, e.g. changing PSM levels during transcription by passing '--psm 10' as the config parameter when invoking image_to_data. It has been shown that changing the PSM value greatly influences the end result of LayoutLMv2/XLM/LMv3, and the best PSM value differs depending on the document formatting. Refer to: [PSM](https://github.com/tesseract-ocr/tesseract/issues/434) ```python pytesseract.image_to_data(image, lang=lang, output_type="dict", config="--psm 10") ``` Users can now set the tesseract config parameter during Processor initialization, like so: ```python processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", ocr_lang="eng", tesseract_config="--psm 5") ``` ## Before submitting - [❌] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [✔️] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [✔️] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [✔️] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [❌] Did you write any new necessary tests?
07-01-2022 18:42:37
07-01-2022 18:42:37
_The documentation is not available anymore as the PR was closed or merged._<|||||>@NielsRogge Added & tested LayoutLMV3 feature extractor<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hmm @kelvinAI for some reason I'm seeing "files changed = 0", did something go wrong here?<|||||>Hey @NielsRogge , the changes for this branch have already been merged into [this pr](https://github.com/huggingface/transformers/pull/17733) as discussed earlier. And since that PR have already been merged with main earlier that's probably why it's showing 0 files changed over here. Looking at the autogenerated documentation for [main](https://huggingface.co/docs/transformers/main/en/model_doc/layoutlmv3#transformers.LayoutLMv3FeatureExtractor) it seems that the changes for both LayoutLMv2 and LayoutLMv3 are already included :). Could you please help to verify? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Yes, so seems like you can close this PR.<|||||>Right, forgot to close this :)
transformers
17,988
closed
Exclude Databricks from notebook env only if the runtime is below 11.0
# What does this PR do? This PR adds additional check to `is_in_notebook()` to return `False` in Databricks only if the Databricks Runtime is below 11.0. In Databricks Runtime 11.0 and above, IPython kernel is the default Python execution engine. Therefore it should be more compatible with Jupyter notebook https://docs.databricks.com/notebooks/ipython-kernel.html https://github.com/huggingface/transformers/issues/17406#issuecomment-1172510084 ## Additional Info In Databricks Runtime 10.5 and earlier this is what the evaluation HTML output used to be. Therefore, we need to disable the notebook env. ![image](https://user-images.githubusercontent.com/5300554/176943749-bf922f69-6bea-4ae7-82ae-4487b37967ba.png) In Databricks Runtime 11.0 with the default IPython kernel, the output looks similar to that in Jupyter notebook. Therefore, I believe we no longer need to disable the notebook env. ![image](https://user-images.githubusercontent.com/5300554/176944081-56e7fc2d-f40c-4948-a572-53b553e0ffa6.png) What does the value of `DATABRICKS_RUNTIME_VERSION` environment look like in Databricks notebook? ![image](https://user-images.githubusercontent.com/5300554/176944270-6d78b2be-50d5-428e-b191-c4e5a46cb7d2.png) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
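For illustration, a minimal sketch of the kind of check described above (not the exact diff; the version parsing here is simplified): ```python import os


def _is_databricks_runtime_below_11() -> bool:
    # DATABRICKS_RUNTIME_VERSION looks like "11.0" or "10.4" inside Databricks notebooks.
    runtime = os.environ.get("DATABRICKS_RUNTIME_VERSION")
    if runtime is None:
        return False  # not running on Databricks at all
    try:
        return int(runtime.split(".")[0]) < 11
    except ValueError:
        return False
```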
07-01-2022 17:41:17
07-01-2022 17:41:17
Arg, our CI wasn't triggered properly. Could you try pushing an empty commit?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Sure. I pushed a dummy commit removing a full stop in the comment.<|||||>Didn't work again, there should be more tests spawning. You can do empty commits with `--allow-empty` to avoid having to do a real change. Can you try a couple of times to see if the test suite starts (there should be 18 checks total)?<|||||>Hmm still not sure why the CircleCI tests are skipped haha<|||||>Okay managed to trigger it by pushing your branch inside the main fork of the repo. Let's just check all is green before merging, thanks for your patience!<|||||>Thank you. Appreciate the quick feedback
transformers
17,987
closed
Shifting labels for causal LM when using label smoother
# What does this PR do? Fixes #17960 When training a causal LM such as GPT-2, the loss is computed within the model's `forward()` function and `labels` are shifted internally. However, if label smoothing is applied, the loss is computed in the Trainer's `compute_loss` function and `labels` are not shifted. This causes a misalignment of `labels` and the corresponding `input_ids`. This commit resolves that misalignment. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger
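For illustration, a minimal sketch of the shift applied before handing logits and labels to the label smoother (variable names are illustrative, not the exact Trainer code): ```python import torch


def shift_for_causal_lm(logits: torch.Tensor, labels: torch.Tensor):
    # Position t should predict token t+1: drop the last logit and the first label.
    shift_logits = logits[..., :-1, :].contiguous()
    shift_labels = labels[..., 1:].contiguous()
    return shift_logits, shift_labels
```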
07-01-2022 17:25:10
07-01-2022 17:25:10
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,986
closed
TF: GPT-J compatible with XLA generation
# What does this PR do? This PR modifies TF GPT-J so as to be compatible with XLA generation. It borrows the new code from Flax -- in essence, instead of computing the embedded positions (`sincos`) at each call from the size of the sequence (which could be obtained from the size of the past), the model now pre-computes the embedded positions and gathers them using the `position_ids`. ⚠️ The integration tests are disabled with `@tooslow`, due to the size of the model. I've reworked the tests BEFORE touching GPT-J code, to test all needed features correctly. All but the XLA test were passing before GPT-J was changed, and all tests pass after the changes. We still have two XLA tests being run in CI frequently (`test_xla_generate_fast` and `test_xla_generate_slow`), as well as a couple of generic generate tests -- they just don't use the trained model weights.
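For illustration, a rough sketch of the idea in TensorFlow (shapes and names are illustrative, not the exact model code): ```python import tensorflow as tf


def make_sincos_table(max_positions: int, dim: int) -> tf.Tensor:
    # Precompute the sin/cos rotary table once, for every possible position.
    inv_freq = 1.0 / (10000.0 ** (tf.range(0, dim, 2, dtype=tf.float32) / dim))
    freqs = tf.einsum("i,j->ij", tf.range(max_positions, dtype=tf.float32), inv_freq)
    return tf.concat([tf.sin(freqs), tf.cos(freqs)], axis=-1)


# At call time, instead of slicing by the (dynamic) past length, gather the rows
# that correspond to the explicit position_ids, which is XLA-friendly:
# sincos = tf.gather(table, position_ids)  # (batch, seq_len, dim)
```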
07-01-2022 17:18:33
07-01-2022 17:18:33
Related issue: #17935 <|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
17,985
closed
Restore original task in test_warning_logs
# What does this PR do? `test_warning_logs` modifies `PIPELINE_REGISTRY` via `PIPELINE_REGISTRY.register_pipeline(alias, {})`, which fails the tests in `TextClassificationPipelineTests`. This PR restores the original task at the end of `test_warning_logs`. cc @aarnphm
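For context, a hedged sketch of the general save/restore pattern (the registry is treated here as a plain dict of task entries, which is a simplification of the real `PIPELINE_REGISTRY` internals): ```python import copy


def run_with_registry_restored(registry: dict, task: str, test_body):
    # Snapshot the entry the test is about to overwrite, run the test body,
    # then put the original entry back so later tests see an untouched registry.
    original = copy.deepcopy(registry.get(task))
    try:
        test_body()
    finally:
        if original is None:
            registry.pop(task, None)
        else:
            registry[task] = original
```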
07-01-2022 17:04:17
07-01-2022 17:04:17
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,984
closed
Exception: EOF while parsing a string at line 1 column 8862550
### System Info - `transformers` version: 4.20.1 also tried with 4.15.0 - Platform: Linux-5.13.0-48-generic-x86_64-with-glibc2.31 - Python version: 3.9.12 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0+cu102 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @SaulLu @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce the behavior: 1. ```git clone https://github.com/aiforsec/CyNER.git``` 2. Go into the cloned repository folder 3. ```pip install -r requirements.txt``` 4. open ```CyNER Demo.ipynb``` notebook 5. Run first and the very last cell for training. Getting results like this: https://github.com/aiforsec/CyNER/issues/4 ### Expected behavior The model starts training properly and completes training.
07-01-2022 11:59:33
07-01-2022 11:59:33
Hi @MrAsimZahid , I followed the first 4 steps to try to reproduce your problem. Unfortunately I get the error bellow (I have spacy installed so it seems that it is necessary to be more restrictive on the versions in the requirements file - see my environment [here](https://github.com/huggingface/transformers/files/9041404/env.txt)): ```bash Traceback (most recent call last): File "notebook.py", line 5, in <module> import cyner File "/home/lucile_huggingface_co/repos/CyNER/cyner/__init__.py", line 1, in <module> from .cyner import CyNER File "/home/lucile_huggingface_co/repos/CyNER/cyner/cyner.py", line 1, in <module> from .entity_extraction_factory import EntityExtractionFactory as eef File "/home/lucile_huggingface_co/repos/CyNER/cyner/entity_extraction_factory.py", line 3, in <module> from .spacy_ner import Spacy File "/home/lucile_huggingface_co/repos/CyNER/cyner/spacy_ner.py", line 1, in <module> import spacy File "/home/lucile_huggingface_co/anaconda3/envs/cyner/lib/python3.8/site-packages/spacy/__init__.py", line 15, in <module> from .cli.info import info # noqa: F401 File "/home/lucile_huggingface_co/anaconda3/envs/cyner/lib/python3.8/site-packages/spacy/cli/__init__.py", line 17, in <module> from .debug_diff import debug_diff # noqa: F401 File "/home/lucile_huggingface_co/anaconda3/envs/cyner/lib/python3.8/site-packages/spacy/cli/debug_diff.py", line 10, in <module> from .init_config import init_config, Optimizations File "/home/lucile_huggingface_co/anaconda3/envs/cyner/lib/python3.8/site-packages/spacy/cli/init_config.py", line 8, in <module> from jinja2 import Template File "/home/lucile_huggingface_co/anaconda3/envs/cyner/lib/python3.8/site-packages/jinja2/__init__.py", line 12, in <module> from .environment import Environment File "/home/lucile_huggingface_co/anaconda3/envs/cyner/lib/python3.8/site-packages/jinja2/environment.py", line 25, in <module> from .defaults import BLOCK_END_STRING File "/home/lucile_huggingface_co/anaconda3/envs/cyner/lib/python3.8/site-packages/jinja2/defaults.py", line 3, i [env.txt](https://github.com/huggingface/transformers/files/9041403/env.txt) n <module> from .filters import FILTERS as DEFAULT_FILTERS # noqa: F401 File "/home/lucile_huggingface_co/anaconda3/envs/cyner/lib/python3.8/site-packages/jinja2/filters.py", line 13, in <module> from markupsafe import soft_unicode ImportError: cannot import name 'soft_unicode' from 'markupsafe' (/home/lucile_huggingface_co/anaconda3/envs/cyner/lib/python3.8/site-packages/markupsafe/__init__.py) ``` If you ever find a way to isolate the bug related to transformers from the rest of your software solution, it could surely help us to help you more quickly. :hugs: <|||||>Hi @SaulLu, Thank you for taking the time to review the bug. 
I am using this ```pip install MarkupSafe==2.0.1```<|||||>I still have some issues: NLTK wasn't installed and now I have: ```bash File "/home/lucile_huggingface_co/anaconda3/envs/cyner/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/lucile_huggingface_co/anaconda3/envs/cyner/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/home/lucile_huggingface_co/.vscode-server/extensions/ms-python.python-2022.8.1/pythonFiles/lib/python/debugpy/__main__.py", line 45, in <module> cli.main() File "/home/lucile_huggingface_co/.vscode-server/extensions/ms-python.python-2022.8.1/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 444, in main run() File "/home/lucile_huggingface_co/.vscode-server/extensions/ms-python.python-2022.8.1/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 285, in run_file runpy.run_path(target_as_str, run_name=compat.force_str("__main__")) File "/home/lucile_huggingface_co/anaconda3/envs/cyner/lib/python3.8/runpy.py", line 265, in run_path return _run_module_code(code, init_globals, run_name, File "/home/lucile_huggingface_co/anaconda3/envs/cyner/lib/python3.8/runpy.py", line 97, in _run_module_code _run_code(code, mod_globals, init_globals, File "/home/lucile_huggingface_co/anaconda3/envs/cyner/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/home/lucile_huggingface_co/repos/CyNER/notebook.py", line 48, in <module> entities = model3.get_entities(text) File "/home/lucile_huggingface_co/repos/CyNER/cyner/cyner.py", line 60, in get_entities entities = model.get_entities(text) File "/home/lucile_huggingface_co/repos/CyNER/cyner/flair_ner.py", line 24, in get_entities for x in pred['entities']: KeyError: 'entities' ``` I'm sorry, I don't really have time to look for (maybe obvious) solutions to make this notebook work. As said before, it would help me a lot if you could isolate the problem you are having with transformers. :slightly_smiling_face: <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,983
closed
Fix typo in perf_train_gpu_one.mdx
Fix typo in perf_train_gpu_one.mdx docs where "exit" should be "exist". <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-01-2022 10:22:40
07-01-2022 10:22:40
_The documentation is not available anymore as the PR was closed or merged._<|||||>hi, @sgugger do you mind helping to review this PR? Thanks<|||||>Thanks for fixing!
transformers
17,982
closed
WIP Speecht5
# What does this PR do? This PR is a WIP for adding Speech T5 to HF referenced here. https://github.com/huggingface/transformers/issues/17569 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-01-2022 09:45:14
07-01-2022 09:45:14
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,981
closed
Add ONNX support for YOLOS
# What does this PR do? Make it possible to export the YOLOS model to ONNX format. Linked to https://github.com/huggingface/transformers/issues/16308
07-01-2022 06:55:18
07-01-2022 06:55:18
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks, I've removed the file and squashed the commits.
transformers
17,980
closed
Grammatically updated the Readme file
# What does this PR do? Grammatically updated the README file. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
07-01-2022 05:18:10
07-01-2022 05:18:10
_The documentation is not available anymore as the PR was closed or merged._<|||||>okay thankyou
transformers
17,979
closed
Avoid flaky trainer test failures
# What does this PR do? Avoid flaky trainer test failures on multi-GPU setups. I ran the test 1000 times on single- and multi-GPU VMs: - single GPU: all 0.0 - multi GPU: mostly 0.0, but the largest difference can reach 1e-4.
07-01-2022 02:58:36
07-01-2022 02:58:36
_The documentation is not available anymore as the PR was closed or merged._<|||||>> I assume that 1e-7 and 1e-6 were still flakey, right? Just concerned with very low margin where it could potentially hide issues. > > But otherwise looks good. Thank you for fixing, @ydshieh From Slack report, I could see a few times with `2e-6`. On my manual tests (1000 times), multi GPUs could reach `1e-4`. I choose `1e-5`, thinking the test failure would be super rare.<|||||>Super, thank you for sharing the details of why this value was chosen, @ydshieh! All is good.
transformers
17,978
closed
Training with fp16 precision gives NaN in LongT5
### System Info - `transformers` version: 4.10.0.dev0 - Platform: Linux-3.10.0-1160.62.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.13 - PyTorch version (GPU?): 1.9.0+cu111 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm currently running the [scrolls_benchmark](https://github.com/tau-nlp/scrolls). I'm interested to see the performance of longt5 model on scrolls, so I changed the model name to [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) and run training with fp16 enabled (If I run with fp32, I get CUDA OOM errors). However, the output loss is always nan. I googled for fixes and found this post: [t5-fp16-fixed](https://discuss.huggingface.co/t/t5-fp16-issue-is-fixed/3139). I searched in the transformers repo and found that the [modelling_longt5](https://github.com/huggingface/transformers/blob/main/src/transformers/models/longt5/modeling_longt5.py) file doesn't seem to incorporate the `clamp_value` change. I wonder if this is the problem that fp16 is not working in longt5? And if so, is there a way to fix it by a similar approach like what you guys have done for t5? Thank you very much! fyi: You probably noticed that the transformers version is 4.10.0 which does not have longt5. I manually added the longt5 files in a forked scrolls repo here [longt5_folder](https://github.com/Leonard907/scrolls_ilcc/tree/test_add_longt5_folder). It indeed works properly under a small parameter setting. ### Expected behavior longt5 model not producing nan loss on fp16
07-01-2022 01:15:18
07-01-2022 01:15:18
This is due to `-1e10` is used as attention mask. A fix similar to what has done in #17306 should work. Would you like to open a PR for LongT5, @Leonard907 ? <|||||>@ydshieh Sorry for the late reply, I ran a few experiments and found that fixing both the clamp value and attention mask seems to work (I should mention that at first I only fixed the attention mask and still gives nan. Then I added clamp value and there is no nan). For more detail, there are lines like [longt5 704](https://github.com/huggingface/transformers/blob/main/src/transformers/models/longt5/modeling_longt5.py#L704) that needs to be set to `torch.finfo(dtype).min`. For the clamp value part, my changes are identical to `modeling_t5.py` since I found the forward function to be similar. I did not read the code fully hence my modifications are hacky (i.e. using direct float16 attribute). Shall I publish a branch and open a PR for further review? This is my first time doing this so if there is a guide on how to publish changes and open PR I would be appreciated. Thanks!<|||||>I had also NaN loss with `google/mt5-small` on PyTorch<|||||>@Leonard907 @ZJaume Could you provide code snippets (training script) that reproduce this issue? I am a bit surprised that `torch.finfo(dtype).min` is not enough to avoid this issue. But what have done in T5 seems reasonable!<|||||>In general T5 just doesn't work well with `fp16` since it was trained on bfloat16 and it seems like the model requires quite large values which are not supported by fp16. See: https://github.com/huggingface/transformers/issues/14189 Note that LongT5, MT5, T5, ByT5 all use the same architecture more or less <|||||>@ydshieh It was a bit hard to provide a training script as I'm experimenting on the SCROLLS dataset, there are a lot of setups and I also made some modifications myself. Probably one can reference this discussion [stancld-longt5](https://huggingface.co/Stancld/longt5-tglobal-large-16384-pubmed-3k_steps/discussions/2#62b047bb53d878042fab1c3e) and use the script by changing the `bf16` to `fp16`, but I haven't given it a try. @patrickvonplaten Thank you for mentioning more issues regarding this. I have trained the model using `clamp_value` and `attention_mask` fix and the results are poor. Without using fp16 I can actually get better results even using fewer parameters. Nevertheless, it's just a guess and I need to experiment on that more, but I thought it might be worth knowing. FYI: How can I open a PR? I tried making a branch from main, but when I tried to push it I found that I had no permission to do so. For a workaround, I uploaded my changed file to one of my own repository here: [longt5_fix](https://github.com/Leonard907/temp/blob/main/longt5_fix.py). My fixes are hardcode, so it should be changed for general purposes. <|||||>Please see https://huggingface.co/docs/transformers/contributing#start-contributing-pull-requests (in short: you should create a branch, push to your fork, and open an PR)<|||||>I'm actually against the `clamp` hack here. We've seen multiple times that it's not sufficient to correct the fp16 problem in T5 so I doubt it'll be sufficient here. However adding `clamp` at multiple positions in the code slows down the forward pass and makes the code less readable. @patil-suraj @stas00 what do you think here?<|||||>Based on my observations, using the `clamp_value` fix produces much worse results than just using fp32 under same configurations. 
And with the further comment from @patrickvonplaten , I realized that this fix also causes the training to take more time (I need around 1hr to generate predictions on test set with this fix, while using fp32 I only need around 20-30min). <|||||>(not promoting `clamp_value` here) I actually think, **if** we want to add `clamp_value`, it should be done only when the model is in training mode. (if we do so, this also solves the slow down issue) (in inference/eval mode, large values almost implies something is wrong in the model) <|||||>Indeed, the clamping works around the overflow but it's not really a good mathematical approach. I suppose the model learns to deal with this type of penalty, but it doesn't make things smooth for the training. 1. the OP in https://github.com/huggingface/transformers/pull/10956 suggests penalizing large activation as it was done in the original code. it's done more consistently than clamping and probably easier for the model to learn. 2. Then a group of people developed a weight scaling approach (in addition to normal mixed precision scaling) https://github.com/huggingface/transformers/pull/10956#issuecomment-961030728 I also proposed removing clamping in t5: https://github.com/huggingface/transformers/pull/10956#issuecomment-951139733<|||||>I have the same problem when training with bf16 set to True<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
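For illustration, a minimal sketch of the two changes discussed in this thread, the dtype-aware attention mask and the fp16 clamp used in modeling_t5.py; the function names are assumptions, not the actual LongT5 code:

```python
import torch

# (1) Build the additive attention mask with the dtype's own minimum instead of -1e10,
#     which overflows in fp16 (the fix referenced from #17306).
def extended_attention_mask(mask, dtype):
    # mask: (batch, seq_len) with 1 for real tokens and 0 for padding
    mask = mask[:, None, None, :].to(dtype)
    return (1.0 - mask) * torch.finfo(dtype).min

# (2) The clamp used in modeling_t5.py that the thread refers to, applied to a block's output.
def clamp_fp16(hidden_states):
    if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any():
        clamp_value = torch.finfo(hidden_states.dtype).max - 1000
        hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
    return hidden_states
```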
transformers
17,977
closed
rust_model.ot not saved by save_pretrained()
### System Info I am running my experiments on a machine without internet, so I need to save the tokenizers first. However, I found that with the `save_pretrained()` method, the `rust_model.ot` is not saved to the directory. The model is `deberta-v3-base` and the version is 4.20.1. ### Who can help? @LysandreJik @SaulLu ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` ckpt = 'microsoft/deberta-v3-base' tokz = AutoTokenizer.from_pretrained(ckpt, use_fast=True) model = AutoModel.from_pretrained(ckpt) tokz.save_pretrained(ckpt) model.save_pretrained(ckpt) $ls > added_tokens.json pytorch_model.bin spm.model > config.json special_tokens_map.json tokenizer_config.json ``` ### Expected behavior rust_model.ot saved by save_pretrained()
07-01-2022 01:07:34
07-01-2022 01:07:34
Rust models are not supported by `transformers`. I believe the models you're talking about are the ones coming from @guillaume-be's [rust-bert](https://github.com/guillaume-be/rust-bert) repository.<|||||>@LysandreJik Thanks. I got it. I was thinking about https://huggingface.co/microsoft/deberta-v3-base/blob/main/rust_model.ot.
transformers
17,976
closed
Rename second input dimension for ONNX-supported CV models
# What does this PR do? The second input dimension of `pixel_values` for CV models with ONNX support is currently named "sequence". This PR renames it to "num_channels". Also: - MobileViT is added to the list of models to test in `tests/onnx/test_onnx_v2.py` - for DETR, the second input dimension is removed for `pixel_masks` because there is actually no channel dimension (`batch` x `height` x `width`) - for MobileViT, a second input dimension is added to `pixel_values` so that it follows the same pattern as other vision models ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
06-30-2022 21:47:57
06-30-2022 21:47:57
As discussed with @NielsRogge, it does not make much sense to call the second input dimension of `pixel_values` "sequence" while it is actually the number of channels. This can be misleading if one takes a closer look at the ONNX model (e.g. with Netron).<|||||>I re-ran all slow tests and they all passed.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> I am not familiar enough with ONNX to evaluate if this breaks exiting code in any way, so could you please tell us whether a user will have to change anything in their code due to this change? Axe names do not matter for exporting the model or performing inference with it. The only use case I could think of is if one has a script to parse the ONNX graph and does any kind of operation involving axe names, but this would rely on tools that are not provided by Transformers anyway. What do you think @lewtun @michaelbenayoun @mfuntowicz ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Gently pinging @lewtun here<|||||>Pinging @michaelbenayoun and @JingyaHuang as Lewis is off for a few weeks.<|||||>Yes, I do not think it is a breaking change either. Same question as @JingyaHuang : do we need dynamic axes for the channels?<|||||>@JingyaHuang @michaelbenayoun I guess some models can deal with both grayscale and RGB images. I let @NielsRogge confirm here.<|||||>Yes technically the channel dimension can be 1 for greyscale images, 3 for RGB images, and 4 for RGBA images.<|||||>Thank you! I was super confused while working on #18587, where I don't know if change to better names will break things or not.
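For illustration, a sketch of how the renamed axis shows up in an `OnnxConfig`; the subclass name is an assumption, only the axis mapping mirrors what the PR changes:

```python
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class MyVisionOnnxConfig(OnnxConfig):  # illustrative subclass, not the actual ViT/DETR config
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # Axis names are purely cosmetic for export and inference, but they show up in
        # tools like Netron, hence "num_channels" instead of "sequence".
        return OrderedDict(
            [("pixel_values", {0: "batch", 1: "num_channels", 2: "height", 3: "width"})]
        )
```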
transformers
17,975
closed
Grid search ProgressCallback leads to encoding issue on Windows
### System Info - `transformers` version: 4.20.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.8 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.12.0+cu116 (True) ### Who can help? @richardliaw @amogkam ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When I use `ray[tune]` grid search on Windows, the script crashes due to encoding issues. The problem seems to originate in tqdm - so when the progress bar is reaching a certain point (right after 1%) the script will crash. The reason is that to plot the progress, TQDM makes use of different characters - some small/large blocks. But apparently not all of these work well on Windows. ``` (pid=9060, ip=127.0.0.1, repr=_objective) File "python\ray\_raylet.pyx", line 665, in ray._raylet.execute_task File "python\ray\_raylet.pyx", line 669, in ray._raylet.execute_task File "python\ray\_raylet.pyx", line 616, in ray._raylet.execute_task.function_executor File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\ray\_private\function_manager.py", line 675, in actor_method_executor return method(__ray_actor, *args, **kwargs) File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\ray\util\tracing\tracing_helper.py", line 462, in _resume_span return method(self, *_args, **_kwargs) File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\ray\tune\trainable.py", line 360, in train result = self.step() File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\ray\util\tracing\tracing_helper.py", line 462, in _resume_span return method(self, *_args, **_kwargs) File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\ray\tune\function_runner.py", line 404, in step self._report_thread_runner_error(block=True) File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\ray\util\tracing\tracing_helper.py", line 462, in _resume_span return method(self, *_args, **_kwargs) File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\ray\tune\function_runner.py", line 574, in _report_thread_runner_error raise e File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\ray\tune\function_runner.py", line 277, in run self._entrypoint() File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\ray\tune\function_runner.py", line 349, in entrypoint return self._trainable_func( File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\ray\util\tracing\tracing_helper.py", line 462, in _resume_span return method(self, *_args, **_kwargs) File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\ray\tune\function_runner.py", line 645, in _trainable_func output = fn() File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\transformers\integrations.py", line 288, in dynamic_modules_import_trainable return trainable(*args, **kwargs) File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\ray\tune\utils\trainable.py", line 410, in inner trainable(config, **fn_kwargs) File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\transformers\integrations.py", line 189, in _objective 
local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial) File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\transformers\trainer.py", line 1409, in train return inner_training_loop( File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\transformers\trainer.py", line 1726, in _inner_training_loop self.control = self.callback_handler.on_step_end(args, self.state, self.control) File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\transformers\trainer_callback.py", line 369, in on_step_end return self.call_event("on_step_end", args, state, control) File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\transformers\trainer_callback.py", line 388, in call_event result = getattr(callback, event)( File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\transformers\trainer_callback.py", line 472, in on_step_end self.training_bar.update(state.global_step - self.current_step) File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\tqdm\std.py", line 1256, in update self.refresh(lock_args=self.lock_args) File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\tqdm\std.py", line 1361, in refresh self.display() File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\tqdm\std.py", line 1509, in display self.sp(self.__str__() if msg is None else msg) File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\tqdm\std.py", line 350, in print_status fp_write('\r' + s + (' ' * max(last_len[0] - len_s, 0))) File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\tqdm\std.py", line 343, in fp_write fp.write(_unicode(s)) File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\tqdm\utils.py", line 145, in inner return func(*args, **kwargs) File "C:\Users\bramv\.virtualenvs\transformers-finetuner-ah-81wJc\lib\site-packages\ray\tune\utils\util.py", line 228, in write self.stream2.write(*args, **kwargs) File "C:\Users\bramv\AppData\Local\Programs\Python\Python38\lib\encodings\cp1252.py", line 19, in encode return codecs.charmap_encode(input,self.errors,encoding_table)[0] UnicodeEncodeError: 'charmap' codec can't encode character '\u258f' in position 6: character maps to <undefined> ``` The reason that I post this in `transformers` is that I have never had any issues with the progress bars in the rest of the library, but something odd is going on with the ones that are present during grid search. I don't know the integration code base enough to find what might be causing this. Interestingly, in the rest of the library, a tqdm progress bar always takes up the whole screen and no issues happen (because for progress, tqdm can use large block characters). But for grid search, the progress bars seem a fixed, small width. That's why it requires different characters to plot the progress (smaller increments/blocks -> different characters). So if we can change the TQDM that is being used during grid search to be the same as the other ones in the library, than there should not be any issues I believe. ### Expected behavior No encoding issues, like in the rest of the library. This issue only occurs with grid search.
06-30-2022 20:02:15
06-30-2022 20:02:15
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This is essentially a Ray issue. The solution is to set the environment variable that forces UTF-8 encoding. See details here: https://peps.python.org/pep-0540/.
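A hedged sketch of the workaround from the last comment, assuming the Ray workers inherit the driver's environment so they start in UTF-8 mode (PEP 540):

```python
import os

# Force Python's UTF-8 mode before Ray spawns its worker processes, so the tqdm bars
# drawn inside the trial workers don't go through Windows' cp1252 codec.
os.environ["PYTHONUTF8"] = "1"

import ray  # noqa: E402  (imported after setting the variable on purpose)

ray.init()
```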
transformers
17,974
closed
OpenAI's CLIP model not working with PyTorch 1.12 in some environments
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.170+-x86_64-with-glibc2.31 - Python version: 3.9.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): 2.7.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @patil-suraj ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The following works as expected with torch 1.11, but generates the below error in version 1.12: ``` python import io import requests import torch from PIL import Image from transformers import CLIPModel, CLIPProcessor def load_image(bytes, max_width=100, max_height=100, force_rgb=True): """Create and optionally resize an image from bytes.""" img = Image.open(io.BytesIO(bytes)) width, height = img.size if width > max_width or height > max_height: img.thumbnail(size=(max_width, max_height)) if img.mode != "RGB" and force_rgb: img = img.convert("RGB") return img urls = [ "https://placekitten.com/408/287", "https://placekitten.com/200/138" ] images = [load_image(requests.get(url).content) for url in urls] name = "openai/clip-vit-base-patch32" proc = CLIPProcessor.from_pretrained(name) model = CLIPModel.from_pretrained(name) model.to(torch.device("cuda")) inputs = proc(images=images, return_tensors="pt").to(torch.device("cuda")) embeddings = model.get_image_features(**inputs).detach().cpu().numpy() ``` This results in: ``` log --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Input In [1], in <cell line: 41>() 38 model.to(torch.device("cuda")) 40 inputs = proc(images=images, return_tensors="pt").to(torch.device("cuda")) ---> 41 embeddings = model.get_image_features(**inputs).detach().cpu().numpy() RuntimeError: CUDA error: an illegal memory access was encountered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. 
``` Here is how I test the different versions, keep all else the same: ``` !pip uninstall -y torch torchvision torchaudio !pip install --no-cache-dir torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113 # !pip install --no-cache-dir torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113 ``` And here some more info about the hardware environment: ``` log ┌─────────── System Report ────────────┐ │ Linux │ │ Linux-5.4.170+-x86_64-with-glibc2.31 │ │ │ │ CPUs │ │ ┌──────────┬────────┐ │ │ │ cores │ # │ │ │ ├──────────┼────────┤ │ │ │ logical │ 2 │ │ │ │ physical │ 1 │ │ │ │ usable │ [0, 1] │ │ │ └──────────┴────────┘ │ │ RAM │ │ ┌───────────┬──────┐ │ │ │ kind │ gb │ │ │ ├───────────┼──────┤ │ │ │ total │ 7.3 │ │ │ │ available │ 5.6 │ │ │ │ used │ 1.5 │ │ │ │ free │ 3.1 │ │ │ │ active │ 2.7 │ │ │ │ inactive │ 1.1 │ │ │ │ buffers │ 0.4 │ │ │ │ cached │ 2.4 │ │ │ │ shared │ 0.0 │ │ │ │ slab │ 0.3 │ │ │ └───────────┴──────┘ │ │ Disk (/home) │ │ ┌───────┬──────┐ │ │ │ kind │ gb │ │ │ ├───────┼──────┤ │ │ │ total │ 48.9 │ │ │ │ used │ 2.3 │ │ │ │ free │ 46.6 │ │ │ └───────┴──────┘ │ │ GPU │ │ ┌────────────────┬─────────────────┐ │ │ │ property │ value │ │ │ ├────────────────┼─────────────────┤ │ │ │ name │ Tesla K80 │ │ │ │ driver_version │ 450.119.04 │ │ │ │ vbios_version │ 80.21.25.00.04 │ │ │ │ memory.total │ 11441 MiB │ │ │ │ memory.free │ 11438 MiB │ │ │ │ memory.used │ 3 MiB │ │ │ └────────────────┴─────────────────┘ │ │ Packages │ │ ┌──────────────┬──────────────┐ │ │ │ Package │ Version │ │ │ ├──────────────┼──────────────┤ │ │ │ numpy │ 1.22.0 │ │ │ │ torch │ 1.12.0+cu113 │ │ │ │ transformers │ 4.20.1 │ │ │ └──────────────┴──────────────┘ │ └──────────────────────────────────────┘ ``` ### Expected behavior The code should run without CUDA errors.
06-30-2022 19:57:46
06-30-2022 19:57:46
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @buhrmann, there are some known issues with torch 1.12. Torch 1.12.1 was released 4 days ago, do you get the same issues with it?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,973
closed
XLA train step fixes
This PR makes a bunch of changes to the TF codebase to improve XLA support, in preparation for our upcoming big TF release. The goal is to allow users to use `jit_compile` on the vast majority of our models, which should yield large performance improvements for TF. In particular: - Rewrites to the `train_step` and `test_step` so that any mutable Python input dicts are not modified in the step. This was a bad idea anyway, but it causes particular problems with XLA, which is very functional and hates side effects, like JAX. - Rewrites to the common `hf_compute_loss` functions to ensure that static shapes are maintained throughout, so that XLA compilation is possible. - Add a test to ensure that we can still fit models when XLA compilation is used. XLA compilation is quite expensive, which makes this test quite slow, so it's restricted to `core` models for now and tagged as `@slow`. Left to do: - [x] Fix XLA-incompatible model-specific `hf_compute_loss` functions. On a quick search it looked like there were 4-5 of these, so it shouldn't take too long. Any use of `tf.boolean_mask` is a surefire sign that XLA compilation will break, because output shapes become data-dependent. - [x] See if there's a way to test non-core models for XLA fit support without crippling performance. (No, but we're using the XLA losses in non-XLA tests by default, so that partially tests it for all models)
06-30-2022 19:35:35
06-30-2022 19:35:35
_The documentation is not available anymore as the PR was closed or merged._<|||||>I'd be interested in having @ydshieh's review as well<|||||>> If I understand correctly, this changes the loss returned by TensorFlow models from a matrix to a vector, which is obviously breaking. While I don't know how many TensorFlow users rely on the current structure of the loss, we at least need to have a flag (probably `use_xla=False`) to enable the previous behavior for users who relied on it. > > Could you confirm first that my understanding is correct? I believe both prev. and current version return a vector. The difference is on the size: - prev: number of active tokens (non-padding tokens) - now: batch size<|||||>@ydshieh I was completely wrong earlier - `SparseCategoricalCrossentropy` only returns `nan` for invalid labels when running on GPU! On CPU, inputs are validated and TensorFlow throws an error. I'll rewrite my loss functions to not depend on that behaviour, and change the loss computation tests to mask some positions to ensure that gets tested, so I don't miss anything like this in future.<|||||>@Rocketknight1 https://github.com/huggingface/transformers/blob/f17136c80dfec2a78890d012105634079531dcd9/src/transformers/modeling_tf_utils.py#L211-L213 As in an earlier comment, I think this loss value is incorrect. Imagine we have 2 sequences of length 100. - 1st sentence: 1 active token + 99 pad tokens (somehow non-sense 😄 ) - 2nd sentence: 20 active token + 80 pad tokens In this latest version, the unique token in sentence 1 get an weight (when computing the loss) 20 times larger than each token in the 2nd sentence. (As you first average the loss along sequence dimension). Furthermore, this doesn't correspond to PyTorch's computation, which leads to test failures (I didn't check in detail if this is the cause, but I believe it is). **Q: Is there any reason we don't want to sum each token's loss value?** cc @gante @patrickvonplaten @sgugger <|||||>Hi @ydshieh I'm sorry, I think you're right there! Let me investigate and see if I can make a PR to weight tokens properly, which should hopefully resolve the issue.<|||||>@patrickvonplaten Agreed! I fixed that in https://github.com/huggingface/transformers/pull/18013
transformers
17,972
closed
[Do NOT merge 🙏 ] Skip a particular exception in `test_sample_generate`
# What does this PR do? A continuation of #17937 to fix a CI failure ``` # sample probs = nn.functional.softmax(next_token_scores, dim=-1) > next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1) E RuntimeError: probability tensor contains either `inf`, `nan` or element < 0 ``` As @patrickvonplaten pointed out, when a broken generation happens due to all `-inf` scores along the vocab dimension, there is nothing we can do. This is likely to happen only with random models, however. Let's say goodbye to this flaky situation!
06-30-2022 19:07:27
06-30-2022 19:07:27
_The documentation is not available anymore as the PR was closed or merged._<|||||>Let's maybe put this PR on hold for 1,2 weeks to see if #18053 has solved the issue or not :-)<|||||>OK, thank you for taking time on this.
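A sketch of the guard this PR proposes; the helper name is an assumption and the exact implementation may differ:

```python
import pytest
import torch

def sample_or_skip(model, input_ids):
    # A broken random model can assign -inf to every vocab entry, which makes
    # torch.multinomial raise; treat that one failure mode as a skip, not an error.
    try:
        return model.generate(input_ids, do_sample=True, max_new_tokens=5)
    except RuntimeError as exc:
        if "probability tensor contains either" in str(exc):
            pytest.skip("All scores were -inf/nan for this random model; nothing to sample.")
        raise
```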
transformers
17,971
closed
TrainingArguments does not support `mps` device (Mac M1 GPU)
### System Info - `transformers` version: 4.21.0.dev0 - Platform: macOS-12.4-arm64-arm-64bit - Python version: 3.8.9 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```bash export TASK_NAME=wnli python run_glue.py \ --model_name_or_path bert-base-cased \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir /tmp/$TASK_NAME/ ``` ### Expected behavior When running the `Trainer.train` on a machine with an MPS GPU, it still just uses the CPU. I expected it to use the MPS GPU. This is supported by `torch` in the newest version 1.12.0, and we can check if the MPS GPU is available using `torch.backends.mps.is_available()`. It seems like the issue lies in the [`TrainingArguments._setup_devices` method](https://github.com/huggingface/transformers/blob/49cd736a288a315d741e5c337790effa4c9fa689/src/transformers/training_args.py#L1266), which doesn't appear to allow for the case where `device = "mps"`.
06-30-2022 18:55:49
06-30-2022 18:55:49
A simple hack fixed the issue, by simply overwriting the `device` attribute of `TrainingArguments`: ```python import torch from transformers import TrainingArguments class TrainingArgumentsWithMPSSupport(TrainingArguments): @property def device(self) -> torch.device: if torch.cuda.is_available(): return torch.device("cuda") elif torch.backends.mps.is_available(): return torch.device("mps") else: return torch.device("cpu") ``` This at least shows that it might just be the aforementioned `_setup_devices` that needs changing.<|||||>Another observation: Some PyTorch operations have not been implemented in `mps` and will throw an error. One way to get around that is to set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1`, which will fallback to CPU for these operations. It still throws a `UserWarning` however.<|||||>This is not supported yet, as this has been introduced by PyTorch 1.12, which also breaks all speech models due to a regression there. We will look into the support for Mac M1 GPUs once we officially support PyTorch 1.12 (probably won't be before they do a patch 1.12.1).<|||||>@sgugger And it's not possible to add a `use_mps` flag to `TrainingArguments`, which just requires PyTorch 1.12.x, alongside a warning of some kind? Or is that too unstable?<|||||>I have no idea, since we haven't tried and tested it out yet. And as I said our whole CI is constrained by PyTorch < 1.12 right now, so until that pin is dropped we can't test the integration :-). You can certainly try it on your own fork in the meantime!<|||||>I'm seeing this odd behavior. I'm trying a code from [here](https://github.com/nlp-with-transformers/notebooks/blob/main/02_classification.ipynb), adapted with @saattrupdan solution. It runs but the problem is that I'm getting very different results when using `cpu` and `mps`. With `device_type = "cpu"` I get the expected results (`f1=0.92`) but when using `device_type = "mps"` I'm getting a very low f1 (~0.3), likely as a result a random guess. 
```python device_type = "mps" device = torch.device(device_type) # Tokenizer from transformers import AutoTokenizer model_ckpt = "distilbert-base-uncased" tokenizer = AutoTokenizer.from_pretrained(model_ckpt) def tokenize(batch): return tokenizer(batch["text"],padding = True,truncation = True) # Load Data from datasets import list_datasets from datasets import load_dataset emotions = load_dataset("emotion") emotions_encoded = emotions.map(tokenize,batched = True,batch_size = None) #Model from transformers import AutoModelForSequenceClassification num_labels = 6 model = (AutoModelForSequenceClassification.from_pretrained(model_ckpt,num_labels = num_labels).to(device)) #Metric from sklearn.metrics import accuracy_score,f1_score def compute_metrics(pred): labels = pred.label_ids preds = pred.predictions.argmax(1) f1 = f1_score(labels,preds,average = "weighted") acc = accuracy_score(labels,preds) return {"accuracy":acc,"f1":f1} #Train from transformers import Trainer,TrainingArguments class TrainingArgumentsWithMPSSupport(TrainingArguments): @property def device(self) -> torch.device: if device_type == "mps": return torch.device("mps") else: return torch.device("cpu") batch_size = 64 loggin_steps = len(emotions_encoded["train"]) model_name = f"{model_ckpt}-finetuned-emotion" train_args = TrainingArgumentsWithMPSSupport(output_dir = model_name, num_train_epochs = 2, learning_rate = 2e-5, per_device_train_batch_size = batch_size, per_device_eval_batch_size = batch_size, weight_decay = 0.01, evaluation_strategy = "epoch", disable_tqdm = False, logging_steps = loggin_steps, push_to_hub = False, log_level = "error" ) trainer = Trainer(model = model,args = train_args, compute_metrics = compute_metrics, train_dataset = emotions_encoded["train"], eval_dataset = emotions_encoded["validation"], tokenizer = tokenizer) print("Trainner device:",trainer.args.device) trainer.train() ``` <|||||>We've also observed a drop in metrics when training, see [this issue](https://github.com/pytorch/pytorch/issues/82707).<|||||>Now that PyTorch `1.12.1` is out I think we should reopen this issue! cc @pacman100 <|||||>Note that on the inference side, pipelines now support `device="mps"` since #18494<|||||>@julien-c That's great to hear! In my own scripts I've used [this implementation](https://github.com/saattrupdan/ScandEval/blob/main/src/scandeval/training_args_with_mps_support.py), just tweaking the `TrainingArguments._setup_devices` method. I also guess that the `no_cuda` training argument has to either be changed to `no_gpu`, if the current functionality should be preserved, or otherwise the handling of this keyword needs to be changed in the method (potentially adding a `no_mps` argument as well then, but I'm not sure if that's desirable). I can open a PR if needed 🙂 <|||||>Hi Team. Thanks for the mac integration. Quick question - Is this not part of the most recent `pip install`? Because I have the latest pip package version (`4.21.2`) but couldn't find `--use_mps_device` function parameter in it. Here's a simple snippet from the jupyter notebook ``` from transformers import TrainingArguments args = TrainingArguments(use_mps_device=False) ``` and the error message: ``` TypeError Traceback (most recent call last) /var/folders/_p/8spsq7dj5mg51p7kdrqzlmgr0000gn/T/ipykernel_2337/1658168950.py in <module> ----> 1 args = TrainingArguments(use_mps_device=False) TypeError: __init__() got an unexpected keyword argument 'use_mps_device'<|||||>Hello @V-Sher, it is yet to be released. 
For time being, you can install transformers from the source to use this feature via the below command ```bash pip install git+https://github.com/huggingface/transformers ```<|||||>Hi All: I am finetuning a BERT model with HuggingFace Trainer API in Mac OS Ventura (Intel), Python 3.10 and Torch 2.0.0. It takes 14 min in a simple scenery with CPU, with no problem. I changed to GPU with mps. Initially, GPU was not used, but after redefining TrainingArguments in this way, it worked ``` class TrainingArgumentsWithMPSSupport(TrainingArguments): @property def device(self) -> torch.device: return torch.device(device) training_args = TrainingArgumentsWithMPSSupport(...) ``` But the problem is that improvement over CPU is scarce (barely from 14 min to 10 min). Monitor says %GPU is only 15% peak. Any idea about why such poor improvement? Thanks for any help Alberto The is the full code ``` from transformers import BertForSequenceClassification, BertTokenizerFast, Trainer, TrainingArguments import nlp import torch from torch.utils.data import Dataset, DataLoader device = torch.device("mps:0") _DATASET = '../IMDB.csv' dataset = nlp.load_dataset('csv', data_files=[_DATASET], split='train[:1%]') dataset = dataset.train_test_split(test_size=0.3) train_set = dataset['train'] test_set = dataset['test'] class CustomDataset(Dataset): def __init__(self, dataset, mytokenizer): self.tokenizer = mytokenizer self.dataset = dataset self.texts = dataset["text"] def __len__(self): return len(self.dataset) def __getitem__(self, index): theText = self.dataset[index]['text'] theLabel = self.dataset[index]['label'] inputs = self.tokenizer(theText, max_length=512, padding='max_length', truncation=True) ids = inputs['input_ids'] mask = inputs['attention_mask'] token_type_ids = inputs["token_type_ids"] ids = torch.tensor(ids, dtype=torch.long).to(device) mask = torch.tensor(mask, dtype=torch.long).to(device) theLabel = torch.tensor(theLabel, dtype=torch.long).to(device) result = { 'input_ids': ids, 'attention_mask': mask, 'label': theLabel } return result model = BertForSequenceClassification.from_pretrained('bert-base-uncased') tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') training_set = CustomDataset(train_set, tokenizer) testing_set = CustomDataset(test_set, tokenizer) batch_size = 8 epochs = 2 warmup_steps = 500 weight_decay = 0.01 class TrainingArgumentsWithMPSSupport(TrainingArguments): @property def device(self) -> torch.device: return torch.device(device) training_args = TrainingArgumentsWithMPSSupport( output_dir='./results', num_train_epochs=epochs, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, warmup_steps=warmup_steps, weight_decay=weight_decay, # evaluate_during_training=True, evaluation_strategy='steps', logging_dir='./logs', ) trainer = Trainer( model=model.to(device), args=training_args, train_dataset=training_set, eval_dataset=testing_set ) trainer.train() # full finetune trainer.evaluate() ``` <|||||>After installing `transformers` package from source as suggested by @pacman100 like this: ```bash pip install git+https://github.com/huggingface/transformers ``` the `mps` device is used with the standard `TrainingArguments` class. Does not require the custom `TrainingArgumentsWithMPSSupport` class. Now the M1 Mac GPU is ~90% utilized. ![Screenshot 2023-06-14 at 16 03 57](https://github.com/huggingface/transformers/assets/98090437/a7583667-64be-4670-b9ba-934a63798468)
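For completeness, a minimal usage sketch of the flag mentioned above once transformers is installed from source; it only builds the arguments and checks which device the Trainer would pick:

```python
import torch
from transformers import TrainingArguments

# `use_mps_device` is the flag discussed in this thread; at the time of writing it is
# only available when installing transformers from source.
args = TrainingArguments(
    output_dir="out",
    use_mps_device=torch.backends.mps.is_available(),
)
print(args.device)  # expect "mps" on an Apple Silicon machine with torch >= 1.12
```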
transformers
17,970
closed
Ensure PT model is in evaluation mode and lightweight forward pass done
Small update to the `pt-to-tf` CLI. Sets the PyTorch model into evaluation mode and uses a `no_grad` context to make the memory requirements lighter. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
06-30-2022 18:37:56
06-30-2022 18:37:56
_The documentation is not available anymore as the PR was closed or merged._
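A short sketch of the two changes, using a tiny testing checkpoint as a stand-in for whatever model the CLI converts:

```python
import torch
from transformers import AutoModel, AutoTokenizer

checkpoint = "hf-internal-testing/tiny-random-bert"  # placeholder for the model being converted
pt_model = AutoModel.from_pretrained(checkpoint)
inputs = AutoTokenizer.from_pretrained(checkpoint)("hello", return_tensors="pt")

pt_model.eval()              # deterministic dropout etc. before comparing with the TF weights
with torch.no_grad():        # skip autograd bookkeeping, so the forward pass is lighter
    pt_outputs = pt_model(**inputs, output_hidden_states=True)
```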
transformers
17,969
closed
TF: T5 can now handle a padded past (i.e. XLA generation)
# What does this PR do? In TF T5, we now fetch the correct slice of `position_bias` -- [the same way we do it in FLAX](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_flax_t5.py#L339). The key difference is that FLAX relies on an [external variable](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_flax_t5.py#L312) for the generated length that gets incremented every time past gets updated, and here the same value is obtained dynamically from the past array (latest filled past index = generated length - 1, where latest filled past index corresponds to the maximum index with non-0 values). All slow tests are passing and we no longer have length restrictions on the XLA beam search test, which means that: 1. Although the code for eager execution was changed, all outputs remain the same; 2. XLA generation matches non-XLA generation.
06-30-2022 18:25:04
06-30-2022 18:25:04
related issue: https://github.com/huggingface/transformers/issues/17935<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Great job on finding and fixing the bug here @gante - cool that T5 works now :-)
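A sketch of the idea described in the PR body, not the literal modeling_tf_t5 code: recover the real generated length from the zero-padded cache so the right `position_bias` slice can be taken:

```python
import tensorflow as tf

def filled_past_length(past_key):
    # past_key: (batch, num_heads, padded_past_len, head_dim); with XLA the cache is
    # pre-allocated, so not-yet-written slots are all zeros.  The number of positions
    # holding any non-zero value is the real generated length.
    has_value = tf.cast(tf.not_equal(past_key, 0.0), tf.int32)
    per_position = tf.reduce_max(has_value, axis=[0, 1, 3])  # (padded_past_len,)
    return tf.reduce_sum(per_position)
```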
transformers
17,968
closed
Mask T5 relative position bias when heads are pruned
# What does this PR do? Fixes #17886 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten
06-30-2022 18:20:36
06-30-2022 18:20:36
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the PR @hadaev8! Could you add a test for the newly added functionality ? :-) <|||||>@patrickvonplaten Never did it, how it should looks like?<|||||>Hey @hadaev8, The test should be added to this file here: https://github.com/huggingface/transformers/blob/main/tests/models/t5/test_modeling_t5.py In this test it would be great if you could do the following for example: Create a dummy T5 model and run a forward pass with `output_attentions=True`. The prune a head and run a forward pass again with `output_attentions=True`. Then you can compare that the attentions returned by the second forward pass will be 0 or just have fewer tensors because the head was pruned<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@patrickvonplaten I spotted another thing. In encoder-decoder only one broken function to prune. Should I split in two?<|||||>@hadaev8 let's maybe do this in another PR :-) <|||||>Okay, i only will make test<|||||>Hey @hadaev8, Sorry last thing - could you maybe remove the accidently added `datasets` folder? See: https://github.com/huggingface/transformers/pull/17968/files#diff-714284abfa95a1447d7c34554c2d65b16fcfb1af22a44fc15489d13b76e951e5<|||||>@patrickvonplaten Sorry, still had no time to write the test. Removed datasets folder.<|||||>It seems there is an issue with your CircleCI permissions, the tests won't run. Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>Seems I can't register account at CircleCI because sanctioned country.<|||||>Arf that's super annoying, sorry about that @hadaev8. I'll look into triggering it for you.<|||||>I pushed the same commits under a different branch: https://github.com/huggingface/transformers/tree/fix_t5_pruning-lysandre It used my token permissions so it could run. Sorry you're experiencing this, I'll handle the triggers if some tests need to be fixed.<|||||>All tests pass, thank you @hadaev8! Merging the PR.<|||||>@LysandreJik Cool, thank you.
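A rough sketch of the test recipe described above; the config sizes and assertion are illustrative, and it assumes `prune_heads` is wired to the encoder self-attention as for BERT-style models:

```python
import torch
from transformers import T5Config, T5Model

config = T5Config(vocab_size=100, d_model=32, d_kv=8, d_ff=64, num_layers=2, num_heads=4)
model = T5Model(config).eval()
ids = torch.tensor([[1, 2, 3, 4]])

with torch.no_grad():
    before = model(input_ids=ids, decoder_input_ids=ids, output_attentions=True)
model.prune_heads({0: [0, 1]})  # drop two heads from the first encoder block
with torch.no_grad():
    after = model(input_ids=ids, decoder_input_ids=ids, output_attentions=True)

# The pruned layer should report two fewer attention heads than before.
assert after.encoder_attentions[0].shape[1] == before.encoder_attentions[0].shape[1] - 2
```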
transformers
17,967
closed
Drop columns after loading samples in prepare_tf_dataset
Another super-small fix to `prepare_tf_dataset()` - this time we apply the same fix we applied to `to_tf_dataset()`, and keep columns until after samples have been loaded from the dataset. This ensures that columns that are needed to compute the transform aren't dropped.
06-30-2022 17:32:05
06-30-2022 17:32:05
_The documentation is not available anymore as the PR was closed or merged._<|||||>I think this PR is ready to go, but it's waiting on https://github.com/huggingface/datasets/pull/4553 to be merged and a release in Datasets. Tests will fail until that makes it through, so I won't merge until then!
transformers
17,966
closed
[Flax] Bump to v0.4.1
# What does this PR do? The `flatten_dict` operator with the kwarg argument `sep` was added to `modeling_flax_utils` in https://github.com/huggingface/transformers/pull/17760: https://github.com/huggingface/transformers/blob/f25457b273348733bfeb19a51ab0d21bd30a08b8/src/transformers/modeling_flax_utils.py#L127 This kwarg was only added to Flax in v0.4.1: https://github.com/google/flax/releases/tag/v0.4.1 This PR bumps the required Flax version in Transformers from v0.3.5 to v0.4.1. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. cc @ArthurZucker
06-30-2022 16:10:40
06-30-2022 16:10:40
Nice finding! I am not competent regarding which versions we want to always support, but LGTM<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @patil-suraj! The latest version would be v0.5.2 (see https://github.com/google/flax/releases). Is the best way to test this by setting `flax>=0.5.2` and observe if the tests past?<|||||>IMO we should also try to stay compatible with older Flax versions, so let's not go as high as we can but only as high as we have to! PR looks good to me - but let's try to not always use the most recent Flax features in order to stay compatible with older versions as well <|||||>Thanks for the details @patrickvonplaten. I'm in agreement that we should avoid potentially breaking use cases purely for the sake of being on the latest version, but try and integrate the latest features where applicable. Interestingly, I actually went away and had a play with three different versions of Flax on my personal research project https://github.com/sanchit-gandhi/seq2seq-speech: 1. **v0.3.5:** issues regarding the `sep` arg of `flatten_dict` in `modeling_flax_utils` (described above) 2. **v0.4.2:** no apparent issues 3. **v0.5.2:** (latest) had issues with Flax's `scan` not working depending on JAX version Seems like v0.4.2 sits in the sweet spot for newer Flax version whilst providing backwards compatibility!<|||||>Yes, but it's often worth to also not directly implement all the new features of Flax since: - a) they might not work very well because they are new - b) it breaks backwards comp
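A tiny reproduction of why the pin matters; the second call only works once the `sep` keyword exists:

```python
from flax.traverse_util import flatten_dict

params = {"encoder": {"layer_0": {"kernel": 1.0}}}
flatten_dict(params)           # {("encoder", "layer_0", "kernel"): 1.0}  -- fine on flax 0.3.5
flatten_dict(params, sep="/")  # {"encoder/layer_0/kernel": 1.0}          -- needs flax >= 0.4.1
```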
transformers
17,965
closed
time series forecasting model
# What does this PR do? This PR implements a vanilla encoder-decoder Transformer for time-series forecasting.
06-30-2022 15:38:26
06-30-2022 15:38:26
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc'ing @mishig25 here - there seems to be an issue with the docs being built. The model is added to the toctree, but it's saying: ``` Traceback (most recent call last): File "/usr/local/bin/doc-builder", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.8/site-packages/doc_builder/commands/doc_builder_cli.py", line 47, in main args.func(args) File "/usr/local/lib/python3.8/site-packages/doc_builder/commands/build.py", line 96, in build_command build_doc( File "/usr/local/lib/python3.8/site-packages/doc_builder/build_doc.py", line 427, in build_doc sphinx_refs = check_toc_integrity(doc_folder, output_dir) File "/usr/local/lib/python3.8/site-packages/doc_builder/build_doc.py", line 482, in check_toc_integrity raise RuntimeError( RuntimeError: The following files are not present in the table of contents: - model_doc/time_series_transformer Add them to ../transformers/docs/source/en/_toctree.yml. ```<|||||>Mishig the failure on the doc was due to a typo (comment is hidden now since the suggestion was accepted) nothing to do for you :-)
transformers
17,964
closed
skip some ipex tests until it works with torch 1.12
# What does this PR do? Skip some IPEX tests until IPEX works with torch 1.12.
06-30-2022 15:31:24
06-30-2022 15:31:24
_The documentation is not available anymore as the PR was closed or merged._
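As a purely illustrative sketch (the test class and skip condition below are hypothetical, not the ones used in the test suite), a temporary version-based skip could look like this:

```python
# Hypothetical sketch of a temporary skip while IPEX lags behind the installed torch release.
import unittest

import torch
from packaging import version


@unittest.skipIf(
    version.parse(torch.__version__).release[:2] >= (1, 12),
    "IPEX does not support torch 1.12 yet",
)
class IpexTrainerTest(unittest.TestCase):
    def test_trainer_with_ipex(self):
        ...
```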
transformers
17,963
closed
BLOOM - modifying slow tests
# What does this PR do? - changed the non-passing tests to fp32 - reduced the sequence length - removed the padding test. All these matters have been discussed on Slack, but mainly: 1- The generation tests were not passing because the linear layers do not give the same results between torch 1.11 and torch 1.12. 2- Batched generation can sometimes be flaky in half-precision mode, and this should be expected; therefore we reduce the sequence length of the generated output. 3- One should **always** use `padding_side=left` when doing batched generation. cc @ydshieh @patrickvonplaten
06-30-2022 15:27:49
06-30-2022 15:27:49
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17963). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @younesbelkada, could you explain a bit more about `One should always use padding_side=left when doing batched generations`? And what are examples of test failures when using `padding_side=right`? I couldn't find you mentioning this on Slack. Thanks! <|||||>Hi @ydshieh! From the internal discussions, here is a summary of why one should always use `padding_side=left` (cc @patrickvonplaten): - Imagine: `["hello my name is", "hey <pad> <pad> <pad>"]` For the first input the correct token will be sampled from "is"; however, for the second input, generate would incorrectly sample from `"<pad>"` whereas it should sample from `"hey"`. Making sure everything is padded on the left circumvents this problem!<|||||>@younesbelkada - IMO we should never expect generation to be flaky; why is this the case here?<|||||>Hi @younesbelkada: I have 3 questions 🙏 - with fp16: do we get stable results in a specific torch version (i.e. the same result across many runs)? - after changing to fp32 (without reducing the seq length): do we get the same results across torch 1.11 and 1.12? - do we get stable results in a specific torch version (i.e. the same result across many runs)?<|||||>Hi @ydshieh! After merging this PR: https://github.com/huggingface/transformers/pull/17866, the slow tests are now passing. Our conclusion is that: 1- In half-precision mode we might not get the same results across batched generation, and this should be expected. 2- This behavior is observed ONLY on small models!<|||||>I'm still a bit confused by this PR: generations are normally not flaky for pretrained models, and it's a bit weird to me that all this PR does is modify slow generation tests.<|||||>Closing as it has been fixed by https://github.com/huggingface/transformers/pull/18344
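To make the left-padding point concrete, here is a minimal sketch of batched generation with `padding_side=left`. The checkpoint name is only a placeholder; any causal LM checkpoint with a pad token would do.

```python
# Minimal sketch of batched generation with left padding; the checkpoint is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bigscience-small-testing"  # placeholder tiny BLOOM-style checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint, padding_side="left")
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# With left padding, the last position of every row is a real token, so generation
# continues from "is" / "hey" rather than from a pad token.
inputs = tokenizer(["hello my name is", "hey"], return_tensors="pt", padding=True)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```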
transformers
17,962
closed
IPEX integration in Trainer breaks with PyTorch 1.12
All the tests for the IPEX integration in Trainer started to break with the latest PyTorch release. Error is: ``` ImportError: /usr/local/lib/python3.8/dist-packages/intel_extension_for_pytorch/lib/libintel-ext-pt-cpu.so: undefined symbol: _ZNK3c1010TensorImpl5sizesEv ``` cc @hshen14
06-30-2022 13:55:24
06-30-2022 13:55:24
@sgugger Thanks for reporting. IPEX v1.12 will be released by next week to support the latest PyTorch version. The team will look into the issue and let you know whether it gets fixed in the upcoming release. cc @jianan-gu <|||||>@sgugger, this is Eikan from the IPEX team. This is a version-mismatch issue: IPEX 1.11 is built on top of PyTorch 1.11, and the upcoming release of IPEX (1.12) will resolve this issue, as the latest PyTorch is 1.12. I will keep you posted as soon as IPEX is released.<|||||>Hi @sgugger, the IPEX 1.12 release is available: https://intel.github.io/intel-extension-for-pytorch/1.12.0/tutorials/installation.html We have also opened a PR (https://github.com/huggingface/transformers/pull/18072) to make the integration robust to this version mismatch and avoid breaking the Trainer. Thanks!
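For illustration only, a hedged sketch of the kind of major.minor version guard implied by the discussion above; the helper name is made up and this is not the actual integration code.

```python
# Illustrative sketch only: guard against the torch/IPEX major.minor mismatch described above.
import importlib

import torch
from packaging import version


def ipex_matches_torch() -> bool:
    try:
        ipex = importlib.import_module("intel_extension_for_pytorch")
    except ImportError:
        return False
    # IPEX x.y.* is built against torch x.y.*, so the major.minor pair must match.
    torch_mm = version.parse(torch.__version__).release[:2]
    ipex_mm = version.parse(ipex.__version__).release[:2]
    return torch_mm == ipex_mm
```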
transformers
17,961
closed
add ONNX support for BLOOM
# What does this PR do? add ONNX support for BLOOM ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? @michaelbenayoun
06-30-2022 12:42:48
06-30-2022 12:42:48
_The documentation is not available anymore as the PR was closed or merged._<|||||>Since you told me offline that the slow tests were passing (under torch 1.11.0), this looks good to me! Thanks for working on that 🔥 <|||||>I'm not too sure about the changes in `modeling_bloom.py`. Looks like not leveraging the bool type and converting to int32 will hurt performance. Wdyt @younesbelkada?<|||||>I think the changes in `modeling_bloom.py` come from the fact that boolean tensors cannot be added in ONNX (not 100% sure). Two suggestions then: - Reformulate the addition to [torch.logical_or](https://pytorch.org/docs/stable/generated/torch.logical_or.html#torch-logical-or) - Cast the input to int8 I think that the first solution is both faster and more aligned with the original implementation. WDYT?<|||||>@sgugger I do not think this will hurt performance in terms of logits, since the slow tests are passing, but it might indeed hurt inference-time performance for large and/or batched sequences. We need to benchmark that, though, to be sure.<|||||>@michaelbenayoun I think option 1 sounds good, yes!<|||||>Also make sure all the tests pass before merging.<|||||>All tests for `tests/onnx/test_onnx_v2.py -k "bloom"` and `tests/models/bloom` are passing. Here are the ones that are skipped (which is fine according to @younesbelkada): ``` ================================================================================= short test summary info ================================================================================= SKIPPED [1] tests/test_modeling_common.py:2006: test is PT+FLAX test SKIPPED [1] tests/test_modeling_common.py:1934: test is PT+FLAX test SKIPPED [1] tests/test_modeling_common.py:1758: test is PT+TF test SKIPPED [1] tests/test_tokenization_common.py:1960: This test is only for slow tokenizers SKIPPED [1] tests/test_tokenization_common.py:2189: test is PT+TF test ================================================================= 159 passed, 5 skipped, 35 warnings in 449.50s (0:07:29) ```<|||||>There is a difference between a copy in BLOOM and the original in GPT-2, which is why the CI is failing. Make sure to run `make fix-copies` or remove the `Copied from` comment.
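A small, hedged sketch of the two options discussed above; the tensors are toy examples rather than the actual BLOOM attention-mask code.

```python
# Toy sketch of the two ONNX-friendly options discussed above; not the actual BLOOM code.
import torch

causal_mask = torch.ones(4, 4).tril().bool()                          # True = position may be attended to
padding_mask = torch.tensor([True, True, True, False]).expand(4, 4)   # True = real (non-pad) token

# Option 1: stay in bool and combine with logical_or instead of `+`.
masked_positions = torch.logical_or(~causal_mask, ~padding_mask)

# Option 2: cast to an integer dtype, add, then threshold back to bool.
masked_positions_int = ((~causal_mask).to(torch.int8) + (~padding_mask).to(torch.int8)) > 0

assert torch.equal(masked_positions, masked_positions_int)
```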
transformers
17,960
closed
Suggestion for introducing "shift_labels" argument for Trainer
### Feature request Add an argument to determine whether to shift the `labels` or not. In the [TrainingArguments](https://github.com/huggingface/transformers/blob/v4.20.1/src/transformers/training_args.py#L104) class, an argument named `shift_labels` should be added. During training, at [here](https://github.com/huggingface/transformers/blob/v4.20.1/src/transformers/models/gpt2/modeling_gpt2.py#L1073) and [here](https://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/models/gpt2/modeling_gpt2.py#L1280), the `model` must check both `labels is not None` and `self.shift_labels is True`, e.g. ``` if labels is not None and self.shift_labels: # changed # Shift so that tokens < n predict n shift_logits = lm_logits[..., :-1, :].contiguous() shift_labels = labels[..., 1:].contiguous() ``` The default value for `shift_labels` is `False`, except for causal language models such as `GPT2PreTrainedModel`. Related to gpt2: @patil-suraj, and trainer: @sgugger ### Motivation In the current state of the code, the shifting of `labels` when training GPT2LMHeadModel changes depending on the use of `label_smoothing`, which I assume is unintended. Specifically, when training a GPT2LMHeadModel with `args.label_smoothing_factor==0` (the default), the [code](https://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/models/gpt2/modeling_gpt2.py#L1075) shifts the `labels` and computes the loss inside the `model.forward()`. This assumes that `labels` have not yet been shifted to align with the corresponding `input_ids`. However, if I train GPT2LMHeadModel with `args.label_smoothing_factor > 0`, then the loss is computed [here](https://github.com/huggingface/transformers/blob/692e61e91a0b83f5b847902ed619b7c74c0a5dda/src/transformers/trainer.py#L2384), inside the `compute_loss()` function of the `Trainer`. This part assumes `labels` are already shifted and does not shift them. I believe whether to shift `labels` or not should be explicitly determined by its own argument, not by another argument like `label_smoothing_factor`. In my case, our team was very frustrated that our training results were totally different after only changing `label_smoothing`, with the same `labels` and `input_ids`. The reason was the misalignment of `labels` and `input_ids` when turning on `label_smoothing`. ### Your contribution I'm willing to make a PR after your confirmation.
06-30-2022 11:33:38
06-30-2022 11:33:38
I don't think a new TrainingArgument is the right answer here. Some models shift the labels internally (I think it's all the models for causal LM, not just GPT-2), so instead of a flag, there should be a check when the loss is computed by the `Trainer` for label smoothing to see if the model class name is inside `MODEL_FOR_CAUSAL_LM_MAPPING_NAMES` (imported from the auto module), and then shift the labels. Let me know if you'd like to proceed with a PR for this fix!<|||||>Thanks for the quick reply. Your approach seems plausible and I'd like to proceed with it. I've read the contribution guide thoroughly. Can I just start now, or is there anything I should know before beginning?<|||||>You can start, good luck! :-)
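A rough sketch of how the suggested check could look inside `Trainer.compute_loss`; the `shift_labels` argument on the label smoother is assumed here for illustration rather than taken from a final implementation.

```python
# Rough sketch of the suggested check inside Trainer.compute_loss; the
# `shift_labels` argument on the label smoother is assumed for illustration.
from transformers.modeling_utils import unwrap_model
from transformers.models.auto.modeling_auto import MODEL_FOR_CAUSAL_LM_MAPPING_NAMES


def compute_loss(self, model, inputs, return_outputs=False):
    labels = inputs.pop("labels") if self.label_smoother is not None and "labels" in inputs else None
    outputs = model(**inputs)
    if labels is not None:
        # Causal LM heads shift labels inside forward(), so the smoothed loss must shift too.
        is_causal_lm = unwrap_model(model).__class__.__name__ in MODEL_FOR_CAUSAL_LM_MAPPING_NAMES.values()
        loss = self.label_smoother(outputs, labels, shift_labels=is_causal_lm)
    else:
        loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0]
    return (loss, outputs) if return_outputs else loss
```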
transformers
17,959
closed
CLI: convert sharded PT models
# What does this PR do? This PR adds a major upgrade and a minor change to the `pt-to-tf` CLI. Major upgrade: we can now convert sharded PT models. It updates how `from_pt` loading works so that it can load from shards, and it updates how the `pt-to-tf` CLI stores the models so that it uses sharding when needed. Minor change: adds a flag to control the maximum admissible hidden-layer error. It is relatively common to find models where the outputs from the PT and TF models are nearly the same, but the hidden layers have a larger mismatch. This flag allows us to temporarily increase the admissible error if the model seems to be behaving properly (for instance, all RegNet models had a hidden-layer difference between 1e-4 and 1e-2, but the outputs were behaving properly). Example of a sharded TF model PR using the updated tools: https://huggingface.co/facebook/regnet-y-10b-seer-in1k/discussions/1
06-30-2022 11:26:19
06-30-2022 11:26:19
_The documentation is not available anymore as the PR was closed or merged._<|||||>BTW could we add 2 tests, `test_load_sharded_tf_to_pt` and `load_sharded_pt_to_tf` <|||||>TF shards -> PT probably won't work, but I will add the test for PT shards -> TF 👍
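A minimal sketch of the PT-shards-to-TF direction mentioned above; the checkpoint name and shard size are placeholders chosen to force sharding on a tiny model.

```python
# Minimal sketch of the PT shards -> TF direction; checkpoint and shard size are placeholders.
import tempfile

from transformers import BertModel, TFBertModel

with tempfile.TemporaryDirectory() as tmp_dir:
    pt_model = BertModel.from_pretrained("hf-internal-testing/tiny-random-bert")
    # A tiny max_shard_size forces the checkpoint to be saved as multiple shards.
    pt_model.save_pretrained(tmp_dir, max_shard_size="50kB")
    # The point of the PR: `from_pt=True` can also load a sharded PyTorch checkpoint.
    tf_model = TFBertModel.from_pretrained(tmp_dir, from_pt=True)
```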
transformers
17,958
closed
[wip] testing new docstring ui
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
06-30-2022 10:58:30
06-30-2022 10:58:30
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17958). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.