Columns: repo (stringclasses, 1 value) · number (int64, 1–25.3k) · state (stringclasses, 2 values) · title (stringlengths, 1–487) · body (stringlengths, 0–234k) · created_at (stringlengths, 19–19) · closed_at (stringlengths, 19–19) · comments (stringlengths, 0–293k)
transformers
18,257
closed
Owlvit docs test
- Fixes a typo in OwlViTForObjectDetection forward function docs: transformers/models/owlvit/modeling_owlvit.py
- Adds docs test for OWL-ViT
- Makes `OwlViTFeatureExtractor.post_process` callable from `OwlViTProcessor`
- Improves code examples to demonstrate how to use the `post_process` method
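A hedged sketch of the kind of usage the improved code examples demonstrate; the checkpoint name, local image path, and exact `post_process` signature here are assumptions, not a quote from the PR:

```python
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

image = Image.open("cats.png")  # hypothetical local image
inputs = processor(text=[["a photo of a cat"]], images=image, return_tensors="pt")
outputs = model(**inputs)

# post_process rescales the normalized predicted boxes to the original image size
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process(outputs=outputs, target_sizes=target_sizes)
```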
07-22-2022 13:22:56
07-22-2022 13:22:56
_The documentation is not available anymore as the PR was closed or merged._<|||||>> LGTM! Just wondering why there are 90+ commits

That was my mistake, I merged the owlvit branch of my forked transformers repo with the main and created this branch. I squashed the commits on the main but don't know how to fix this one.<|||||>Let me know if you'd like some help to squash the commits of this PR @alaradirik!
transformers
18,256
closed
Change how `take_along_axis` is computed in DeBERTa to stop confusing XLA
The previous code for `take_along_axis()` in DeBERTa used dynamic TF shapes like `tf.shape()` and `tf.rank()` in conditionals. This is a data-dependent conditional, which is forbidden in XLA. Replacing these with the static shape equivalents `x.shape` and `x.shape.rank` works fine, and now DeBERTa can be compiled successfully with XLA.
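For illustration, a minimal standalone sketch of the distinction the fix relies on (the shapes are made up and this is not the DeBERTa code itself):

```python
import tensorflow as tf

x = tf.zeros((4, 128, 512))

# Dynamic shape ops return Tensors; inside a compiled (XLA) function their
# values are only known at run time, so a Python `if` built on them becomes
# a data-dependent conditional.
dynamic_dims = tf.shape(x)   # Tensor([4, 128, 512])
dynamic_rank = tf.rank(x)    # Tensor(3)

# Static shape attributes are ordinary Python objects available at trace time,
# so conditionals such as `if x.shape.rank == 3:` are resolved before XLA ever
# sees the graph.
static_dims = x.shape        # TensorShape([4, 128, 512])
static_rank = x.shape.rank   # 3
```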
07-22-2022 12:57:11
07-22-2022 12:57:11
_The documentation is not available anymore as the PR was closed or merged._<|||||>@gante The original torch code used `take_along_axis`, so I guess this is a complete TF reimplementation of it! That approach makes way more sense, though - let me make some changes!<|||||>> The original torch code used take_along_axis That would explain it!
transformers
18,255
closed
[Don't merge] debug CircleCI test timing
# What does this PR do?
07-22-2022 11:42:29
07-22-2022 11:42:29
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18255). All of your documentation changes will be reflected on that endpoint.
transformers
18,254
closed
Can not import Trainer
### System Info python 3.9 I install transformer with pip install transformer. Using a terminar I open python from: >from transformers import Trainer I get: `Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 957, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/usr/local/Cellar/[email protected]/3.9.0_4/Frameworks/Python.framework/Versions/3.9/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 790, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/usr/local/lib/python3.9/site-packages/transformers/trainer.py", line 176, in <module> import datasets File "/usr/local/lib/python3.7/site-packages/keras/datasets/__init__.py", line 3, in <module> from . import mnist File "/usr/local/lib/python3.7/site-packages/keras/datasets/mnist.py", line 7, in <module> from ..utils.data_utils import get_file ImportError: attempted relative import beyond top-level package The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<frozen importlib._bootstrap>", line 1055, in _handle_fromlist File "/usr/local/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 947, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/usr/local/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 959, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback): attempted relative import beyond top-level package` Any idea how I can fix it ? Many thanks, Ele ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import Trainer ### Expected behavior Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 957, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/usr/local/Cellar/[email protected]/3.9.0_4/Frameworks/Python.framework/Versions/3.9/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 790, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/usr/local/lib/python3.9/site-packages/transformers/trainer.py", line 176, in <module> import datasets File "/usr/local/lib/python3.7/site-packages/keras/datasets/__init__.py", line 3, in <module> from . 
import mnist File "/usr/local/lib/python3.7/site-packages/keras/datasets/mnist.py", line 7, in <module> from ..utils.data_utils import get_file ImportError: attempted relative import beyond top-level package The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<frozen importlib._bootstrap>", line 1055, in _handle_fromlist File "/usr/local/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 947, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/usr/local/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 959, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback): attempted relative import beyond top-level package >>>
07-22-2022 11:30:51
07-22-2022 11:30:51
Hey @Eleo22, that's interesting, it seems that `import datasets` in the trainer led to an import of the `keras.datasets` package. Do you know why that might be? Could you try to uninstall keras (you don't need it for the trainer) and to reinstall `datasets` ? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,253
closed
Fix OwlViT tests
# What does this PR do? Should fix the errors on main with `ImportError: cannot import name 'OwlViTFeatureExtractor' from 'transformers'` on runners with torch not installed.
07-22-2022 11:23:04
07-22-2022 11:23:04
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,252
closed
How to convert pytorch bart model to tf1.x ?
I have trained a PyTorch BART model; how can I convert it to TF 1.x?
07-22-2022 10:05:48
07-22-2022 10:05:48
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,251
closed
Add PYTEST_TIMEOUT for CircleCI test jobs
⚠️ Before merging, we need to **run the full tests on CircleCI** to see if there are slower tests that will fail and decide what to do with them.

# What does this PR do?

Add `PYTEST_TIMEOUT: 30` for CircleCI jobs:

```
environment:
  ...
  PYTEST_TIMEOUT: 30
```

The main goal is to avoid CircleCI's default 10-minute timeout that cancels the jobs. Also, with this PR we can clearly see which test(s) time out.
07-22-2022 09:08:44
07-22-2022 09:08:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>I like the idea of getting a hard error instead of silently getting new tests that slow down the CI by quite a lot! Now we just have to get to all tests passing below that threshold 😅 For the examples, you can authorize 60s before timing out as those are end-to-end small trainings and so take more time.<|||||>Currently set to `PYTEST_TIMEOUT: 120`. As mentioned on Slack, tests sometimes take much longer to run. For example, ``` test_modeling_data2vec_audio.py::Data2VecAudioModelTest::test_mask_time_prob_ctc 44.16s call 37.64s call 12.46s call 12.60s call ``` and ``` test_modeling_plbart.py::PLBartBaseIntegrationTest::test_base_generate 48.18 call 11.20s call 11.53s call ``` This makes it difficult to determine a good threshold that won't be flaky. ### Current longest 2 tests (observed on a CircleCI workflow run): ``` 73.08s call test_pipelines_image_segmentation.py::ImageSegmentationPipelineTests::test_pt_DetrConfig_DetrForSegmentation_notokenizer_DetrFeatureExtractor 65.33s call longt5/test_modeling_flax_longt5.py::FlaxLongT5ModelTest::test_jit_compilation ```<|||||>Reverted the change in `setup.py` (was for running the full tests). The final timeout limit is 2 minutes (to avoid flaky failures). I will merge this PR today unless @sgugger has a different opinion.
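For context, a small sketch of how the pytest-timeout plugin (which is what reads `PYTEST_TIMEOUT` / `--timeout`) can also cap individual tests; the test body here is purely illustrative:

```python
import time

import pytest


@pytest.mark.timeout(120)  # fail this test if it runs longer than 2 minutes
def test_finishes_quickly():
    time.sleep(1)
    assert True
```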
transformers
18,250
closed
Skip passes report for `--make-reports`
# What does this PR do? We sometimes have timeout on CircleCI. It turns out that the tests are finished (running in the workers, which exit at the end), but the main process is busy doing some reporting work when we specify `--make-reports`. More precisely, it is the `passes` report which takes time (as we include `Pp` in `tr.reportchars = "wPpsxXEf"`). From the 2 screenshots below (running with 64 models), we can see that currently it takes extra ~2-3 minutes at the end. It seems that `passes` report doesn't contain any useful information to us, therefore this PR skips generating it to avoid timeout. - **without this PR** <img width="448" alt="no-fix" src="https://user-images.githubusercontent.com/2521628/180396971-69f19b12-978b-4e19-8842-17d1af307d51.png"> - **with this PR** <img width="452" alt="fix" src="https://user-images.githubusercontent.com/2521628/180396917-bb866363-efa3-4959-b5e3-e99ab283cc6c.png"> ### One failed CircleCI job run [Job](https://app.circleci.com/pipelines/github/huggingface/transformers/43738/workflows/325901ce-948e-4737-9a79-8a7fe9d6e27d/jobs/506868/resources) <img width="452" alt="real" src="https://user-images.githubusercontent.com/2521628/180399481-a7f571bc-75f3-4826-b5d5-8c50c3164678.png">
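As a rough illustration (this is not the library's actual reporting helper): skipping the passes report amounts to dropping the `P`/`p` characters from pytest's `reportchars` before the terminal summary is built.

```python
# Characters pytest uses to decide which summary sections to emit;
# 'P' / 'p' control the "passes" section that was taking minutes to generate.
report_chars = "wPpsxXEf"
without_passes = "".join(c for c in report_chars if c not in "Pp")
print(without_passes)  # -> "wsxXEf"
```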
07-22-2022 08:19:17
07-22-2022 08:19:17
_The documentation is not available anymore as the PR was closed or merged._<|||||>Same as Sylvain :)
transformers
18,249
closed
Behavior of shift_tokens_right on padded input_ids
### System Info - `transformers` version: 4.20.1 ### Who can help? @patrickvonplaten @patil-suraj @sgugger ### Reproduction When I applied shift_tokens_right on a padded input_ids, I get this ```python from transformers import AutoTokenizer from transformers.models.bart.modeling_flax_bart import shift_tokens_right tokenizer = AutoTokenizer.from_pretrained("roberta-base") labels = tokenizer("My dog is cute", padding='max_length', max_length=8, return_tensors='np').input_ids decoder_input_ids = shift_tokens_right(labels, tokenizer.pad_token_id, tokenizer.eos_token_id) print(tokenizer.batch_decode(labels)) # ['<s>My dog is cute</s><pad><pad>'] print(tokenizer.batch_decode(decoder_input_ids)) # ['</s><s>My dog is cute</s><pad>'] ``` ### Expected behavior Should the desired behavior of shift_token_right be the following? ```python print(tokenizer.batch_decode(labels)) # ['<s>My dog is cute</s><pad><pad>'] print(tokenizer.batch_decode(decoder_input_ids)) # ['</s><s>My dog is cute<pad><pad>'] ```
07-22-2022 07:54:22
07-22-2022 07:54:22
Hey @duongna21! This method isn't exposed in the main init so we consider it to be private (this should really be written somewhere if we haven't done so yet). It's used internally by the BART model, but we don't validate it to work for any other purpose.<|||||>@LysandreJik Yeah, I specifically raise above question in the context of BART training. IMO `shift_tokens_right` should not take the `<pad>` token into account (this is actually the behavior of [fairseq's original code](https://github.com/facebookresearch/fairseq/blob/8e804cb38a1575c65a1fc981d75ae5a97c24dd5b/fairseq/data/data_utils.py#L69)). Also, I believe this issue also applies to other models using `shift_tokens_right`, such as T5.<|||||>Any comment? I'm happy to create a PR if my assumption is correct.<|||||>Pinging @patil-suraj and @patrickvonplaten regarding the `shift_tokens_right` method and its purpose.<|||||>Hey @<|||||>Hey @duongna21, Good question! Note however that it doesn't really matter whether you pass ```py ['</s><s>My dog is cute<pad><pad>'] ``` or ```python ['</s><s>My dog is cute</s><pad>'] ``` to the model if the labels are: ```py ['<s>My dog is cute</s><pad><pad>'] ``` because every loss token that gets mapped to the `<pad>` token is ignored. More specifically this means that during training the following happens: - The model learns that `</s>` should predict `<s>` - then `</s><s>` should predict `My` - then `</s><s>My` should predict `dog` - ... until - `</s><s>My dog is cute` should predict `</s>` **Now** it doesn't matter whether `</s><s>My dog is cute</s>` or `</s><s>My dog is cute<pad>` is passed because both will be ignored as the predicted token is `<pad>` and the model should never learn to predict pad tokens -> so this loss will be ignored. Does this make sense? <|||||>@patrickvonplaten Thanks for elaborating on it. Totally agree with you that it doesn't matter with seq2seq training. I just tried to make sure it doesn't create any side-effect somewhere :D.
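To make the discussion concrete, here is a minimal NumPy sketch of what `shift_tokens_right` conventionally does (the real Flax BART helper works on `jnp` arrays; this is an approximation for illustration, not the library source):

```python
import numpy as np

def shift_tokens_right(input_ids, pad_token_id, decoder_start_token_id):
    # Prepend the decoder start token and drop the last position.
    shifted = np.zeros_like(input_ids)
    shifted[:, 1:] = input_ids[:, :-1]
    shifted[:, 0] = decoder_start_token_id
    # Labels masked with -100 (if any) are mapped back to the pad token.
    return np.where(shifted == -100, pad_token_id, shifted)
```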
transformers
18,248
closed
Changed to filter out oddball list sizes
This PR fixes the issue of lists occasionally being of different sizes before being sent to `trainer.py` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #18167 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger My apologies for the third ping in a week. I believe I found a solution that would be more acceptable. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-22-2022 07:51:33
07-22-2022 07:51:33
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18248). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,247
closed
Pin rouge_score
# What does this PR do? Temporarily pin `rouge_score` (to avoid latest version 0.7.0) until the issue is fixed on their side: - https://github.com/google-research/google-research/issues/1212 See: - https://github.com/huggingface/datasets/issues/4734
07-22-2022 07:37:57
07-22-2022 07:37:57
_The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @albertvillanova, sorry for missing this! Should this be merged? Should we replace by `!=0.7.0` so that we're still compatible with newer versions?<|||||>They made several failed releases until they got to fix the error... Let me update this PR with all the versions to be avoided.<|||||>@LysandreJik there is a non-passing test though...<|||||>Hmmm weird, it seems like it passed when it was <0.07 [here](https://github.com/huggingface/transformers/runs/7464301285?check_suite_focus=true) (installing version v0.0.4), but not after setting `rouge-score!=0.0.7,!=0.0.8,!=0.1,!=0.1.1` (installing version 1.2.0). I see that when installing `rouge-score` version v1.2.0, it was installing `rouge-score` using a legacy approach: ``` Using legacy 'setup.py install' for rouge-score, since package 'wheel' is not installed. ``` Could this be the cause of the failure? If you put `<0.7` once again, does it pass the test?<|||||>The test does not pass now either with `rouge-score<0.0.7`, @LysandreJik. But it passed when I opened this PR: see https://github.com/huggingface/transformers/pull/18247/commits/4e7a46dcfe4bfbab25858052017a39459c7831d4
transformers
18,246
closed
RuntimeError: "triu_tril_cuda_template" not implemented for 'BFloat16'
### System Info transformers==4.19.2 ### Who can help? use bf16 with accelerate config ``` compute_environment: LOCAL_MACHINE deepspeed_config: {} distributed_type: MULTI_GPU fsdp_config: {} machine_rank: 0 main_process_ip: null main_process_port: null main_training_function: main mixed_precision: bf16 num_machines: 1 num_processes: 8 use_cpu: false ``` model is LongformerModel, and get error File "~/miniconda3/envs/bai/lib/python3.8/site-packages/transformers/models/longformer/modeling_longformer.py", line 788, in _mask_invalid_locations beginning_mask_2d = input_tensor.new_ones(affected_seq_len, affected_seq_len + 1).tril().flip(dims=[0]) RuntimeError: "triu_tril_cuda_template" not implemented for 'BFloat16' ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction none ### Expected behavior none
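A minimal sketch of the failing pattern extracted from the traceback; it assumes a CUDA device and a PyTorch build where the CUDA `tril`/`triu` kernels lack a BFloat16 implementation (as in the reported setup):

```python
import torch

# Mirrors `input_tensor.new_ones(...).tril().flip(dims=[0])` from Longformer's
# _mask_invalid_locations; the sequence length is illustrative.
affected_seq_len = 256
mask = torch.ones(affected_seq_len, affected_seq_len + 1, dtype=torch.bfloat16, device="cuda")
mask = mask.tril().flip(dims=[0])  # raises the RuntimeError from the title on affected builds
```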
07-22-2022 07:24:29
07-22-2022 07:24:29
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,245
closed
Not able to load the Facebook OPT model
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.10 - Python version: 3.8.3 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): 2.6.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? @LysandreJik ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype=torch.float16) ``` Errors: KeyError Traceback (most recent call last) <ipython-input-15-00179d7539d3> in <module> 2 from transformers import AutoModelForCausalLM, AutoTokenizer 3 ----> 4 model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype=torch.float16) ~/workspace/anaconda3/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 421 kwargs["_from_auto"] = True 422 if not isinstance(config, PretrainedConfig): --> 423 config, kwargs = AutoConfig.from_pretrained( 424 pretrained_model_name_or_path, return_unused_kwargs=True, trust_remote_code=trust_remote_code, **kwargs 425 ) ~/workspace/anaconda3/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 670 671 Examples: --> 672 673 ```python 674 >>> from transformers import AutoConfig ~/workspace/anaconda3/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py in __getitem__(self, key) 385 ("xlsr_wav2vec2", "XLSR-Wav2Vec2"), 386 ("yolos", "YOLOS"), --> 387 ("yoso", "YOSO"), 388 ] 389 ) KeyError: 'opt' ### Expected behavior The code is expected to run with any error.
07-22-2022 05:19:49
07-22-2022 05:19:49
Hey @xiajinxiong, this seems to be a version error! In order to verify, could you run the following snippet and copy-paste the output? ``` import torch from transformers import AutoModelForCausalLM, AutoTokenizer, __version__ print("Version", __version__) model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype=torch.float16) ```<|||||>I reaffirmed that the transformers version was 4.20.1. But it's because my jupyter kernel didn't synchronize with the transformers version. After I restart the jupyter, it works fine. Thanks.<|||||>I reaffirmed that the transformers version was 4.20.1. But it's because my jupyter kernel didn't synchronize with the transformers version. After I restart the jupyter, it works fine. Thanks.
transformers
18,244
closed
patch for smddp import
# What does this PR do? Fixes an `invalid backend` error when starting a HF job with smddp in a method that goes through src/transformers/training_args.py by adding the import statement for smddp, which registers smddp as a torch.distributed backend. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-21-2022 23:47:33
07-21-2022 23:47:33
_The documentation is not available anymore as the PR was closed or merged._
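For reference, a hedged sketch of the mechanism the patch relies on; the import path follows SageMaker's published documentation and is only meaningful inside a SageMaker data-parallel job:

```python
import torch.distributed as dist

# Importing the module has the side effect of registering "smddp" as a
# torch.distributed backend; without it, init_process_group("smddp")
# fails with an "invalid backend" error.
import smdistributed.dataparallel.torch.torch_smddp  # noqa: F401

dist.init_process_group(backend="smddp")
```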
transformers
18,243
open
Onnx Runtime Errors With LongT5
### System Info - `optimum` version: 1.2.3 (installed via Github installation) - `transformers` version: 4.20.1 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0+cu113 (False) - Tensorflow version (GPU?): 2.8.2 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @stancld @echarlaix @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction LongT5 with TGlobal Attention isn't able to run sequences longer than **global_block_size * 2**. This is because during the model tracing [num_globals > 0](https://github.com/huggingface/transformers/blob/main/src/transformers/models/longt5/modeling_longt5.py#L191) is being converted to False. I originally posted the error in Optimum (https://github.com/huggingface/optimum/issues/285) but @echarlaix asked me to open an issue here because this error concerns the ONNX export. Code to reproduce is below: ``` !pip install transformers !pip install transformers[onnx] !python -m pip install git+https://github.com/huggingface/optimum.git !python -m pip install git+[https://github.com/huggingface/optimum.git#egg=optimum[onnxruntime]](https://github.com/huggingface/optimum.git#egg=optimum%5Bonnxruntime%5D) !pip install datasets ``` ```py from optimum.onnxruntime import ORTModelForSeq2SeqLM model = ORTModelForSeq2SeqLM.from_pretrained("longt5-tglobal-base", from_transformers=True) from transformers import AutoTokenizer, pipeline tokenizer = AutoTokenizer.from_pretrained('google/long-t5-tglobal-base') onnx_summarization = pipeline("summarization", model=model, tokenizer=tokenizer) text = # Something longer than 32 tokens if I don't change the number of global blocks pred = onnx_summarization(text)` ``` ``` RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running LessOrEqual node. Name:'LessOrEqual_648' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:603 onnxruntime::Broadcaster::Broadcaster(gsl::span, gsl::span) largest <= 1 was false. Can broadcast 0 by 0 or 1. 16 is invalid. ``` ### Expected behavior Should work for very large seq lens on default global block size without error
07-21-2022 20:55:03
07-21-2022 20:55:03
Hey @reelmath, thanks for opening an issue, it seems you and @echarlaix managed to find the source of the problem. We unfortunately don't have a lot of bandwidth to dive into solving that code, so I'll add an `onnx` tag and a `Good second issue` tag so that experienced users know that this is an issue that could be fixed. If you'd like to try your hand at it, please go ahead!<|||||>Hi, I would like to work on this if it has not been assigned to anyone, but could take some time if that is ok?<|||||>Hey @yhl48, this would be great indeed :-)<|||||>Hello @reelmath , I was trying to mimic your error with my setting as follows: - transformers version 4.23.1 - Python version: 3.10.5 - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no but I faced the same errors with you.<|||||>![abcds](https://user-images.githubusercontent.com/35699839/195714963-171a83dd-d204-4464-b14a-b9449dbbdd58.png) <|||||>It looks like the pretrained model is not available anymore? Upon running the following line ``` model = ORTModelForSeq2SeqLM.from_pretrained("longt5-tglobal-base", from_transformers=True) ``` The following error was raised ``` OSError: longt5-tglobal-base is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`. ```<|||||>@yhl48 I think you need to use `google/long-t5-tglobal-base` name<|||||>Thanks @stancld! Has this issue been resolved? I can no longer replicate the error.
transformers
18,242
closed
Fix `no_trainer` CI
# What does this PR do? This PR fixes the no_trainer tests silently failing due to a similar reason in Accelerate [here](https://github.com/huggingface/accelerate/pull/517) - Adds a new way to call subprocess that properly contains the stack trace raised in the error - Reduces the passing result needed for the image classification example, as on a single GPU it reaches 62.5% but on multi gpu it hits 60% Requires https://github.com/huggingface/accelerate/pull/547 to be merged first ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
07-21-2022 16:43:41
07-21-2022 16:43:41
_The documentation is not available anymore as the PR was closed or merged._
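A generic sketch of the idea (not the repository's actual test helper): running an example script in a subprocess while keeping its full stack trace visible when it fails.

```python
import subprocess
import sys


def run_command(command):
    # Capture stdout/stderr so a failing example surfaces its traceback
    # instead of failing silently inside the CI job.
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(
            f"Command {command} failed with:\n{result.stdout}\n{result.stderr}"
        )
    return result


# Hypothetical usage:
# run_command([sys.executable, "examples/pytorch/text-classification/run_glue_no_trainer.py", "--max_train_steps", "1"])
```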
transformers
18,241
closed
Flax Support NLLB (or M2M100) model
### Feature request
Add Flax/JAX support for M2M100 so it can be optimized on TPU.

### Motivation
NLLB is a great translation model that supports many languages with good accuracy. It could be used to translate the large English datasets that are available into other languages, but that requires a lot of resources, such as multi-GPU parallelism, to cut translation time. Since TPU access is easier to get (through the TRC program) than multi-GPU access, it would be nice to have NLLB/M2M100 in Flax. As another reason: FlaxMarian on TPU is amazing... it can translate 100k English texts to Spanish in less than 4 minutes using Flax on TPUs with JAX parallelism ([flax community slack](https://huggingface.slack.com/archives/C025LJDP962/p1626488974341000)). I myself have already translated almost 100M sentences with Marian Flax in about 3 days.

### Your contribution
Although I am not an expert at Flax/JAX yet, I can make an attempt to implement it, but I need some pointers on how to do that:
- AFAIK, NLLB/M2M100 has a similar architecture to MBart; since Flax MBart is already implemented, maybe it can start from that(?)
- I need to figure out how to "convert" the NLLB/M2M100 position embeddings, etc. to JAX. CMIIW
07-21-2022 16:20:39
07-21-2022 16:20:39
cc @patil-suraj @sanchit-gandhi in case you'd like to guide @acul3 to contribute the Flax version of M2M100<|||||>Hey @acul3! Awesome, let's do it 💪 Happy to help you through the implementation! On a high-level, the process for adding this model will look something as follows: 1. Copy across the modelling code from Flax Bart 2. Modify the Flax modelling code to match the PyTorch NLLB/M2M100 implementation \+ write any necessary tests along the way 3. Check whether the Flax logits match with the PyTorch ones 4. Iterate on step 2 until the check in step 3 passes! Once we have a Flax model that matches the PyTorch logits, we can be confident our implementation is correct :) As a starting point, we can copy across the Flax Bart modelling code. You can do this through the 'add-new-model-like' command: https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command from the Flax Bart model: https://github.com/huggingface/transformers/tree/main/src/transformers/models/bart/modeling_flax_bart.py Once you've done that, feel free to open a WIP PR. We can go from there!<|||||>Hi @sanchit-gandhi Thank you for the help I'll start to work on this today by following the step and open WIP PR first Will ask some question/pointer after that..thanks again<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
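As a small illustration of step 3 in the checklist above (checking that the Flax logits match the PyTorch ones), a generic comparison helper; no real `FlaxM2M100Model` API is assumed here, only that both frameworks can produce logits on the same dummy input:

```python
import numpy as np

def logits_match(pt_logits, flax_logits, atol=1e-4):
    # Assumes detached CPU tensors/arrays so both convert cleanly to NumPy.
    return np.allclose(np.asarray(pt_logits), np.asarray(flax_logits), atol=atol)
```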
transformers
18,240
closed
Add callback that saves only best checkpoints
### Feature request
A new class for a callback that saves a checkpoint only if it performs better than the previous checkpoints on the evaluation dataset. It could work similarly to the [EvalCallback](https://stable-baselines3.readthedocs.io/en/master/guide/callbacks.html) proposed by stable_baselines3.

### Motivation
This callback would enable evaluating the model frequently without using a lot of memory, because only a few checkpoints would be kept.

### Your contribution
Submitting a PR
07-21-2022 15:37:22
07-21-2022 15:37:22
cc @sgugger <|||||>We already have `save_total_limits` to limit the number of checkpoint saved, and with `load_best_model_at_end=True` the best checkpoint is always kept. Which use case that is not currently available would this new callback permit?<|||||>I haven't noticed that it is possible to use these options together. The only thing that is different in proposed callback is that all saved checkpoints perform better on the evaluation dataset than the rest of the checkpoints. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
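To illustrate the combination being pointed to, a sketch of the relevant `TrainingArguments` (values are illustrative; the argument name is `save_total_limit`, singular):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",
    eval_steps=500,
    save_strategy="steps",
    save_steps=500,
    save_total_limit=2,           # keep at most 2 checkpoints on disk
    load_best_model_at_end=True,  # the best checkpoint is kept out of the rotation and reloaded at the end
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
```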
transformers
18,239
closed
TF2 DeBERTaV2 runs super slow on TPUs
### System Info latest version of transformers, Colab TPU, tensorflow 2 ### Who can help? @kamalkraj @Rocketknight1 @BigBird01 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction It's currently hard to share code and access to the google bucket. But I believe any TF2 DeBERTaV2 code running on TPUs will have this issue ### Expected behavior I've been trying to train a deberta v3 model on GPU and TPUs. I got it to work on multi-node and multi-gpus using Nvidia deeplearning examples libraries https://github.com/NVIDIA/DeepLearningExamples/blob/master/TensorFlow2/LanguageModeling/ I basically used the training setup and loop from the BERT code, the dataset utils from the ELECTRA code, and the model from Huggingface transformers with some changes in order to share embeddings. On 6xA40 45gb gpus i get around 1370 sentences per seconds during training (which is lower than what Nvidia gets for Electra but it's fine). Ok, now the problem.... on TPU i get **20** sentences per second I traced the issue back to the tf.gather function here https://github.com/huggingface/transformers/blob/main/src/transformers/models/deberta_v2/modeling_tf_deberta_v2.py#L525 I ran TPU profiling and this is the output: ![image](https://user-images.githubusercontent.com/44616226/180247092-6bb99a22-05aa-418a-a684-f6fa632918ce.png) GatherV2 takes most of the time: ![image](https://user-images.githubusercontent.com/44616226/180248277-d6145680-963e-49ff-99f7-5837672d0e92.png) zoomed in pictures of the fast ops ![image](https://user-images.githubusercontent.com/44616226/180248860-a7429388-0023-4c20-9f5d-5b9726c0dda0.png) Also, I'm not sure if this is TPU specific since on GPUs the training ~30% slower compared to regular ELECTRA.
07-21-2022 15:16:43
07-21-2022 15:16:43
Hi @WissamAntoun, this is an interesting issue! I honestly have no idea what the cause could be, but the fact that it highlights that function is interesting. The reason is that the DeBERTa code was ported from PyTorch, and so we wrote our own implementation of `take_along_axis` because TF didn't have one. One thing to try would be to edit the code to use `tf.experimental.numpy.take_along_axis` instead of that function. If that doesn't work then we might have to see if we can do things in a different, more performant way. Also, just in case XLA compilation is the issue, have you tried using `jit_compile=True` in `compile()` when running DeBERTa on GPU? If that also causes performance degradation then the problem is caused by XLA and not TPUs, and we can investigate from there.<|||||>Also cc @sanchit-gandhi because I'm not a TPU expert - don't worry about investigating this deeply, but if anything comes to mind when you read it, let me know!<|||||>@Rocketknight1 I read all the discussions that you had with Kamal about the `torch.gather` and `take_along_axis` . On GPUs I already enabled XLA via `tf.config.optimizer.set_jit` and via T`F_XLA_FLAGS="--tf_xla_auto_jit=2 --tf_xla_cpu_global_jit"` but I was reading that this isn't the optimal way to do it, so I'm now trying the `jit_compile=True` and will report back. Also I just finished testing `tf.experimental.numpy.take_along_axis`, on GPUs it improved performance by ~10% yet on TPUs I still have the same issue. I will also test the `jit_compile` on TPUs but I don't think it will solve anything. Thanks a lot for the replies and for the effort you put in convert the pytorch code into TF <|||||>runnig the training with `jit_compile=True` on GPU revealed a new bug. Then it is now an XLA/JIT issue not a TPU one <details> <summary style="font-size:14px">View log dump</summary> <p> ```md 2022-07-21 23:36:18.107830: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at bcast_ops.cc:50 : INVALID_ARGUMENT: Input 0 to node `pretraining_model/tf_deberta_v2_for_masked_lm/deberta/encoder/layer_._0/attention/self/BroadcastArgs` with op BroadcastArgs must be a compile-time constant. XLA compilation requires that operator arguments that represent shapes or dimensions be evaluated to concrete values at compile time. This error means that a shape or dimension argument could not be evaluated at compile time, usually because the value of the argument depends on a parameter to the computation, on a variable, or on a stateful operation such as a random number generator. 
Stack trace for op definition: File "run_pretraining.py", line 204, in <module> config = main(start_time) File "run_pretraining.py", line 184, in main trained_model = run_customized_training_loop( File "/workspaces/nv-deberta-tf2/electra/model_training_utils.py", line 675, in run_customized_training_loop train_steps_strategy( File "/workspaces/nv-deberta-tf2/electra/model_training_utils.py", line 407, in train_steps_strategy if num_grad_accumulates != 1: File "/workspaces/nv-deberta-tf2/electra/model_training_utils.py", line 408, in train_steps_strategy for step_idx in tf.range(steps * num_grad_accumulates): File "/workspaces/nv-deberta-tf2/electra/model_training_utils.py", line 410, in train_steps_strategy strategy.run(_forward, args=(next(iterator),)) File "/workspaces/nv-deberta-tf2/electra/model_training_utils.py", line 324, in _forward loss, model_outputs = model(inputs, is_training=True) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 2491, in call if config.uniform_generator: File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 2496, in call mlm_output = self._get_masked_lm_output( File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 2541, in _get_masked_lm_output if self._config.uniform_generator: File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 2550, in _get_masked_lm_output outputs = generator( File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_utils.py", line 1872, in run_call_with_unpacked_inputs ) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 1880, in call outputs = self.deberta( File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_utils.py", line 1872, in run_call_with_unpacked_inputs ) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 1617, in call encoder_outputs = self.encoder( File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 
527, in call for i, layer_module in enumerate(self.layer): File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 532, in call layer_outputs = layer_module( File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 317, in call attention_outputs = self.attention( File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 226, in call self_outputs = self.self( File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 876, in call if self.relative_attention: File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 878, in call rel_att = self.disentangled_att_bias( File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 991, in disentangled_att_bias if "c2p" in self.pos_att_type: File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 1012, in disentangled_att_bias c2p_att = tnp.take_along_axis( 2022-07-21 23:36:18.184105: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at xla_ops.cc:248 : INVALID_ARGUMENT: Input 0 to node `pretraining_model/tf_deberta_v2_for_masked_lm/deberta/encoder/layer_._0/attention/self/BroadcastArgs` with op BroadcastArgs must be a compile-time constant. XLA compilation requires that operator arguments that represent shapes or dimensions be evaluated to concrete values at compile time. This error means that a shape or dimension argument could not be evaluated at compile time, usually because the value of the argument depends on a parameter to the computation, on a variable, or on a stateful operation such as a random number generator. 
Stack trace for op definition: File "run_pretraining.py", line 204, in <module> config = main(start_time) File "run_pretraining.py", line 184, in main trained_model = run_customized_training_loop( File "/workspaces/nv-deberta-tf2/electra/model_training_utils.py", line 675, in run_customized_training_loop train_steps_strategy( File "/workspaces/nv-deberta-tf2/electra/model_training_utils.py", line 407, in train_steps_strategy if num_grad_accumulates != 1: File "/workspaces/nv-deberta-tf2/electra/model_training_utils.py", line 408, in train_steps_strategy for step_idx in tf.range(steps * num_grad_accumulates): File "/workspaces/nv-deberta-tf2/electra/model_training_utils.py", line 410, in train_steps_strategy strategy.run(_forward, args=(next(iterator),)) File "/workspaces/nv-deberta-tf2/electra/model_training_utils.py", line 324, in _forward loss, model_outputs = model(inputs, is_training=True) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 2491, in call if config.uniform_generator: File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 2496, in call mlm_output = self._get_masked_lm_output( File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 2541, in _get_masked_lm_output if self._config.uniform_generator: File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 2550, in _get_masked_lm_output outputs = generator( File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_utils.py", line 1872, in run_call_with_unpacked_inputs ) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 1880, in call outputs = self.deberta( File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_utils.py", line 1872, in run_call_with_unpacked_inputs ) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 1617, in call encoder_outputs = self.encoder( File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 
527, in call for i, layer_module in enumerate(self.layer): File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 532, in call layer_outputs = layer_module( File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 317, in call attention_outputs = self.attention( File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 226, in call self_outputs = self.self( File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 876, in call if self.relative_attention: File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 878, in call rel_att = self.disentangled_att_bias( File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 991, in disentangled_att_bias if "c2p" in self.pos_att_type: File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 1012, in disentangled_att_bias c2p_att = tnp.take_along_axis( [[{{node pretraining_model/tf_deberta_v2_for_masked_lm/deberta/encoder/layer_._0/attention/self/BroadcastArgs}}]] Traceback (most recent call last): File "run_pretraining.py", line 204, in <module> config = main(start_time) File "run_pretraining.py", line 184, in main trained_model = run_customized_training_loop( File "/workspaces/nv-deberta-tf2/electra/model_training_utils.py", line 675, in run_customized_training_loop train_steps_strategy( File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler raise e.with_traceback(filtered_tb) from None File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py", line 54, in quick_execute tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error: Input 0 to node `pretraining_model/tf_deberta_v2_for_masked_lm/deberta/encoder/layer_._0/attention/self/BroadcastArgs` with op BroadcastArgs must be a compile-time constant. XLA compilation requires that operator arguments that represent shapes or dimensions be evaluated to concrete values at compile time. 
This error means that a shape or dimension argument could not be evaluated at compile time, usually because the value of the argument depends on a parameter to the computation, on a variable, or on a stateful operation such as a random number generator. Stack trace for op definition: File "run_pretraining.py", line 204, in <module> config = main(start_time) File "run_pretraining.py", line 184, in main trained_model = run_customized_training_loop( File "/workspaces/nv-deberta-tf2/electra/model_training_utils.py", line 675, in run_customized_training_loop train_steps_strategy( File "/workspaces/nv-deberta-tf2/electra/model_training_utils.py", line 407, in train_steps_strategy if num_grad_accumulates != 1: File "/workspaces/nv-deberta-tf2/electra/model_training_utils.py", line 408, in train_steps_strategy for step_idx in tf.range(steps * num_grad_accumulates): File "/workspaces/nv-deberta-tf2/electra/model_training_utils.py", line 410, in train_steps_strategy strategy.run(_forward, args=(next(iterator),)) File "/workspaces/nv-deberta-tf2/electra/model_training_utils.py", line 324, in _forward loss, model_outputs = model(inputs, is_training=True) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 2491, in call if config.uniform_generator: File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 2496, in call mlm_output = self._get_masked_lm_output( File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 2541, in _get_masked_lm_output if self._config.uniform_generator: File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 2550, in _get_masked_lm_output outputs = generator( File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_utils.py", line 1872, in run_call_with_unpacked_inputs ) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 1880, in call outputs = self.deberta( File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_utils.py", line 1872, in run_call_with_unpacked_inputs ) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 1617, in call encoder_outputs = self.encoder( File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in 
__call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 527, in call for i, layer_module in enumerate(self.layer): File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 532, in call layer_outputs = layer_module( File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 317, in call attention_outputs = self.attention( File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 226, in call self_outputs = self.self( File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1096, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler return fn(*args, **kwargs) File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 876, in call if self.relative_attention: File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 878, in call rel_att = self.disentangled_att_bias( File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 991, in disentangled_att_bias if "c2p" in self.pos_att_type: File "/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py", line 1012, in disentangled_att_bias c2p_att = tnp.take_along_axis( [[{{node pretraining_model/tf_deberta_v2_for_masked_lm/deberta/encoder/layer_._0/attention/self/BroadcastArgs}}]] [[while/body/_1/while/StatefulPartitionedCall]] [Op:__inference_train_steps_strategy_177980] ``` </p></details><|||||>@WissamAntoun Confirmed reproduction of the issue here. Our TF DeBERTa implementation seems to have issues with XLA - I'm investigating now.<|||||>@WissamAntoun We have a potential fix - I've confirmed that I can compile `microsoft/deberta-v3-small` with XLA on my local machine. Can you try installing this branch and let me know if this fixes the problem for you? You can use `pip install git+https://github.com/huggingface/transformers.git@deberta-xla-fixes`<|||||>I confirm it works on GPUs with XLA, and I got ~20% improved speedup. I'm still testing now on TPUs, will let you know ASAP<|||||>Weirdly enough TPUs didn't seem to care about the changes 😅 even after we removed all the if branches<|||||>Hmm. Can you check that you don't get the slowdown if you switch the model to another model, like BERT or ELECTRA, while keeping all of the other code the same (especially data loading)? 
I know the profiling indicates that the `GatherV2` is the problem, but I'm a little suspicious!<|||||>I tried disabling `relative_attention` in deberta, which makes the model a regular BERT, and the performance improved 40x 😅<|||||>@WissamAntoun So the issue really is in that gather! That's extremely interesting - with the simplified code, it's just a single call to `tf.gather`, but perhaps the `batch_dims` argument is not handled elegantly on TPU, or XLA converts it in a way that doesn't run well on TPU. Is it possible that some kind of memory spill is occurring? Can you try lowering your batch size and increasing steps_per_execution? If that isn't it, then I have no idea - maybe there's some way to rewrite the gather, but I don't really know what to try!<|||||>@Rocketknight1 I tried your suggestions without any success, sadly! Then I tried replacing the whole `take_along_axis` function with `tf.gather(..,...,batch_dims=2)` which is equivalent, according to this test I made. GPU still runs fine, TPU still has the same issue 😔. I also ran out of ideas to try, now I'm just waiting for the TPU gods 😅 <details> <summary style="font-size:14px">View code</summary> <p> ```python #%% import tensorflow as tf #%% x_shape = [32, 128, 512] indices_shape = [32, 128, 128] x = tf.random.uniform(shape=x_shape) indices = tf.random.uniform(shape=indices_shape, minval=1, maxval=128, dtype=tf.int32) #%% flat_x = tf.reshape(x, (-1, x_shape[-1])) print(flat_x.shape) # (4096, 512) flat_indices = tf.reshape(indices, (-1, indices_shape[-1])) print(flat_indices.shape) # (4096, 128) #%% gathered = tf.gather( params=flat_x, indices=flat_indices, batch_dims=1, validate_indices=None ) print(gathered.shape) # (4096, 128) gathered_reshaped = tf.reshape(gathered, indices.shape) print(gathered_reshaped.shape) # ( 32, 128, 128) # %% gathered2 = tf.gather(params=x, indices=indices, batch_dims=2, validate_indices=None) print(gathered2.shape) # (32, 128, 128) # %% tf.assert_equal(gathered2, gathered_reshaped) # passes # %% ``` </p></details> <|||||>I'm clueless in that case - @patrickvonplaten @sanchit-gandhi do you have any idea why a `gather` or `take_along_axis` op which is performant on GPU and compiles with XLA would become a huge bottleneck on TPU?<|||||>In our JAX BLOOM experiments, we experienced significant improvements in performance by changing how we indexed. Swapping scatter ops for one-host broadcasts, we obtained 3-4x speed-ups in practice. The logic is largely lifted from T5X: https://github.com/google-research/t5x/blob/63d9addf628c6d8c547a407a32095fcb527bb20b/t5x/examples/scalable_t5/layers.py#L280-L284 I wonder if applying similar logic here and swapping the gather op to one-hot indexing might help?<|||||>DO you mean something to BERT one-hot embeddings ?https://github.com/tensorflow/models/blob/master/official/nlp/modeling/layers/on_device_embedding.py#L79<|||||>Simply modifying the bottleneck function: https://github.com/huggingface/transformers/blob/f4e172716b91b477ce3cddc9a253094b7121a4b8/src/transformers/models/deberta_v2/modeling_tf_deberta_v2.py#L525 To use `one_hot` encodings as opposed to a `gather` op. The example you've liked looks like the right idea! Worth a try IMO!<|||||>I tried this, although I'm not sure if it's the best implementation ```python def take_along_axis(x, indices): one_hot_indices = tf.one_hot(indices, depth=x.shape[-1], dtype=x.dtype) # [B, S, P, D] => [B, 128, 128, 512] # [B, S, P, D] . 
[B, S, D, 1] = [B, S, P, 1] gathered = tf.squeeze(tf.matmul(one_hot_indices, tf.expand_dims(x, axis=-1)), axis=-1) return gathered ``` It improved the speed from 20 seq/s to 110 seq/s. For reference, regular ELECTRA/BERT got ~800 seq/s. Now it's the reshape and squeeze operations that are "wasting" time: ![image](https://user-images.githubusercontent.com/44616226/180846643-495520a6-3605-4362-a6f9-31a69ed4fccc.png) <|||||>@sanchit-gandhi is there a better implementation than mine, without `expand_dims` or `squeeze` since these are unfavorable operations on TPUs<|||||>Nice! A 5x speed up is a good start. If we can get another 5x we'll be in business. Thanks for linking the Tensorboard profile! Super helpful in identifying bottlenecks like these 🙏 Interesting to see the `expand_dims` and `squeeze` are now accruing large amounts of runtime. I'm not a TF user (it's mainly JAX on TPU for me!), so I'm not up to speed with implementation details, but my impression from the profile is that the shapes are unfavourable for XLA. Perhaps you could have a play around and see whether changing the tensor shapes / choice of TF ops have any effect? It's been the case for me in the past that using tensors of different shape can give big speed-ups. Is there a repo you could reference for XLA optimised TF code? For JAX, we usually look to the T5X repo when deciding on tensor shapes and trying out 'hacks' like these: https://github.com/google-research/t5x/tree/main/t5x cc @Rocketknight1 who's more up to speed in the TF sphere!<|||||>Hey @WissamAntoun! Any luck with this? Maybe also worth trying https://www.tensorflow.org/api_docs/python/tf/experimental/numpy/take_along_axis<|||||>Hey @sanchit-gandhi , I have already tried the exp. numpy function with no improvement at all compared to `gather` with `batch_dims=2`. I also tried going up to sequence length of `512`, I got the exact same speedup but it is still much slower than expected (around 20 seq/s for sentence length 512). I also changed batch sizes with no effect at all <|||||>Okay probably worth sticking with the one-hot encoding hack then, seems most promising! I'm not a TF user so can't comment on the exact implementations changes you could make with the `expand_dims` or `squeeze` ops. Perhaps @gante could take a look here with his experience using TF and XLA?<|||||>> Now it's the reshape and squeeze operations that are "wasting" time Interesting -- I spent some time with TPU profiling on a different application (TF text generation with a myriad of models), and found that those two operations were part of the bottleneck (along XLA's `dynamic_update_slice`). They accounted for 50-70% of the execution time. Do you know if it is also a bottleneck for FLAX, @sanchit-gandhi (e.g. the cache updates [here](https://github.com/huggingface/transformers/blob/0b8c1b6994082950044452a670e8417a5ebc2db0/src/transformers/models/gpt2/modeling_flax_gpt2.py#L163))? <|||||>For JAX BLOOM we couldn't even compile the 176B parameter model with the naive implementation of `concatenate_to_cache`, yet alone benchmark which operations consumed the bulk of the execution time! We swapped it for this more efficient implementation (with one-hot encodings etc): https://github.com/huggingface/bloom-jax-inference/blob/2a04aa519d262729d54adef3d19d63879f81ea89/bloom_inference/modeling_bloom/modeling_bloom.py#L119 Coincidentally, we've just run the JAX profiler for this implementation and are going through the traceback it with some of the Google JAX guys later today. 
Will report back on how performance fares!<|||||>> ```python > def take_along_axis(x, indices): > > one_hot_indices = tf.one_hot(indices, depth=x.shape[-1], dtype=x.dtype) # [B, S, P, D] => [B, 128, 128, 512] > > # [B, S, P, D] . [B, S, D, 1] = [B, S, P, 1] > gathered = tf.squeeze(tf.matmul(one_hot_indices, tf.expand_dims(x, axis=-1)), axis=-1) > return gathered > ``` @gante Do you think the one-hot trick can be done without the `expands_dims` and `squeeze`, maybe then we can just dodge the whole problem<|||||>@sanchit-gandhi that's interesting! I'd be interested in knowing the pro tips for XLA (which should also apply to TF) @WissamAntoun Yeah, we can rework it with [`tf.einsum`](https://www.tensorflow.org/api_docs/python/tf/einsum) magic, assuming the operation can be rewritten with [Einstein notation](https://en.wikipedia.org/wiki/Einstein_notation) -- in this case, it is possible! Check the implementation below, give it a try, and let us know if it helped with speed on a TPU (my debug runs confirmed that they are numerically equivalent) ```python def take_along_axis(x, indices): # [B, S, P] -> [B, S, P, D] one_hot_indices = tf.one_hot(indices, depth=x.shape[-1], dtype=x.dtype) # if we ignore the first two dims, this is equivalent to multiplying a matrix (one hot) by a vector (x) # grossly abusing notation: [B, S, P, D] . [B, S, D] = [B, S, P] gathered = tf.einsum('ijkl,ijl->ijk', one_hot_indices, x) return gathered ```<|||||>@gante I tested the `tf.einsum` implementation. It gave me the same performance as the `one_hot` trick, which is about ~120 seq/second. I tried it with different batch sizes but still it didn't change much. This is a screenshot of the profiler: ![Screenshot 2022-08-03 155826](https://user-images.githubusercontent.com/44616226/182791801-021e4e4b-cff1-476a-8d94-95cec54b43d7.jpg) <|||||>I'm out of suggestions :( I suspect this is a good question for Google's XLA and TPU teams -- the problem is probably at a compiler/hardware level.<|||||>Yeah this is a weird and unexpected bug. Do you know someone we can get in contact with from Google's XLA or TPU team? And thanks a lot for the efforts you guys put into this issue!<|||||>@sanchit-gandhi do you know a good point of contact for TPU problems?<|||||>Ping @JackCaoG for help :) <|||||><del>Thanks, I will try to take a look or finding someone from my team to help. </del> nvm, this is tf2, I only knows pt/xla lol<|||||>> @sanchit-gandhi do you know a good point of contact for TPU problems? Only for JAX on TPU, I'll ask around and see if there is anyone who can help with TF!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,238
closed
Update all no_trainer scripts
# What does this PR do? This PR updates the `no_trainer` scripts with the latest capabilities in accelerate: - Includes gradient_accumulation wrapper - Adds the `gather_for_metrics` wrapper - Removes the explicit `step` param since it breaks wandb trackers (it will never be pushed) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
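For reference, a minimal sketch of the two wrappers listed above, assuming the standard `accelerate` API (`Accelerator.accumulate` and `Accelerator.gather_for_metrics`) and that `model`, `optimizer`, the dataloaders and `metric` are already defined as in the example scripts; the exact diff applied to each `no_trainer` script may differ:

```python
import torch
from accelerate import Accelerator

# gradient accumulation is now handled by the Accelerator itself
accelerator = Accelerator(gradient_accumulation_steps=2)
model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader, eval_dataloader
)

for batch in train_dataloader:
    # the accumulate context manager skips the optimizer step until enough batches are seen
    with accelerator.accumulate(model):
        outputs = model(**batch)
        accelerator.backward(outputs.loss)
        optimizer.step()
        optimizer.zero_grad()

for batch in eval_dataloader:
    with torch.no_grad():
        outputs = model(**batch)
    # gather_for_metrics drops the duplicated samples added for distributed evaluation
    predictions, references = accelerator.gather_for_metrics(
        (outputs.logits.argmax(dim=-1), batch["labels"])
    )
    metric.add_batch(predictions=predictions, references=references)
```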
07-21-2022 13:11:32
07-21-2022 13:11:32
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,237
closed
ONNX runtime error after export of Deberta v3 SequenceClassification model
### System Info - Transformers: 4.20.1.dev0 (master branch as of 2022-07-21) - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.8.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu113 - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No Issue both occurs on a Linux notebook with GPU (databricks platform) and on windows without GPU. **Do note that I use the latest development version of transformers, i.e. the current master branch of this repo.** This is necessary because there are changes to symbolic ops in the Deberta V3 model that have not made it into a stable release yet. ### Who can help? @LysandreJik ### Information - [X] My own modified scripts ### Tasks - [X] My own task or dataset (give details below) ### Reproduction I am trying to make an ONNX export of a fine-tuned Deberta sequence classification model. Below are the steps to make such a model and export it to ONNX. 1. First initiate a deberta sequence model. This example will just use the random weights, as there is no need for actual fine-tuning in this minimal example 2. Export to onnx 3. Test an inference using `onnxruntime` ```Python from pathlib import Path from onnxruntime import InferenceSession from transformers.models.deberta_v2 import DebertaV2OnnxConfig from transformers.onnx import export from transformers import AutoTokenizer, AutoConfig, AutoModelForSequenceClassification # Step 1 model_base = 'microsoft/deberta-v3-xsmall' config = AutoConfig.from_pretrained(model_base) tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=True) model = AutoModelForSequenceClassification.from_pretrained(model_base) # Step 2 onnx_path = Path(f"deberta.onnx") onnx_config = DebertaV2OnnxConfig(config, task="sequence-classification") export(tokenizer, model, onnx_config, 15, onnx_path) # Step 3 session = InferenceSession(onnx_path.as_posix()) inputs = tokenizer("Using DeBERTa with ONNX Runtime!", return_tensors="np", return_token_type_ids=False) input_feed = {k: v.astype('int64') for k, v in inputs.items()} outputs = session.run(output_names=['logits'], input_feed=input_feed) ``` I would expect outputs from the inference model. However the error I am getting is: ``` onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running Expand node. Name:'Expand_674' Status Message: invalid expand shape ``` ### Expected behavior Surprisingly, this model doesn't seem to work when the sequence length is anything else but 8. For example: ```Python # Anything with a sequence length of 8 runs fine: inputs = tokenizer(["Using Deberta V3!"], return_tensors="np", return_token_type_ids=False) inputs1 = {k: v.astype('int64') for k, v in inputs.items()} outputs = session.run(output_names=['logits'], input_feed=inputs1) # Anything else doesnt: inputs = tokenizer(["Using Deberta V3 with ONNX Runtime!"], return_tensors="np", return_token_type_ids=False) inputs2 = {k: v.astype('int64') for k, v in inputs.items()} outputs = session.run(output_names=['logits'], input_feed=inputs2) # Multiples of 8 will also not work: inputs = tokenizer(["Hello world. This is me. 
I will crash this model now!"], return_tensors="np", return_token_type_ids=False) inputs3 = {k: v.astype('int64') for k, v in inputs.items()} outputs = session.run(output_names=['logits'], input_feed=inputs3) ``` I was wondering if it maybe has anything to do with the dynamic axes. However when I check the graph, it seems correct: ```Python import onnx m = onnx.load(str(onnx_path)) print(m.graph.input) ``` ``` [name: "input_ids" type { tensor_type { elem_type: 7 shape { dim { dim_param: "batch" } dim { dim_param: "sequence" } } } } , name: "attention_mask" type { tensor_type { elem_type: 7 shape { dim { dim_param: "batch" } dim { dim_param: "sequence" } } } } ] ```
07-21-2022 12:24:23
07-21-2022 12:24:23
Hi @iiLaurens, thanks for the PR on fixing the export of DeBERTa! In terms of your use case, another possibility to simplify all the code would be using the [optimum library](https://github.com/huggingface/optimum) which is an extension of transformers. You can use directly [ORTModels](https://github.com/huggingface/optimum/blob/main/optimum/onnxruntime/modeling_ort.py#L526) and the pipeline for inference which are natively integrated with transformers. Here is a snippet adapted to your case: ```python from optimum.onnxruntime.modeling_ort import ORTModelForSequenceClassification from transformers import AutoTokenizer ort_model = ORTModelForSequenceClassification.from_pretrained(model_id="results", file_name="deberta_v3_seq.onnx") # Or download directly from the hub once your fix makes its way to the main of transformers # ort_model = ORTModelForSequenceClassification.from_pretrained('microsoft/deberta-v3-xsmall') tokenizer = AutoTokenizer.from_pretrained('microsoft/deberta-v3-xsmall', use_fast=True) inputs = tokenizer("Using DeBERTa with ONNX Runtime!", return_tensors="pt", return_token_type_ids=False) pred = ort_model(**inputs) ``` ``` >>> pred SequenceClassifierOutput(loss=None, logits=tensor([[-0.0199, 0.1397]]), hidden_states=None, attentions=None) ``` Besides, you can also leverage other tools in optimum(graph optimization, quantization...) for accelerating your inference. Cheers!
transformers
18,236
closed
Fix command of doc tests for local testing
# What does this PR do? In the utils/prepare_for_doc_test.py file, the command to test the doc test locally had typo, this fixes it <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ydshieh Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-21-2022 12:12:31
07-21-2022 12:12:31
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey! You should update line 39 in a similar fashion :)<|||||>> Hey! You should update line 39 in a similar fashion :) Yes, realised bit later, Done<|||||>@ydshieh Can we close this ? @LysandreJik Please point to a next good bug to pick up.<|||||>@oneraghavan, thanks for wanting to contribute! There are a lot of issues available [here](https://github.com/huggingface/transformers/issues). Feel free to take a look and find one you'd like to try your hand at!
transformers
18,235
closed
Correct BLOOM parameters to 176B
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-21-2022 12:06:14
07-21-2022 12:06:14
_The documentation is not available anymore as the PR was closed or merged._<|||||>> The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18235). All of your documentation changes will be reflected on that endpoint. Great!<|||||>Thanks for the fix ! 🚀 <|||||>awesome, thanks for fixing @muhammad-ahmed-ghani!
transformers
18,234
closed
Longformer, BigBird take same time to run in sparse mode as well as full-mode
### System Info Transformers: 4.20.1 Python: 3.8.12 Pretrained models & tokenizer from HF: "allenai/longformer-base-4096" and "google/bigbird-roberta-base" Longformer: Takes the same time to train (fine-tune) a pretrained model for different sliding window sizes of 256, 512, 1024 or 2048. One would expect that at lower sliding window sizes, the training times should be lower. BigBird: Same problem as above. In fact BigBird has a simple switch to change from sparse-attention to full-attention. The training time taken in both cases is roughly the same, which seems to point to some issue. Small but complete source code to simulate: https://colab.research.google.com/drive/1nm7a-qJseNSCkAB5_3QNkVSrHc8zePAV?usp=sharing ### Who can help? @ydshieh ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction https://colab.research.google.com/drive/1nm7a-qJseNSCkAB5_3QNkVSrHc8zePAV?usp=sharing ### Expected behavior Longformer: Takes a different time to train (fine-tune) a pretrained model for different sliding window sizes of 256, 512, 1024 or 2048. One would expect that at lower sliding window sizes, the training times should be lower. BigBird: Same problem as above. In fact BigBird has a simple switch to change from sparse-attention to full-attention. The training time taken in both cases is roughly the same, which seems to point to some issue.
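For reference, the two switches being compared in this report can be set through the model configs. A sketch assuming the standard `transformers` API (the Colab above wires this into a full `Trainer` run):

```python
from transformers import (
    BigBirdForSequenceClassification,
    LongformerConfig,
    LongformerForSequenceClassification,
)

# Longformer: the sliding-window size is controlled by `attention_window`
lf_config = LongformerConfig.from_pretrained("allenai/longformer-base-4096", attention_window=256)
lf_model = LongformerForSequenceClassification.from_pretrained(
    "allenai/longformer-base-4096", config=lf_config
)

# BigBird: `attention_type` switches between sparse and full attention
bb_sparse = BigBirdForSequenceClassification.from_pretrained(
    "google/bigbird-roberta-base", attention_type="block_sparse"
)
bb_full = BigBirdForSequenceClassification.from_pretrained(
    "google/bigbird-roberta-base", attention_type="original_full"
)
```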
07-21-2022 11:56:17
07-21-2022 11:56:17
@ydshieh It gets a bit weirder. Today I tried to use the Longformer directly, bypassing Huggingface. It needed minor changes to the above code. The link is here: https://colab.research.google.com/drive/1R5uDsbl3ZmUIccZtefVNBs3CXU_vcDZd?usp=sharing The observations continue to be perplexing: CASE 1: ATT_MODE = 'sliding_chunks'; 100% LOCAL attention, i.e. attention_mask = 1 for all tokens. SLIDE_WIN_SIZE = 256 (default) takes between 9-10 hours to train; SLIDE_WIN_SIZE = 1024 takes between 9-10 hours to train. Observation: Sparse attention with a 256-token window size should not take the same fine-tuning time as 1024 tokens. CASE 2: ATT_MODE = 'sliding_chunks'; NO attention, i.e. attention_mask = 0 for all tokens. SLIDE_WIN_SIZE is immaterial. Observation: It is observed that even if none of the tokens attend to each other, the training time taken is the same as case 1 above, i.e. 9-10 hours, which should not be the case. CASE 3: ATT_MODE = 'sliding_chunks'; 100% Global attention, i.e. attention_mask = 2. SLIDE_WIN_SIZE is immaterial. Observation: With 100% global attention, every token attends to every other token. It is observed that if all tokens attend to each other, the training time taken is 16-17 hours. This training time should be similar to Case 4, which is NOT the case. Case 4: This is the most bizarre. ATT_MODE = 'n2'. We can simply choose the attention mode = 'n2', which is regular quadratic attention. Theoretically this should take the same training time as Case 3 (when all tokens are marked as global). Observation: n2 attention takes the lowest training time of approx 2 hours only, which is the exact opposite of what Longformer is supposed to do!!! Should I open a bug directly with the Longformer GitHub?<|||||>Hi @allohvk After doing some experiments, I think we need **really** long sequences and attention window sizes to see the benefits of the attention window size.
Here is the main summary, which is from the 2 tables below: ## Summary - with tiny model, the effect of attention window size is more clear, especially on **CPU** - large model size has more overhead on other layers (for example, intermediate linear layers) - for a fixed model size, the effect is even more clear when the `max_len` get larger - with GPU, (which is very fast), the effect is less clear, but we can still see it with very long sequence/att_win (16384) ### Model size - **Tiny**: n_layers = 1, hidden_size = 1, intermediate_size = 1 - **Base**: n_layers = 12, hidden_size = 256, intermediate_size = 1024 - **Large**: n_layers = 24, hidden_size = 1024, intermediate_size = 4096 ⚠️ **(Be careful with `it/s` and `s/it` below)** ### CPU (256G RAM) | CPU | Tiny | Base | Large | | ------------- | -------------: | -------------: | -------------: | | max_len 2048 , attn_win 512 | 19.74 it/s | 1.02 s/it | 5.92 s/it | | max_len 2048 , attn_win 1024 | 14.42 it/s | 1.25 s/it | 6.47 s/it | | max_len 2048 , attn_win 2048 | 13.25 it/s | 1.48 s/it | 6.69 s/it | | max_len 4096, attn_win 512 | 16.55 it/s | 1.61 s/it | 10.31 s/it | | max_len 4096, attn_win 1024 | 10.00 it/s | 2.20 s/it | 11.29 s/it | | max_len 4096, attn_win 2048 | 4.84 it/s | 3.85 s/it | 13.47 s/it | | max_len 4096, attn_win 4096 | 3.18 it/s | 6.15 s/it | 15.49 s/it | | max_len 16384, attn_win 512 | 3.51 it/s | 5.61 s/it | 42.33 s/it | | max_len 16384, attn_win 1024 | 2.03 it/s | 8.08 s/it | 48.13 s/it | | max_len 16384, attn_win 2048 | 1.12 it/s | 12.03 s/it | 56.93 s/it | | max_len 16384, attn_win 4096 | 1.62 s/it | 20.22 s/it | 87.87 s/it | | max_len 16384, attn_win 8192 | 3.02 s/it | 34.67 s/it | 131.81 s/it | | max_len 16384, attn_win 16384 | 5.00 s/it | 56.79 s/it | 187.91 s/it | ### GPU (A100) | GPU | Tiny | Base | Large | | ------------- | -------------: | -------------: | -------------: | | max_len 2048 , attn_win 512 | 25.48 it/s | 5.15 it/s | 2.57 it/s | | max_len 2048 , attn_win 1024 | 26.33 it/s | 5.10 it/s | 2.42 it/s | | max_len 2048 , attn_win 2048 | 26.52 it/s | 5.09 it/s | 2.10 it/s | | max_len 4096, attn_win 512 | 25.55 it/s | 5.26 it/s | 2.32 it/s | | max_len 4096, attn_win 1024 | 25.73 it/s | 5.10 it/s | 2.01 it/s | | max_len 4096, attn_win 2048 | 24.23 it/s | 4.63 it/s | 1.52 it/s | | max_len 4096, attn_win 4096 | 21.30 it/s | 3.76 it/s | 1.05 it/s | | max_len 16384, attn_win 512 | 7.39 it/s | 4.24 it/s | 1.07 it/s | | max_len 16384, attn_win 1024 | 13.30 it/s | 3.37 it/s | 1.25 s/it | | max_len 16384, attn_win 2048 | 20.17 it/s | 2.33 it/s | 1.88 s/it | | max_len 16384, attn_win 4096 | 16.50 it/s | 1.44 it/s | N/A | | max_len 16384, attn_win 8192 | 13.46 it/s | 1.21 s/it | N/A | | max_len 16384, attn_win 16384 | 9.04 it/s | 2.16 s/it | N/A |<|||||>For the record, here are the 2 scripts I used to measure running time (copied from yours with modification) ```python python run.py ``` ### run.py ```python import os import json def run(attention_window, steps, batch_size, max_length): os.system("rm -rf output.txt") os.system(f"python debug.py {attention_window} {steps} {batch_size} {max_length} > output.txt 2>&1") with open("output.txt") as fp: for line in fp: if f"{steps - 1}/{steps}" in line: line = line.strip() idx = line.find(f"{steps - 1}/{steps}") line = line[idx:] if "Initializing global" in line: idx = line.find("Initializing global") line = line[:idx] line = line.strip() return line res = {} steps = 10 for batch_size in [1]: for max_length in [2048, 4096, 16384]: for attention_window in [512, 1024, 2048, 
4096, 8192, 16384]: if attention_window > max_length: continue r = run(attention_window=attention_window, steps=steps, batch_size=batch_size, max_length=max_length) print(f"(attn_win: {attention_window}, batch_size: {batch_size}, max_len: {max_length}) --> {r}") print("=" * 40) res[f"(attn_win: {attention_window}, batch_size: {batch_size}, max_len: {max_length})"] = r with open("results.json", "w") as fp: json.dump(res, fp, indent=4, ensure_ascii=False) ``` ### debug.py ```python import sys import torch import datasets import transformers from transformers import BigBirdForSequenceClassification, Trainer, TrainingArguments, AutoTokenizer, AutoModel from transformers.models.longformer.modeling_longformer import LongformerForSequenceClassification, LongformerConfig from sklearn.metrics import accuracy_score from torch.utils.data import Dataset import logging # logging.disable(logging.INFO) def measure(attention_window, steps, batch_size, max_length): SLIDE_WIN_SIZE = attention_window STEPS = steps BATCH_SIZE = batch_size GRAD_ACCUMULATION_STEPS = 1 LEN = max_length MODEL = 'allenai/longformer-base-4096' LONGFORMER = True CACHE_ROOT = "./" train_data, test_data = datasets.load_dataset('imdb', split=['train', 'test'], cache_dir=f'{CACHE_ROOT}/data') config = LongformerConfig.from_pretrained(MODEL, num_labels=2, return_dict=True) config.num_hidden_layers = 12 config.hidden_size = 256 config.num_attention_heads = 1 config.intermediate_size = 1024 config.attention_window = SLIDE_WIN_SIZE model = LongformerForSequenceClassification(config=config) tokenizer = AutoTokenizer.from_pretrained(MODEL, max_length=LEN, cache_dir=f'{CACHE_ROOT}/data') print("DEFAULT - Sliding window width across layers", model.config.attention_window) model.config.attention_window = SLIDE_WIN_SIZE print("UPDATED - Sliding window width across layers", model.config.attention_window) def tokenization(batched_text): return tokenizer(batched_text['text'], padding = 'max_length', truncation=True, max_length = LEN) train_data = train_data.map(tokenization, batched = True, batch_size = len(train_data)) test_data = test_data.map(tokenization, batched = True, batch_size = len(test_data)) train_data.set_format('torch', columns=['input_ids', 'attention_mask', 'label']) test_data.set_format('torch', columns=['input_ids', 'attention_mask', 'label']) def compute_metrics(pred): labels = pred.label_ids preds = pred.predictions.argmax(-1) acc = accuracy_score(labels, preds) return {'accuracy': acc} training_args = TrainingArguments( output_dir=f'{CACHE_ROOT}/results', # num_train_epochs=1, per_device_train_batch_size=BATCH_SIZE, max_steps=STEPS, gradient_accumulation_steps=GRAD_ACCUMULATION_STEPS, warmup_steps=160, weight_decay=0.01, learning_rate=2e-5, fp16=False, # True, dataloader_num_workers=2, logging_strategy="steps", logging_steps=1, ) trainer = Trainer(model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=train_data) trainer.train() if __name__ == "__main__": data = sys.argv[1:] print(data) data = [int(x) for x in data] measure(*data) ```<|||||>Thank you so much @ydshieh The observations are fascinating. One would think they would be part of the actual BigBird and Longformer papers but they are not. The benefits of changing the hyperparameters like sliding_window, global tokens etc manifest at really high seq sizes (not 2048 or even 4096 but 8000 or 16000). Because I was testing on a GPU and at a size of 2048, I could hardly see any difference. Thank you for your detailed testing and observations. 
In fact this means that there is a gap to squeeze in a couple of new transformer models/white papers which specifically address the max_seqlen 512 - 4096 space in a non-quadratic way such that it makes a meaningful difference in training time. Hope someone comes out with a new model soon :)<|||||>No good
transformers
18,233
closed
Make errors for loss-less models more user-friendly
# What does this PR do? A common mistake beginners encounter is trying to fine-tune with the `Trainer` one of the AutoModel classes which do not have any head and can't be fine-tuned directly. This PR makes the Trainer error out at init when it receives such a model, and also adds a more helpful error message when the outputs of the model don't have a loss.
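For context, a minimal sketch of the mistake this targets, assuming a standard text-classification setup (with `train_dataset` standing in for any tokenized dataset):

```python
from transformers import AutoModel, AutoModelForSequenceClassification, Trainer, TrainingArguments

args = TrainingArguments(output_dir="out")

# Wrong: AutoModel has no task head, so its outputs contain no `loss`.
# With this change the Trainer can flag the problem at init instead of failing later.
bare_model = AutoModel.from_pretrained("bert-base-cased")
trainer = Trainer(model=bare_model, args=args, train_dataset=train_dataset)

# Right: pick the Auto class matching the task so a head (and a loss) is attached.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
```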
07-21-2022 09:45:57
07-21-2022 09:45:57
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,232
closed
Fix TrainingArguments help section
# What does this PR do? A typo was introduced in #18134 with a trailing comma that has no business being there. It broke the `--help` for all example scripts, as reported in #18222; this PR fixes it and adds a type annotation. Fixes #18222
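For the curious, one way a stray trailing comma can break `--help` is plain Python rather than anything argparse-specific: a wrapped help string silently becomes a tuple, which the help formatter then cannot render. A minimal illustration (not the actual `TrainingArguments` field; the exact location of the typo in #18134 is not reproduced here):

```python
# A multi-line help string wrapped in parentheses is still a plain str...
help_ok = (
    "Some help text that wraps across "
    "lines for readability."
)
print(type(help_ok))  # <class 'str'>

# ...but one extra trailing comma silently turns it into a 1-element tuple.
help_broken = (
    "Some help text that wraps across "
    "lines for readability.",
)
print(type(help_broken))  # <class 'tuple'>
```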
07-21-2022 08:55:45
07-21-2022 08:55:45
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,231
closed
Conflict between pyctcdecode and Wav2Vec2ProcessorWithLM
### System Info transformers 4975002df50c472cbb6f8ac3580e475f570606ab pyctcdecode 9afead58560df07c021aa01285cd941f70fe93d5 ### Who can help? @patrici ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Error: ` The tokens {'', '⁇', ' '} are defined in the tokenizer's vocabulary, but not in the decoder's alphabet. Make sure to include {'', '⁇', ' '} in the decoder's alphabet.` Reason: `get_missing_alphabet_tokens` will replace special tokens https://github.com/huggingface/transformers/blob/4975002df50c472cbb6f8ac3580e475f570606ab/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L196 however if we `build_ctcdecoder` using the same tokenizer vocab, it will be always mismatch. ### Expected behavior A straight fix is do the same mapping on build_ctcdecoder ```python from transformers import AutoProcessor from pyctcdecode.alphabet import BLANK_TOKEN_PTN, UNK_TOKEN, UNK_TOKEN_PTN, Alphabet from pyctcdecode import build_ctcdecoder from transformers import Wav2Vec2ProcessorWithLM model_to_add_lm = "wav2vec2-large-xxxxx" lm_arpa_path = "xxxxx.arpa" processor = AutoProcessor.from_pretrained(model_to_add_lm) vocab_dict = processor.tokenizer.get_vocab() sorted_vocab_dict = {k: v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])} alphabet = list(sorted_vocab_dict.keys()) for i, token in enumerate(alphabet): if BLANK_TOKEN_PTN.match(token): alphabet[i] = "" if token == processor.tokenizer.word_delimiter_token: alphabet[i] = " " if UNK_TOKEN_PTN.match(token): alphabet[i] = UNK_TOKEN decoder = build_ctcdecoder( labels=alphabet, kenlm_model_path=lm_arpa_path, ) decoder._alphabet._labels = alphabet processor_with_lm = Wav2Vec2ProcessorWithLM( feature_extractor=processor.feature_extractor, tokenizer=processor.tokenizer, decoder=decoder ) processor_with_lm.save_pretrained("xxxxxx") ```
07-21-2022 08:48:54
07-21-2022 08:48:54
Maybe of interest to @patrickvonplaten @anton-l @sanchit-gandhi** <|||||>Hi @voidful. The function [`get_missing_alphabet_tokens`](https://github.com/huggingface/transformers/blob/99eb9b523f9b9ea6096323ce5610ce6633acc88a/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L187) will only replace 'special' tokens associated with CTC decoding, namely: - The 'blank' token - The 'pad' token - The 'word delimiter' token The function is used to highlight discrepancies between the tokenizer and decoder vocabularies. The tokens highlighted as missing `{'', '⁇', ' '}` are not filtered by this function, and thus appear to be missing in the decoder vocabulary. May I ask, what is it exactly that you are proposing? If we you could provide a code snippet to reproduce this behaviour it would be much appreciated.<|||||>My situation is that I have a fine-tuned xlsr model, and I want to add kenlm on top of it. And I build the decoder using `build_ctcdecoder`, the label will be the same as our tokenizer vocabulary. Therefore It will have discrepancies on https://github.com/huggingface/transformers/blob/99eb9b523f9b9ea6096323ce5610ce6633acc88a/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L187 I suggest not to replace the token when tokenizer and decoder vocabularies are the same. Here is my code: https://colab.research.google.com/drive/1IR8cwVjkflJhj0e7te_iAdYfuNlKDVzr?usp=sharing <|||||>Thanks for the code-snippet! I haven't been able to reproduce on the other template examples (e.g. https://discuss.huggingface.co/t/how-to-create-wav2vec2-with-language-model/12703). Will look more in-depth as to why the exception is being thrown for the use case in the Colab!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @voidful, sorry about the delayed reply. I've taken a deeper look into your issue - it looks as though there is a mis-match between the tokeniser and LM's vocabularies (12305 tokens to be exact): https://colab.research.google.com/drive/1v1qd4CUdSXKmrSYIMqMzMk_KCUMfMWu9?usp=sharing For LM boosted beam-search decoding for CTC, we need the vocabulary of the LM to match that of the tokeniser one-to-one. You can ensure this by training your LM using the same method that you use to train the Wav2Vec2 tokeniser. You then shouldn't have to override the method `decoder._alphabel.labels`: the vocabularies should already match (barring the special tokens). See this example for creating a tokeniser: https://github.com/sanchit-gandhi/seq2seq-speech/blob/main/get_ctc_tokenizer.py And this example for creating a corresponding LM: https://github.com/sanchit-gandhi/seq2seq-speech/blob/main/get_ctc_ngram.py This blog also explains succinctly how one can train and instantiate an LM: https://huggingface.co/blog/wav2vec2-with-ngram<|||||>> Hey @voidful, sorry about the delayed reply. I've taken a deeper look into your issue - it looks as though there is a mis-match between the tokeniser and LM's vocabularies (12305 tokens to be exact): https://colab.research.google.com/drive/1v1qd4CUdSXKmrSYIMqMzMk_KCUMfMWu9?usp=sharing > > For LM boosted beam-search decoding for CTC, we need the vocabulary of the LM to match that of the tokeniser one-to-one. 
You can ensure this by training your LM using the same method that you use to train the Wav2Vec2 tokeniser. You then shouldn't have to override the method `decoder._alphabel.labels`: the vocabularies should already match (barring the special tokens). > > See this example for creating a tokeniser: https://github.com/sanchit-gandhi/seq2seq-speech/blob/main/get_ctc_tokenizer.py > > And this example for creating a corresponding LM: https://github.com/sanchit-gandhi/seq2seq-speech/blob/main/get_ctc_ngram.py > > This blog also explains succinctly how one can train and instantiate an LM: https://huggingface.co/blog/wav2vec2-with-ngram I see, the reason is that I use a bpe vocabulary to train the ctc model, it will not be match to KenLM, so I have to patch the vocabulary to make sure not deleting the bpe token.
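As a quick way to see the mismatch being discussed above, one can diff the two vocabularies directly. A sketch relying on pyctcdecode's private `_alphabet` attribute, so treat it as a debugging aid rather than a stable API; the checkpoint and LM paths are placeholders:

```python
from transformers import AutoProcessor
from pyctcdecode import build_ctcdecoder

processor = AutoProcessor.from_pretrained("path/to/wav2vec2-finetuned-checkpoint")  # placeholder
vocab = processor.tokenizer.get_vocab()
labels = [tok for tok, _ in sorted(vocab.items(), key=lambda kv: kv[1])]
decoder = build_ctcdecoder(labels=labels, kenlm_model_path="path/to/lm.arpa")  # placeholder

tokenizer_tokens = set(vocab)
decoder_tokens = set(decoder._alphabet.labels)
print("in tokenizer but not in decoder:", tokenizer_tokens - decoder_tokens)
print("in decoder but not in tokenizer:", decoder_tokens - tokenizer_tokens)
```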
transformers
18,230
closed
Translation/debugging
# What does this PR do? * added debugging.mdx * updated _toctree.yml See issue: [#17459](https://github.com/huggingface/transformers/issues/17459) ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @omarespejel @sgugger @mfumanelli
07-21-2022 08:31:21
07-21-2022 08:31:21
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,229
closed
start from 1.12, torch_ccl is renamed as oneccl_bindings_for_pytorch …
…and should be imported before use Signed-off-by: Wang, Yi A <[email protected]> # What does this PR do? When running transformers with torch 1.12, oneccl (version 1.12) should be pip installed as well to enable DDP fine-tuning on CPU: python -m pip install oneccl_bind_pt==1.12.0 -f https://developer.intel.com/ipex-whl-stable From 1.12.0 the module name is changed to oneccl_bindings_for_pytorch and it should be imported before use, or else an error will happen. Fixes # (issue) as described above. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Library: - trainer: @sgugger
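A sketch of the kind of version-guarded import the description above calls for (module names as in the PR description; the exact code path changed inside transformers may differ):

```python
try:
    # torch >= 1.12: the oneccl binding was renamed
    import oneccl_bindings_for_pytorch  # noqa: F401
except ImportError:
    # older torch/oneccl releases still ship the binding under the previous name
    import torch_ccl  # noqa: F401

import torch.distributed as dist

assert dist.is_available()
# with the binding imported, the "ccl" backend is registered and DDP on CPU can be set up, e.g.:
# dist.init_process_group(backend="ccl", rank=rank, world_size=world_size)
```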
07-21-2022 08:20:42
07-21-2022 08:20:42
@yao-matrix @liangan1 please review<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger document has been uploaded<|||||>Hi @sgugger, this fix is aligned with what we do in the accelerate PR; without the correct module import, DDP could not work with the CCL backend<|||||>@sgugger thanks for the careful review. doc is updated based on your comment
transformers
18,228
closed
VisualBERT, visual feature projection.
The default implementation takes in 1x197x768 visual features and gives an error while multiplying 197x768 with 2048x768 (or 1021/512 x 768 depending upon the model used). Do we really need to modify the inner visual projection code for VisualBERT? Feels weird. Can someone help, please?
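One possible way around this, not from the report itself, is to project the ViT-style features to the checkpoint's expected `visual_embedding_dim` before passing them in, rather than touching VisualBERT's internal projection. A rough sketch (the checkpoint name and the extra linear layer are illustrative assumptions):

```python
import torch
from transformers import VisualBertModel

model = VisualBertModel.from_pretrained("uclanlp/visualbert-vqa-coco-pre")

vit_features = torch.randn(1, 197, 768)  # e.g. ViT patch embeddings: (batch, regions, 768)

# map the 768-dim features to whatever the checkpoint expects (2048/1024/512 depending on the model)
project = torch.nn.Linear(768, model.config.visual_embedding_dim)
visual_embeds = project(vit_features)

# these are then combined with the usual text inputs before calling the model
visual_inputs = {
    "visual_embeds": visual_embeds,
    "visual_attention_mask": torch.ones(visual_embeds.shape[:-1], dtype=torch.long),
    "visual_token_type_ids": torch.ones(visual_embeds.shape[:-1], dtype=torch.long),
}
```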
07-21-2022 08:16:29
07-21-2022 08:16:29
cc @gchhablani <|||||>@Shiv681991 Can you please share some code examples of what you are trying to do? It'll help me replicate and understand the issue better.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,227
closed
Can't load tokenizer for longt5-xl
### System Info transformers version: 4.20.0 Platform: Linux-4.15.0-135-generic Python version: 3.8.13 PyTorch version (GPU?): torch==1.10.2+cu113 Using GPU in script?: no Using distributed or parallel set-up in script?: no ### Who can help? @patrickvonplate ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I'm using the example provided on this page: https://huggingface.co/google/long-t5-tglobal-xl: ``` from transformers import AutoTokenizer, LongT5Model tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-xl") ``` ### Expected behavior A tokenizer for a longt5-xl works
07-21-2022 07:43:44
07-21-2022 07:43:44
I have the same issue: When I try to load the model with its tokenizer I get the following error message: ``` OSError: Can't load tokenizer for 'google/long-t5-tglobal-xl'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'google/long-t5-tglobal-xl' is the correct path to a directory containing all relevant files for a T5TokenizerFast tokenizer. ```<|||||>Indeed, it seems the tokenizer files were not uploaded to that repository. Pinging @stancld, could you mention which tokenizer files should be used here? I'm happy to add these to `google`'s repositories. <|||||>@LysandreJik AFAIK, the `LongT5` models use the same tokenizer as the `T5` model. I'd, therefore, just copy the `tokenizer.json` config e.g. from `long-t5-tglobal-large` to the XL repo, and it should work as expected.<|||||>Sounds good, I'll take care of that. Thanks!<|||||>Should work now, this was the only repository that needed to be updated (was lacking the tokenizer files). Feel free to close this issue if your problem is solved!
transformers
18,226
closed
Fix `TFSwinSelfAttention` to have relative position index as non-trainable weight
# What does this PR do? This PR fixes `TFSwinSelfAttention` to have `relative_position_index` as a non-trainable weight. ## Problem When trying to convert `SwinModel` to `TFSwinModel` by using `TFSwinModel.from_pretrained(weight_path, config, from_pt=True)`, I faced the warning below: ``` Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFSwinModel: ['encoder.layers.2.blocks.0.attention.self.relative_position_index', 'encoder.layers.2.blocks.1.attention.self.relative_position_index', 'encoder.layers.2.blocks.6.attention.self.relative_position_index', 'encoder.layers.2.blocks.7.attention.self.relative_position_index', ... ``` **I checked that `SwinModel` has those keys in its weights while `TFSwinModel` hasn't.** `SwinModel` registers this value as a non-trainable weight by using `self.register_buffer`, but in `TFSwinModel` it was just assigned as a class member (`self.relative_position_index = tf.reduce_sum(...)`). ## Fix I added `relative_position_index` as a non-trainable parameter by using `self.add_weight` in `build()`, so that `relative_position_index` gets a proper key name in the `model.weights` list. I checked that the conversion that previously failed is successfully done after applying this fix. I also tried to just change `self.relative_position_index` to `tf.Variable(..., trainable=False)`, but it didn't work due to the key name. This would set the key name as `relative_position_index:0`, not like `tf_swin_model/swin/encoder/layers.0/.../self/relative_position_index:0`. ## Review This PR is related to Swin Transformer and TensorFlow. TensorFlow: @LysandreJik
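A shortened sketch of the fix described above (shapes and initializer details are simplified placeholders; the real implementation precomputes the relative position index from the window size):

```python
import tensorflow as tf


class TFSwinSelfAttentionSketch(tf.keras.layers.Layer):
    def __init__(self, window_size, **kwargs):
        super().__init__(**kwargs)
        self.window_size = window_size

    def build(self, input_shape):
        num_positions = self.window_size[0] * self.window_size[1]
        # registered through add_weight with trainable=False so it shows up in model.weights
        # under ".../attention/self/relative_position_index", matching the PyTorch buffer name
        self.relative_position_index = self.add_weight(
            name="relative_position_index",
            shape=(num_positions, num_positions),
            dtype=tf.int32,
            trainable=False,
            initializer=tf.keras.initializers.Zeros(),  # placeholder; the real index is precomputed
        )
        super().build(input_shape)
```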
07-21-2022 07:01:00
07-21-2022 07:01:00
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger Adding you for a final TF review before merging
transformers
18,225
closed
Add canine in documentation_tests_file
# What does this PR do? modeling_canine has doc test setup by not included in documentation_tests.txt , this PR adds it <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #16292 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-21-2022 06:11:05
07-21-2022 06:11:05
_The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh Requesting your review.<|||||>Sorry, I missed this PR. Looking at it now.<|||||>@oneraghavan , the doctest would fail for `canine` at this moment. We have to add the expected values (loss or some outputs). Would you like to follow the changes in PR #16441 for `modeling_longformer.py`, more precisely in `LongformerForSequenceClassification` and `LongformerForTokenClassification`? See those changes [here](https://github.com/huggingface/transformers/pull/16441/files). Don't hesitate if you have any questions. Thank you!<|||||>@ydshieh Will add those changes. Request you to reopen this PR.<|||||>@oneraghavan Thank you 🤗. I reopened the PR. Before continuing the work, don't forget to update your local `main` branch first, then rebase your working branch on the `main` branch.<|||||>@ydshieh I request you to reopen the PR again. I have fixed the checkpoints, so the tests should pass now.<|||||>@ydshieh I think this is good to merge. <|||||>Yes, agreed @ydshieh . In general any result that is LABEL_0 or a list of those should really not be included.<|||||>@ydshieh I agree with the part about label_x not being very meaningful. Duplicating the function will make later debugging hard. I will remove the test for token classification. @sgugger Can we make the add_code_sample_docstrings decorator treat the expected output as optional? That is, if the function does not have an expected output, just don't validate the expected output?<|||||>I don't think we have an easy way to ignore the doctest in this case. The `>>> predicted_tokens_classes` part in `PT_TOKEN_CLASSIFICATION_SAMPLE` in the file `src/transformers/utils/doc.py` requires some expected outputs for `predicted_tokens_classes`. If there is none, the test just fails. ```python >>> predicted_tokens_classes {expected_output} ```<|||||>@ydshieh @sgugger Can we add a parameter to add_code_sample_docstrings and leave its default as None? Then, in the places where we need to use a custom sample, we can pass it from there. The function definition will look like this: def add_code_sample_docstrings( *docstr, processor_class=None, checkpoint=None, output_type=None, config_class=None, mask="[MASK]", qa_target_start_index=14, qa_target_end_index=15, model_cls=None, modality=None, expected_output="", expected_loss="", code_sample="", ): Inside, I can use code_sample if it has been passed, or look up the code sample from the templates. Let me know if this is okay. <|||||>I don't see how [the latest change](https://github.com/huggingface/transformers/pull/18225/commits/e1e98b9576c6a331f2b74f730ddd08c6f47421d6) is better than just putting the docstring under `CanineForTokenClassification` directly. I will leave @sgugger to give his opinion.<|||||>We don't need any other tooling here. Either the model falls in the "automatic docstring" category or it does not. If it does not, we just write the docstring (with the replace return decorator).
transformers
18,224
closed
Fix typo in add_new_pipeline.mdx
fix typo # What does this PR do? Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
07-21-2022 05:08:48
07-21-2022 05:08:48
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,223
closed
Tensorflow example squad's run_qa.py miss token_type_ids inputs
### System Info transformers==4.20.1, torch==1.9.0, tensorflow2==2.9. ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce the behavior 1. Download SQuAD v1.1 fine-tuned BERT-large weights from: https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad 2. Successfully reproduce the inference F1-score by running this [pytorch example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa.py). 3. But fail to reproduce the inference F1-score by running this [tensorflow2 example](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/question-answering/run_qa.py). 4. The reason is that the tensorflow example misses the token_type_ids inputs. I added this input at the following positions to solve the problem: https://github.com/huggingface/transformers/blob/main/examples/tensorflow/question-answering/run_qa.py#L640 ` tensor_keys = ["attention_mask", "token_type_ids", "input_ids"]` https://github.com/huggingface/transformers/blob/main/examples/tensorflow/question-answering/run_qa.py#L661 ``` eval_inputs = { "input_ids": tf.ragged.constant(processed_datasets["validation"]["input_ids"]).to_tensor(), "token_type_ids": tf.ragged.constant(processed_datasets["validation"]["token_type_ids"]).to_tensor(), "attention_mask": tf.ragged.constant(processed_datasets["validation"]["attention_mask"]).to_tensor(), } ``` https://github.com/huggingface/transformers/blob/main/examples/tensorflow/question-answering/run_qa.py#L681 ``` predict_inputs = { "input_ids": tf.ragged.constant(processed_datasets["test"]["input_ids"]).to_tensor(), "token_type_ids": tf.ragged.constant(processed_datasets["test"]["token_type_ids"]).to_tensor(), "attention_mask": tf.ragged.constant(processed_datasets["test"]["attention_mask"]).to_tensor(), } ``` ### Expected behavior Both the pytorch and tensorflow examples should produce the same F1-score based on [these weights](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad).
07-21-2022 04:42:23
07-21-2022 04:42:23
cc @Rocketknight1 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This issue has been resolved by #18451
transformers
18,222
closed
Running `examples/pytorch/summarization/run_summarization.py --help` gives `TypeError: can only concatenate tuple (not "str") to tuple`
### System Info - `transformers` version: 4.21.0.dev0 - Platform: macOS-12.3-x86_64-i386-64bit - Python version: 3.10.0 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.11.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.5.0 (cpu) - Jax version: 0.3.13 - JaxLib version: 0.3.10 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger @patil-suraj ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Running `examples/pytorch/summarization/run_summarization.py --help` gives `TypeError: can only concatenate tuple (not "str") to tuple` in my environment. 1. `git clone https://github.com/huggingface/transformers` 2. `cd transformers` 3. `pip install .` 4. `pip install -r examples/pytorch/summarization/requirements.txt` 5. `python examples/pytorch/summarization/run_summarization.py --help` ### Expected behavior (full traceback) ``` Traceback (most recent call last): File "/Users/matthewf/transformers/examples/pytorch/summarization/run_summarization.py", line 735, in <module> main() File "/Users/matthewf/transformers/examples/pytorch/summarization/run_summarization.py", line 304, in main model_args, data_args, training_args = parser.parse_args_into_dataclasses() File "/Users/matthewf/.pyenv/versions/3.9.7/envs/transformers/lib/python3.9/site-packages/transformers/hf_argparser.py", line 217, in parse_args_into_dataclasses namespace, remaining_args = self.parse_known_args(args=args) File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 1853, in parse_known_args namespace, args = self._parse_known_args(args, namespace) File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 2062, in _parse_known_args start_index = consume_optional(start_index) File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 2002, in consume_optional take_action(action, args, option_string) File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 1930, in take_action action(self, namespace, argument_values, option_string) File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 1094, in __call__ parser.print_help() File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 2550, in print_help self._print_message(self.format_help(), file) File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 2534, in format_help return formatter.format_help() File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 283, in format_help help = self._root_section.format_help() File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 214, in format_help item_help = join([func(*args) for func, args in self.items]) File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 214, 
in <listcomp> item_help = join([func(*args) for func, args in self.items]) File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 214, in format_help item_help = join([func(*args) for func, args in self.items]) File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 214, in <listcomp> item_help = join([func(*args) for func, args in self.items]) File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 530, in _format_action help_text = self._expand_help(action) File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 626, in _expand_help return self._get_help_string(action) % params File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 697, in _get_help_string help += ' (default: %(default)s)' TypeError: can only concatenate tuple (not "str") to tuple ```
07-20-2022 23:57:15
07-20-2022 23:57:15
Thanks for flagging! The PR mentioned above should fix it.
transformers
18,221
closed
Add support for Sagemaker Model Parallel >= 1.10 new checkpoint API
# What does this PR do? This PR adds support for Sagemaker Model Parallel >= 1.10's new checkpoint API while keeping SMP < 1.10 functionality. * Support loading checkpoints saved with SMP < 1.10 in SMP < 1.10 and SMP >= 1.10 * Support loading checkpoints saved with SMP >= 1.10 in SMP >= 1.10 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
07-20-2022 19:10:31
07-20-2022 19:10:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,220
closed
transformers[tf-cpu] fails because torch isn't installed
### System Info transformers-cli-env crashes, so I'm typing things manually, lmk if you need something specific. ``` Windows 10=19043.1826 Miniconda3=4.12.0 pip=22.1.2 python=3.9.13 cudatoolkit=11.3.1 cudnn=8.1.0.77 tensorboard=2.9.1 tensorboard-data-server=0.6.1 tensorboard-plugin-wit=1.8.1 tensorflow-cpu=2.9.1 tensorflow-estimator=2.9.0 tensorflow-io-gcs-filesystem=0.26.0 ``` ### Who can help? @Rocketknight1 - looks like you are listed for tensorflow. Apologies if this is wrong, or if I misinterpreted something. ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Follow the installation instructions for tf-cpu from the [documentation](https://www.tensorflow.org/install/pip#windows). 1. `conda create -n hf python=3.9 pip` 2. `conda activate hf` 3. `pip install transformers[tf-cpu]` 6. Verify tensorflow install: `python -c "import tensorflow as tf; print(tf.config.list_physical_devices('CPU'))"` 7. Verify the hugging face install `python -c "from transformers import AutoModelForSequenceClassification; model=AutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english')"` It fails complaining that torch is not installed. -- Yes I can create an env with torch, but ... the tf-cpu branch should be working with tensorflow not torch. ``` Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\Mikey\miniconda3\envs\hf\lib\site-packages\transformers\utils\import_utils.py", line 821, in __getattr__ requires_backends(cls, cls._backends) File "C:\Users\Mikey\miniconda3\envs\hf\lib\site-packages\transformers\utils\import_utils.py", line 809, in requires_backends raise ImportError("".join(failed)) ImportError: AutoModelForSequenceClassification requires the PyTorch library but it was not found in your environment. Checkout the instructions on the installation page: https://pytorch.org/get-started/locally/ and follow the ones that match your environment. ``` I have also tried installing CUDA and CuDNN, but it did not have any effect. `conda install -c conda-forge cudatoolkit=11.3 cudnn=8.1.0` ### Expected behavior The tensorflow version of hugging face should work with tensorflow and not raise exceptions about torch being missing.
07-20-2022 18:43:06
07-20-2022 18:43:06
Hi @BrainSlugs83, the issue there is that the `AutoModelForSequenceClassification` is actually a Torch class - if you want the TF version you should use `TFAutoModelForSequenceClassification`. Can you try that change and let me know if it fixes things?<|||||>I see, that's helpful to know -- I think it would fix it (though not for that specific model). -- And we can close this issue as PEBCAK on my part. (Definitely PEBCAK as this is documented, I just didn't notice it when I was trying to figure this out yesterday. 🤦🏻‍♂️ -- I really appreciate the guidance, so thank you @Rocketknight1. 🙂) Though I would like to give the feedback (if you're open to it): 1. It seems like a missed opportunity for the Auto classes (i.e. it seems like the Auto classes are designed to look up the class that you actually need and hand that back to you, so as to promote code reuse.) Therefore, I feel like the auto classes *should* be able to know the difference and just hand you back a TF specific class if you're using TF or a Torch specific class if you're using Torch... Because, as-is, this prevents code-reuse (i.e. I can't share the same code between the two frameworks as they have different class names.) 2. At the very least, it seems like the error message should be telling me to use a different class name, and not to be reinstalling my dev environment and switching ML stacks. 😅 Thank you again though -- I really appreciate the hand holding here!<|||||>@BrainSlugs83 Honestly, we like the idea! I'm going to draft a PR - I'll link you when it's ready.<|||||>@BrainSlugs83 PR is open at #18280!
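For reference, a minimal sketch of the TF-only usage suggested above, using the checkpoint from the original report (assumes a TensorFlow-enabled environment):

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# TF-only environments need the TF* auto class; the plain AutoModel* classes are the PyTorch ones.
model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint)

inputs = tokenizer("This movie was great!", return_tensors="tf")
outputs = model(**inputs)
print(outputs.logits)
```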
transformers
18,219
closed
Tokeniser support in java
### Feature request Currently, I have tested the sentence-transformers paraphrase-multilingual-MiniLM-L12-v2 model in Python. The model seems to be performing very well. I want to use the model in Java, so I converted it to an ONNX model. But I could not find a way to use the tokeniser in Java, or some equivalent tokeniser library in Java. So I would like to know whether there is a way to use the tokeniser in Java. ### Motivation Tokeniser support in Java ### Your contribution .
07-20-2022 17:12:31
07-20-2022 17:12:31
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,218
closed
Generate: validate arguments
# What does this PR do? NOTE: this PR is very experimental, feel free to trash it in the review process :) A common cause for issues in `generate` is around it not behaving as expected, as arguments can be silently ignored as part of the selected generation submethod (greedy_search, sample, ...). Typos also often fly under the radar, as the method accepts `**model_kwargs`, which in turn are passed to models that also accept `**kwargs`. This PR adds argument validation to `generate` in two separate steps: 1. `model_kwargs` are verified as soon as the method is called. Only arguments that the model actually uses in `prepare_inputs_for_generation` or in its forward pass are accepted. This means that typos are caught immediately. The exception enumerates all arguments that triggered this failed check, so the user can correct them. 2. Before calling the appropriate generate submethod, which is picked from the arguments, checks that all passed arguments will actually be used. If the user passes an argument that is not used in that particular submethod, throws an exception indicating the submethod that was triggered and the unaccepted arguments, so the user can fix either problem (correct the submethod or correct the arguments). Although I think the checks are super useful, the code around it is not the prettiest. The first check has some logic for edge cases, and the second case requires passing the list of methods that will be called before the submethod in question. The PR is heavily commented in GH, feel free to cast your judgment! P.S.: (seemingly) unrelated accelerate tests are failing in `run_examples_torch` ### Related issues - https://github.com/huggingface/transformers/issues/18130 - https://github.com/huggingface/transformers/pull/17196 - (many other issues where users were confused because they were trying to use certain arguments that had no effect on the picked submethod)
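For illustration, a simplified sketch (not the exact library code) of the first validation step described above, comparing the passed `model_kwargs` against the model's `prepare_inputs_for_generation` and forward signatures via `inspect` (TF models would expose `call` instead of `forward`, and models whose forward accepts `**kwargs` need extra handling):

```python
import inspect


def validate_model_kwargs_sketch(model, model_kwargs):
    """Flag model_kwargs that neither prepare_inputs_for_generation nor forward accepts."""
    accepted = set(inspect.signature(model.prepare_inputs_for_generation).parameters)
    accepted |= set(inspect.signature(model.forward).parameters)

    unused = [key for key, value in model_kwargs.items() if value is not None and key not in accepted]
    if unused:
        raise ValueError(
            f"The following `model_kwargs` are not used by the model: {unused} "
            "(note: typos in the generate arguments will also show up in this list)"
        )
```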
07-20-2022 17:04:22
07-20-2022 17:04:22
_The documentation is not available anymore as the PR was closed or merged._<|||||>> For 2, the way you chose feels very very magical with lots of ad-hoc code that is going to be hard to maintain. Yeah, I agree, that was the number 1 reason why I left so many comments and caveats. It works but would be annoying to maintain. (@sgugger) If I got it right, the suggestion was to pop used arguments from `generation_inputs` as we call functions, correct? Something like `consume_arguments(generation_inputs, <function that was just called>)` after most calls, with a small validation function at the end of generate? Meanwhile, I'm going to do as suggested, and move the model kwargs validation to its own PR :)<|||||>> Something like consume_arguments(generation_inputs, <function that was just called>) after most calls, with a small validation function at the end of generate No, something more like `result, generation_inputs = <function to call>(generation_inputs)`<|||||>Closing in place of two PRs: - https://github.com/huggingface/transformers/pull/18261 for the model_kwargs validation - TBD for the validation of other arguments, as per comments above
transformers
18,217
closed
BLOOM model parameters mentioned in hub-docs
<h3>The model mentioned on the Hugging Face Hub is actually the 176B-parameter BLOOM model, but it is written as 175B in "docs/source/en/model_doc/bloom.mdx"</h3> <h4>Visit the link below to confirm</h4> [link](https://huggingface.co/docs/transformers/model_doc/bloom) ![image](https://user-images.githubusercontent.com/63394104/179976308-46ecf0d0-41b8-44e8-bc19-c6c1d60d2969.png)
07-20-2022 15:04:01
07-20-2022 15:04:01
transformers
18,216
closed
Support private (Opacus) training of BART by altering BartLearnedPositionalEmbedding's forward method
### Feature request Alter the signature of `BartLearnedPositionalEmbedding`'s forward method to take a `torch.Tensor` instead of a `torch.Size` input. ### Motivation This will support private fine-tuning of BART via DP-SGD in Opacus. To use Opacus on a custom `nn.Module` like `BartLearnedPositionalEmbedding`, there is a fairly reasonable assumption that layers take tensors as input. This assumption falls over with `BartLearnedPositionalEmbedding` since it takes a `torch.Size` input instead. In particular, `opacus/grad_sample/grad_sample_module.py` line 190 (the `capture_activations_hook` method) tries to detach the input from the device via: `module.activations.append(forward_input[0].detach())` If we pass the tensor instead, we can start fine-tuning BART-type summarization models with differential privacy. ### Your contribution A few lines of code need to be changed in `modeling_bart.py`. In particular, the `forward` signature of `BartLearnedPositionalEmbedding.forward()` and references to this method. I already have a change with BART-related tests passing. More than happy to create a PR :)
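For illustration, a sketch of the proposed signature change (simplified from the original module; the `offset` handling is kept only to show the idea and is not presented as the final implementation):

```python
import torch
import torch.nn as nn


class BartLearnedPositionalEmbeddingSketch(nn.Embedding):
    """Sketch of the proposed signature: forward receives the input tensor itself,
    so per-sample gradient hooks (e.g. Opacus) can call .detach() on the input."""

    def __init__(self, num_embeddings: int, embedding_dim: int):
        self.offset = 2  # Bart reserves extra position ids; kept here only for illustration
        super().__init__(num_embeddings + self.offset, embedding_dim)

    def forward(self, input_ids: torch.Tensor, past_key_values_length: int = 0):
        # Shapes are read from the tensor instead of receiving a torch.Size object.
        bsz, seq_len = input_ids.shape[:2]
        positions = torch.arange(
            past_key_values_length,
            past_key_values_length + seq_len,
            dtype=torch.long,
            device=self.weight.device,
        )
        return super().forward(positions + self.offset)
```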
07-20-2022 14:36:14
07-20-2022 14:36:14
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,215
closed
[Don't merge] Debug testing
# What does this PR do? Debug
07-20-2022 13:29:44
07-20-2022 13:29:44
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,214
closed
Save and load
Hello community, I am having the same problem that has been described before when saving and loading a fine-tuned model using transformers and tensorflow. I have used save_pretrained, save_weights and model.save with save_format=tf. I have been able to load the model with from_pretrained, but it loads no weights, and when I perform evaluation the performance is far lower than it was right after training, while the fine-tuned model was still in memory. You can check my code on GitHub at LeninGF/clasificaion_robos_fge in model_train_huggingface.ipynb and evaluate Notebook.ipynb
07-20-2022 13:07:32
07-20-2022 13:07:32
cc @Rocketknight1 @gante<|||||>Hi @LeninGF 👋 I had a look into your notebooks but they are very long, which makes it very hard to pin the problem. Would you be able to share a short notebook (as short as possible) where the problem can be reproduced? Thanks :)<|||||>Hi Huggingface/Transformers, I will do it. The only problem is that I will use a dataset different from the one I am working with because of privacy policy... Give me some hours to upload it.<|||||>Hi Joao, you can check my training code here: https://colab.research.google.com/gist/LeninGF/89234ab4ba45147d34b8e8657caff761/model_train_huggingface_gitnew.ipynb I think that the problem should happen with any dataset used. I am using a multi-labelled dataset. For company reasons I am not yet able to share it. Please let me know if you would need a sample of it to reproduce the problem. The following colab shows how I am trying to train the model: https://colab.research.google.com/gist/LeninGF/89234ab4ba45147d34b8e8657caff761/model_train_huggingface_gitnew.ipynb The following colab shows how I am trying to load the weights of the trained model to test it again with the test set in a new colab notebook: https://colab.research.google.com/gist/LeninGF/08b2824b73692134ec27979a7e6011ea/testingsavedfthfmodel.ipynb You can reach me at ***@***.*** too. The problem is as follows: it does not matter how I train the model. While the notebook where it was trained is active, you can see that model.evaluate(test_dataset) achieves a satisfactory 0.8 accuracy (even though there is some overfitting). However, once I have saved the model and I try to load it again, it does not work, and you can see that repeating the weights load, model compile and model evaluate gives me an accuracy of 0.08. Thanks for your kind help. If it is not too much bother, I am trying to replicate this problem using the tweet emotion dataset that huggingface has; I can send you the gist if you agree. I have already trained the model and I am about to test whether the downloaded model works. Best regards, Lenin Falconí Estrada<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,213
closed
Change to FlavaProcessor in PROCESSOR_MAPPING_NAMES
# What does this PR do? `FLAVAProcessor` in `PROCESSOR_MAPPING_NAMES` should be `FlavaProcessor`. Not getting problem when using `AutoProcessor.from_pretrained`, but `PROCESSOR_MAPPING[FlavaConfig]` will fail ### Errors ```python from transformers import PROCESSOR_MAPPING, FlavaConfig, CLIPConfig, LayoutLMv2Config processor_types = PROCESSOR_MAPPING[CLIPConfig] print(processor_types) processor_types = PROCESSOR_MAPPING[LayoutLMv2Config] print(processor_types) # This fails processor_types = PROCESSOR_MAPPING[FlavaConfig] print(processor_types) ``` with errors ```bash Traceback (most recent call last): File "C:\Users\33611\Desktop\Project\transformers\temp.py", line 9, in <module> processor_types = PROCESSOR_MAPPING[FlavaConfig] File "C:\Users\33611\Desktop\Project\transformers\src\transformers\models\auto\auto_factory.py", line 565, in __getitem__ return self._load_attr_from_module(model_type, model_name) File "C:\Users\33611\Desktop\Project\transformers\src\transformers\models\auto\auto_factory.py", line 579, in _load_attr_from_module return getattribute_from_module(self._modules[module_name], attr) File "C:\Users\33611\Desktop\Project\transformers\src\transformers\models\auto\auto_factory.py", line 539, in getattribute_from_module return getattribute_from_module(transformers_module, attr) File "C:\Users\33611\Desktop\Project\transformers\src\transformers\models\auto\auto_factory.py", line 539, in getattribute_from_module return getattribute_from_module(transformers_module, attr) File "C:\Users\33611\Desktop\Project\transformers\src\transformers\models\auto\auto_factory.py", line 539, in getattribute_from_module return getattribute_from_module(transformers_module, attr) [Previous line repeated 986 more times] File "C:\Users\33611\Desktop\Project\transformers\src\transformers\models\auto\auto_factory.py", line 538, in getattribute_from_module transformers_module = importlib.import_module("transformers") File "C:\Users\33611\miniconda3\envs\py39\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1004, in _find_and_load File "<frozen importlib._bootstrap>", line 157, in __enter__ File "<frozen importlib._bootstrap>", line 183, in _get_module_lock File "<frozen importlib._bootstrap>", line 59, in __init__ RecursionError: maximum recursion depth exceeded while calling a Python object Process finished with exit code 1 ```
07-20-2022 10:05:13
07-20-2022 10:05:13
_The documentation is not available anymore as the PR was closed or merged._<|||||>Test error is unrelated - merge now ``` error: failed to fetch some objects from 'https://user:[email protected]/__DUMMY_TRANSFORMERS_USER__/test-trainer-step.git/info/lfs ```
transformers
18,212
closed
Private model usage problem
I upload a private model for myself, and when I want to use it by “AutoModel.from_pretrained” there appears a error as I show bleow. I have used huggingface-cli login with the access token with read grant and use “trust_remote_code=True” as it recommands but it still has error 401. How can I use my private model? Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision. Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision. Could not locate the model.py inside micktsai/resnet50_try. Traceback (most recent call last): File “test.py”, line 9, in model = AutoModel.from_pretrained(“micktsai/resnet50_try”, trust_remote_code=True,use_auth_token=True) File “C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\models\auto\auto_factory.py”, line 441, in from_pretrained pretrained_model_name_or_path, module_file + “.py”, class_name, **kwargs File “C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\dynamic_module_utils.py”, line 382, in get_class_from_dynamic_module local_files_only=local_files_only, File “C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\dynamic_module_utils.py”, line 239, in get_cached_module_file use_auth_token=use_auth_token, File “C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\utils\hub.py”, line 292, in cached_path local_files_only=local_files_only, File “C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\utils\hub.py”, line 495, in get_from_cache _raise_for_status(r) File “C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\utils\hub.py”, line 418, in _raise_for_status f"401 Client Error: Repository not found for url: {response.url}. " transformers.utils.hub.RepositoryNotFoundError: 401 Client Error: Repository not found for url: [https://huggingface.co/micktsai/resnet50_try/resolve/main/model.py 1](https://huggingface.co/micktsai/resnet50_try/resolve/main/model.py). If the repo is private, make sure you are authenticated. C:\Users\User\Downloads>py test.py Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision. Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision. Could not locate the model.py inside micktsai/resnet50_try. 
Traceback (most recent call last): File “test.py”, line 10, in “micktsai/resnet50_try”, use_auth_token=True, trust_remote_code=True) File “C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\models\auto\auto_factory.py”, line 441, in from_pretrained pretrained_model_name_or_path, module_file + “.py”, class_name, **kwargs File “C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\dynamic_module_utils.py”, line 382, in get_class_from_dynamic_module local_files_only=local_files_only, File “C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\dynamic_module_utils.py”, line 239, in get_cached_module_file use_auth_token=use_auth_token, File “C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\utils\hub.py”, line 292, in cached_path local_files_only=local_files_only, File “C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\utils\hub.py”, line 495, in get_from_cache _raise_for_status(r) File “C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\utils\hub.py”, line 418, in _raise_for_status f"401 Client Error: Repository not found for url: {response.url}. " transformers.utils.hub.RepositoryNotFoundError: 401 Client Error: Repository not found for url: [https://huggingface.co/micktsai/resnet50_try/resolve/main/model.py 1](https://huggingface.co/micktsai/resnet50_try/resolve/main/model.py). If the repo is private, make sure you are authenticated.
07-20-2022 09:22:05
07-20-2022 09:22:05
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Same problem, something wrong with private models with unsupported architecture. It doesn't see modeling file.
transformers
18,211
closed
The problem in BATCH generation of GPT model
When I tried to use GPT models (including GPT-2, GPT-NEO-2.7B, GPT-J-6B, GPT-NEOX) to generate text, I found some strange results. When I set the batch size to 1, all results are normal. BUT when I set the batch_size to more than 1, such as 4, 8, ..., the text generated by GPT-J-6B and GPT-NEOX is abnormal and contains a large number of repeated consecutive letters or words, for example "AAAAAAAAAAAAAAA" or "The The The The The The The The The". I cannot find the root cause of this problem. Could you please give some suggestions to solve it? Thank you!
07-20-2022 08:09:10
07-20-2022 08:09:10
Hi, See this thread for batched generation: https://github.com/huggingface/transformers/pull/7552#issue-714062850<|||||>> Hi, > > See this thread for batched generation: [#7552 (comment)](https://github.com/huggingface/transformers/pull/7552#issue-714062850) Thanks a lot! And could you please tell me whether the current version supports correct sampling generation with batched setting? Thanks! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing this as the issue seems resolved.
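For reference, a minimal sketch of batched generation with left padding as suggested in the linked thread (shown here with `gpt2`; the prompts and `max_new_tokens` value are arbitrary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Decoder-only models should be padded on the LEFT for batched generation,
# and GPT-style tokenizers need a pad token to be set explicitly.
tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left")
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained("gpt2")
model.config.pad_token_id = model.config.eos_token_id

prompts = ["Hello, my name is", "The weather today is"]
inputs = tokenizer(prompts, return_tensors="pt", padding=True)

outputs = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_new_tokens=20,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```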
transformers
18,210
closed
TFAutoModel does not work with gpt2 and .generate
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu113 (False) - Tensorflow version (GPU?): 2.8.2 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no ### Who can help? @patil-suraj, @patrickvonplaten, @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import TFAutoModel, AutoTokenizer model = TFAutoModel.from_pretrained("gpt2") tokenizer = AutoTokenizer.from_pretrained("gpt2") tokens = tokenizer(["hey there"], return_tensors='tf') model.generate(input_ids=tokens['input_ids'], attention_mask=tokens['attention_mask']) ``` will return `AttributeError: 'TFBaseModelOutputWithPastAndCrossAttentions' object has no attribute 'logits'` Full stack trace: ``` Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) [<ipython-input-45-eb17c4f14b9f>](https://localhost:8080/#) in <module>() ----> 1 output = model.generate(input_ids=tokens['input_ids'], attention_mask=tokens['attention_mask']) 2 output 3 frames [/usr/local/lib/python3.7/dist-packages/transformers/generation_tf_utils.py](https://localhost:8080/#) in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, attention_mask, decoder_start_token_id, use_cache, output_scores, output_attentions, output_hidden_states, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, **model_kwargs) 594 return_dict_in_generate=return_dict_in_generate, 595 forced_bos_token_id=forced_bos_token_id, --> 596 forced_eos_token_id=forced_eos_token_id, 597 ) 598 [/usr/local/lib/python3.7/dist-packages/transformers/generation_tf_utils.py](https://localhost:8080/#) in _generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, attention_mask, decoder_start_token_id, use_cache, seed, output_scores, output_attentions, output_hidden_states, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, **model_kwargs) 1589 output_scores=output_scores, 1590 return_dict_in_generate=return_dict_in_generate, -> 1591 **model_kwargs, 1592 ) 1593 elif is_sample_gen_mode: [/usr/local/lib/python3.7/dist-packages/transformers/generation_tf_utils.py](https://localhost:8080/#) in greedy_search(self, input_ids, max_length, pad_token_id, eos_token_id, logits_processor, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, **model_kwargs) 2082 # 1st generation step has to be run before to initialize `past` 2083 generated, finished_sequences, next_tokens, cur_len, model_kwargs = greedy_search_body_fn( -> 2084 generated, finished_sequences, input_ids, cur_len, model_kwargs 2085 ) 2086 
[/usr/local/lib/python3.7/dist-packages/transformers/generation_tf_utils.py](https://localhost:8080/#) in greedy_search_body_fn(generated, finished_sequences, next_tokens, cur_len, model_kwargs) 2025 output_hidden_states=output_hidden_states, 2026 ) -> 2027 next_token_logits = outputs.logits[:, -1] 2028 2029 # Store scores, attentions and hidden_states when required AttributeError: 'TFBaseModelOutputWithPastAndCrossAttentions' object has no attribute 'logits' ``` ### Expected behavior Not sure if you consider this to be a bug, but it is a stumbling block for beginners. If you use `TFAutoModel` to load `gpt2` you will get a `TFGPT2Model`. This class has a `generate` method but it doesn't work (because it expects a linear layer to generate logits, i.e. it only works on `TFGPT2LMHeadModel`). I'd argue that it's a bug because if `TFGPT2Model` doesn't support generation, then it shouldn't have a `generate` method. Possible alternative fixes: * Throw an easier-to-understand error in this situation * Make `TFGPT2Model` not implement `generate` * Have `TFAutoModel` return a `TFGPT2LMHeadModel` (though this would be a breaking change)
07-20-2022 08:01:35
07-20-2022 08:01:35
cc @gante as well<|||||>Hi @ehrencrona 👋 The correct class to use for generation with decoder-only models is `TFAutoModelForCausalLM`. You can use it the same way as `TFAutoModel` but, contrarily to it, it has a language modeling head. As for the suggested fixes -- I agree `generate` should not exist here (or better yet, that the error should be informative, as new users might not know which class to use). I've added that to the list of generate goodies to add in the near future :) Thank you for flagging the issue and for the suggestions!
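For reference, a minimal sketch of the suggested fix, using the generation-capable auto class instead of the bare base model:

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")  # has the LM head that generate() needs

tokens = tokenizer(["hey there"], return_tensors="tf")
output_ids = model.generate(
    input_ids=tokens["input_ids"], attention_mask=tokens["attention_mask"]
)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```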
transformers
18,209
closed
Argument inconsistency between processor and tokenizer
### System Info transformers on master ### Who can help? @sgugger who modified the line lastly from git blame ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction In `processing_utils.py`, to use the slow version, one needs to specify `use_fast=False` in `from_pretrained`, https://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/processing_utils.py#L222-L226 while in `tokenization_utils_base.py`, to use the slow version, one needs to specify `from_slow=True` in `from_pretrained` https://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/tokenization_utils_base.py#L1804-L1815 This inconsistency leads to strange usages. For example, when we want to use the slow version of LayoutLMv2 processor, we have to pass both arguments simultaneously: ```python processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", use_fast=False, from_slow=True) ``` ### Expected behavior I suggest we change the option in `processing_utils.py` from `use_fast` to `from_slow`.
07-20-2022 06:45:00
07-20-2022 06:45:00
No, `from_slow` is an internal argument that determines whether the tokenizer should be loaded from slow tokenizer files or a fast tokenizer file. That is why you're not finding it in the documentation, for instance.<|||||>@sgugger Oh thanks. I confused the purpose of `from_slow`. Now it looks like I can only use `use_fast=False` to get the slow version.
transformers
18,208
closed
length_penalty behavior is inconsistent with documentation
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.120+-x86_64-with-glibc2.27 - Python version: 3.9.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu116 (True) - Tensorflow version (GPU?): 2.9.1 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @patrickvonplaten ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction `length_penalty` in language generation has different effects on the the length of the generation. Sometimes it makes the generation longer, sometimes it makes it shorter. This is very confusing as it is different from what the documentation says. Two previous issues touch on this problem: #4915 #16930 In Bart CNN/DM `length_penalty` **lengthens** the output. ```python from transformers import pipeline summarizer = pipeline("summarization", model='facebook/bart-large-cnn') ARTICLE = """ New York (CNN)When Liana Barrientos was 23 years old, she got married in Westchester County, New York. A year later, she got married again in Westchester County, but to a different man and without divorcing her first husband. Only 18 days after that marriage, she got hitched yet again. Then, Barrientos declared "I do" five more times, sometimes only within two weeks of each other. In 2010, she married once more, this time in the Bronx. In an application for a marriage license, she stated it was her "first and only" marriage. Barrientos, now 39, is facing two criminal counts of "offering a false instrument for filing in the first degree," referring to her false statements on the 2010 marriage license application, according to court documents. Prosecutors said the marriages were part of an immigration scam. On Friday, she pleaded not guilty at State Supreme Court in the Bronx, according to her attorney, Christopher Wright, who declined to comment further. After leaving court, Barrientos was arrested and charged with theft of service and criminal trespass for allegedly sneaking into the New York subway through an emergency exit, said Detective Annette Markowski, a police spokeswoman. In total, Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002. All occurred either in Westchester County, Long Island, New Jersey or the Bronx. She is believed to still be married to four men, and at one time, she was married to eight men at once, prosecutors say. Prosecutors said the immigration scam involved some of her husbands, who filed for permanent residence status shortly after the marriages. Any divorces happened only after such filings were approved. It was unclear whether any of the men will be prosecuted. The case was referred to the Bronx District Attorney\'s Office by Immigration and Customs Enforcement and the Department of Homeland Security\'s Investigation Division. Seven of the men are from so-called "red-flagged" countries, including Egypt, Turkey, Georgia, Pakistan and Mali. Her eighth husband, Rashid Rajput, was deported in 2006 to his native Pakistan after an investigation by the Joint Terrorism Task Force. If convicted, Barrientos faces up to four years in prison. Her next court appearance is scheduled for May 18. 
""" print(summarizer(ARTICLE, max_length=512, min_length=30, do_sample=False, length_penalty=1)) print(summarizer(ARTICLE, max_length=512, min_length=30, do_sample=False, length_penalty=2)) ``` Output: `[{'summary_text': 'Liana Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002. She is believed to still be married to four men, and at one time, she was married to eight men at once.'}]` `[{'summary_text': 'Liana Barrientos, 39, is charged with two counts of "offering a false instrument for filing in the first degree" In total, she has been married 10 times, with nine of her marriages occurring between 1999 and 2002. She is believed to still be married to four men.'}]` In GPT-2 increasing `length_penalty` **shortens** the output. ```python from transformers import pipeline generator = pipeline('text-generation', model='gpt2', device=5) print(generator("The White man worked as a", max_length=512, length_penalty=1)) print(generator("The White man worked as a", max_length=512, length_penalty=2)) ``` Output: `[{'generated_text': 'The White man worked as a receptionist for the British Consulate in Cairo and returned to Alexandria, where he was promoted to a military officer in 1953; in 1960 he worked as a consular officer, serving as secretary of state to President John F. Kennedy, and as a consul. In a conversation last fall, his grandfather told his sister Catherine, "We are going to make sure you are well."\n\nThe family is now living in a modest apartment, in a small part of town in the suburb of Alexandria.\n\n"We love you, and we love you," Catherine said, before she walked the five miles to the airport, where her husband, the first Egyptian president, has a $1 million plane ticket. The couple are still in touch with their three children, and will visit one next week.\n\nIn addition to the family, there are three other family members, one of whom has spent years as a caretaker for the hospital, which was the site of the largest civil conflict ever seen in modern Egypt. One was a nurse and family friend, who was paralyzed in a July 1975 accident.\n\n"It\'s just unbelievable," he told a reporter.\n\nThe funeral for one of the women who took her life last summer was held Wednesday at a church in the town of Dikun.\n\nIn his own words, the young woman\'s death marks a departure from his life.\n\n"I don\'t know if people would say I\'m the most important person in the world: I\'m the most beautiful person," he said. "But I did, but I will never forget that."'}]` `[{'generated_text': "The White man worked as a mechanic.\n\nHe is said to have been very close with the White man's wife and three children. Other information came through during the early years of the investigation.\n\nPolice said they had asked the man to tell his story to police in order to gain information related to the white man's death.\n\nA source close to the father said the motive for the killings is still being investigated and the suspect was not a white man."}]` ### Expected behavior Effect of `length_penalty` to be consistent with [documentation](https://huggingface.co/docs/transformers/v4.20.1/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate.length_penalty). Currently the documentation says: "Exponential penalty to the length. 1.0 means that the beam score is penalized by the sequence length. 0.0 means no penalty. 
Set to values < 0.0 in order to encourage the model to generate longer sequences, to a value > 0.0 in order to encourage the model to produce shorter sequences."
07-20-2022 04:41:39
07-20-2022 04:41:39
cc @gante as well<|||||>Hi @artidoro 👋 Thank you for raising this issue! There are actually two distinct problems, the first one was already on my radar: 1. `length_penalty` is only used with `beam_search`-based generation techniques. `facebook/bart-large-cnn` uses them by default, and `gpt2` doesn't. So, in fact, `length_penalty` has no effect on `gpt2`, the different results you're seeing are a consequence of sampling being on by default for `gpt2` (all these hidden defaults are also going through a deprecation phase 😉 ) 👉 solution: raise warnings/exceptions when these options have no effect (already being worked on) 2. The docstring really describes the opposite of what happens. As described in #4915: larger `length_penalty` -> larger denominator, increasing with output length -> larger score (because it is a negative value), increasing with output length -> benefits long outputs 👉 solution: fix the docstring (@patrickvonplaten FYI) I'll keep this issue open until the 2nd problem gets fixed.<|||||>Confirming point 2.) @gante we could directly fix this here: https://github.com/huggingface/transformers/blob/06d1ba1a55a12b3fb3ca081bdd4f812fda800c37/src/transformers/generation_beam_search.py#L140 as well.
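For illustration, a simplified sketch of the beam-hypothesis scoring being discussed (paraphrasing the referenced line; the numbers are made up only to show the direction of the effect):

```python
def beam_score(sum_logprobs: float, length: int, length_penalty: float) -> float:
    # Log-probabilities are negative, so dividing by length ** length_penalty
    # means larger penalties favor LONGER sequences, contrary to the old docstring.
    return sum_logprobs / (length ** length_penalty)


short = beam_score(sum_logprobs=-4.0, length=10, length_penalty=2.0)  # -0.04
long_ = beam_score(sum_logprobs=-8.0, length=40, length_penalty=2.0)  # -0.005 -> higher score, wins
print(short, long_)
```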
transformers
18,207
closed
torch.jit.trace can trace shared weights, no need to clone weights when tracing
### System Info [source code of "tie or clone weights"](https://github.com/huggingface/transformers/blob/8a61fe023430115bb61ec328a29d35571f4fc2c4/src/transformers/modeling_utils.py#L1137) [document](https://huggingface.co/docs/transformers/v4.20.1/en/serialization#torchscript-flag-and-tied-weights) I did a experiment and results showed that `torch.jit.trace` can trace shared weights and use the `TorchScript` for training. Correct me if I was wrong, thx! ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` import torch import torch.nn as nn batch_size = 32 seq_len = 32 emb_size = 128 vocab_size = 32 class Model(nn.Module): def __init__(self): super().__init__() self.emb1 = nn.Embedding(seq_len, emb_size) self.emb2 = nn.Embedding(seq_len, emb_size) def forward(self, x): y1 = self.emb1(x) y2 = self.emb2(x) return (y1, y2) model = Model() model.emb2.weight = model.emb1.weight model.eval() with torch.no_grad(): example_input = torch.randint(vocab_size, [batch_size, seq_len]) example_output = model(example_input) s = example_output[0].size() weight_before_train = model.emb1.weight.clone() print(weight_before_train) # origin model model.train() loss_fn = nn.L1Loss() optimizer = torch.optim.SGD(model.parameters(), 0.1) for _ in range(100): optimizer.zero_grad() inputs = torch.randint(vocab_size, [batch_size, seq_len]) targets = (torch.randn(s), torch.randn(s)) outputs = model(inputs) assert torch.allclose(outputs[0], outputs[1]) loss = loss_fn(targets[0], outputs[0]) + loss_fn(targets[1], outputs[1]) loss.backward() optimizer.step() model.eval() with torch.no_grad(): weight_after_train = model.emb1.weight.clone() print(weight_after_train) assert torch.equal(model.emb1.weight, model.emb2.weight) assert not torch.allclose(weight_before_train, weight_after_train) # traced model traced = torch.jit.trace(model, example_input) traced.eval() with torch.no_grad(): weight_before_train = traced.emb1.weight.clone() print(weight_before_train) traced.train() loss_fn = nn.L1Loss() optimizer = torch.optim.SGD(traced.parameters(), 0.1) for _ in range(100): optimizer.zero_grad() inputs = torch.randint(vocab_size, [batch_size, seq_len]) targets = (torch.randn(s), torch.randn(s)) outputs = traced(inputs) assert torch.allclose(outputs[0], outputs[1]) loss = loss_fn(targets[0], outputs[0]) + loss_fn(targets[1], outputs[1]) loss.backward() optimizer.step() traced.eval() with torch.no_grad(): weight_after_train = traced.emb1.weight.clone() print(weight_after_train) assert torch.equal(traced.emb1.weight, traced.emb2.weight) assert not torch.allclose(weight_before_train, weight_after_train) ``` ### Expected behavior shared weights can be traced
07-20-2022 03:23:11
07-20-2022 03:23:11
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,206
closed
The saved trained albert-base-v2 model does not work properly
### System Info 2022-07-19 15:17:57.094050: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found 2022-07-19 15:17:57.094178: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. WARNING:tensorflow:From C:\Users\19715\anaconda3\lib\site-packages\transformers\commands\env.py:52: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 2022-07-19 15:18:02.167646: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2022-07-19 15:18:02.185125: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found 2022-07-19 15:18:02.187022: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cublas64_11.dll'; dlerror: cublas64_11.dll not found 2022-07-19 15:18:02.188373: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cublasLt64_11.dll'; dlerror: cublasLt64_11.dll not found 2022-07-19 15:18:02.189480: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cusolver64_11.dll'; dlerror: cusolver64_11.dll not found 2022-07-19 15:18:02.190403: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cusparse64_11.dll'; dlerror: cusparse64_11.dll not found 2022-07-19 15:18:02.191308: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudnn64_8.dll'; dlerror: cudnn64_8.dll not found 2022-07-19 15:18:02.191460: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1850] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices... Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.20.1 - Platform: Windows-10-10.0.22000-SP0 - Python version: 3.9.7 - Huggingface_hub version: 0.6.0 - PyTorch version (GPU?): 1.10.2+cu102 (True) - Tensorflow version (GPU?): 2.9.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @vanpelt @arfon @pvl @xeb @LysandreJik ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [x] My own task or dataset (give details below) ### Reproduction The complete code of the project can be found at https://github.com/1gst/CCAC2022/tree/master. Introduction:
1. The `models` folder contains the base model, the `best_model` folder contains the model saved after training, and `net.py` under `models` defines the custom network model.
2. The main function calls `tasker.train(best_model=False, use_fgm=False)`; `best_model` controls whether to load the saved best model (True loads the best model, otherwise the base model is loaded).
3. `tasker.print_model()` prints the model parameters; inside this function you can modify the file name and the path used to load the model. `self.best_config_path` and `self.best_model_path` are the paths of the best model, while `self.config_path` and `self.model_path` are the paths of the base model.
4. To change the model, change the `model` parameter of `__init__()` (`self.init_path`) in the `Tasker` class (`model="albert-base-v2"`).

Problem: While using the albert-base-v2 model, I load the albert-base-v2 base model for training and, after training, use `trainer.save_model()` to save the best model from the training run (into `best_model`). The saved model predicts normally, but when I load it again for further training, the model no longer trains: the F1 value stays at 0, and predictions are invalid after interrupting training. In addition, the saved model prints different parameters every time it is loaded. The same code with a RoBERTa model does not show this behavior, so I think the problem is with how the model is saved or loaded, but my attempts to fix it have not worked. Please help.
### Expected behavior The expectation is to train the albert-base-v2 model, save the best model at the end of training, and then load that model for further training without the training becoming invalid. The reloaded model should also predict normally, its printed parameters should be the same each time, and loading it should not warn that the encoder and other weights inside albert-base-v2 are not initialized.
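As a hypothetical first debugging step (not taken from the reported project), one could check whether the weights actually survive a `save_pretrained`/`from_pretrained` round trip; the path below is a placeholder:

```python
import torch
from transformers import AutoModel

# Illustration only: verify that weights survive a save/load round trip.
model = AutoModel.from_pretrained("albert-base-v2")
model.save_pretrained("./albert-roundtrip")          # placeholder path
reloaded = AutoModel.from_pretrained("./albert-roundtrip")

for (name, p), (_, q) in zip(model.named_parameters(), reloaded.named_parameters()):
    if not torch.allclose(p, q):
        print(f"weight mismatch in {name}")
```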
07-20-2022 03:22:37
07-20-2022 03:22:37
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,205
closed
Split docs on modality
Currently, audio and computer vision lack content or the existing content is mixed with NLP. This PR splits the `toctree` on modality to make it easier to discover content for audio/computer vision while also allowing us to also scale to any additional modalities we want to support. As we create additional content, these new sections make it easier to collect specific content in one place. For example, the upcoming `generate` docs can be placed in the NLP section. This structure can also help us identify gaps in the docs between each modality to ensure documentation is complete. For example, NLP has a page about tokenizers, and we can create a similar page for the other modalities using feature extractors and processors. Other sections include: - General usage for modality-neutral content. - Performance and scalability for content related to large models. - Contribute for how to test, open a PR, and add models/pipelines. After we split the docs, the next step would be to start planning and creating additional content to make the audio/computer vision sections more complete. Looking forward to hearing what you think :)
07-19-2022 22:50:58
07-19-2022 22:50:58
_The documentation is not available anymore as the PR was closed or merged._<|||||>Not really convinced by this as it's now very unclear that all the task tutorials are task-specific tutorials. I liked it better when they were grouped altogether under task. It also backfires from the intent as this shows we don't really support vision (one entry) or speech (two entries) compared to NLP (9 entries). This revamp also puts advanced guides on the top when they should be way lower in the table of contents (such as Benchmarks or Migrating from other packages).<|||||>Thanks for the feedback! 🤗 > Not really convinced by this as it's now very unclear that all the task tutorials are task-specific tutorials. Hmm, do you mean it’s more unclear now because each task tutorial is separated by modality? This seems clearer to me since the section headers are more scannable. > It also backfires from the intent as this shows we don't really support vision (one entry) or speech (two entries) compared to NLP (9 entries). Good perspective, and I totally see what you mean! I think another way to look at it is by creating these new sections, we’re signaling that we plan to create more content for audio/CV. This gives these sections more prominence, which shows we want to focus on audio/CV. So even though it looks pretty bare right now, I think that’s ok since these sections will grow. > This revamp also puts advanced guides on the top when they should be way lower in the table of contents (such as Benchmarks or Migrating from other packages). I don’t think we should put guides lower because they are more advanced. Instead, it may be better to prioritize guides that users are more likely to find useful. For example, I think the Migration/Train with a script guides are pretty useful. This may be a symptom of how I grouped all these guides under General Usage, in which case, we can try breaking up the section and reordering these guides by their utility.<|||||>> Hmm, do you mean it’s more unclear now because each task tutorial is separated by modality? They were all under a "Task" section, which is not the case anymore in your proposal. In NLP, you go from fast tokenizers and multilingual to a task-specific tutorial with no warning to the user. > I don’t think we should put guides lower because they are more advanced. Instead, it may be better to prioritize guides that users are more likely to find useful. Benchmarks or migrating are both advanced and not useful (benchmarks have 0 issues and we are even questioning whether they should stay in the library and pytorch-pretrained-bert ceased to exist a **while** ago). You should check the analytics to be certain, but I'm pretty sure they are very far from the most-visited pages and they are definitely very low on the list of pages we want to nudge the users on.<|||||>Ok I see now! You're worried users won't know the task-specific guides are guides about fine-tuning a model for a task if it is just thrown into the NLP section. I think there are some things we can do to help make this clearer to users (in order of preference): 1. Include an overview page for each modality section explaining what users can expect to find. 2. Update the task-specific guides to have clearer titles like, How to fine-tune for text classification. 3. Create another nested section in each modality that focuses on the task-specific guides. 
> Benchmarks or migrating are both advanced and not useful (benchmarks have 0 issues and we are even questioning whether they should stay in the library and pytorch-pretrained-bert ceased to exist a while ago). For sure! Benchmark, Migration, and Troubleshoot are bottom-3 in page views in the General Usage section. I can bump these out and move them closer to the bottom. <|||||>Option 2 or 3 are good compromise (my preference goes to 3 if nested-ness is not an issue). I'd leave Troubleshoot in the General Usage section (hopefully we can make it better so it gets more views), but yeah, the other two are out of place there IMO. Let's see what other people think as well, @LysandreJik @patrickvonplaten to name a few :-)<|||||>I also think that option 3) sounds like the best approach. I don't have a problem with adding a nesting level.<|||||>I nested the NLP section but it looks a little off since the content inside isn't aligned on the same level (I pinged @mishig25 on this). I didn't add a nested level for the audio and image sections since there's no content in those sections yet, and it might look a little strange. <|||||>Hi team, just wanted to circle back on this and see if there are any more comments or feedback about how the docs are split. Otherwise, I think we're ready to merge! 🙂<|||||>Option 3.) Also looks like the right one to me :-) However I'm not a big fan of "Image" as a title. Could we maybe try to align those sections a bit with how we call the modalities on the Hub: https://huggingface.co/tasks -> so maybe replace "Image" with "Computer Vision"? Wdyt @sgugger @LysandreJik @osanseviero
transformers
18,204
closed
Global RiGL w/ mup
Adding mup transformer configurations to existing GPT2 models
07-19-2022 18:46:57
07-19-2022 18:46:57
transformers
18,203
closed
Update cache for CircleCI tests
# What does this PR do? After PR #18197, we need to create a new cache, otherwise we get some errors, as shown in [this run](https://app.circleci.com/pipelines/github/huggingface/transformers/44104/workflows/1b63ec34-ef95-4678-adc2-773de35342ab/jobs/511895/steps), coming from some checks in `datasets` regarding module imports. Ran all tests with the newly created cache + ran the torch example tests with the new cache loaded --> all pass.
07-19-2022 15:56:51
07-19-2022 15:56:51
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,202
closed
Reduce console spam when using the KerasMetricCallback
Right now, `KerasMetricCallback` calls `model.predict()` while iterating over the input dataset. This results in some unwanted console spam when using metrics that do not call `generate()` (because `predict()` always creates a progress bar). Replacing it with `predict_on_batch` removes the spam and also improves performance of the callback.
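A rough sketch of the idea (illustrative only, not the callback's actual implementation; the function and variable names are made up):

```python
import numpy as np
import tensorflow as tf

def collect_predictions(model: tf.keras.Model, dataset: tf.data.Dataset):
    """Gather predictions batch by batch without a progress bar per batch."""
    outputs = []
    for batch in dataset:
        inputs = batch[0] if isinstance(batch, tuple) else batch
        # model.predict(inputs) prints a progress bar on every call;
        # predict_on_batch runs a single, silent forward pass instead.
        outputs.append(model.predict_on_batch(inputs))
    return np.concatenate(outputs, axis=0)
```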
07-19-2022 15:17:21
07-19-2022 15:17:21
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,201
closed
TF: Add missing cast to GPT-J
# What does this PR do? Adds a missing cast; its absence was breaking the mixed-precision tests (and, for some weird reason, was also causing subsequent tests to fail 🤔). All slow tests pass after this change.
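The PR body does not show the cast itself, so below is a generic, hypothetical illustration (not the GPT-J code) of the kind of dtype mismatch that appears under mixed precision and how an explicit cast resolves it:

```python
import tensorflow as tf

hidden = tf.random.normal((2, 4), dtype=tf.float16)  # activations in float16
bias = tf.zeros((4,), dtype=tf.float32)              # a tensor left in float32

# hidden + bias  # would raise an error: float16 and float32 cannot be added
out = hidden + tf.cast(bias, hidden.dtype)           # cast to the compute dtype
```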
07-19-2022 13:57:02
07-19-2022 13:57:02
_The documentation is not available anymore as the PR was closed or merged._<|||||>Ah, that explains why following tests fail! Changing it 👍
transformers
18,200
closed
[TRACKER] Add BLOOM Meg-DS optimizer states
### Feature request Add BLOOM Meg-DS optimizer state on the Hub. Feature request from: https://twitter.com/Asuna_FPS_/status/1549137254588633093?s=20&t=FhO7Tlv01Gn6r_inZGyBug
07-19-2022 13:20:30
07-19-2022 13:20:30
Currently uploading here: https://huggingface.co/bigscience/bloom-optimizer-states<|||||>Closing since the models have been added on the forementioned repo
transformers
18,199
closed
Exported DeBERTa ONNX model is incorrect
### System Info - `transformers` version: 4.21.0.dev0 - Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0+cu113 (True) ### Who can help? @LysandreJik ### Reproduction __Reproduction__ ```python from pathlib import Path from transformers.onnx import export from transformers import AutoTokenizer, AutoModel, AutoConfig from transformers.models.deberta_v2 import DebertaV2OnnxConfig # load model and tokenizer onnx_path = Path("results/deberta-v2-model.onnx") model_ckpt = "microsoft/deberta-v2-xxlarge" base_model = AutoModel.from_pretrained(model_ckpt) tokenizer = AutoTokenizer.from_pretrained(model_ckpt) onnx_config = DebertaV2OnnxConfig(base_model.config) # export to onnx onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path) ``` __Trace Warnings__ ``` /usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:564: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! q_ids = np.arange(0, query_size) /usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:564: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! q_ids = np.arange(0, query_size) /usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:565: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! k_ids = np.arange(0, key_size) /usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:565: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! k_ids = np.arange(0, key_size) /usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:569: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. rel_pos_ids = torch.tensor(rel_pos_ids, dtype=torch.long) /usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:698: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! 
scale = math.sqrt(query_layer.size(-1) * scale_factor) /usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:752: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). ).repeat(query_layer.size(0) // self.num_attention_heads, 1, 1) /usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:754: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). query_layer.size(0) // self.num_attention_heads, 1, 1 /usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:773: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! scale = math.sqrt(pos_key_layer.size(-1) * scale_factor) /usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:785: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! scale = math.sqrt(pos_query_layer.size(-1) * scale_factor) /usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:786: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if key_layer.size(-2) != query_layer.size(-2): /usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:113: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. output = input.masked_fill(rmask, torch.tensor(torch.finfo(input.dtype).min)) ``` There are some operations which are not torch native(numpy, math) lead to the failure of tracing. e.g. the following graph corresponds to https://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/models/deberta/modeling_deberta.py#L627 https://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/models/deberta/modeling_deberta.py#L633-L634 <img width="437" alt="image" src="https://user-images.githubusercontent.com/44135271/179747651-815a9dd1-8ad6-44e7-9b44-f4d35380fca0.png"> As shown in the graph, the `sqrt` node has been ignored and the value of `scale` is treated as a constant. 
### Expected behavior Correctly export the ONNX model without triggering any `TracerWarning`. For that, the numpy and math ops need to be replaced with natively supported torch ops. I can open a PR for the replacement.
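A simplified sketch in the spirit of the proposed replacement (function names are invented and this is not the exact DeBERTa code):

```python
import numpy as np
import torch

def relative_positions_numpy(query_size: int, key_size: int) -> torch.Tensor:
    # numpy + Python scalars: torch.jit.trace / ONNX export bakes the result
    # in as a constant, so it will not generalize to other sequence lengths.
    q_ids = np.arange(0, query_size)
    k_ids = np.arange(0, key_size)
    return torch.tensor(q_ids[:, None] - k_ids[None, :], dtype=torch.long)

def relative_positions_torch(query_size: int, key_size: int) -> torch.Tensor:
    # torch-native equivalent: the same ops stay in the traced graph.
    q_ids = torch.arange(query_size, dtype=torch.long)
    k_ids = torch.arange(key_size, dtype=torch.long)
    return q_ids[:, None] - k_ids[None, :]
```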
07-19-2022 12:20:49
07-19-2022 12:20:49
I might be suffering from similar issues. See also #18237. A PR would be appreciated @JingyaHuang<|||||>Thanks for reporting @JingyaHuang! Could you take a look at @iiLaurens' PR to see if it fixes your issue?<|||||>Thanks for the PR @iiLaurens, will look at the PR @LysandreJik 👌 .
transformers
18,198
closed
Improve `generate` docstring
# What does this PR do? The generate docstring is not correct, because it has a lot of defaults that read from `model.config` and that is not clearly stated in the method description. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @sgugger @patrickvonplaten I believe this one is for one of you two?
07-19-2022 11:29:52
07-19-2022 11:29:52
_The documentation is not available anymore as the PR was closed or merged._<|||||>> I think it's best to leave the default as they were (since they are ultimately the defaults for the model config) and put a big warning at the top of the arg section of the docstring stating that all of them will be overridden by the model config. What do you think @patrickvonplaten ? Things like `model.config.num_beams` change frequently from model to model. Looking at the 'defaults to 1' was very misleading for me.<|||||>Thanks for the feedback here @JoaoLages! I understand the reason behind your PR and am inclined to merge it as is - would like to get some input from @gante here as well though before merging<|||||>This is a tough one. The change (as it is) is possibly good for generate-savvy users but will make it more confusing for most use cases -- all those config values have their own defaults which are in fact almost always used. We would lose that very useful part of the documentation to make this caveat more visible. In general, we can all agree that defaulting to the config specification is confusing (and a giant source of issues) -- @JoaoLages we are working on a plan to remove them, which is actually the root problem here. This means that documentation changes as a result of this PR will be temporary :) Personally, because of the two paragraphs above, I am more inclined toward @sgugger's suggestion -- the most common situation stays clearly documented, and a temporary warning gets added. @JoaoLages WDYT? <|||||>> In general, we can all agree that defaulting to the config specification is confusing (and a giant source of issues) Totally agree with this statement! > Personally, because of the two paragraphs above, I am more inclined toward @sgugger's suggestion -- the most common situation stays clearly documented, and a temporary warning gets added. @JoaoLages WDYT? The warning would help 👍 <|||||>Awesome, I think we can move forward with it then :) One detail -- this warning should go in FLAX's and TF's docstring as well. If it is not asking too much @JoaoLages, can you copy it to the other frameworks as well? 🙏 <|||||>> Awesome, I think we can move forward with it then :) > > One detail -- this warning should go in FLAX's and TF's docstring as well. If it is not asking too much @JoaoLages, can you copy it to the other frameworks as well? 🙏 Actually,[ the warning is already in the docstring](https://github.com/huggingface/transformers/blob/a68454bdfcc14e40e67502722a4d802a2ae26999/src/transformers/generation_utils.py#L910), right? I guess it is not that visible 😅 <|||||>> Thanks for iterating with us! You were too fast 😂 I also added the changes for TF and FLAX. Opened another PR https://github.com/huggingface/transformers/pull/18432
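For context, a small hedged illustration of the fallback behaviour discussed in this thread; the checkpoint is only an example and the printed value comes from its config:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")
inputs = tok("A long news article ...", return_tensors="pt")

# The docstring says num_beams "defaults to 1", but when the argument is not
# passed, generate() falls back to model.config.num_beams, which is the
# behaviour the added docstring warning describes.
print(model.config.num_beams)                        # 4 for this checkpoint
summary_ids = model.generate(**inputs)               # beam search with config values
summary_ids = model.generate(**inputs, num_beams=1)  # explicit argument wins
```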
transformers
18,197
closed
Use next-gen CircleCI convenience images
# What does this PR do? Use next-gen CircleCI convenience images. From [CircleCI page](https://circleci.com/docs/circleci-images?utm_source=google&utm_medium=sem&utm_campaign=sem-google-dg--emea-en-dsa-maxConv-auth-brand&utm_term=g_-_c__dsa_&utm_content=&gclid=CjwKCAjwrNmWBhA4EiwAHbjEQJ4yXbmT654kFoIgTkjKea44E56-j7BGvVrqOkVAwCq97F_Je6EsohoC0OkQAvD_BwE): *Legacy images with the prefix “circleci/” were deprecated on December 31, 2021. For faster builds, upgrade your projects with next-generation convenience images.* It mentions [the following](https://circleci.com/docs/circleci-images?utm_source=google&utm_medium=sem&utm_campaign=sem-google-dg--emea-en-dsa-maxConv-auth-brand&utm_term=g_-_c__dsa_&utm_content=&gclid=CjwKCAjwrNmWBhA4EiwAHbjEQJ4yXbmT654kFoIgTkjKea44E56-j7BGvVrqOkVAwCq97F_Je6EsohoC0OkQAvD_BwE#next-generation-convenience-images): - Faster spin-up time (but I didn't measure the spin-up time) - Improved reliability and stability There are some tiny things I observed: for example, running new images on GCP VM, I can use arrow up to get to the previous commands.
07-19-2022 09:38:48
07-19-2022 09:38:48
_The documentation is not available anymore as the PR was closed or merged._<|||||>Can you just explain why this change is needed? What's better about them?<|||||>> Can you just explain why this change is needed? What's better about them? Sorry, I forgot to mention them in the description. I updated it. My main motivation is to avoid the deprecated (on December 31, 2021) images).
transformers
18,196
closed
Update docs README with instructions on locally previewing docs
# What does this PR do? This small PR updates the README in `/docs/` with instructions on how to use `doc-builder` to locally preview the documentation before submitting a PR. The current docs say previewing is not possible. However, the `doc-builder` [repo](https://github.com/huggingface/doc-builder#previewing) contains previewing instructions. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Documentation: @sgugger @stevhliu
07-19-2022 07:58:30
07-19-2022 07:58:30
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks again for your contribution!
transformers
18,195
closed
Typo in readme
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-19-2022 07:54:12
07-19-2022 07:54:12
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,194
closed
Add vision example to README
# What does this PR do? The main README was only showing NLP examples, this PR removes the question answering example to replace it with an object detection one. You can see the new README [here](https://github.com/huggingface/transformers/tree/readme_vision).
07-19-2022 07:43:26
07-19-2022 07:43:26
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,193
closed
when i use TFGPT2LMHeadModel, how can i build labels and input_ids?
### System Info When I use TFGPT2LMHeadModel, I don't know how to build input_ids and labels! ### Who can help? @patil-suraj ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction

```python
def encode_example(ds, limit=-1):
    print(len(ds))
    input_ids_list = []
    attention_maks_list = []
    label_list = []
    for row in ds:
        input_ids_list.append(row["input_ids"])
        attention_maks_list.append(row["attention_mask"])
        label_list.append(row["labels"])
    return tf.data.Dataset.from_tensor_slices(
        (input_ids_list, attention_maks_list, label_list)).map(map_example_to_dict)
```

or like this:

```python
def encode_example(ds, limit=-1):
    print(len(ds))
    input_ids_list = []
    attention_maks_list = []
    label_list = []
    for row in ds:
        input_ids_list.append(row["input_ids"][:-1])
        attention_maks_list.append(row["attention_mask"][:-1])
        label_list.append([-100 if k == 1 else k for k in row["labels"][1:]])
    return tf.data.Dataset.from_tensor_slices(
        (input_ids_list, attention_maks_list, label_list)).map(map_example_to_dict)
```

### Expected behavior Who can tell me the input_ids and label format?
07-19-2022 06:01:27
07-19-2022 06:01:27
Hey @Orient12! The `TFGPT2LMHeadModel` works with CLM objectives. To that end, I think the best way for you to understand how it works would be to try it using the following script, which fine-tunes models with the CLM objective: https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
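Not part of the original thread: a minimal sketch of the usual convention, assuming the standard Hugging Face causal-LM setup where `labels` are simply a copy of `input_ids` (the model shifts them internally) and padded positions are masked with -100. Names and data below are illustrative only.

```python
import numpy as np
import tensorflow as tf
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

texts = ["hello world", "transformers are fun"]  # toy data
enc = tokenizer(texts, padding=True, return_tensors="np")

# Labels are the input ids themselves; the model shifts them internally so
# token i is predicted from tokens < i. Padded positions get -100 so the loss
# ignores them.
labels = np.where(enc["attention_mask"] == 1, enc["input_ids"], -100)

ds = tf.data.Dataset.from_tensor_slices(
    ({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]}, labels)
).batch(2)
```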
transformers
18,192
closed
Remove use_auth_token from the from_config method
# What does this PR do? Fixes `TypeError: __init__() got an unexpected keyword argument 'use_auth_token'` in `run_mlm_flax.py`, `run_clm_flax.py`, `run_t5_mlm_flax.py`, `run_summarization_flax.py`, `run_image_classification.py` by removing the `use_auth_token` argument from the `from_config` method. ![imgur](https://i.imgur.com/SVtQTWY.png) ## Who can review? cc potential reviewers: @patrickvonplaten, @sgugger, @patil-suraj
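A condensed, hypothetical illustration of the change (the model name is a placeholder and the affected scripts use their respective Flax/PyTorch auto classes):

```python
from transformers import AutoConfig, AutoModelForMaskedLM

config = AutoConfig.from_pretrained("roberta-base", use_auth_token=None)

# Before: AutoModelForMaskedLM.from_config(config, use_auth_token=None)
#         -> TypeError: __init__() got an unexpected keyword argument 'use_auth_token'
# After:  the kwarg is simply dropped, since from_config builds the model
#         locally and never needs Hub authentication.
model = AutoModelForMaskedLM.from_config(config)
```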
07-19-2022 06:00:13
07-19-2022 06:00:13
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,191
closed
add Decision Transformer ONNX config to Transformers
### Feature request Add a Decision Transformer OnnxConfig to make this model available for conversion. ### Motivation This is part of adding OnnxConfigs for unsupported models: https://huggingface.co/docs/transformers/v4.20.1/en/serialization#exporting-a-model-for-an-unsupported-architecture ### Your contribution I will be submitting a new PR to address the DecisionTransformer model
07-19-2022 00:52:22
07-19-2022 00:52:22
@ChainYo @regisss Issue is here, will add PR once it's in a workable state<|||||>@ChainYo @regisss I am finally starting work on this, sorry about the delay, so I was reading through the code in this PR and using that as an example: https://github.com/huggingface/transformers/pull/14059/files, one question here, I was trying to understand how we determine what goes in the json structure below, I understand about the last config term but it's the terms before it that I was trying to dig into, any insight you guys can provide into this would be most helpful: "camembert": supported_features_mapping( "default", "causal-lm", "sequence-classification", "token-classification", "question-answering", onnx_config_cls=CamembertOnnxConfig, ),<|||||>> @ChainYo @regisss I am finally starting work on this, sorry about the delay, so I was reading through the code in this PR and using that as an example: Hi @skanjila, if you check the associated docs for `Decision Transformer`, you can see that there is no other feature than the default: https://huggingface.co/docs/transformers/model_doc/decision_transformer ![image](https://user-images.githubusercontent.com/50595514/189597188-c1a6d021-3611-4f12-9e5f-8118cf863451.png) I think that for this model, `default` is the only convenient feature. <|||||>@ChainYo I think what you're saying is that the only parameters that are needed are the following as mentioned in the documentation in the configuration section, is that correct? (state_dim = 17, act_dim = 4, hidden_size = 128, max_ep_len = 4096, action_tanh = True, vocab_size = 1, n_positions = 1024, n_embd = 768, n_layer = 3, n_head = 1, n_inner = None, activation_function = 'relu', resid_pdrop = 0.1, embd_pdrop = 0.1, attn_pdrop = 0.1, layer_norm_epsilon = 1e-05, initializer_range = 0.02, summary_type = 'cls_index', summary_use_proj = True, summary_activation = None, summary_proj_to_labels = True, summary_first_dropout = 0.1, scale_attn_weights = True, use_cache = True, bos_token_id = 50256, eos_token_id = 50256, scale_attn_by_inverse_layer_idx = False, reorder_and_upcast_attn = False, **kwargs) Let me know if I am missing anything here<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
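Not from the thread itself: a rough sketch of what such a config might look like. The class name, input names, and dynamic axes are assumptions based on the model's forward signature, not a tested implementation:

```python
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class DecisionTransformerOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # Assumed input names/axes; the model consumes trajectories of
        # states, actions, returns-to-go and timesteps.
        return OrderedDict(
            [
                ("states", {0: "batch", 1: "sequence"}),
                ("actions", {0: "batch", 1: "sequence"}),
                ("returns_to_go", {0: "batch", 1: "sequence"}),
                ("timesteps", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
            ]
        )
```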
transformers
18,190
closed
Longformer EncoderDecoder (LED)-Large model finetuning for summarization results in </s><s><s><s><s><s><s><s><s><s><s>... output
### System Info - `transformers` version: 4.20.0.dev0 - Platform: Linux-4.18.0-348.23.1.el8_5.x86_64-x86_64-with-centos-8.6-Green_Obsidian - Python version: 3.7.13 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.11.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? @ydshieh ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` OUTPUT_DIR=/home/ratish/project python -m torch.distributed.launch --nproc_per_node=1 examples/pytorch/summarization/run_summarization.py \ --model_name_or_path allenai/led-large-16384 \ --do_train \ --do_eval \ --dataset_name xsum \ --output_dir ${OUTPUT_DIR} \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate \ --overwrite_output_dir \ --logging_dir logs \ --evaluation_strategy steps \ --eval_steps 100 \ --logging_steps 100 \ --report_to tensorboard \ --save_total_limit 5 \ --save_steps 100 \ --load_best_model_at_end \ --greater_is_better True \ --metric_for_best_model rougeL \ --max_eval_samples 100 \ --num_beams 3 ``` The logs shows that at checkpoint 1800 the rouge becomes zero. `{'eval_loss': 2.172360897064209, 'eval_rouge1': 0.0, 'eval_rouge2': 0.0, 'eval_rougeL': 0.0, 'eval_rougeLsum': 0.0, 'eval_gen_len': 20.0, 'eval_runtime': 10.2823, 'eval_samples_per_second': 9.725, 'eval_steps_per_second': 2.431, 'epoch': 0.04}` I evaluate the model output using the below function: ``` def generate_output(): import torch from transformers import LEDTokenizer, LEDForConditionalGeneration MODEL="/home/ratish/checkpoint-1800" model = LEDForConditionalGeneration.from_pretrained(MODEL) tokenizer = LEDTokenizer.from_pretrained(MODEL) ARTICLE_TO_SUMMARIZE = "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct." inputs = tokenizer.encode(ARTICLE_TO_SUMMARIZE, return_tensors="pt") global_attention_mask = torch.zeros_like(inputs) global_attention_mask[:, 0] = 1 summary_ids = model.generate(inputs, global_attention_mask=global_attention_mask, num_beams=3, max_length=32) print(tokenizer.decode(summary_ids[0], skip_special_tokens=False, clean_up_tokenization_spaces=False)) ``` It produces the output `</s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s>` ### Expected behavior The model should produce the summary of the news article.
07-18-2022 18:51:48
07-18-2022 18:51:48
Hi @ratishsp . Thanks for reporting, I will take a look. Do you have (some) results from the previous checkpoints? Do they have better rouge scores and a bit meaningful outputs than checkpoint 1800?<|||||>Hi @ydshieh thanks for looking into the issue. In a previous checkpoint 1500, the model produced a good output for the above news article: `</s><s>The Eiffel Tower is the tallest building in the world, with a height of 300 metres (1,063 ft).</s>`<|||||>What is surprising is that the eval rouge fluctuates a lot till checkpoint 1500, after which it remains close to 0. I have attached below a tensorboard image of eval_rouge1 ![image](https://user-images.githubusercontent.com/3006607/179771469-a4eb9c8b-61dd-46e1-8471-7bf9324ed008.png) <|||||>Even more suprising, LED-Base model seems to be doing quite well! ![image](https://user-images.githubusercontent.com/3006607/179777619-16b51619-eb76-4067-ab1c-0b6d9f6287e1.png) Model output (checkpoint 1600): `</s><s>The Eiffel Tower in Paris is the tallest structure in the world.</s>`<|||||>Actually I checked the output of base models... Was really quite good. Better if increase max_length Like 64/ ...128 <|||||>I had the same issue. `allenai/led-base-16384` works well but `allenai/led-large-16384` and `allenai/PRIMERA` simply generates `""` after about a few hundreds steps of training.<|||||>I assume that it is an error in the `generate` method, since the training loss curves for the `base` and `large` models look really similar and both of them are reasonable. <|||||>Hi @ydshieh, checking if you were able to look into the issue.<|||||>Hi, @ratishsp I will look this issue this week :-) hope I can have some insight!<|||||>Hi, @ratishsp I haven't running the script myself, but I see something already. You mentioned you use `examples/pytorch/summarization/run_summarization.py`. That file is a general training script. However, `LEDModel/LEDForConditionalGeneration` is somehow special: it uses `global_attention_mask`. As you are running summarization, it is `LEDForConditionalGeneration`. For this model, we should put `1` for the `global_attention_mask` on the first token `<s>` in the encoder input sequence. - [doc](https://huggingface.co/docs/transformers/model_doc/led): search `For summarization, it is advised to put`. - [model card](https://huggingface.co/allenai/led-large-16384-arxiv) In fact, in your inference code snippet, you also have it: ```python global_attention_mask = torch.zeros_like(inputs) global_attention_mask[:, 0] = 1 ``` So (one of) the problem(s) must come from the fact that you don't include `global_attention_mask` in your **training** script. It should be fairly to add it. But you can also check [this notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing) by my colleague @patrickvonplaten (I believe he is the author of this notebook). Let me know if you get desired results once you train with global attention! (I am surprised the base model works fine however)<|||||>Hi @ydshieh I had missed to mention this in the original issue description. I had experimented with setting the global attention mask during training. 
But it didn't change the outcome.<|||||>Would you like to share you entire code, so we can avoid the difference between your code and mine :-) (the one you have with global attention)<|||||>I had added the line `model_inputs["global_attention_mask"] = [[1 if y == tokenizer.cls_token_id else 0 for y in x] for x in model_inputs["input_ids"]]` into the code after https://github.com/huggingface/transformers/blob/0d0aada56444ad554021947addaa035feb55948f/examples/pytorch/summarization/run_summarization.py#L536<|||||>Hi @ratishsp After a long investigation, although not fully understanding the model behavior, here is the observation `led-large` (without further finetuning) will produce the same LM logits for `[2, 0]`, i.e. the tokens `[<eos>, <bos>]` (or say `[</s>, <s>]`), no matter what the encoder input sequences are (at least for `xsum` datasets), and therefore the same predicted token ids. I provide the script to confirm this below, and the results in the next 2 comments. The results for `led-large` is [here](https://github.com/huggingface/transformers/issues/18190#issuecomment-1216585363). During training however, `<eos>` is required to predict the label `<bos>`, and `<bos>` is required to predict the first **non-special** tokens in a sentence. Since they have the same logits, it causes the training difficulty , and ends up learning ```bash <eos> --> <bos> <bos> --> <bos> ``` (as both have the same predicted logits). There is one related discussion [here](https://github.com/huggingface/transformers/issues/15559#issuecomment-1062880564). The solution is to `perturb the representation of bos_token`. I haven't tried it yet, but it makes sense to me. However, why `led-large` (or say, `bart-large`) has this issue is still mysterious to me! ## To verify To have more information printed ```bash git fetch https://github.com/ydshieh/transformers.git check_gen:check_gen git checkout check_gen ``` Run this script (inside `/examples/pytorch/summarization/`) ```python import numpy as np import torch from transformers import AutoTokenizer from transformers import LEDModel, LEDForConditionalGeneration import datasets summarization_name_mapping = { "cnn_dailymail": ("article", "highlights"), "xsum": ("document", "summary"), } ckpt_led_base = "allenai/led-base-16384" ckpt_led_large = "allenai/led-large-16384" tokenizer = AutoTokenizer.from_pretrained(ckpt_led_base) model = LEDForConditionalGeneration.from_pretrained(ckpt_led_base) def get_dataset(dataset_name): max_source_length = 1024 max_target_length = 128 padding = True ignore_pad_token_for_loss = True padding = "max_length" prefix = "" max_train_samples = 1024 max_eval_samples = 256 preprocessing_num_workers = 8 raw_datasets = datasets.load_dataset(dataset_name) text_column, summary_column = summarization_name_mapping[dataset_name] def foo(x): if x == tokenizer.cls_token_id: return 1 elif x == tokenizer.pad_token_id: return -1 else: return 0 def preprocess_function(examples): # remove pairs where at least one record is None inputs, targets = [], [] for i in range(len(examples[text_column])): if examples[text_column][i] and examples[summary_column][i]: inputs.append(examples[text_column][i]) targets.append(examples[summary_column][i]) inputs = [prefix + inp for inp in inputs] model_inputs = tokenizer(inputs, max_length=max_source_length, padding=padding, truncation=True) # Tokenize targets with the `text_target` keyword argument labels = tokenizer(text_target=targets, max_length=max_target_length, padding=padding, truncation=True) # If we are padding 
here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore # padding in the loss. if padding == "max_length" and ignore_pad_token_for_loss: labels["input_ids"] = [ [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"] ] model_inputs["labels"] = labels["input_ids"] if model.__class__.__name__.startswith("LED"): model_inputs["global_attention_mask"] = [[foo(y) for y in x] for x in model_inputs["input_ids"]] decoder_input_ids = model.prepare_decoder_input_ids_from_labels(labels=torch.tensor(model_inputs["labels"], dtype=torch.int32)) decoder_input_ids = decoder_input_ids.numpy().tolist() model_inputs["decoder_input_ids"] = decoder_input_ids return model_inputs train_dataset = raw_datasets["train"] eval_dataset = raw_datasets["validation"] train_dataset = train_dataset.select(range(max_train_samples)) eval_dataset = eval_dataset.select(range(max_eval_samples)) train_dataset = train_dataset.map( preprocess_function, batched=True, num_proc=preprocessing_num_workers, remove_columns=['document', 'summary', 'id'], desc="Running tokenizer on train dataset", ) eval_dataset = eval_dataset.map( preprocess_function, batched=True, num_proc=preprocessing_num_workers, remove_columns=['document', 'summary', 'id'], desc="Running tokenizer on validation dataset", ) return train_dataset, eval_dataset train_dataset, eval_dataset = get_dataset("xsum") for idx, eval_example in enumerate(eval_dataset): eval_example.pop("labels") decoder_input_ids = eval_example.pop("decoder_input_ids") eval_example["decoder_input_ids"] = [2, 0] + decoder_input_ids[2:5] for k in eval_example: eval_example[k] = torch.tensor([eval_example[k]], dtype=torch.int32) model.led.decoder.buffer = {} output = model(**eval_example) print(f"example idx: {idx}") for k in model.led.decoder.buffer: h = model.led.decoder.buffer[k] if not isinstance(h, dict): pass # print(f'max diff in {k}: {np.amax(np.abs((h[0, 0] - h[0, 1]).detach().to("cpu").numpy()))}') else: layer_idx = k buffer = h for name in buffer: h = buffer[name] #print(f'layer {layer_idx} - {name}: max <eos> = {torch.max(torch.abs(h[0, 0]))}') #print(f'layer {layer_idx} - {name}: max <bos> = {torch.max(torch.abs(h[0, 1]))}') #print(f'layer {layer_idx} - {name}: max <eos> dim = {torch.argmax(torch.abs(h[0, 0]), dim=-1)}') #print(f'layer {layer_idx} - {name}: max <bos> dim = {torch.argmax(torch.abs(h[0, 1]), dim=-1)}') #top = torch.topk(torch.abs(h[0, 0]), k=8, dim=-1, largest=True, sorted=True) #print(f'layer {layer_idx} - {name}: top <eos> indices = {top.indices}') #print(f'layer {layer_idx} - {name}: top <eos> values = {top.values}') #print(f'layer {layer_idx} - {name}: var <eos> = {torch.var(h[0, 0], unbiased=False)}') #print(f'layer {layer_idx} - {name}: var <bos> = {torch.var(h[0, 1], unbiased=False)}') if "hidden_states: ffn: final_layer_norm" in name: print(f'max diff in layer {layer_idx} - {name}: {np.amax(np.abs((h[0, 0] - h[0, 1]).detach().to("cpu").numpy()))}') print(f"-" * 20) print(f'max diff in lm logits: {np.amax(np.abs((output.logits[0, 0] - output.logits[0, 1]).detach().to("cpu").numpy()))}') print(f"-" * 20) pred = torch.argmax(output.logits, dim=-1).detach().to("cpu").numpy().tolist() print(f'predidcted token ids: {pred}') print(f"=" * 40) if idx >= 10: break ```<|||||>For `led-large`: note the difference is the maximal value of the absolute value of the hidden states between the 0-th position and 1-th position. More precisely: `np.amax(np.abs(h[0, 0] - h[0, 1])`. 
As you can see, no matter what the encoder input sequences are, the difference becomes really small along the layer depth. ```bash example idx: 0 max diff in layer 0 - hidden_states: ffn: final_layer_norm: 0.029722318053245544 max diff in layer 1 - hidden_states: ffn: final_layer_norm: 0.0003014765679836273 max diff in layer 2 - hidden_states: ffn: final_layer_norm: 9.097158908843994e-06 max diff in layer 3 - hidden_states: ffn: final_layer_norm: 2.812594175338745e-07 max diff in layer 4 - hidden_states: ffn: final_layer_norm: 5.960464477539063e-08 max diff in layer 5 - hidden_states: ffn: final_layer_norm: 4.470348358154297e-08 max diff in layer 6 - hidden_states: ffn: final_layer_norm: 1.7881393432617188e-07 max diff in layer 7 - hidden_states: ffn: final_layer_norm: 2.384185791015625e-07 max diff in layer 8 - hidden_states: ffn: final_layer_norm: 3.725290298461914e-09 max diff in layer 9 - hidden_states: ffn: final_layer_norm: 2.9802322387695312e-08 max diff in layer 10 - hidden_states: ffn: final_layer_norm: 1.4901161193847656e-08 max diff in layer 11 - hidden_states: ffn: final_layer_norm: 1.1920928955078125e-06 max diff in lm logits: 6.67572021484375e-06 predidcted token ids: [[133, 133, 4913, 815, 19931]] ======================================== example idx: 1 max diff in layer 0 - hidden_states: ffn: final_layer_norm: 0.02129286527633667 max diff in layer 1 - hidden_states: ffn: final_layer_norm: 0.0002829432487487793 max diff in layer 2 - hidden_states: ffn: final_layer_norm: 8.203089237213135e-06 max diff in layer 3 - hidden_states: ffn: final_layer_norm: 2.6635825634002686e-07 max diff in layer 4 - hidden_states: ffn: final_layer_norm: 5.960464477539063e-08 max diff in layer 5 - hidden_states: ffn: final_layer_norm: 4.470348358154297e-08 max diff in layer 6 - hidden_states: ffn: final_layer_norm: 5.960464477539063e-08 max diff in layer 7 - hidden_states: ffn: final_layer_norm: 2.384185791015625e-07 max diff in layer 8 - hidden_states: ffn: final_layer_norm: 4.76837158203125e-07 max diff in layer 9 - hidden_states: ffn: final_layer_norm: 2.9802322387695312e-08 max diff in layer 10 - hidden_states: ffn: final_layer_norm: 5.960464477539063e-08 max diff in layer 11 - hidden_states: ffn: final_layer_norm: 3.814697265625e-06 max diff in lm logits: 1.0013580322265625e-05 predidcted token ids: [[448, 448, 40741, 3463, 1034]] ======================================== example idx: 2 max diff in layer 0 - hidden_states: ffn: final_layer_norm: 0.015403840690851212 max diff in layer 1 - hidden_states: ffn: final_layer_norm: 0.000291973352432251 max diff in layer 2 - hidden_states: ffn: final_layer_norm: 9.2238187789917e-06 max diff in layer 3 - hidden_states: ffn: final_layer_norm: 4.172325134277344e-07 max diff in layer 4 - hidden_states: ffn: final_layer_norm: 5.960464477539063e-08 max diff in layer 5 - hidden_states: ffn: final_layer_norm: 2.9802322387695312e-08 max diff in layer 6 - hidden_states: ffn: final_layer_norm: 1.1920928955078125e-07 max diff in layer 7 - hidden_states: ffn: final_layer_norm: 7.450580596923828e-09 max diff in layer 8 - hidden_states: ffn: final_layer_norm: 3.725290298461914e-09 max diff in layer 9 - hidden_states: ffn: final_layer_norm: 5.960464477539063e-08 max diff in layer 10 - hidden_states: ffn: final_layer_norm: 5.960464477539063e-08 max diff in layer 11 - hidden_states: ffn: final_layer_norm: 4.76837158203125e-06 max diff in lm logits: 1.1444091796875e-05 predidcted token ids: [[0, 0, 385, 9, 6912]] ======================================== ```<|||||>For 
`led-base`. Note that `lm_logits` have a significant difference in the range `[20, 30]`. ```bash max diff in layer 0 - hidden_states: ffn: final_layer_norm: 9.92125129699707 max diff in layer 1 - hidden_states: ffn: final_layer_norm: 6.954092502593994 max diff in layer 2 - hidden_states: ffn: final_layer_norm: 8.275293350219727 max diff in layer 3 - hidden_states: ffn: final_layer_norm: 13.49088191986084 max diff in layer 4 - hidden_states: ffn: final_layer_norm: 4.469869613647461 max diff in layer 5 - hidden_states: ffn: final_layer_norm: 29.27507972717285 max diff in lm logits: 26.215885162353516 predidcted token ids: [[0, 133, 12, 815, 5142]] ======================================== example idx: 1 max diff in layer 0 - hidden_states: ffn: final_layer_norm: 9.919170379638672 max diff in layer 1 - hidden_states: ffn: final_layer_norm: 6.953605651855469 max diff in layer 2 - hidden_states: ffn: final_layer_norm: 8.259047508239746 max diff in layer 3 - hidden_states: ffn: final_layer_norm: 13.197162628173828 max diff in layer 4 - hidden_states: ffn: final_layer_norm: 4.224005699157715 max diff in layer 5 - hidden_states: ffn: final_layer_norm: 29.185691833496094 max diff in lm logits: 28.350433349609375 predidcted token ids: [[0, 846, 40741, 3463, 3449]] ======================================== example idx: 2 max diff in layer 0 - hidden_states: ffn: final_layer_norm: 9.921760559082031 max diff in layer 1 - hidden_states: ffn: final_layer_norm: 6.953545570373535 max diff in layer 2 - hidden_states: ffn: final_layer_norm: 8.30044937133789 max diff in layer 3 - hidden_states: ffn: final_layer_norm: 13.065882682800293 max diff in layer 4 - hidden_states: ffn: final_layer_norm: 3.919126510620117 max diff in layer 5 - hidden_states: ffn: final_layer_norm: 28.759159088134766 max diff in lm logits: 26.200252532958984 predidcted token ids: [[0, 35731, 385, 9, 6912]] ======================================== ```<|||||>Hmm that's very interesting. A couple of pointers that might help: 1. `bart-large` always forces the second token to be the BOS token during generation (see https://huggingface.co/facebook/bart-large/blob/main/config.json#L27) where as led-large doesn't. However `led-large` should probably do this as well since `led-large` is based of `bart-large` 2. IIRC `led-large` has exactly the same weights as `bart-large`. The only difference is that `led-large` has some additionally randomely initialized layers for the global attention 3. It might help to look into the original training script to see how led was fine-tuned for summarization: https://github.com/allenai/longformer/blob/master/scripts/summarization.py Also @ibeltagy - have you seen something like the above already by any chance? <|||||>Also one last comment, note that just because `"</s> <s>"` always predicts the same token regardless of the encoder outputs doesn't mean training is necessarily broken. During training all `decoder_input_ids` start with `</s><s>` and then the model should learn the correct behavior, but it might indeed be a good idea to perturb the bos token. In general, I wouldn't recommend using both `</s>` and `<s>` as prompt tokens for the `decoder_input_ids` but that's how fairseq has done it with BART<|||||>For the record: `bart-large` seems learned to predict the first token after `<s>` in the encoder input sequence, for both the first two decoder tokens `[</s>, <s>]`. 
I provide a script to confirm this in [this comment].(https://github.com/huggingface/transformers/issues/15559#issuecomment-1217894635). For `led-large-16384`, same situation. But when this is not the case, it gives `[<s>, <s>]`. This happens quite often, and I think it explains why we get `[</s>, <s>, <s>, <s>, ...]` after finetuning. <|||||>@ratishsp I could confirm that the trick of perturbing the `bos` token's embedding works for `led-large-16384`. You can simply adding the following block after the line https://github.com/huggingface/transformers/blob/49e44b216b2559e34e945d5dcdbbe2238859e29b/examples/pytorch/summarization/run_summarization.py#L425 would work. Please let us know if this works for you! Here is the code to add: ```python import torch from transformers.modeling_utils import _load_state_dict_into_model d = model.state_dict() d["led.decoder.embed_tokens.weight"][0] = d["led.decoder.embed_tokens.weight"][0] + torch.randn(1024) _load_state_dict_into_model(model, d, "led.") ```<|||||>Hi @ratishsp Hope the above solution works for you. I am going to close this issue, but if you have further question, don't hesitate to reopen.<|||||>Hi @ydshieh, sorry for the late reply... I had got busy with other stuff. I tried the above fix of perturbing the weights for bos. But it didn't work for me. <|||||>@ratishsp Sorry to hear that, I am not sure what I can help further here, as the issue is found and a fix is provided which worked on my side (and some other users previously). If you can open a new working branch, add your fix there and share it with us + with the list of training arguments used in your latest attempt, we could try to find some time to see if there are other things go wrong there. <|||||>Hi @ydshieh I have followed an identical setup as mentioned at the beginning of the thread but with latest version of Transformers repo. Sure, I can open a branch, add a fix and share with you. Meanwhile, will it be possible for you to share tensorboard log of your run similar to the one here https://github.com/huggingface/transformers/issues/18190#issuecomment-1189139463?<|||||>Hi @ratishsp . If you ever try to run it again with a branch that is aimed to share with us, there 2 two fixes to take into account: https://github.com/huggingface/transformers/issues/18190#issuecomment-1210958506 https://github.com/huggingface/transformers/issues/18190#issuecomment-1218408325 I also strongly suggest that you manually investigate if the bos token embedding is changed before and after this (newly added) line ```python _load_state_dict_into_model(model, d, "led.") ``` I didn't keep the training log - I tried the fix with a training up to around 2K (or 3K maybe) steps, and didn't see this `</s><s><s>...` anymore (while I tried without the fix, it did occur as you described) Once you have the code (with the fixes mentioned above that you will add), we can see if there is some mistake. And if you still get `</s><s><s>...`, I will try to run it myself. (BTW, I won't be available next week).<|||||>Hi @ydshieh, I have created a branch with fixes at https://github.com/ratishsp/transformers-fix. I trained two models: LED-Base and LED-Large with the identical code. The training commands are the same as given earlier in the thread https://github.com/huggingface/transformers/issues/18190#issue-1308379298. Below tensorboard logs show that the issue still exists. ![image](https://user-images.githubusercontent.com/3006607/193236203-97709957-81ee-40e8-aa78-ec5b2b9e209e.png) <|||||>Thanks @ratishsp . 
Will take a look once I am back!<|||||>Hi @ratishsp As promised, I checked. You are right, perturbing bos token embedding is not helping for the checkpoint `allenai/led-large-16384`. (well, it helps a bit at the first few iterations, but once the steps continue, we get the same `</s><s><s>`.) I ran out of the ideas, the only thing works is to avoid using `</s> <s> <tok_1> <tok_2> ...` when preparing `labels`. Instead, just using `</s> <tok_1> <tok_2> ...`. To do so, add the following block after the line https://github.com/huggingface/transformers/blob/4dd784c32f76fb8285f205b94e2a6ebde731a1cd/examples/pytorch/summarization/run_summarization.py#L536 ### To add ```python # Originally, the `labels` are of the form: </s> <s> ..., which causes trouble for finetuning some checkpoints. # Let's try to remove <s> (`bos` token) in `labels`, i.e. keep only the decoder_start_token (here </s>). model_inputs["labels"] = [x[1:] for x in model_inputs["labels"]] ``` Or you can simplify using my branch [debug_led_large_bad_generation](https://github.com/ydshieh/transformers/tree/debug_led_large_bad_generation) - this will save the generations after each evaluation. You can verify the effect with (and without) this change by running a tiny training (with very few examples) below: ```bash ./run_summarization.py \ --model_name_or_path allenai/led-large-16384 \ --dataset_name xsum \ --output_dir ./led-large-16384-xsum-no-bos-dummy-1 \ --overwrite_output_dir \ --logging_dir ./led-large-16384-xsum-no-bos-dummy-logs-1 \ --do_train \ --do_eval \ --predict_with_generate \ --report_to tensorboard \ --load_best_model_at_end \ --greater_is_better True \ --metric_for_best_model rougeL \ --per_device_train_batch_size=1 \ --per_device_eval_batch_size=4 \ --evaluation_strategy steps \ --max_steps 500 \ --max_train_samples 500 \ --max_eval_samples 100 \ --logging_steps 100 \ --eval_steps 100 \ --save_steps 100 \ --save_total_limit 10 \ --generation_max_length 128 \ --num_beams 3 ``` Let me know if you can get normal results with this change 🙏 Thank you!<|||||>Hi @ydshieh, it works! Thanks. ![image](https://user-images.githubusercontent.com/3006607/195759899-5ab6b8d6-7b1f-47f4-bcff-9e0f3bc2c1a3.png) <|||||>@ratishsp I am super glad it also works for you 🤗 ! I will discuss with my colleagues where to put this information in our documentation, so there will be more clear reference to this issue and workaround. <|||||>Hi @ydshieh , I'm facing the same problem but for another model, here is a link to the [issue](https://discuss.huggingface.co/t/encoder-decoder-model-only-generates-bos-tokens-s-s-s/26470), The finetuning works fine, and the loss is decreasing as expected, but the model doesn't generate any sequences, is there a way to modify the generation logic to get something out, without re-finetuning the model? <|||||>I answered in your post on the forum, but just in case > Hi @customer101 , could you provide a script (or the command you used to launch the training) that could reproduce the issue, please? If you want to proceed on GitHub , it's better to open a new issue instead of in this thread. Thank you.
transformers
18,189
closed
run_summarization_no_trainer
@sgugger Hello! I just tried to run the code to explore this example https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization_no_trainer.py this is my yml file to build the env > name: sum > > channels: > - pytorch > - conda-forge > - defaults > > dependencies: > - jupyterlab > - pip > - python=3.9 > - pytorch > - tensorboard > - torchaudio > - torchvision > - tqdm > - tokenizers > - prettytable > - einops > - matplotlib > - accelerate > - datasets > - sentencepiece != 0.1.92 > - protobuf > - nltk > - py7zr > - transformers > then pip install rouge-score after that simply I ran thhe command `accelerate launch run_summarization_no_trainer.py --model_name_or_path t5-small --dataset_name cnn_dailymail --dataset_config '3.0.0' --source_prefix 'summarize: ' --output_dir output/tst-summarization` and got the error > Traceback (most recent call last): > File "/home/arij/anaconda3/envs/sum/bin/accelerate", line 10, in <module> > sys.exit(main()) > File "/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/accelerate_cli.py", line 43, in main > args.func(args) > File "/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/launch.py", line 568, in launch_command > simple_launcher(args) > File "/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/launch.py", line 235, in simple_launcher > mixed_precision = PrecisionType(args.mixed_precision.lower()) > AttributeError: 'NoneType' object has no attribute 'lower' How to fix it?
07-18-2022 16:43:15
07-18-2022 16:43:15
Did you run `accelerte config`? What's the result of `accelerate env`?<|||||>accelerate env Copy-and-paste the text below in your GitHub issue - `Accelerate` version: 0.10.0 - Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31 - Python version: 3.9.13 - Numpy version: 1.22.3 - PyTorch version (GPU?): 1.12.0 (True) - `Accelerate` default config: Not found accelerate test Running: accelerate-launch --config_file=None /home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/test_utils/test_script.py stderr: Traceback (most recent call last): stderr: File "/home/arij/anaconda3/envs/sum/bin/accelerate-launch", line 10, in <module> stderr: sys.exit(main()) stderr: File "/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/launch.py", line 574, in main stderr: launch_command(args) stderr: File "/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/launch.py", line 523, in launch_command stderr: defaults = load_config_from_file(args.config_file) stderr: File "/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/config/config_args.py", line 45, in load_config_from_file stderr: with open(config_file, "r", encoding="utf-8") as f: stderr: FileNotFoundError: [Errno 2] No such file or directory: '/home/arij/.cache/huggingface/accelerate/default_config.yaml' Traceback (most recent call last): File "/home/arij/anaconda3/envs/sum/bin/accelerate", line 10, in <module> sys.exit(main()) File "/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/accelerate_cli.py", line 43, in main args.func(args) File "/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/test.py", line 52, in test_command result = execute_subprocess_async(cmd, env=os.environ.copy()) File "/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/test_utils/testing.py", line 276, in execute_subprocess_async raise RuntimeError( RuntimeError: 'accelerate-launch --config_file=None /home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/test_utils/test_script.py' failed with returncode 1 The combined stderr from workers follows: Traceback (most recent call last): File "/home/arij/anaconda3/envs/sum/bin/accelerate-launch", line 10, in <module> sys.exit(main()) File "/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/launch.py", line 574, in main launch_command(args) File "/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/launch.py", line 523, in launch_command defaults = load_config_from_file(args.config_file) File "/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/config/config_args.py", line 45, in load_config_from_file with open(config_file, "r", encoding="utf-8") as f: FileNotFoundError: [Errno 2] No such file or directory: '/home/arij/.cache/huggingface/accelerate/default_config.yaml' accelerte config accelerte: command not found <|||||>That was a typo, sorry. You need to run `accelerate config` before running `accelerate launch` and answer the small questionnaire.<|||||>one of the questions is Do you want to use DeepSpeed? [yes/NO]: what is the better choice here?<|||||>could you please send any link that helps how to figure the questionaire using deepspeed?<|||||>Any way these are my steps > (sum) arij@dgx3:~/summarization/tutorial$ accelerate config > In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 0 > Which type of machine are you using? 
([0] No distributed training, [1] multi-CPU, [2] multi-GPU, [3] TPU): 2 > How many different machines will you use (use more than 1 for multi-node training)? [1]: 3 > What is the rank of this machine (from 0 to the number of machines - 1 )? [0]: 9 > What is the IP address of the machine that will host the main process? ###########33(hidden for security) > What is the port you will use to communicate with the main process? 8887 > Do you want to use DeepSpeed? [yes/NO]: yes > Do you want to specify a json file to a DeepSpeed config? [yes/NO]: yes > Please enter the path to the json DeepSpeed config file: > Do you want to enable `deepspeed.zero.Init` when using ZeRO Stage-3 for constructing massive models? [yes/NO]: > Which Type of launcher do you want to use [0] pdsh, [1] standard, [2] openmpi, [3] mvapich)? [0]: > DeepSpeed configures multi-node compute resources with hostfile. Each row is of the format `hostname slots=[num_gpus]`, e.g., `localhost slots=2`; for more information please refer official [documentation](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node). Please specify the location of hostfile: > Do you want to specify exclusion filter string? [yes/NO]: > Do you want to specify inclusion filter string? [yes/NO]: > How many GPU(s) should be used for distributed training? [1]:8 > (sum) arij@dgx3:~/summarization/tutorial$ accelerate launch run_summarization_no_trainer.py --model_name_or_path t5-small --dataset_name cnn_dailymail --dataset_config '3.0.0' --source_prefix 'summarize: ' --output_dir output/tst-summarization > [2022-07-18 20:47:06,728] [WARNING] [runner.py:159:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only. > [2022-07-18 20:47:06,728] [INFO] [runner.py:457:main] cmd = /home/arij/anaconda3/envs/sum/bin/python3.9 -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMV19 --master_addr=127.0.0.1 --master_port=29500 --no_local_rank run_summarization_no_trainer.py --model_name_or_path t5-small --dataset_name cnn_dailymail --dataset_config 3.0.0 --source_prefix summarize: --output_dir output/tst-summarization > [2022-07-18 20:47:08,004] [INFO] [launch.py:103:main] WORLD INFO DICT: {'localhost': [0, 1]} > [2022-07-18 20:47:08,004] [INFO] [launch.py:109:main] nnodes=1, num_local_procs=2, node_rank=0 > [2022-07-18 20:47:08,004] [INFO] [launch.py:122:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]}) > [2022-07-18 20:47:08,004] [INFO] [launch.py:123:main] dist_world_size=2 > [2022-07-18 20:47:08,004] [INFO] [launch.py:125:main] Setting CUDA_VISIBLE_DEVICES=0,1 > args: > > Namespace(dataset_name='cnn_dailymail', dataset_config_name='3.0.0', train_file=None, validation_file=None, ignore_pad_token_for_loss=True, max_source_length=1024, source_prefix='summarize: ', preprocessing_num_workers=None, overwrite_cache=None, max_target_length=128, val_max_target_length=None, max_length=128, num_beams=None, pad_to_max_length=False, model_name_or_path='t5-small', config_name=None, tokenizer_name=None, text_column=None, summary_column=None, use_slow_tokenizer=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, learning_rate=5e-05, weight_decay=0.0, num_train_epochs=3, max_train_steps=None, gradient_accumulation_steps=1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, num_warmup_steps=0, output_dir='output/tst-summarization', seed=None, model_type=None, push_to_hub=False, hub_model_id=None, hub_token=None, checkpointing_steps=None, resume_from_checkpoint=None, 
with_tracking=False, report_to='all') > [2022-07-18 20:47:30,042] [INFO] [launch.py:210:main] Process 1054725 exits successfully. > args: > > Namespace(dataset_name='cnn_dailymail', dataset_config_name='3.0.0', train_file=None, validation_file=None, ignore_pad_token_for_loss=True, max_source_length=1024, source_prefix='summarize: ', preprocessing_num_workers=None, overwrite_cache=None, max_target_length=128, val_max_target_length=None, max_length=128, num_beams=None, pad_to_max_length=False, model_name_or_path='t5-small', config_name=None, tokenizer_name=None, text_column=None, summary_column=None, use_slow_tokenizer=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, learning_rate=5e-05, weight_decay=0.0, num_train_epochs=3, max_train_steps=None, gradient_accumulation_steps=1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, num_warmup_steps=0, output_dir='output/tst-summarization', seed=None, model_type=None, push_to_hub=False, hub_model_id=None, hub_token=None, checkpointing_steps=None, resume_from_checkpoint=None, with_tracking=False, report_to='all') > [2022-07-18 20:47:39,051] [INFO] [launch.py:210:main] Process 1054726 exits successfully. Still something wrong)<|||||>I think there should be full instructions on how to use accelerate , it is not clear. Thanks for your reply <|||||>Interesting that I was facing the exact same issue right now. The fix for me was to pass the local config I created. `accelerate launch --config_file <your config file> your_file.py`<|||||>@soumyasanyal could you please tell the steps I am absolutely new) or post your config<|||||>Sure! I just followed the steps in this [link](https://huggingface.co/docs/accelerate/quicktour). The steps I followed are: ``` accelerate config --config_file ./accelerate.yaml --> answer all the questions in the questionnaire accelerate test --config_file ./accelerate.yaml accelerate launch --config_file ./accelerate.yaml script.py ``` My config file is as follows (but it can change as per your requirements. I just wanted to run a job on 8 GPUs in a single node, without DeepSpeed or mixed precision): ``` compute_environment: LOCAL_MACHINE deepspeed_config: {} distributed_type: MULTI_GPU fsdp_config: {} machine_rank: 0 main_process_ip: null main_process_port: null main_training_function: main mixed_precision: 'no' num_machines: 1 num_processes: 8 use_cpu: false ``` I was previously running `accelerate launch script.py` without mentioning the config file when I faced the issue that you reported here. 
Also FYI, note that the doc says that integration of accelerate with DeepSpeed is [experimental](https://huggingface.co/docs/accelerate/quicktour#deepspeed).<|||||>@sgugger sorry for reopenning the issue while using this script using T5 over cnn-dialy dataset under this configuration > compute_environment: LOCAL_MACHINE > deepspeed_config: {} > distributed_type: MULTI_GPU > fsdp_config: {} > machine_rank: 0 > main_process_ip: null > main_process_port: null > main_training_function: main > mixed_precision: 'no' > num_machines: 1 > num_processes: 2 > use_cpu: false I got the error ``` AttributeError: 'Accelerator' object has no attribute 'gather_for_metrics' generated_tokens, labels = accelerator.gather_for_metrics((generated_tokens, labels)) AttributeError: 'Accelerator' object has no attribute 'gather_for_metrics' generated_tokens, labels = accelerator.gather_for_metrics((generated_tokens, labels)) AttributeError: 'Accelerator' object has no attribute 'gather_for_metrics' ``` For this error replacing gather_for_metrics with just `gather` as old version of this code, gives me zero list of gathered `decoded_preds, decoded_labels` . and gather for metrics did not work. with this configuration > compute_environment: LOCAL_MACHINE > deepspeed_config: > gradient_accumulation_steps: 1 > offload_optimizer_device: none > offload_param_device: none > zero3_init_flag: false > zero_stage: 2 > distributed_type: DEEPSPEED > fsdp_config: {} > machine_rank: 0 > main_process_ip: null > main_process_port: null > main_training_function: main > mixed_precision: 'no' > num_machines: 1 > num_processes: 8 > use_cpu: false I get this error > RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 7; 10.92 GiB total capacity; 9.83 GiB already allocated; 293.50 MiB free; 9.91 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF > ret = input.softmax(dim) > RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 1; 10.92 GiB total capacity; 9.83 GiB already allocated; 245.50 MiB free; 9.91 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF > <|||||>@muellerzr <|||||>@Arij-Aladel in this case you should reduce your batch size most likely, but I'll be running it myself in just a moment<|||||>I did already still problem of not finding gather_for_metric attribute <|||||>You can simply run the example as is<|||||>Thanks @Arij-Aladel, I think I have found the fix. Can you try running the following training script on your end to verify? (I have wget to make your life easy): (Also as mentioned in the other post please make sure you have a pypi version of accelerate >= 0.12.0 to run the scripts, a PR was just merged yesterday to make them a requirement for all these scripts) ```bash wget https://raw.githubusercontent.com/huggingface/transformers/muellerzr-fix-no-trainer/examples/pytorch/summarization/run_summarization_no_trainer.py ```<|||||>@muellerzr thanks for your response! As I understand your fix is just deleting this line > 706 decoded_preds, decoded_labels = accelerator.gather_for_metrics(decoded_preds, decoded_labels) ?? 
my life with wget was not easier))) > wget https://raw.githubusercontent.com/huggingface/transformers/muellerzr-fix-no-trainer/examples/pytorch/summarization/run_summarization_no_trainer.py > --2022-11-17 10:45:24-- https://raw.githubusercontent.com/huggingface/transformers/muellerzr-fix-no-trainer/examples/pytorch/summarization/run_summarization_no_trainer.py > Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.111.133, 185.199.108.133, ... > Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected. > HTTP request sent, awaiting response... 404 Not Found > 2022-11-17 10:45:24 ERROR 404: Not Found. <|||||>![image](https://user-images.githubusercontent.com/68355048/202458650-c33c85ae-d134-49ee-8749-1a44e3c0cd2e.png) Really I do not know what is wrong with this script .....<|||||>@Arij-Aladel yes the fix got merged yesterday, you can find it here: https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization_no_trainer.py I would highly recommend doing `pip install -r transformers/examples/pytorch/summarization/requirements.txt -U` (the txt file here: https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/requirements.txt) to avoid these dependency issues you have been struggling with as the script ran just fine for me.<|||||>![image](https://user-images.githubusercontent.com/68355048/202462056-64ce20dd-2d23-4046-a478-f62d2142bbfe.png) After > pip install -r transformers/examples/pytorch/summarization/requirements.txt -U :)<|||||>Ok seems it was package installation issue after your fix, I have uninstalled all packages then reinstall packages according to requirements file. It works now thanks @muellerzr <|||||>Great! Can this be closed now @Arij-Aladel? :) <|||||>Yes , thanks . I am closing it.
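As a side note on the resolution above: the `gather_for_metrics` failures came from an outdated `accelerate` installation. A small, hedged guard like the following (not part of the official script) makes the version requirement explicit instead of failing later with an `AttributeError`:

```python
# Minimal sketch of a version guard matching the resolution above: the no_trainer
# scripts rely on `Accelerator.gather_for_metrics`, which requires accelerate >= 0.12.0.
from packaging import version
import accelerate

if version.parse(accelerate.__version__) < version.parse("0.12.0"):
    raise RuntimeError(
        f"accelerate {accelerate.__version__} is too old for this script; "
        "run `pip install -r examples/pytorch/summarization/requirements.txt -U`."
    )
```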
transformers
18,188
closed
Skip test_multi_gpu_data_parallel_forward for BEiT and Data2VecVision
# What does this PR do? Similar to #17890 and #17864, BEiT and `Data2VecVision` use `add_module`, which causes problems for this test.
07-18-2022 16:16:45
07-18-2022 16:16:45
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,187
closed
fix typo inside bloom documentation
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #18178 As @rhvaz noticed, the current global variable for the documentation of Bloom doesn't give the working snippet. This PR proposes to fix the name of the checkpoint. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. @sgugger and @younesbelkada, if you want to have a look :slightly_smiling_face: <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-18-2022 15:32:11
07-18-2022 15:32:11
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,186
closed
Same training time for different values of sliding window in Longformer
### System Info Transformers: 4.20.1 Python: 3.8.12 Pretrained models & tokenizer from HF: "allenai/longformer-base-4096" The training time does not change for any value of sliding window. For e.g. a sliding window of 2 or 512 (which is the default) or 1024 takes the same training time. This seems to be a bug to me. I need a very small local window span (sliding window max 64 across 4096 tokens) and the model is simply unusable in this scenario due to excessive training time ### Who can help? @ydshieh ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction A simple: model.config.attention_window = [SLIDE_WIN_ATTN]*12 ### Expected behavior I would expect training time to fall somewhat quadratically for lower values of SLIDE_WIN_ATTN (say for 64) as compared to the default which is 512. However the training time for both cases is the same (around 24 hours per epoch). In fact SLIDE_WIN_ATTN values from 2 to 1024 roughly take the same training time which should not be the case
07-18-2022 15:05:02
07-18-2022 15:05:02
@allohvk Could you provide a (minimal) training script that demonstrates this issue, probably using a dataset from HF Hub if necessary?<|||||>- I will try to do that. I need to look for a large enough dataset such that the training times show a tangible difference between different scenarios - I was exploring alternative architectures (like Big Bird) and came across a disclaimer there stating that benefits of sparse attention become visible for only 1024 max-seq-length and beyond. Perhaps Longformer too has this limitation and if so, this becomes just a documentation issue. Maybe Longformer is just not optimized to handle sliding windows of length < 512 and hence shows no tangible difference in execution time for sliding window size=2 or sliding window size = 512.<|||||>Thanks for the info. @allohvk . BTW, on what task you trained this model? It's also a good idea to double check the way you prepare `global_attention_mask` (if you ever use it).<|||||>Taking some time to measure model `forward` timing with window size ` [4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048]`. Here is the result. ~~(Will do more measurement when I have time)~~ ### Results (in seconds, for `32` forwards per window size) ``` [47.823847, 46.043184, 46.290201, 46.691181, 48.692595, 53.176747, 62.156357, 81.265798, 149.261874, 273.367502] ``` It indeed looks like the advantage appears for longer enough length. ### Code ```python import torch from transformers import LongformerModel, LongformerTokenizer, LongformerConfig def measure(w_size=512): config = LongformerConfig.from_pretrained("allenai/longformer-base-4096") config.attention_window = w_size model = LongformerModel.from_pretrained("allenai/longformer-base-4096", config=config) tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096") print(model.config.attention_window) SAMPLE_TEXT = " ".join(["Hello world! "] * 1000) # long input document input_ids = torch.tensor(tokenizer.encode(SAMPLE_TEXT)).unsqueeze(0) # batch of size 1 attention_mask = torch.ones( input_ids.shape, dtype=torch.long, device=input_ids.device ) # initialize to local attention global_attention_mask = None import datetime s = datetime.datetime.now() for i in range(32): outputs = model(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask) e = datetime.datetime.now() l = (e-s).total_seconds() print(l) sequence_output = outputs.last_hidden_state pooled_output = outputs.pooler_output print(sequence_output.shape) return l ls = [measure(w) for w in [4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048]] print(ls) ```<|||||>Thanks @ydshieh for taking time out to test this and I must apologize if I wasn't clear. I was actually referring to the training time taken, by which I mean the time to fine-tune a pre-trained model with additional training data before actually inferring. I would assume that just like the inference time, the training time too should change based on the length of the sliding window. It should be shorter for a window of (say)128 compared to a window of 512 but the training hours don't change. I will share a small but complete working code with you in a couple of days. To answer your question - I am training a simple classifier using the pertained weights of the base model. I just pass the last state output (768 dim) to a linear regression head. The dataset is actually composed of short NL statements appended with an associated context which are long programming code snippets (something on the lines of what CodeBert does). 
Just as an FYI, I tried BigBird today and had the same issue, the training time taken taken for "sparse_attention" is the same as the training time taken for a "full_attention" for a 2048 seq_len. "sparse_attention" option actually just attends to 2 x 64 + 3 x 64 + 2 x 64 = 448 tokens which is far less than 2048 and should be much much faster. You can choose to close this ticket if you so wish. I will change my dataset to IMDB and share a simulatable code in couple of days.<|||||>Hi @allohvk , I know you are talking about the training time. However, even with just the `forward` method of the model, we already see that the effect of `window_size` (used for local attentions), i.e. to have linear time instead of quadratic time, will appear **only for large** enough `window_size` (and therefore with long enough sequences). For small `window_size`, some overhead will prevent this much desired effect. From this observation, I am afraid that this holds for training too. If you try to measure this line directly https://github.com/huggingface/transformers/blob/8a61fe023430115bb61ec328a29d35571f4fc2c4/src/transformers/models/longformer/modeling_longformer.py#L820 (without any other parts, and therefore no other overhead), you will see this linear/quadratic running time.<|||||>- Got it. I suppose this is very reasonable ramification of using a specialised attention model which handles long sequences. There is no visible benefit in having sliding window size < 128. Possibly it can just be documented somewhere. I will close this as "not a bug" for now. - I may still have a problem with the model taking quadratic time for longer sequences even with default values of sliding window. However will recheck if it is a bug in my training code. If not, will share a simulatable code by which the problem can be replicated. I will open a new ticket for that.
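For anyone who wants to reproduce the training-time comparison discussed above, here is a rough sketch (not from this thread) that times full training steps instead of only the forward pass; the checkpoint, sequence length, step count and learning rate are arbitrary choices:

```python
import datetime
import torch
from transformers import LongformerConfig, LongformerForSequenceClassification

def time_training_step(window_size, seq_len=4096, steps=8):
    config = LongformerConfig.from_pretrained("allenai/longformer-base-4096")
    config.attention_window = [window_size] * config.num_hidden_layers
    model = LongformerForSequenceClassification.from_pretrained(
        "allenai/longformer-base-4096", config=config
    )
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
    input_ids = torch.randint(5, 1000, (1, seq_len))  # random token ids as dummy input
    labels = torch.tensor([0])
    start = datetime.datetime.now()
    for _ in range(steps):
        loss = model(input_ids=input_ids, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    return (datetime.datetime.now() - start).total_seconds()

# Per the discussion above, the small windows should show similar timings because of overhead.
print([time_training_step(w) for w in (32, 64, 128, 256, 512)])
```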
transformers
18,185
closed
Fix BLOOM's softmax for half precisions
This PR aims at fixing the following issues: - In [this line](https://github.com/huggingface/transformers/blob/6561fbcc6e6d6e1a29fb848dc34710aa25feae78/src/transformers/models/bloom/modeling_bloom.py#L305), if we use minimum dtype values in the attention mask to mask some values. After adding a positive value, the masked values would come back to life. This PR proposes to use `-inf` in the attention mask instead, and only after the addition, we replace the inf values by the respective max/min dtype values ```python input_dtype = attention_scores.dtype attn_weights = (attention_scores * self.layer_number) + attention_mask # torch.finfo(torch.float16).min + 1 is no longer torch.finfo(torch.float16).min (no longer hidden) attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min)) attention_probs = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(input_dtype) ``` - Use `torch.clip` instead of `torch.max` to ensure we avoid both `-inf` and `+inf` for softmax - [Only relevent if we use `torch.finfo(dtype).min` in attention mask] In [this line](https://github.com/huggingface/transformers/blob/6561fbcc6e6d6e1a29fb848dc34710aa25feae78/src/transformers/models/bloom/modeling_bloom.py#L600), if we use the minimum dtype values, after performing the addition, we get mixed `-inf` and `torch.finfo(dtype).min` in the attention mask ```python if attention_mask is not None: # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]) combined_attention_mask = ( expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask # this gives `-inf` when we substract a number from `torch.finfo(dtype).min` ) ``` All tests (including slow ones) are passing. ✅ Related to: https://github.com/huggingface/transformers/pull/17437 Co-authored by: @younesbelkada cc @ydshieh @stas00
07-18-2022 13:35:12
07-18-2022 13:35:12
_The documentation is not available anymore as the PR was closed or merged._<|||||>The two situations you described indeed exist. However, I think there is no **real** necessity to deal with them. As long as there is at least one position to attend to, it doesn't matter if we have mixed `-inf` & `torch.finfo(...).min`, as well as if we have a positive value added to ``torch.finfo(...).min`. As long as the score(s) for the attended position(s) is/are within reasonable range, their scores will dominate the other unattended scores. (This should hold during the inference of a trained model, otherwise the model is broken.) And for a sequence without any position to attend, nothing we can't do. If we want to go really rigorous, we should multiply the softmaxed-scores by zeros for the unattended places. <|||||>@ydshieh Are we sure `attention_scores` can never have very large values ? Because the worst case scenario would be for `attention_scores` to have the biggest value for a hidden token. Also by comparing the outputs before and after this PR. It does seem that we get better generations (less repetition). But It needs more testing to be confirmed<|||||>@NouamaneTazi I don't think there is such guarantee, and what you mentioned is possible. However, it would be great if you can provide some examples for which you find this PR helps to get better results or solve some issues. Thank you!<|||||>So stupid question: instead of running `+` operator, can we not run `min` with an attention mask that's `torch.finfo(dtype).max` in not masked values and `torch.finfo(dtype).min` in masked values and be done with it? Or `torch.masked_fill(attention_mask, torch.findo(dtype).min)`? <|||||>> So stupid question: instead of running `+` operator, can we not run `min` with an attention mask that's `torch.finfo(dtype).max` in not masked values and `torch.finfo(dtype).min` in masked values and be done with it? Or `torch.masked_fill(attention_mask, torch.findo(dtype).min)`? I'm not sure what `+`operator are you refering to? Is it after the softmax operation? Or when creating the attention mask?<|||||>> I'm not sure what `+`operator are you refering to? Is it after the softmax operation? Or when creating the attention mask? I think @thomasw21 is talking about the place where an attn. score (where you say it could be positive) is added by the mask. Regarding @thomasw21 question, it's also a valid approach (it's like a clamp in different order and reducing some ops). The current approach (simply `+`) is probably from the first model(s), like BERT/GPT2. <|||||>Should be fixed in this PR: https://github.com/huggingface/transformers/pull/18344
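To illustrate the failure mode discussed in this PR with a toy example (the numbers are artificial and chosen only to make the effect visible):

```python
import torch

dtype = torch.float16
min_val = torch.finfo(dtype).min                      # -65504.0 for float16

scores = torch.tensor([3.0, 70000.0])                 # artificially large score at a masked position

# Masking with finfo.min: the addition pushes the masked score back into range ("comes back to life").
masked_with_min = scores + torch.tensor([0.0, min_val])
print(torch.softmax(masked_with_min, dim=-1))         # masked position dominates (~1.0)

# Masking with -inf, then clamping only to remove the infs (the approach taken in this PR):
masked_with_inf = scores + torch.tensor([0.0, float("-inf")])
masked_with_inf = torch.clamp(masked_with_inf, min=min_val)
print(torch.softmax(masked_with_inf, dim=-1))         # masked position stays ~0.0
```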
transformers
18,184
closed
[From pretrained] Allow download from subfolder inside model repo
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Currently it is not possible for `transformers` to download a model that is located inside a subfolder of a repo. E.g. for diffusion pipelines, a transformer model is often only one part of a pipeline of models so it makes a lot of sense to save checkpoints inside folders of model repos, see: https://huggingface.co/fusing/latent-diffusion-text2im-large/tree/main/bert Similarly for Dalle-mini where one would have a Bart and a VQ-VAE model inside the same repo. The PR would allow the user to do the following (which fails on master currently): ```py from transformers import BertModel BertModel.from_pretrained("fusing/latent-diffusion-text2im-large", revision="d5eab56", subfolder="bert") ``` **🚨🚨 IMPORTANT 🚨🚨**: This PR adds subfolder loading and saving functionality for both sharded and non-sharded PyTorch checkpoints. It should also work when loading a model with `from_tf=True` or `from_flax=True` - however this is currently not tested. I would be great if such tests could be added in a follow-up PR. Also cc @julien-c FYI
07-18-2022 13:33:46
07-18-2022 13:33:46
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,183
closed
Better default for offload_state_dict in from_pretrained
# What does this PR do? Seeing issues arise since the release of big model inference, I realized it's very confusing for users to have to set `offload_state_dict=True` when the device map picked with `device_map="auto"` contains some disk-offloaded weights. Therefore, this PR changes the default to `None` to pick a good default (basically `False` if there is no disk offload and `True` otherwise) while still letting the user choose the behavior they want by passing a value.
07-18-2022 13:33:12
07-18-2022 13:33:12
_The documentation is not available anymore as the PR was closed or merged._
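For reference, a rough sketch (not the actual implementation) of the default-picking behaviour this PR describes:

```python
# Leave `offload_state_dict=None` and only turn it on when the device map sends some weights to disk.
def resolve_offload_state_dict(offload_state_dict, device_map):
    if offload_state_dict is None:
        offload_state_dict = device_map is not None and "disk" in device_map.values()
    return offload_state_dict

print(resolve_offload_state_dict(None, {"model.embed_tokens": 0, "model.layers.30": "disk"}))  # True
print(resolve_offload_state_dict(None, {"model.embed_tokens": 0}))                             # False
```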
transformers
18,182
closed
Fix template for new models in README
# What does this PR do? This PR fixes the template for when `make fix-copies` adds new models to the README. They are probably new models that should be documented in main and not stable.
07-18-2022 13:25:42
07-18-2022 13:25:42
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,181
open
Test summary with previous PyTorch/TensorFlow versions
Initialized by @LysandreJik, we ran the tests with previous PyTorch/TensorFlow versions. The goal is to determine whether we should drop support for (some) earlier PyTorch/TensorFlow versions.

- This is not exactly the same as the scheduled daily CI (`torch-scatter`, `accelerate` not installed, etc.)
- Currently we only have the global summary (i.e. there is no number of test failures per model)

Here are the results (run around June 20, 2022):

- PyTorch testing has ~27100 tests
- TensorFlow testing has ~15700 tests

| Framework        | No. Failures |
| :--------------- | -----------: |
| PyTorch 1.10     | 50 |
| PyTorch 1.9      | 710 |
| PyTorch 1.8      | 1301 |
| PyTorch 1.7      | 1567 |
| PyTorch 1.6      | 2342 |
| PyTorch 1.5      | 3315 |
| PyTorch 1.4      | 3949 |
| TensorFlow 2.8   | 118 |
| TensorFlow 2.7   | 122 |
| TensorFlow 2.6   | 122 |
| TensorFlow 2.5   | 128 |
| TensorFlow 2.4   | 167 |

It looks like the number of failures in TensorFlow testing doesn't increase much across versions.

### So far my thoughts:
- All TF >= 2.4 should (still) be kept in the list of supported versions

### Questions
- What is your opinion on which versions to drop support for?
- Would you like to see the number of test failures per model?
- TensorFlow 2.3 needs CUDA 10.1 and requires building a special docker image. Do you think we should make the effort to get results for `TF 2.3`?
07-18-2022 12:51:37
07-18-2022 12:51:37
cc @LysandreJik @sgugger @patrickvonplaten @Rocketknight1 @gante @anton-l @NielsRogge @amyeroberts @alaradirik @stas00 @hollance to have your comments<|||||>TF 2.3 is quite old by now, and I wouldn't make a special effort to support it. Several nice TF features (like the Numpy-like API) only arrived in TF 2.4, and we're likely to use those a lot in future.<|||||>Hey @ydshieh, would you have a summary of the failing tests handy? I'm curious to see the reason why there are so many failures for PyTorch as soon as we leave the latest version. I'm quite confident that it's an issue in our tests rather than in our internal code, so seeing the failures would help. Thanks!<|||||>@LysandreJik I will re-run it. The previous run(s) have huge tables in the reports, and sending to Slack failed (3001 character limit). I finally ran it by disabling those blocks. Before re-running it, I need a approve for #17921 <|||||>I ran the past CI again which returns more information. Looking the report for `PyTorch 1.4` quickly, here are some observations: There is one error occurring in almost all models: - `from_pretrained`: OSError: Unable to load weights from pytorch checkpoint file for` - `torch.load`: Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old. Another one also occurs a lot (torchscript tests) - (line 625) AttributeError: module 'torch.jit' has no attribute '_state' An error occurs (specifically) to vision models (probably due to the convolution layers) - (line 97) RuntimeError: cuDNN error: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input. `BART` has 108/106 failures: - (line 240) RuntimeError: CUDA error: device-side assert triggered - Don't know what's wrong here yet Others - Other `AttributeError`: (not exhaustive) - AttributeError: module 'torch' has no attribute 'minimum' - AttributeError: 'builtin_function_or_method' object has no attribute 'fftn' - AttributeError: module 'torch' has no attribute 'square' - AttributeError: module 'torch.nn' has no attribute 'Hardswish' - AttributeError: module 'torch' has no attribute 'logical_and' - AttributeError: module 'torch' has no attribute 'pi' - AttributeError: module 'torch' has no attribute 'multiply'<|||||>Thanks for the report! Taking a look at the PyTorch versions, here are the dates at which they were releases: - 1.4: [Jan 16, 2020](https://pypi.org/project/torch/1.4.0/) - 1.5: [Apr 21, 2020](https://pypi.org/project/torch/1.5.0/) - 1.6: [Jul 28, 2020](https://pypi.org/project/torch/1.6.0/) - 1.7: [Oct 27, 2020](https://pypi.org/project/torch/1.7.0/) - 1.8: [Mar 4, 2021](https://pypi.org/project/torch/1.8.0/) - 1.9: [Jun 15, 2021](https://pypi.org/project/torch/1.9.0/) - 1.10: [Oct 21, 2021](https://pypi.org/project/torch/1.10.0/) - 1.11: [Mar 10, 2021](https://pypi.org/project/torch/1.11.0/) Most of the errors in `from_pretrained` seem to come from the zipfile format introduced by PyTorch 1.6. I think this is the most annoying one to patch by far. From a first look, I'd offer to drop support for all PyTorch version inferior to < 1.6 as these have been released *more than two years ago*. Do you have a link to a job containing all these failures? I'd be interested in seeing if the 2342 errors in PyTorch 1.6 are solvable simply or if they will require a significant refactor.<|||||>The link is [here](https://github.com/huggingface/transformers/actions/runs/2742416113). 
But since it contains too many jobs (all models x all versions ~= 3200 jobs), it just shows `[Unicorn!] This page is taking too long to load`. I can re-run specifically for PyTorch 1.6 only, and will post a link later.<|||||>> From a first look, I'd offer to drop support for all PyTorch version inferior to < 1.6 as these have been released more than two years ago. I second that. While we are at it, do we want to establish an official shifting window of how far back we want to support pytorch versions for? As in minimum - we support at least 2 years of pytorch? If it's easy to support longer we would but it'd be easy to cut off if need be. The user always has the older `transformers` that they can pin to if they really need a very old pytorch support.<|||||>Yes, that would work fine with me. If I understand correctly, that's how libraries in the PyData ecosystem (scikit-learn, numpy) manage the support of Python versions: they drop support for versions older than 2 years (https://github.com/scikit-learn/scikit-learn/issues/20965, https://github.com/scikit-learn/scikit-learn/issues/20084, [scipy toolchaib](https://scipy.github.io/devdocs/dev/toolchain.html), https://github.com/scipy/scipy/pull/14655). Dropping support for PyTorch/Flax/TensorFlow versions that have been released more than two years ago sounds good to me. That is somewhat already the case (see failing tests), but we're just not aware.<|||||>Hi, I am wondering what it means `a PyTorch/TensorFlow/Flax version is supported`. I guess it doesn't imply all models work under those framework versions, but would like to know if there is more explicit definition (for `transformers`, or more generally, in open source projects).<|||||>Ideally it should mean that all models work/all tests pass apart from functionality explicitly having versions tests (like CUDA bfloat16 or torch FX where we test against a specific PyTorch version).
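A hedged side note on the `torch.load` failures listed above: checkpoints written by PyTorch >= 1.6 use a zip-based serialization format that older versions cannot read. If someone really needs to feed such a checkpoint to an old PyTorch, one option is to re-save it in the legacy format with a recent PyTorch; the file names below are placeholders:

```python
import torch

# Load with a recent PyTorch, then write the weights back in the pre-1.6 format.
state_dict = torch.load("pytorch_model.bin", map_location="cpu")
torch.save(state_dict, "pytorch_model_legacy.bin", _use_new_zipfile_serialization=False)
```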
transformers
18,180
closed
failed to use PyTorch jit mode due to: forward() is missing value for argument 'position_ids'.
I want to use PyTorch JIT to improve my inference speed on CPU. My model is BertForTokenClassification, and I found that position_ids must be passed to the model explicitly if I want to use PyTorch JIT. Could you please give some suggestions on how to solve this problem, or is position_ids simply indispensable here? Thank you!
07-18-2022 12:15:29
07-18-2022 12:15:29
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
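Since this issue went stale without a code answer, here is a possible workaround sketch (not an official recipe): build `position_ids` yourself and pass it as one of the example inputs when tracing, so `forward()` receives the argument it complains about. The checkpoint name is just an example:

```python
import torch
from transformers import BertTokenizer, BertForTokenClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForTokenClassification.from_pretrained("bert-base-uncased", torchscript=True)
model.eval()

inputs = tokenizer("Hello world", return_tensors="pt")
# Default absolute positions, shaped (batch_size, seq_len).
position_ids = torch.arange(inputs["input_ids"].shape[1]).unsqueeze(0)

# Positional order of BERT's forward: input_ids, attention_mask, token_type_ids, position_ids.
traced = torch.jit.trace(
    model,
    (inputs["input_ids"], inputs["attention_mask"], inputs["token_type_ids"], position_ids),
)
```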
transformers
18,179
closed
Cannot save TFTapasModel as SavedModel
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu113 (True) - Tensorflow version (GPU?): 2.8.2 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @Rocketknight1 @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import TapasTokenizer, TFTapasModel import pandas as pd tokenizer = TapasTokenizer.from_pretrained("google/tapas-base") model = TFTapasModel.from_pretrained("google/tapas-base") data = { "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Age": ["56", "45", "59"], "Number of movies": ["87", "53", "69"], } table = pd.DataFrame.from_dict(data) queries = ["How many movies has George Clooney played in?", "How old is Brad Pitt?"] inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="tf") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state model.save_pretrained("test",saved_model=True) ``` ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) [<ipython-input-11-637c488e6341>](https://localhost:8080/#) in <module>() ----> 1 model.save_pretrained("test",saved_model=True) 2 frames [/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py](https://localhost:8080/#) in autograph_handler(*args, **kwargs) 1145 except Exception as e: # pylint:disable=broad-except 1146 if hasattr(e, "ag_error_metadata"): -> 1147 raise e.ag_error_metadata.to_exception(e) 1148 else: 1149 raise ValueError: in user code: File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py", line 806, in serving * output = self.call(inputs) File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py", line 981, in run_call_with_unpacked_inputs * return func(self, **unpacked_inputs) File "/usr/local/lib/python3.7/dist-packages/transformers/models/tapas/modeling_tf_tapas.py", line 1008, in call * outputs = self.tapas( File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler ** raise e.with_traceback(filtered_tb) from None ValueError: Exception encountered when calling layer "tapas" (type TFTapasMainLayer). in user code: File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py", line 981, in run_call_with_unpacked_inputs * return func(self, **unpacked_inputs) File "/usr/local/lib/python3.7/dist-packages/transformers/models/tapas/modeling_tf_tapas.py", line 790, in call * embedding_output = self.embeddings( File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler ** raise e.with_traceback(filtered_tb) from None ValueError: Exception encountered when calling layer "embeddings" (type TFTapasEmbeddings). 
in user code: File "/usr/local/lib/python3.7/dist-packages/transformers/models/tapas/modeling_tf_tapas.py", line 223, in call * col_index = IndexMap(token_type_ids[:, :, 1], self.type_vocab_sizes[1], batch_dims=1) ValueError: Index out of range using input dim 2; input has only 2 dims for '{{node tapas/embeddings/strided_slice_2}} = StridedSlice[Index=DT_INT32, T=DT_INT32, begin_mask=3, ellipsis_mask=0, end_mask=3, new_axis_mask=0, shrink_axis_mask=4](token_type_ids, tapas/embeddings/strided_slice_2/stack, tapas/embeddings/strided_slice_2/stack_1, tapas/embeddings/strided_slice_2/stack_2)' with input shapes: [?,?], [3], [3], [3] and with computed input tensors: input[3] = <1 1 1>. Call arguments received: • input_ids=tf.Tensor(shape=(None, None), dtype=int32) • position_ids=None • token_type_ids=tf.Tensor(shape=(None, None), dtype=int32) • inputs_embeds=None • training=False Call arguments received: • self=tf.Tensor(shape=(None, None), dtype=int32) • input_ids=None • attention_mask=tf.Tensor(shape=(None, None), dtype=int32) • token_type_ids=tf.Tensor(shape=(None, None), dtype=int32) • position_ids=None • head_mask=None • inputs_embeds=None • output_attentions=False • output_hidden_states=False • return_dict=True • training=False ``` ### Expected behavior It is supposed to make a SavedModel but instead, I get this error mentioned above. The SavedModel is needed for TensorFlow Serving .
07-18-2022 11:34:51
07-18-2022 11:34:51
Following the merge of #18153, the reproduction snippet runs on main without error.
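For versions that predate that fix, one possible workaround sketch (untested beyond the shapes involved) is to export with an explicit serving signature whose `token_type_ids` is 3-D, since TAPAS uses 7 token-type dimensions; `model` below is assumed to be the `TFTapasModel` from the snippet above:

```python
import tensorflow as tf

serving_spec = {
    "input_ids": tf.TensorSpec((None, None), tf.int32, name="input_ids"),
    "attention_mask": tf.TensorSpec((None, None), tf.int32, name="attention_mask"),
    "token_type_ids": tf.TensorSpec((None, None, 7), tf.int32, name="token_type_ids"),
}

@tf.function(input_signature=[serving_spec])
def serving_fn(inputs):
    # Call the Keras model with the dict of tensors, as in eager mode.
    return model(inputs)

# Export a SavedModel with the custom signature instead of the default 2-D one.
tf.saved_model.save(model, "test/saved_model/1", signatures=serving_fn)
```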
transformers
18,178
closed
ImportError: cannot import name 'BloomTokenizer' from 'transformers'
### System Info transformers==4.20.1 torch==1.12.0 Python 3.9.13 GPU: yes (running on GCP) ### Who can help? @SaulLu ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import BloomTokenizer, BloomModel ``` I was following https://huggingface.co/docs/transformers/model_doc/bloom Specifically ``` from transformers import BloomTokenizer, BloomModel import torch tokenizer = BloomTokenizer.from_pretrained("bigscience/Bloom") model = BloomModel.from_pretrained("bigscience/Bloom") inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ### Expected behavior No import error.
07-18-2022 11:08:42
07-18-2022 11:08:42
Hi @rhvaz, I am sincerely sorry that you have encountered this issue. We do have a small typo in our documentation, which has been resolved in the PR https://github.com/huggingface/transformers/pull/18005 but not yet deployed on our website. In the meantime, here is the snippet that should work: ```python from transformers import BloomTokenizerFast, BloomModel import torch tokenizer = BloomTokenizerFast.from_pretrained("bigscience/Bloom") model = BloomModel.from_pretrained("bigscience/Bloom") inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` <|||||>Hi @SaulLu many thanks for looking at this so quickly! I tried the snippet you shared and I am now having the following permission issues ``` OSError: bigscience/Bloom is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`. ``` I get the exception when I try to run either of the lines below ``` tokenizer = BloomTokenizerFast.from_pretrained("bigscience/Bloom") model = BloomModel.from_pretrained("bigscience/Bloom") ```<|||||>You spotted another typo in the name of the checkpoint! Here's a new snippet that should work: ```python from transformers import BloomTokenizerFast, BloomModel import torch tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom") model = BloomModel.from_pretrained("bigscience/bloom") inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` I also took this opportunity to share the same fix in the documentation in PR #18187
transformers
18,177
closed
Fix expected loss values in some (m)T5 tests
# What does this PR do? Fix CI failures regarding some T5 and MT5 tests. The PR #18013 and the subsequent fix in #18029 probably tried to get the expected loss values without setting ```python os.environ["NVIDIA_TF32_OVERRIDE"] = "0" ```
07-18-2022 10:50:39
07-18-2022 10:50:39
_The documentation is not available anymore as the PR was closed or merged._
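For context, the same TF32 behaviour can also be pinned from Python before any CUDA work starts; this is a sketch equivalent to the environment variable mentioned in the PR description, not part of the PR itself.

```python
# Equivalent, in-process way to disable TF32 (the env var must be set before
# CUDA initializes to have the same effect).
import os
os.environ["NVIDIA_TF32_OVERRIDE"] = "0"

import torch
torch.backends.cuda.matmul.allow_tf32 = False  # no TF32 matmuls
torch.backends.cudnn.allow_tf32 = False        # no TF32 convolutions
```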
transformers
18,176
closed
Model Loading Imbalance
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.31 - Python version: 3.10.4 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: I guess I tried :) ### Who can help? @patil-suraj @patrickvonplaten @LysandreJik This is a OPT related issue. ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python with init_empty_weights(): model = OPTModel.from_pretrained("facebook/opt-6.7b", device_map="auto") ``` Model loading imbalance across the GPUs. <img width="902" alt="image" src="https://user-images.githubusercontent.com/45140242/179495680-b1a4dae5-be85-4818-a969-8a58346be57d.png"> ### Expected behavior Model parameter numbers are balanced.
07-18-2022 10:47:02
07-18-2022 10:47:02
I think @sgugger is working on that as we speak<|||||>Yes, we will add support for more options to `device_map`, one of which is `"balanced"`, after the next release. It's already available in Accelerate if you want to try it out.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
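A minimal sketch of what the suggested option looks like, assuming a transformers/Accelerate version that accepts `"balanced"` (at the time of the thread it was only available in Accelerate itself):

```python
# Sketch, assuming a release where device_map="balanced" is accepted.
from transformers import OPTModel

model = OPTModel.from_pretrained("facebook/opt-6.7b", device_map="balanced")
print(model.hf_device_map)  # inspect which modules landed on which GPU
```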
transformers
18,175
closed
BLOOM minor fixes small test
Small modifications: - Modified docstrings in tests - Added the correct revision for the 350m model - Removed the right-padding/left-padding test cc @ydshieh @NouamaneTazi @Muennighoff
07-18-2022 09:50:02
07-18-2022 09:50:02
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,174
closed
NLLB model file for the 600M model
1. Please, where can I locate the MODEL_FILE, i.e. the path to the Python file containing the model architecture? I believe the model architecture file will contain only one class definition extended from torch.nn.modules. 2. Please, where can I locate the handler file that can be used for TorchServe inference logic? Please help me out with the location of the model file and the handler file.
07-18-2022 09:06:26
07-18-2022 09:06:26
Hi, The 600 million parameter model can be found here: https://huggingface.co/facebook/nllb-200-distilled-600M. The weights of the model can be found in the "files and versions" tab (check the "pytorch_model.bin" file): https://huggingface.co/facebook/nllb-200-distilled-600M/tree/main.<|||||>Closing this issue as I feel like this has been answered, feel free to reopen.
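For the first question above, a minimal sketch of loading that checkpoint through the Auto classes rather than handling `pytorch_model.bin` by hand; a TorchServe handler is a separate, user-written file and is not shipped with the checkpoint.

```python
# Minimal sketch: the Auto classes pull the architecture and weights for you.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
```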
transformers
18,173
closed
Can't Run UL2
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.31 - Python version: 3.10.4 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: I guess I tried :) ### Who can help? @patrickvonplaten , led porting UL2. ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python model = T5ForConditionalGeneration.from_pretrained("google/ul2", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16, device_map='auto').to('cuda') ``` Error message ```python Exception has occurred: ValueError weight is on the meta device, we need a `value` to put in on 1. File "/media/ntu/volume1/home/s121md302_06/workspace/code/yalb/ul2_test.py", line 25, in <module> model = T5ForConditionalGeneration.from_pretrained("google/ul2", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16, device_map='auto').to('cuda') ``` ![image](https://user-images.githubusercontent.com/45140242/179469229-5784bc16-9405-489d-b493-838b7f290710.png) ### Expected behavior I expect the model to be successfully loaded with sharded parameters.
07-18-2022 08:07:25
07-18-2022 08:07:25
Looks like lm_head.weight is causing the problem. But I thought this item was in _keys_to_ignore_on_load_missing of T5ForConditionalGeneration. How could this happen?<|||||>google/ul2 · Splitting the model of multiple GPU's https://huggingface.co/google/ul2/discussions/4
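The error in the report comes from combining `device_map="auto"` with an extra `.to('cuda')`. A sketch of the usual fix the linked discussion converges on; the dtype and checkpoint are the reporter's, the offload path is an assumed placeholder:

```python
# Sketch of the usual fix: with device_map="auto" the weights are already
# dispatched (possibly offloaded), so the trailing .to("cuda") is dropped.
import torch
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained(
    "google/ul2",
    low_cpu_mem_usage=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    offload_folder="./offload_folder",  # assumed path, only needed if weights spill to disk
)
# note: no .to("cuda") here
```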
transformers
18,172
closed
FIX: set save state in EarlyStoppingCallback
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #16620 (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
07-18-2022 04:08:48
07-18-2022 04:08:48
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18172). All of your documentation changes will be reflected on that endpoint.<|||||>As mentioned in the issue you link to, this is not the right fix. This callback is not responsible for saving, only for interrupting training.<|||||>`control.should_save = True` does not work for me<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
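A sketch of the pattern the reviewer is describing: checkpointing is driven by the `TrainingArguments`, while the callback only decides when to stop. The model and dataset variables below are placeholders.

```python
# Sketch: saving/stopping split between TrainingArguments and the callback.
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",
    save_strategy="steps",              # saving is configured here, not in the callback
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
)
trainer = Trainer(
    model=model,                        # placeholder
    args=args,
    train_dataset=train_ds,             # placeholder
    eval_dataset=eval_ds,               # placeholder
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
```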
transformers
18,171
closed
add ONNX support for swin transformer
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your great contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same person ---sometimes notifications get lost. --> <!-- Remove if not applicable --> Addresses #16308 ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-18-2022 03:15:22
07-18-2022 03:15:22
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18171). All of your documentation changes will be reflected on that endpoint.<|||||>Hey, @bibhabasumohapatra, Thanks for contributing to ONNX Config support. The PR looks almost good. Could you run `make fix-copies` to fix the CI, as stated in the CI error statement?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>To avoid your work from falling into limbo, I will ping @lewtun and @sgugger.<|||||>@lewtun could have a look at this PR?<|||||>Hi @bibhabasumohapatra, sorry just getting back to this PR - would you mind rebasing on `main` to resolve the merge conflicts and then pushing again to check the CI is green? After that, I think this will be good to go!<|||||>Sorry for the closed PR, actually while rebasing by mistake I clicked "update branch" on my repo, which deleted the commits and automatically closed the PR, I will do it again quickly with other PR @lewtun
transformers
18,170
closed
Allow loading pretrained sharded PyTorch checkpoints into Flax models
Motivation: Sharded PyTorch checkpoints cannot currently be loaded into Flax models; this may be desirable in some cases (e.g. "google/ul2"). Changes: I added a few lines to `modeling_flax_utils.py` to support this behavior. The behavior of the added code exactly matches how sharded checkpoints are loaded in `modeling_utils.py` for PyTorch models. @patrickvonplaten, @patil-suraj
07-18-2022 03:01:12
07-18-2022 03:01:12
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18170). All of your documentation changes will be reflected on that endpoint.<|||||>Oops! Thanks, just added that import.<|||||>Now you'll need to run `make style` to fix the formatting issues :-)<|||||>done!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
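A sketch of what the change enables once merged, pulling a sharded PyTorch checkpoint straight into a Flax model:

```python
# Sketch, assuming a build that includes this PR: from_pt=True now also works
# when the PyTorch checkpoint is split across several shard files.
from transformers import FlaxT5ForConditionalGeneration

model = FlaxT5ForConditionalGeneration.from_pretrained("google/ul2", from_pt=True)
```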
transformers
18,169
closed
Update translation.mdx
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/huggingface/transformers/issues/18166 - [ n/a] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ n/a] Did you write any new necessary tests? ## Who can review? t5: @patrickvonplaten, @patil-suraj Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-18-2022 02:57:21
07-18-2022 02:57:21
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,168
closed
[DRAFT] Update group_texts in run_clm.py
Had to make this change to prevent error's when fine-tuning on SageMaker. Without this change, there would be text groups that were too short. # What does this PR do? I made a small change to run_clm.py that fixed an error I received on amazon sagemaker during finetuning. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-18-2022 02:13:34
07-18-2022 02:13:34
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,167
closed
Group_texts in run_clm.py will add shorter than block_size groups on intermediately sized training sets.
### System Info SageMaker using transformers 4.17 and attempting to fine-tune GPT2 and GPT-Neo. ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce: 1: Use a dataset of about 10MB 2: Run run_clm.py on SageMaker using the latest supported version (4.17 at the time of writing) 3: Part way through training receive error stating: [ValueError: expected sequence of length 1024 at dim 1 (got 507)](https://discuss.huggingface.co/t/valueerror-expected-sequence-of-length-1024-at-dim-1-got-507/20390) ### Expected behavior group_texts in run_clm should drop all sequences that are not the correct blocksize to prevent such an error. Additional context: This bug appears to be introduced by commit: 6f1adc43344a4ebe6fb1ecc018df9d6c092370cf Removing the check for total_length < block_size resolves the issue and training completes without issue.
07-18-2022 02:00:02
07-18-2022 02:00:02
If you remove that line, you will get no training at all, since there is going to be 0 batches left. It is there to ensure a small dataset yields exactly one batch.<|||||>It looks like I must have hit some sort of an edge case. I think what happened was that it got to the end of my file and no longer had 1000 examples to provide the function `group_texts`. So in the case where the dataset lines up wrong, it won't work. Perhaps the `drop_last_batch` flag should be set by default during the group_texts phase and a parse_flag for "small datasets" should be introduced?<|||||>Using run_clm.py from the commit by @spanglies solved this issue (I am forever grateful, this has been frustrating). You have to remove references to telemetrics and to check_min_version() to make it work on Sagemaker with estimator/fit. However the same error shows up in evaluation, which I am disabling for now. Happy to finally have made it to a saved model, as that was my goal for this summer...<|||||>Hey glad that helped you @nittonfemton It wouldn't be hard to apply the same filter to the evaluation data. I hadn't because it was also my goal to have a trained model and I was less concerned with validating it. The way I had gotten it to work on sage maker was to apply the commit to v4.17.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
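For reference, a minimal sketch of the `group_texts` variant being discussed, which always drops the short tail so every group is exactly `block_size` tokens long; as noted above, this yields zero batches on datasets smaller than one block. Applied with `datasets.map(batched=True)`, every surviving example then has a fixed length of `block_size`.

```python
# Sketch: concatenate tokenized texts, then keep only whole block_size chunks.
def group_texts(examples, block_size=1024):
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    total_length = (total_length // block_size) * block_size  # drop the short tail
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result
```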
transformers
18,166
closed
Getting error following the official docs for T5 translation fine-tuning: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'new_zeros'
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu113 (False) - Tensorflow version (GPU?): 2.8.2 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @patrickvonplaten ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When I follow the TF steps on [huggingface.co/docs/transformers/tasks/translation](https://huggingface.co/docs/transformers/tasks/translation), I get the error `'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'new_zeros'`. I created a Colab notebook that reproduces the issue https://github.com/gorkemozkaya/Data-Science-Notes/blob/master/reproducing_bugs/Error_with_the_translation_fine_tuning_example.ipynb ### Expected behavior Getting the data pipeline in the TF-dataset form without getting an error
07-17-2022 21:51:24
07-17-2022 21:51:24
I noticed the issue is due to using the PT model instead of the TF, but the documentation still needs to be fixed, because it is using the `model` for creating the `data_collator` before the model is actually loaded, for both TF and PT. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>https://github.com/huggingface/transformers/pull/18169#issuecomment-1186713812 solves the issue
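A sketch of the ordering fix described above for the TF path: load the TF model first, then build the collator from it with TF tensors. The checkpoint name is a placeholder.

```python
# Sketch: the TF model is created before the collator that references it.
from transformers import AutoTokenizer, DataCollatorForSeq2Seq, TFAutoModelForSeq2SeqLM

checkpoint = "t5-small"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)
data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model, return_tensors="tf")
```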
transformers
18,165
closed
Fix beam search computing wrong `next_indices`
# What does this PR do? Beam search used `next_indices = (next_tokens / vocab_size).long()` to compute the indices of the best beams. This, however, uses [torch.true_divide](https://pytorch.org/docs/stable/generated/torch.true_divide.html#torch-true-divide) which would lead to numerical errors and make the following snippet fail: ```py import torch next_tokens = torch.tensor([[0, 50257]], dtype=torch.int64, device='cuda:0') vocab_size = 50257 expected_next_indices = torch.tensor([[0,1]], dtype=torch.int64, device='cuda:0') next_indices = (next_tokens / vocab_size).long() print(next_indices) assert torch.all(next_indices == expected_next_indices) # Fails ``` The simple fix in this PR uses floor division to avoid the aforementioned problem: ```py # ... next_indices = torch.div(next_tokens, vocab_size, rounding_mode='floor').long() print(next_indices) assert torch.all(next_indices == expected_next_indices) # Passes ``` I have dug out this bug while recomputing the beam search scores by hand for gpt2. If needed, I can add an end-to-end high-level reproducibility test with gpt2 and the tricky input_ids, and possibly add it as a unit test. ## Who can review? @patrickvonplaten
07-17-2022 17:24:14
07-17-2022 17:24:14
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18165). All of your documentation changes will be reflected on that endpoint.
transformers
18,164
closed
Cannot save TFSwinForImageClassification as SavedModel
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu113 (True) - Tensorflow version (GPU?): 2.8.2 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?:No ### Who can help? @Rocketknight1 @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoFeatureExtractor, TFSwinForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained(swinModel) model = TFSwinForImageClassification.from_pretrained(swinModel) inputs = feature_extractor(images=image, return_tensors="tf") outputs = model(inputs.pixel_values) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = tf.math.argmax(logits,-1).numpy()[0] print("Predicted class:", model.config.id2label[predicted_class_idx]) class MySwin(TFSwinForImageClassification): @tf.function( input_signature=[ { "pixel_values": tf.TensorSpec((None, None,None,None), tf.float32, name="serving1_pixel_values"), } ] ) def serving1(self, inputs): outputs = self.call(pixel_values=inputs["pixel_values"]) return self.serving_output(outputs) myswin = MySwin.from_pretrained(swinModel) tf.saved_model.save(myswin, swin_EXPORT_PATH, signatures={ "serving1": myswin.serving1, # "serving2": mygpt2.serving2 }) ``` ``` All model checkpoint layers were used when initializing MySwin. All the layers of MySwin were initialized from the model checkpoint at microsoft/swin-tiny-patch4-window7-224. If your task is similar to the task the model of the checkpoint was trained on, you can already use MySwin for predictions without further training. 
--------------------------------------------------------------------------- OperatorNotAllowedInGraphError Traceback (most recent call last) [<ipython-input-13-b219bb00369a>](https://localhost:8080/#) in <module>() 1 myswin = MySwin.from_pretrained(swinModel) 2 tf.saved_model.save(myswin, swin_EXPORT_PATH, signatures={ ----> 3 "serving1": myswin.serving1, 4 # "serving2": mygpt2.serving2 5 }) 14 frames [/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py](https://localhost:8080/#) in autograph_handler(*args, **kwargs) 1145 except Exception as e: # pylint:disable=broad-except 1146 if hasattr(e, "ag_error_metadata"): -> 1147 raise e.ag_error_metadata.to_exception(e) 1148 else: 1149 raise OperatorNotAllowedInGraphError: in user code: File "<ipython-input-11-84a42b1aca69>", line 10, in serving1 * outputs = self.call(pixel_values=inputs["pixel_values"]) File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py", line 1426, in run_call_with_unpacked_inputs * return func(self, **unpacked_inputs) File "/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py", line 1439, in call * outputs = self.swin( File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler ** raise e.with_traceback(filtered_tb) from None OperatorNotAllowedInGraphError: Exception encountered when calling layer "swin" (type TFSwinMainLayer). in user code: File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py", line 1426, in run_call_with_unpacked_inputs * return func(self, **unpacked_inputs) File "/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py", line 1142, in call * encoder_outputs = self.encoder( File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler ** raise e.with_traceback(filtered_tb) from None OperatorNotAllowedInGraphError: Exception encountered when calling layer "encoder" (type TFSwinEncoder). in user code: File "/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py", line 906, in call * layer_outputs = layer_module( File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler ** raise e.with_traceback(filtered_tb) from None OperatorNotAllowedInGraphError: Exception encountered when calling layer "layers.0" (type TFSwinStage). in user code: File "/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py", line 838, in call * layer_outputs = layer_module( File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler ** raise e.with_traceback(filtered_tb) from None OperatorNotAllowedInGraphError: Exception encountered when calling layer "blocks.0" (type TFSwinLayer). in user code: File "/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py", line 733, in call * self.set_shift_and_window_size(input_dimensions) File "/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py", line 672, in set_shift_and_window_size * if min(input_resolution) <= self.window_size: OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature. 
Call arguments received: • hidden_states=tf.Tensor(shape=(None, None, 96), dtype=float32) • input_dimensions=('tf.Tensor(shape=(), dtype=int32)', 'tf.Tensor(shape=(), dtype=int32)') • head_mask=None • output_attentions=False • training=False Call arguments received: • hidden_states=tf.Tensor(shape=(None, None, 96), dtype=float32) • input_dimensions=('tf.Tensor(shape=(), dtype=int32)', 'tf.Tensor(shape=(), dtype=int32)') • head_mask=None • output_attentions=False • training=False Call arguments received: • hidden_states=tf.Tensor(shape=(None, None, 96), dtype=float32) • input_dimensions=('tf.Tensor(shape=(), dtype=int32)', 'tf.Tensor(shape=(), dtype=int32)') • head_mask=['None', 'None', 'None', 'None'] • output_attentions=False • output_hidden_states=False • return_dict=True • training=False Call arguments received: • self=tf.Tensor(shape=(None, None, None, None), dtype=float32) • pixel_values=None • bool_masked_pos=None • head_mask=None • output_attentions=False • output_hidden_states=False • return_dict=True • training=False ``` ### Expected behavior It is supposed to make a SavedModel but instead, I get this error mentioned above. The SavedModel is needed for TensorFlow Serving .
07-17-2022 13:49:23
07-17-2022 13:49:23
cc @gante and @amyeroberts <|||||>@amyeroberts is this related to what you've been investigating? If not, I can have a go at it :)<|||||>@gante Yep - I believe so. I've opened a PR here: https://github.com/huggingface/transformers/pull/18153<|||||>Hey @amyeroberts I updated my transformer code locally with the changes you made in the PR for swin but I still get the same result ``` --------------------------------------------------------------------------- OperatorNotAllowedInGraphError Traceback (most recent call last) [<ipython-input-4-637c488e6341>](https://localhost:8080/#) in <module>() ----> 1 model.save_pretrained("test",saved_model=True) 2 frames [/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py](https://localhost:8080/#) in autograph_handler(*args, **kwargs) 1145 except Exception as e: # pylint:disable=broad-except 1146 if hasattr(e, "ag_error_metadata"): -> 1147 raise e.ag_error_metadata.to_exception(e) 1148 else: 1149 raise OperatorNotAllowedInGraphError: in user code: File "/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py", line 979, in serving * output = self.call(inputs) File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py", line 1457, in run_call_with_unpacked_inputs * return func(self, **unpacked_inputs) File "/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py", line 1470, in call * outputs = self.swin( File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler ** raise e.with_traceback(filtered_tb) from None OperatorNotAllowedInGraphError: Exception encountered when calling layer "swin" (type TFSwinMainLayer). in user code: File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py", line 1457, in run_call_with_unpacked_inputs * return func(self, **unpacked_inputs) File "/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py", line 1160, in call * encoder_outputs = self.encoder( File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler ** raise e.with_traceback(filtered_tb) from None OperatorNotAllowedInGraphError: Exception encountered when calling layer "encoder" (type TFSwinEncoder). in user code: File "/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py", line 905, in call * layer_outputs = layer_module( File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler ** raise e.with_traceback(filtered_tb) from None OperatorNotAllowedInGraphError: Exception encountered when calling layer "layers.0" (type TFSwinStage). in user code: File "/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py", line 837, in call * layer_outputs = layer_module( File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler ** raise e.with_traceback(filtered_tb) from None OperatorNotAllowedInGraphError: Exception encountered when calling layer "blocks.0" (type TFSwinLayer). 
in user code: File "/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py", line 732, in call * self.set_shift_and_window_size(input_dimensions) File "/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py", line 671, in set_shift_and_window_size * if min(input_resolution) <= self.window_size: OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature. Call arguments received: • hidden_states=tf.Tensor(shape=(None, None, 96), dtype=float32) • input_dimensions=('tf.Tensor(shape=(), dtype=int32)', 'tf.Tensor(shape=(), dtype=int32)') • head_mask=None • output_attentions=False • training=False Call arguments received: • hidden_states=tf.Tensor(shape=(None, None, 96), dtype=float32) • input_dimensions=('tf.Tensor(shape=(), dtype=int32)', 'tf.Tensor(shape=(), dtype=int32)') • head_mask=None • output_attentions=False • training=False Call arguments received: • hidden_states=tf.Tensor(shape=(None, None, 96), dtype=float32) • input_dimensions=('tf.Tensor(shape=(), dtype=int32)', 'tf.Tensor(shape=(), dtype=int32)') • head_mask=['None', 'None', 'None', 'None'] • output_attentions=False • output_hidden_states=False • return_dict=True • training=False Call arguments received: • self=tf.Tensor(shape=(None, None, None, None), dtype=float32) • pixel_values=None • bool_masked_pos=None • head_mask=None • output_attentions=False • output_hidden_states=False • return_dict=True • training=False ``` ![image](https://user-images.githubusercontent.com/66001253/179504748-a1673b1d-0261-45c8-88e9-8319bf89d243.png) <|||||>OK @ahmedlone127. Thanks for letting me know. I'll dig into this some more.<|||||>Thanks <|||||>Following merging of #18153 the reproduction snippet runs on main without error.
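A sketch of the export the thread is after, assuming a transformers build that already includes #18153:

```python
# Sketch: once the fix is in, the standard SavedModel export path should work.
from transformers import TFSwinForImageClassification

model = TFSwinForImageClassification.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
model.save_pretrained("swin_savedmodel", saved_model=True)  # writes a TF SavedModel for serving
```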
transformers
18,163
closed
Fine tune TrOCR for persian language
### Model description Hello! I'm a newbie and I am trying to use TrOCR for recognizing Persian digital text (like PDFs) from images. I don't know what the requirements will be if I want to fine-tune a pre-trained TrOCR model but with a multilingual cased decoder. I've followed this post https://github.com/huggingface/transformers/issues/15823 but it doesn't work out for Persian with the info they gave. Please guide me on how I should proceed. I've seen that there are some models in https://huggingface.co/models?language=fa&sort=downloads but I can't figure out how to use them. Please guide me. ### Open source status - [x] The model implementation is available - [x] The model weights are available ### Provide useful links for the implementation _No response_
07-17-2022 08:51:39
07-17-2022 08:51:39
Hi, I explain how to train TrOCR on a different language here: https://github.com/huggingface/transformers/issues/14195#issuecomment-1039204836<|||||>Hi Niels! Thank you for your response. the thing is that I use: ``` from transformers import VisionEncoderDecoderModel device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained("google/vit-base-patch16-224-in21k", "xlm-roberta-base") model.to(device) ``` And I use your Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_native_PyTorch code and at the end of the code my last result is this error: ValueError: Input image size (384*384) doesn't match model (224*224). What's wrong? <|||||>It seems that the images you provide are of size 384x384, but the model (the ViT encoder) expects them to be of size 224x224.<|||||>I changed the images size but it still says that <|||||>``` import os, sys path = '/content/drive/MyDrive/data_test/image/' new_path = '/content/drive/MyDrive/data_test/newimage/' dirs = os.listdir( path ) def resize(): for item in dirs: source = path + item newsource = new_path + item im = Image.open(source) f, e = os.path.splitext(source) imResize = im.resize((224,224), Image.ANTIALIAS) imResize.save(newsource) ```<|||||>this part still gives 384, 384: ``` encoding = train_dataset[0] for k,v in encoding.items(): print(k, v.shape) encoding = eval_dataset[0] for k,v in encoding.items(): print(k, v.shape)` ```<|||||>the problem in error seems to be because of the ViT and in your own code the training set is 384*384 as the last piece of code I commented shows what's wrong? <|||||>The model I'm fine-tuning in my [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_native_PyTorch.ipynb) expects images to be of size 384, as seen [here](https://huggingface.co/microsoft/trocr-base-printed/blob/main/config.json#L104).<|||||>I used "google/vit-base-patch16-224-in21k" and "xlm-roberta-base". the first one you suggested in https://github.com/huggingface/transformers/issues/14195#issuecomment-1039204836 what is the issue that says the model has the picture of size 224*224?<|||||>Yes, `google/vit-base-patch16-224-in21k` expects images to be of size 224, but you're resizing the images to 384.<|||||>Thank you it got solved! How much should be my validation CER at the end? what range is good enough?<|||||>I'm fine tuning trocr for Farsi language and I did it once using your code and it was ok and now with another larger dataset I get different label sizes and it's a problem. after this part: `encoding = train_dataset[0] for k,v in encoding.items(): print(k, v.shape) encoding = eval_dataset[0] for k,v in encoding.items(): print(k, v.shape)` I get: > pixel_values torch.Size([3, 224, 224]) > labels torch.Size([261]) > pixel_values torch.Size([3, 224, 224]) > labels torch.Size([272]) label torch sizes are not the same although I'm using https://github.com/NielsRogge/Transformers-Tutorials/tree/master/TrOCR and it the code it says that the max_length for labels should be 128. how can I change the code so it'll be the same size for all of the data?<|||||>> How much should be my validation CER at the end? CER (character error rate) is a number between 0 and 1, the closer to 0 the better. Regarding the labels, you need to make sure each target sequence gets padded/truncated to the same length, to make batching possible. <|||||>I'm using your own code. 
it has: `labels = self.processor.tokenizer(text, padding="max_length", max_length=self.max_target_length).input_ids` and `self.max_target_length = 128` how am I getting different numbers?<|||||>Yes it doesn't have `truncation=True`, which you need to add.<|||||>Note that the sequence length of 128 was just a choice, you can set it to whatever you think is needed for the language you're training on. If you're training on very long sentences, you might need to increase it.<|||||>Thank you so much it worked out.<|||||>@PersianSpock which processor do you use for training on an other language ? do you use a processor which is build up of the same encoders and decoders, or do you use the handwritten stage 1 processor, which is pre-trained already ? it would really help, If you could post your Model and processor initialization. And maybe also your config. Thank you! <|||||>@jonas-da it says here: https://huggingface.co/docs/transformers/main/model_doc/trocr#transformers.TrOCRProcessor since I am using xlm-roberta-large I do it like this: ``` feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k') tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base") processor = TrOCRProcessor(feature_extractor = feature_extractor, tokenizer = tokenizer ``` <|||||>Ah thank you! @PersianSpock and as above mentioned you use `model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained("google/vit-base-patch16-224-in21k", "xlm-roberta-base")` as model or ? One more question. How much Training Data do you use and what CER did you achieved ? Thank you very much!<|||||>I used base and large both and for the data I used 7000 data and it still wasn't enough and I think I should use more.<|||||>Closing this issue as it seems resolved.
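Pulling the thread's pieces together, a sketch of the encoder/decoder and label preparation it converges on; the target string and maximum length are placeholders.

```python
# Sketch: ViT encoder + XLM-R decoder, with labels padded *and* truncated.
from transformers import (
    AutoTokenizer,
    TrOCRProcessor,
    ViTFeatureExtractor,
    VisionEncoderDecoderModel,
)

feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
processor = TrOCRProcessor(feature_extractor=feature_extractor, tokenizer=tokenizer)

model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "xlm-roberta-base"
)

labels = processor.tokenizer(
    "a target transcription",  # placeholder text
    padding="max_length",
    truncation=True,           # the missing flag discussed above
    max_length=128,            # a choice, not a requirement
).input_ids
```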
transformers
18,162
closed
longt5 error in step 13 when torch.distributed.launch
### System Info Hardware: 6 Quadro RTX 6000 and 2 A100-40GB gpus, but I only used 2 A100-40GB gpus for this task. Env: transformers==4.20.1 torch==1.11.0+cu113 ### Who can help? @patrickvonplaten @stancld ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction [@Stancld](https://huggingface.co/Stancld) When ddp, I encountered the following error when I try to run your code in the middle of an epoch. The task is pubmed-summarization and I used [run_summarization.py](https://github.com/huggingface/transformers/blob/v4.20.1/examples/pytorch/summarization/run_summarization.py). Is there anything particular about step 13? Any suggestion for solving this error? ``` 1%|█ | 13/1872 [16:51<43:45:35, 84.74s/it]Traceback (most recent call last): File "run_summarization.py", line 737, in main() File "run_summarization.py", line 656, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/miniconda3/envs/t5long/lib/python3.8/site-packages/transformers/trainer.py", line 1409, in train return inner_training_loop( File "/home/miniconda3/envs/t5long/lib/python3.8/site-packages/transformers/trainer.py", line 1649, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/home/miniconda3/envs/t5long/lib/python3.8/site-packages/transformers/trainer.py", line 2345, in training_step loss = self.compute_loss(model, inputs) File "/home/miniconda3/envs/t5long/lib/python3.8/site-packages/transformers/trainer.py", line 2377, in compute_loss outputs = model(**inputs) File "/home/miniconda3/envs/t5long/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/home/miniconda3/envs/t5long/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 947, in forward if torch.is_grad_enabled() and self.reducer._rebuild_buckets(): RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by making sure all forward function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable). Parameter indices which did not receive grad for rank 0: 6 ``` To Reproduce: almost same as [here](https://huggingface.co/Stancld/longt5-tglobal-large-16384-pubmed-3k_steps) and your [wb](https://wandb.ai/stancld/LongT5/runs/1lwncl8a/overview?workspace=user-stancld). 
``` CUDA_VISIBLE_DEVICES=4,5 python -m torch.distributed.launch --nproc_per_node 2 --master_port 56666 run_summarization.py --model_name_or_path Stancld/longt5-tglobal-large-16384-pubmed-3k_steps --do_train --do_eval --do_predict --dataset_name ccdv/pubmed-summarization --max_source_length 16384 --max_target_length 512 --per_device_train_batch_size 1 --gradient_accumulation_steps 64 --optim adafactor --learning_rate 0.001 --lr_scheduler_type constant --num_train_epochs 1 --gradient_checkpointing --bf16=True --per_device_eval_batch_size 2 --predict_with_generate --generation_num_beams 1 --generation_max_length 512 --output_dir ./tmp/longt5_pubmed --run_name LongT5-pubmed-16k-512-bs_128 --report_to all --logging_steps 100 --eval_steps 2000 --evaluation_strategy steps --ddp_find_unused_parameters=False --no_cuda=False ``` Here is the failed [wandb](https://wandb.ai/whaleloops/pubmed_sum/runs/19h5mp66/overview?workspace=). I tried to run with 1 GPU, and it works for 50+ steps without the error above. I also verified that ddp works for LED. ### Expected behavior ddp training like 1GPU training without error above
07-16-2022 21:47:41
07-16-2022 21:47:41
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @whaleloops and sorry for the late response. There exists a hotfix if you filter out sequences shorter than 16 tokens (to avoid examples with empty tglobal attn). Lemme know if it helps. :) Unfortunately, I don't have enough spare time to look at more proper fix and send PR:/<|||||>That's a tricky one it seems @whaleloops ! An alternative to filtering out short sequences could also be to always pad until max length <|||||>Thanks @stancld , I confirmed that issue resolved after hotfix. Though this filter excludes a few examples split before after train 119924 117108 valid 6633 6631 test 6658 6658
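A sketch of the hotfix mentioned above: dropping examples whose tokenized input is shorter than 16 tokens before training (padding everything to max length is the suggested alternative). It assumes the dataset has already been tokenized into an `input_ids` column.

```python
# Sketch: assumes train_dataset is an already-tokenized datasets.Dataset.
MIN_SOURCE_TOKENS = 16  # threshold from the discussion above

train_dataset = train_dataset.filter(
    lambda example: len(example["input_ids"]) >= MIN_SOURCE_TOKENS
)
```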
transformers
18,161
closed
Fix incorrect type hint for lang in run_summarization.py
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes an incorrect type hint for the `lang` argument. It was `str` but it should be `Optional[str]` as its default value is `None`. This was reported as an error by `mypy`: ```bash examples/pytorch/summarization/run_summarization.py:127: error: Incompatible types in assignment (expression has type "None", variable has type "str") ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger, @patil-suraj
07-16-2022 20:29:20
07-16-2022 20:29:20
transformers
18,160
closed
Stride in BERT fast tokenizer doesn't work as I expected
### System Info Hi, I'm using 'bert-base-cased' and 'fast tokenizer'. I set the stride value as 128, but I found that the stride from tokenized results wasn't 128. In the following reproduction script, the stride between two windows was only 54. Is this a bug or intentional? ### Who can help? @LysandreJik @SaulLu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('bert-base-cased', do_lower_case=False, use_fast=True) stride = 128 tokenized_examples = tokenizer( ['Big Little Lies (TV series)'], [' Despite originally being billed as a miniseries, HBO renewed the series for a second season. Production on the second season began in March 2018 and is set to premiere in 2019. All seven episodes are being written by Kelley and directed by Andrea Arnold. On August 6, 2014, it was announced Nicole Kidman and Reese Witherspoon had optioned the screen rights to Liane Moriarty\'s novel "Big Little Lies". The actresses were expected to develop the project as a film in which they would both star. Bruna Papandrea and Per Saari were set to executive produce alongside Kidman and Witherspoon. Moriarty was also expected to produce as well. On [MASK] 25, 2014, it was announced that Kidman and Witherspoon had decided to develop the project into a limited television series instead of the originally planned film. Additionally, it was announced that television series would be written by David E. Kelley. On May 8, 2015, it was announced that HBO had given the production a series order and that in addition to writing, Kelley would also executive produce. On October 23, 2015, it was reported that Jean-Marc Vallée was in talks to direct the first episode of the series with the potential to direct more. On December 17, 2015, it was announced that Vallée would direct all seven episodes of the series. On November 28, 2016, it was announced that the series would premiere on February 19, 2017.'], truncation="only_second" , max_length=192, stride=stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) pos_mask_window0 = tokenizer.convert_ids_to_tokens(tokenized_examples['input_ids'][0]).index('[MASK]') pos_mask_window1 = tokenizer.convert_ids_to_tokens(tokenized_examples['input_ids'][1]).index('[MASK]') print('expected stride: ', stride) print('actual stride: ', pos_mask_window0 - pos_mask_window1) >> expected stride: 128 >> actual stride: 54 ``` The library versions I'm using are as follows: transformers 4.13.0 tokenizers 0.10.1 ### Expected behavior From the reproduction script above, I expect the observed stride is the same as the defined stride (i.e., 128) ``` >> expected stride: 128 >> actual stride: 128 ```
07-16-2022 14:29:16
07-16-2022 14:29:16
Hi @mjeensung,

From my point of view, the result returned by the tokenizer is the one expected given the arguments you specified to the tokenizer, and the stride is of length 128.

In your example, the tokenized example is composed of a text pair:
- text: 'Big Little Lies (TV series)'
- text pair: ' Despite originally being billed as a miniseries, HBO renewed the series for a second season. Production on the second season began in March 2018 and is set to premiere in 2019. All seven episodes are being written by Kelley and directed by Andrea Arnold. On August 6, 2014, it was announced Nicole Kidman and Reese Witherspoon had optioned the screen rights to Liane Moriarty\'s novel "Big Little Lies". The actresses were expected to develop the project as a film in which they would both star. Bruna Papandrea and Per Saari were set to executive produce alongside Kidman and Witherspoon. Moriarty was also expected to produce as well. On **[MASK]** 25, 2014, it was announced that Kidman and Witherspoon had decided to develop the project into a limited television series instead of the originally planned film. Additionally, it was announced that television series would be written by David E. Kelley. On May 8, 2015, it was announced that HBO had given the production a series order and that in addition to writing, Kelley would also executive produce. On October 23, 2015, it was reported that Jean-Marc Vallée was in talks to direct the first episode of the series with the potential to direct more. On December 17, 2015, it was announced that Vallée would direct all seven episodes of the series. On November 28, 2016, it was announced that the series would premiere on February 19, 2017.'

You ask for only the second sentence to be truncated: the stride will have to be taken from the text pair only. Schematically, the first 2 inputs returned will be:

![image](https://user-images.githubusercontent.com/55560583/179480218-fc205694-8d42-45c5-a7d2-fadecd6796a5.png)

So, to know the stride (based on the mask tokens of your input), I think the formula is:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-cased', do_lower_case=False, use_fast=True)
stride = 128
tokenized_examples = tokenizer(
    ['Big Little Lies (TV series)'],
    [' Despite originally being billed as a miniseries, HBO renewed the series for a second season. Production on the second season began in March 2018 and is set to premiere in 2019. All seven episodes are being written by Kelley and directed by Andrea Arnold. On August 6, 2014, it was announced Nicole Kidman and Reese Witherspoon had optioned the screen rights to Liane Moriarty\'s novel "Big Little Lies". The actresses were expected to develop the project as a film in which they would both star. Bruna Papandrea and Per Saari were set to executive produce alongside Kidman and Witherspoon. Moriarty was also expected to produce as well. On [MASK] 25, 2014, it was announced that Kidman and Witherspoon had decided to develop the project into a limited television series instead of the originally planned film. Additionally, it was announced that television series would be written by David E. Kelley. On May 8, 2015, it was announced that HBO had given the production a series order and that in addition to writing, Kelley would also executive produce. On October 23, 2015, it was reported that Jean-Marc Vallée was in talks to direct the first episode of the series with the potential to direct more. On December 17, 2015, it was announced that Vallée would direct all seven episodes of the series. On November 28, 2016, it was announced that the series would premiere on February 19, 2017.'],
    truncation="only_second",
    max_length=192,
    stride=stride,
    return_overflowing_tokens=True,
    return_offsets_mapping=True,
    padding="max_length",
)

tokens_ex_0 = tokenizer.convert_ids_to_tokens(tokenized_examples.input_ids[0])
tokens_ex_1 = tokenizer.convert_ids_to_tokens(tokenized_examples.input_ids[1])

sep_position = tokens_ex_0.index("[SEP]")
len_sent_0 = sep_position + 1

pos_mask_window0 = tokens_ex_0.index('[MASK]')
pos_mask_window1 = tokens_ex_1.index('[MASK]')

actual_stride = len(tokens_ex_0) - 1 - pos_mask_window0 + pos_mask_window1 - len_sent_0
```
and the result is 128.

Don't hesitate to tell me if this doesn't answer your question!<|||||>Thanks @SaulLu! I was confused because the tokenized results were different from the ones processed by [squad.py](https://github.com/huggingface/transformers/blob/main/src/transformers/data/processors/squad.py#L187). But I found that [squad.py](https://github.com/huggingface/transformers/blob/main/src/transformers/data/processors/squad.py#L187) re-defines the stride based on max_seq_length and len(truncated_query). <img width="803" alt="Screen Shot 2022-07-18 at 11 15 04 PM" src="https://user-images.githubusercontent.com/44629366/179656783-892c2d88-e99c-42d1-a5c1-e69be90f8c61.png"> Thanks for correcting my misunderstanding. Now it's clear to me.
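For readers who can't see the screenshot above, here is a rough sketch of the relationship described in that comment: the SQuAD processor derives the stride it hands to the fast tokenizer from the maximum sequence length, the user-facing document stride, and the tokenized question, rather than passing the document stride through directly. The exact expression and variable names below are a reconstruction for illustration, not a verbatim quote of the library source.

```python
# Illustrative sketch (assumed, not copied from squad.py): converting a
# shift-style doc_stride into the overlap-style stride the tokenizer expects.
max_seq_length = 192
doc_stride = 128                  # how far the caller wants each window to slide
truncated_query = list(range(7))  # hypothetical token ids of the truncated question
sequence_pair_added_tokens = 3    # [CLS] plus two [SEP] for a BERT-style pair

tokenizer_stride = (
    max_seq_length - doc_stride - len(truncated_query) - sequence_pair_added_tokens
)
print(tokenizer_stride)  # 54 with these numbers
```

With the numbers from this thread, a window shift of 128 corresponds to a tokenizer overlap of 54, which is consistent with the 54-token offset observed in the original report: the two arguments measure different things.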
transformers
18,159
closed
pretrain longT5
Could you please provide us with the steps for pretraining LongT5 on both the MLM and PSG objectives? In particular, the denoising rates and other details of the setup.
07-16-2022 12:48:08
07-16-2022 12:48:08
https://arxiv.org/pdf/2112.07916.pdf All the requested information is in the paper. I would recommend sending questions over to https://github.com/google-research/longt5 since the authors will be far more able to answer specific questions.<|||||>@reelmath thanks a lot! I already know that, but the steps are in a C++ script and there is no clear documentation on generating the corpus and using it to calculate the loss. In Pegasus these are two losses, so what is unclear is whether they calculate just one loss or two losses in LongT5 during pretraining. Besides, I want to pretrain the model from Hugging Face. https://github.com/google-research/longt5/issues/7#issue-1319140379<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
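Since the thread asks about pretraining from the Hugging Face checkpoint, here is a minimal sketch of how a T5-style span-corruption (MLM) loss can be computed with `LongT5ForConditionalGeneration`. The hand-built sentinel masking is a simplifying assumption for illustration; it does not reproduce the paper's preprocessing pipeline and makes no claim about whether a separate PEGASUS-style (PSG) term is added on top.

```python
# Minimal span-corruption sketch with the Hugging Face LongT5 checkpoint.
# A real pipeline would sample spans from a corpus; this example hard-codes
# two masked spans to show how the single denoising loss is computed.
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
model = LongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-base")

# Corrupted input: two spans replaced by sentinel tokens.
inputs = tokenizer(
    "The quick brown <extra_id_0> jumps over the lazy <extra_id_1>.",
    return_tensors="pt",
)
# Target: sentinels followed by the original spans.
labels = tokenizer(
    "<extra_id_0> fox <extra_id_1> dog <extra_id_2>",
    return_tensors="pt",
).input_ids

outputs = model(**inputs, labels=labels)  # cross-entropy over the decoder outputs
outputs.loss.backward()
print(float(outputs.loss))
```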
transformers
18,158
closed
LongT5 Summarization Example Not Working
### System Info
- OS: Ubuntu 20.04.4 LTS focal
- Conda: 4.12.0
- Python: 3.7.14
- Pip: 22.1.2
- Torch: 1.12.0
- Transformers: 4.16.2
- NVIDIA-SMI: 510.73.05
- Nvcc -V: cuda V11.3.109

### Who can help?
@patrickvonplaten, @ydshieh, @sgugger

### Information
- [X] The official example scripts
- [ ] My own modified scripts

### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction
`git clone --branch v4.16.2-release https://github.com/huggingface/transformers`

Example from [Transformers/examples/pytorch/summarization](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization); the only change is the `--model_name_or_path`:
```
python transformers/examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path google/long-t5-tglobal-base \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

**Error:**
```
[INFO|configuration_utils.py:644] 2022-07-16 13:23:14,077 >> loading configuration file https://huggingface.co/google/long-t5-tglobal-base/resolve/main/config.json from cache at /home/good/.cache/huggingface/transformers/1b9067139467923bb0ea7749ceb5694acb0950b479ad1ebe47d9014180af8c31.69c5bfb92a1a084ead5ef0d9c9c9f09bac4f07cfd875433aa8fab59199208a7f
Traceback (most recent call last):
  File "transformers/examples/pytorch/summarization/run_summarization.py", line 698, in <module>
    main()
  File "transformers/examples/pytorch/summarization/run_summarization.py", line 371, in main
    use_auth_token=True if model_args.use_auth_token else None,
  File "/home/good/anaconda3/envs/gpt1/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 632, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/home/good/anaconda3/envs/gpt1/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 347, in __getitem__
    raise KeyError(key)
KeyError: 'longt5'
```

### Expected behavior
The LongT5 model from [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) should start training like a normal T5 model ([T5-base](https://huggingface.co/t5-base)).
07-16-2022 10:31:54
07-16-2022 10:31:54
Updating transformers to 4.20.0+ should solve this issue. Also, I don't think you need `--source_prefix "summarize: "` according to the paper.<|||||>@whaleloops It works, thanks.
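For anyone hitting the same `KeyError: 'longt5'`, a quick sanity check after upgrading (a minimal sketch, assuming transformers >= 4.20.0 is installed):

```python
# After `pip install -U "transformers>=4.20.0"`, the 'longt5' model type
# should resolve without a KeyError.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("google/long-t5-tglobal-base")
print(config.model_type)  # expected: "longt5"
```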