repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 16,953 | closed | Add Information Gain Filtration algorithm | # What does this PR do?
This PR adds a new feature for fine-tuning transformer models called Information Gain Filtration (IGF).
### Motivation
The quality of a fine-tuned model depends heavily on the data samples used for the first few batches. Because the process is stochastic, the random seed influences the quality of the final fine-tuned model. We are proposing a novel and robust fine-tuning method, "Information Gain Filtration" (IGF), which filters for informative training samples before fine-tuning (training) and improves the overall training efficiency and final performance of the language-model fine-tuning step.
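To make the idea concrete, here is an illustrative sketch of the filtering loop only (hypothetical names, not the code added by this PR): a separately trained "secondary learner" predicts how informative a candidate batch is, and low-scoring batches are skipped during fine-tuning.
```python
import torch

def igf_finetune_step(model, optimizer, batch, secondary_learner, threshold=0.0):
    # Predict the expected information gain of this batch with the secondary learner.
    with torch.no_grad():
        predicted_gain = secondary_learner(batch["input_ids"]).mean()
    if predicted_gain.item() < threshold:
        return None  # skip uninformative batches, which matter most in the first steps
    outputs = model(input_ids=batch["input_ids"], labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()
```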
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. This can be of interest to @sgugger,
Models:
- gpt2
Examples:
- research_projects/information-gain-filtration: @Tuko,@mraunak
| 04-27-2022 00:50:31 | 04-27-2022 00:50:31 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Could you just run the code quality tool to ensure that the code quality passes? You can install them with the following, from the root of your clone:
```
pip install -e ".[quality]"
```
And then run them with:
```
make fixup
```<|||||>Running the command `make fixup` gives an error that does not mention anything from my PR.
The output with the error is shown below. Please guide me on it. Thanks.
(igfprnew) mraunak@bcl-main1:~/transformers$ make fixup
No library .py files were modified
python utils/custom_init_isort.py
python utils/style_doc.py src/transformers docs/source --max_len 119
running deps_table_update
updating src/transformers/dependency_versions_table.py
python utils/check_copies.py
python utils/check_table.py
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
python utils/check_dummies.py
python utils/check_repo.py
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Checking all models are included.
Checking all models are public.
Checking all models are properly tested.
Checking all objects are properly documented.
Checking all models are in at least one auto class.
utils/check_repo.py:456: UserWarning: Full quality checks require all backends to be installed (with `pip install -e .[dev]` in the Transformers repo, the following are missing: PyTorch, TensorFlow, Flax. While it's probably fine as long as you didn't make any change in one of those backends modeling files, you should probably execute the command above to be on the safe side.
warnings.warn(
python utils/check_inits.py
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Traceback (most recent call last):
File "utils/check_inits.py", line 265, in <module>
check_submodules()
File "utils/check_inits.py", line 256, in check_submodules
raise ValueError(
ValueError: The following submodules are not properly registed in the main init of Transformers:
- sagemaker
- activations
- activations_tf
- convert_slow_tokenizer
- deepspeed
- generation_beam_constraints
- generation_beam_search
- generation_flax_logits_process
- generation_flax_utils
- generation_logits_process
- generation_stopping_criteria
- generation_tf_logits_process
- generation_tf_utils
- generation_utils
- image_utils
- keras_callbacks
- modeling_flax_outputs
- modeling_flax_utils
- modeling_outputs
- modeling_tf_outputs
- modeling_tf_utils
- modeling_utils
- optimization
- optimization_tf
- pytorch_utils
- tf_utils
- trainer
- trainer_pt_utils
- trainer_seq2seq
- trainer_tf
- data.datasets
Make sure they appear somewhere in the keys of `_import_structure` with an empty list as value.
make: *** [repo-consistency] Error 1<|||||>@LysandreJik thanks for the suggestion! We were able to correct the quality check issues.
Let us know if you need us to run/test anything else. Thank you!<|||||>Thank you for your contributions!<|||||>Thank you for accepting our work! |
transformers | 16,952 | closed | cannot import name 'Data2VecForCTC' from 'transformers' | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- Huggingface_hub version: 0.2.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Following the code sample in https://huggingface.co/facebook/data2vec-audio-large-960h I'm trying to import the name **Data2VecForCTC**, but am unsuccessful.
Possibly a typo: **Data2VecForCTC** -> **Data2VecAudioForCTC**
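For reference, a short snippet contrasting the two names (assuming a `transformers` version that already ships Data2Vec):
```python
# from transformers import Data2VecForCTC     # fails: this class does not exist
from transformers import Data2VecAudioForCTC  # this is the actual class name

model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-large-960h")
```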
### Expected behavior
```shell
Correct code sample is expected.
```
| 04-26-2022 22:57:35 | 04-26-2022 22:57:35 | Hi,
Data2Vec is only available on the main branch for now: pip install git+https://github.com/huggingface/transformers.git.<|||||>Hi
I can't find the name **Data2VecForCTC** in https://github.com/huggingface/transformers/blob/main/src/transformers/__init__.py either.<|||||>Here it is: https://github.com/huggingface/transformers/blob/dced262409177586bb510b6b724c762fb89da0e8/src/transformers/__init__.py#L880
Note that:
```python
from transformers import Data2VecAudioForCTC
model = Data2VecAudioForCTC.from_pretrained("...")
```
should also already work on master<|||||>Data2VecForCTC is not the same as Data2Vec**Audio**ForCTC <|||||>Good observation!
There is no `Data2VecForCTC` ;-)<|||||>Yes, but there is on the [website](https://huggingface.co/facebook/data2vec-audio-large-960h)

<|||||>Soon we'll have a feature that allows you to report this directly on the model repo ;) stay tuned!<|||||>But yes I'll fix it, thanks for reporting <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Opened a PR to fix it: https://huggingface.co/facebook/data2vec-audio-large-960h/discussions/1 |
transformers | 16,951 | closed | :tada: initial commit of scformer | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-26-2022 21:39:01 | 04-26-2022 21:39:01 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,950 | closed | Revised partial checkpoint support for Sagemaker Model Parallel | # What does this PR do?
- Uses `smp.rdp_rank()` instead of `smp.rank()` for partial checkpoint saving in `should_save`.
- Uses `local_state_dict()` with partial checkpoint saving.
- Uses `smp.save` for SMP.
- Uses `smp.load` for SMP. Reorders partial checkpoint loading to happen after the wrapping of the model, since `smp.load` can only load into an SMP model.
- Updated checks for the existence of checkpoint files, since SMP partial checkpoints append suffixes to the filename (for example, `filename_0_0` or `filename_0_0_0`).
- adds `load_best_model_at_end` support for SMP
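A rough sketch of the save/load flow these changes implement, assuming the usual `smdistributed.modelparallel.torch` API (illustrative only, not the exact `Trainer` code):
```python
import smdistributed.modelparallel.torch as smp

def save_partial_checkpoint(model, optimizer, path):
    # With partial checkpoints, one writer per reduced-data-parallel group.
    if smp.rdp_rank() == 0:
        smp.save(
            {"model": model.local_state_dict(), "optimizer": optimizer.local_state_dict()},
            f"{path}/checkpoint.pt",
            partial=True,  # written as e.g. checkpoint.pt_0_0 or checkpoint.pt_0_0_0
        )

def load_partial_checkpoint(model, optimizer, path):
    # smp.load can only restore into a model already wrapped by smp.DistributedModel,
    # hence partial checkpoint loading is reordered to happen after wrapping.
    state = smp.load(f"{path}/checkpoint.pt", partial=True)
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
```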
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-26-2022 21:32:18 | 04-26-2022 21:32:18 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16950). All of your documentation changes will be reflected on that endpoint.<|||||>> Thanks for your PR. Two comments on it:
>
> 1. This breaks the current behavior of the `Trainer` where each checkpoint can be loaded as a model. In particular, this will push to the Hub the partial checkpoints with no config during training when `push_to_hub=True` (whereas a regular training pushes models that can be used).
> 2. The feature is always on. Maybe we should let the user decide if they want it or not?
Thanks for reviewing.
In order user to decide to save/load partial checkpoints or not, we need new training args. I[n my previous PR](https://github.com/huggingface/transformers/pull/16734/files#diff-bfceaff300c851b8e24fc50dc6638482abaec8f7d2a718e877c3828c166bcf79R426-R431), I got feedback not to introduce new HF training args. So we decided to support partial checkpointing as default.
<|||||>There are plenty of other ways to control whether a feature is on or off. For instance, you could use the environment variable `"SM_HP_MP_PARAMETERS"`.
Since this partial checkpointing is completely incompatible with `from_pretrained`, and thus won't work with the Hugging Face Hub and its inference widget, it should be turned off by default.<|||||>@sgugger Thanks for your feedback. Based on your comments, we decided to enable partial checkpointing for the optimizer state only, while model weights will be saved in full. With this approach, model weights will be saved using `save_pretrained`.
Here is the link for the new PR: https://github.com/huggingface/transformers/pull/17219<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,949 | closed | LayoutLMV3 | ### Model description
LayoutLMv3 is a pre-trained multimodal Transformer for Document AI with unified text and image masking. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model. For example, LayoutLMv3 can be fine-tuned for both text-centric tasks, including form understanding, receipt understanding, and document visual question answering, and image-centric tasks such as document image classification and document layout analysis.
LayoutLMv3 greatly simplifies training and reduces the number of parameters compared to LayoutLMv2, making it an important milestone in document understanding.
[LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei, Preprint 2022.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
[Huggingface Pretrained Download](https://huggingface.co/microsoft/layoutlmv3-base) | 04-26-2022 20:11:04 | 04-26-2022 20:11:04 | Duplicate of #16914 <|||||>@[NielsRogge](https://github.com/NielsRogge)
I have one question about LayoutLMv3: can it support the RE and SER tasks on the XFUND dataset now? |
transformers | 16,948 | closed | Add ResNet to models exportable with ONNX | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
I added an `OnnxConfig` subclass to make the ResNet model available for ONNX conversion.
Issue [#16308](https://github.com/huggingface/transformers/issues/16308)
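A minimal sketch of what such a config can look like (the exact merged version may differ):
```python
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class ResNetOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        return OrderedDict(
            [("pixel_values", {0: "batch", 1: "num_channels", 2: "height", 3: "width"})]
        )

    @property
    def atol_for_validation(self) -> float:
        return 1e-3  # see the tolerance discussion in the comments below
```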
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-26-2022 14:48:51 | 04-26-2022 14:48:51 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16948). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for the suggestions. One test fails when running slow tests. The error is:
> ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.000244140625
Should I set --atol=1e-3 or is there some way to fix this?
<|||||>> Should I set --atol=1e-3 or is there some way to fix this?
Ah yes, some models require a lower tolerance due to their architectures. Setting `--atol=1e-3` as the default is fine!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Can't we rebase this branch and add the code for ResNet ?<|||||>I'm taking care of this in #17585, I didn't see this PR before opening mine. @ChainYo
It should be done by the end of the week! <|||||>> It should be done by the end of the week!
Pretty cool! |
transformers | 16,947 | closed | Fix multiple deletions of the same files in save_pretrained | # What does this PR do?
Checkpoint sharding introduced a bug in `save_pretrained` for distributed setups, where the function is called on every process (TPU training for instance, or the scripts with Accelerate (see [this issue](https://github.com/huggingface/accelerate/issues/325)).
This changes the logic to only remove files when:
- they are different from existing ones (which should handle almost all cases since we can expect the save in a folder with existing weights to use the same model)
- we are on process 0.
Except `save_pretrained` does not know if we are on process zero or not, so I use `save_config` to detect that. It looks like that argument was not aptly named and should be `is_main_process` instead. I can go ahead and deprecate/rename in this PR if you agree. | 04-26-2022 14:34:35 | 04-26-2022 14:34:35 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,946 | closed | [HF Argparser] Fix parsing of optional boolean arguments | # What does this PR do?
This PR fixes a weird bug that made optional boolean arguments not be recognized properly in my virtual environment.
Replacing `is` with `==` fixed the issue.
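For context, a sketch of the kind of comparison involved (not the exact upstream code): with `Optional[bool]` annotations, `is` relies on object identity, which is not guaranteed for `typing` constructs, whereas `==` compares them structurally.
```python
from typing import Optional

field_type = Optional[bool]  # i.e. Union[bool, None]

# Identity check: can evaluate to False when the annotation object gets re-created
# (which is what happened in this environment), so the argument is not recognized.
print(field_type is Optional[bool])

# Equality check: compares the Union members, so it matches reliably.
print(field_type == Optional[bool])
```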
| 04-26-2022 14:11:45 | 04-26-2022 14:11:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,945 | closed | Move test model folders | # What does this PR do?
Move test model folders | 04-26-2022 13:51:04 | 04-26-2022 13:51:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,944 | closed | Update codeparrot data preprocessing | This PR updates the preprocessing script of CodeParrot data (python files), we add new filters for:
- config and test files
- uncommon files (those without a mention of classic Python keywords such as `def` or `for`)
- unusual files (those that do not use the assignment operator `=` often)
- files with a low ratio between the number of characters and the number of tokens after tokenization
The impact of some of these filters is analyzed in this [tweet](https://twitter.com/LoubnaBenAllal1/status/1514300881419878403?s=20&t=IkodNO5Ma3X866-Yj-LvQw).
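A rough sketch of what these heuristics can look like (names and thresholds are illustrative, not the exact values used by the script, and `tokenizer` is assumed to be the CodeParrot tokenizer):
```python
def basic_filters(example, min_keyword_hits=1, min_assignments=5, min_chars_per_token=2.0):
    text = example["content"]
    if "config" in example["path"].lower() or "test" in example["path"].lower():
        return False  # config and test files
    if sum(text.count(kw) for kw in ("def ", "for ", "while ", "class ")) < min_keyword_hits:
        return False  # uncommon files without classic Python keywords
    if text.count("=") < min_assignments:
        return False  # unusual files that rarely use assignments
    n_tokens = len(tokenizer(text)["input_ids"])
    if len(text) / max(n_tokens, 1) < min_chars_per_token:
        return False  # low character-to-token ratio
    return True
```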
cc @lvwerra | 04-26-2022 11:45:13 | 04-26-2022 11:45:13 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,943 | closed | Fix `HubertRobustTest` PT/TF equivalence test on GPU | # What does this PR do?
Fix `HubertRobustTest` PT/TF equivalence test on GPU.
Note that `HubertRobustModelTest` has
```python
def setUp(self):
self.model_tester = HubertModelTester(
self, conv_stride=(3, 3, 3), feat_extract_norm="layer", do_stable_layer_norm=True
)
```
but `get_config()` did not pass `do_stable_layer_norm=self.do_stable_layer_norm`.
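The fix is essentially to forward that flag in the tester, along these lines (a sketch, assuming the tester builds a `HubertConfig`):
```python
from transformers import HubertConfig

def get_config(self):
    return HubertConfig(
        # ... other tester attributes ...
        conv_stride=self.conv_stride,
        feat_extract_norm=self.feat_extract_norm,
        do_stable_layer_norm=self.do_stable_layer_norm,  # previously not forwarded
    )
```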
## To investigate further
- Why no issue on CPU even without this PR
- Why using `conv_stride=(4, 4, 4)` (the default value) has no issue on GPU, even without this PR
(Does this suggest we have PT/TF Hubert behave differently with `do_stable_layer_norm=False` on GPU when `conv_stride=(3, 3, 3)` etc?)
@patrickvonplaten You might have some idea about these points ..?
| 04-26-2022 10:10:12 | 04-26-2022 10:10:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Think it might be a great idea to add a check to avoid such situation occur in the future. Will do it in another PR. |
transformers | 16,942 | closed | Pretraining code of LayoutLMv2 | I was wondering is pre-training code of LayoutLMv2 model publicly available. @NielsRogge | 04-26-2022 09:55:34 | 04-26-2022 09:55:34 | Hi,
Microsoft hasn't open-sourced any pretraining code, unfortunately. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,941 | closed | Update tokenization_bertweet.py | The emoji version must be either 0.5.4 or 0.6.0. Newer emoji versions have been updated to newer versions of the Emoji Charts, thus not consistent with the one used for pre-processing the pre-training Tweet corpus (i.e. not consistent with the vocab).
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-26-2022 09:50:21 | 04-26-2022 09:50:21 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,940 | closed | Labels shift in seq2seq example | ### System Info
```shell
- `transformers` version: 4.15.0
- Platform: Linux-4.15.0-54-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@sgugger, @patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Just run https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation.py script
### Expected behavior
For each token in the seq2seq target sequence we have to predict the next token. However, I did not find this shift by one token in the translation example, either in the code or in the sources (I checked `tokenizer.as_target_tokenizer`, for example).
It can be added in this line https://github.com/huggingface/transformers/blob/fa322474060beb3673cf5a3e39ccd3c8ad57ecd3/examples/pytorch/translation/run_translation.py#L436
P.S. While opening this issue, I also checked CLM script example and met the same problem there. https://github.com/huggingface/transformers/blob/fa322474060beb3673cf5a3e39ccd3c8ad57ecd3/examples/pytorch/language-modeling/run_clm.py#L437
Am I wrong and missing something, or is it really a problem that requires a small fix?
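(For the record, the shift happens inside the models themselves rather than in the example scripts; a simplified illustration of the usual causal-LM loss pattern, not the exact library code:)
```python
import torch

# Causal LM heads such as GPT2LMHeadModel shift the labels internally, which is why
# run_clm.py / run_translation.py can pass labels equal to the target ids as-is.
lm_logits = torch.randn(2, 5, 100)      # (batch, seq_len, vocab)
labels = torch.randint(0, 100, (2, 5))  # same ids as the inputs/targets

shift_logits = lm_logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
loss = torch.nn.CrossEntropyLoss()(
    shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)
)
# Seq2seq models do the analogous thing by building decoder_input_ids from the labels
# with a shift-right helper, so no manual shift is needed in the preprocessing either.
```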
| 04-26-2022 08:39:10 | 04-26-2022 08:39:10 | I should read the documentation carefully... |
transformers | 16,939 | closed | Segmentation fault whenever trying to load model | ### System Info
```shell
`conda list
# packages in environment at /home/faysal/anaconda3/envs/codexglue:
#
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 1_llvm conda-forge
ca-certificates 2021.10.8 ha878542_0 conda-forge
certifi 2021.5.30 py36h5fab9bb_0 conda-forge
cudatoolkit 10.1.243 h036e899_10 conda-forge
ld_impl_linux-64 2.36.1 hea4e1c9_2 conda-forge
libblas 3.9.0 14_linux64_openblas conda-forge
libcblas 3.9.0 14_linux64_openblas conda-forge
libffi 3.4.2 h7f98852_5 conda-forge
libgcc-ng 11.2.0 h1d223b6_15 conda-forge
libgfortran-ng 11.2.0 h69a702a_15 conda-forge
libgfortran5 11.2.0 h5c6108e_15 conda-forge
liblapack 3.9.0 14_linux64_openblas conda-forge
libnsl 2.0.0 h7f98852_0 conda-forge
libopenblas 0.3.20 pthreads_h78a6416_0 conda-forge
libstdcxx-ng 11.2.0 he4da1e4_15 conda-forge
libzlib 1.2.11 h166bdaf_1014 conda-forge
llvm-openmp 13.0.1 he0ac6c6_1 conda-forge
mkl 2022.0.1 h8d4b97c_803 conda-forge
ncurses 6.3 h27087fc_1 conda-forge
ninja 1.10.2 h4bd325d_1 conda-forge
numpy 1.19.5 py36hfc0c790_2 conda-forge
openssl 1.1.1n h166bdaf_0 conda-forge
pip 21.3.1 pyhd8ed1ab_0 conda-forge
python 3.6.15 hb7a2778_0_cpython conda-forge
python_abi 3.6 2_cp36m conda-forge
pytorch 1.4.0 py3.6_cuda10.1.243_cudnn7.6.3_0 pytorch
readline 8.1 h46c0cb4_0 conda-forge
setuptools 58.0.4 py36h5fab9bb_2 conda-forge
sqlite 3.38.2 h4ff8645_0 conda-forge
tbb 2021.5.0 h924138e_1 conda-forge
tk 8.6.12 h27826a3_0 conda-forge
tokenizers 0.5.0 pypi_0 pypi
transformers 2.5.0 pypi_0 pypi
tree-sitter 0.20.0 pypi_0 pypi
wheel 0.37.1 pyhd8ed1ab_0 conda-forge
xz 5.2.5 h516909a_1 conda-forge
zlib 1.2.11 h166bdaf_1014 conda-forge`
```
### Who can help?
@LysandreJik
@patrickvonplaten
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Whenever I try to run the following code from (https://github.com/microsoft/CodeXGLUE ), it ends in a segmentation fault
```shell
$ python run.py \
    --do_train \
    --do_eval \
    --model_type roberta \
    --model_name_or_path $pretrained_model \
    --config_name roberta-base \
    --tokenizer_name roberta-base \
    --train_filename ../data/train.java-cs.txt.java,../data/train.java-cs.txt.cs \
    --dev_filename ../data/valid.java-cs.txt.java,../data/valid.java-cs.txt.cs \
    --output_dir $output_dir \
    --max_source_length 512 \
    --max_target_length 512 \
    --beam_size 5 \
    --train_batch_size 32 \
    --eval_batch_size 32 \
    --learning_rate 5e-5 \
    --train_steps 100 \
    --eval_steps 50
```
### Expected behavior
```shell
Original issue in codeXglue:
https://github.com/microsoft/CodeXGLUE/issues/117
They asked me to post the issue here, so I am posting it here for further assistance.
```
| 04-26-2022 05:55:29 | 04-26-2022 05:55:29 | Hey @faysalhossain2007,
Could you please provide a short and reproducible code snippet? E.g. if only the model loading leads to a segmentation fault,
could you please just provide a 4-liner:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("...")
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Same error here: `Segmentation fault (core dumped)`.
Environment:
```
torch==1.11.0+cu113
transformers==4.30.2
Python 3.7.11
```
I just ran the following testing code snippet
`python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"` |
transformers | 16,938 | closed | tokenizer return_special_tokens does not work correctly with custom special tokens | The tokenizer's returned `special_tokens_mask` does not take into account the newly added `special_tokens`.
### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.0-92-generic-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0 (False)
```
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import transformers
print(transformers.__version__)
tokenizer = transformers.AutoTokenizer.from_pretrained('roberta-base')
special_tokens_dict = {"additional_special_tokens": ["<test1>", "<test2>"]}
tokenizer.add_special_tokens(special_tokens_dict)
processed = tokenizer("this <test1> that <test2> this", return_special_tokens_mask=True)
tokens = tokenizer.convert_ids_to_tokens(processed.input_ids)
for i in range(len(processed.input_ids)):
print(f"{processed.input_ids[i]}\t{tokens[i]}\t{processed.special_tokens_mask[i]}")
```
### Expected behavior
```shell
Returned output:
0 <s> 1
9226 this 0
1437 Ġ 0
50265 <test1> 0
14 Ġthat 0
1437 Ġ 0
50266 <test2> 0
42 Ġthis 0
2 </s> 1
Expected output:
0 <s> 1
9226 this 0
1437 Ġ 0
50265 <test1> 1
14 Ġthat 0
1437 Ġ 0
50266 <test2> 1
42 Ġthis 0
2 </s> 1
| 04-26-2022 02:21:05 | 04-26-2022 02:21:05 | I also found that - even when you add these specialized tokens it does not tokenise correctly.<|||||>I got it working by using the `tokenizer.get_special_tokens_mask` and passing the `already_has_special_tokens=True`, instead of using the `__call__` function of the tokenizer.
In any case, the interface for `tokenizer(..., return_special_tokens_mask=True)` is confusing, and one has to look at the actual [source code](https://github.com/huggingface/transformers/blob/aaee4038c3c34faea58c84e04fc88297e2be6cb2/src/transformers/tokenization_utils_base.py#L3007) to figure out `return_special_tokens_mask=True` doesn't work for additional special tokens.<|||||>This happens for both fast and slow tokenizers.
The problem for fast tokenizers seems to originate in the [rust implementation](https://github.com/huggingface/transformers/blob/31ec2cb2badfbdd4c1ac9c6c9b8a74e974984206/src/transformers/tokenization_utils_fast.py#L425) (?)
The problem for slow tokenizers seems to be as @armancohan pointed out that `already_has_special_tokens` is not set to True [here](https://github.com/huggingface/transformers/blob/aaee4038c3c34faea58c84e04fc88297e2be6cb2/src/transformers/tokenization_utils_base.py#L3009).
Thanks for this @armancohan, saved me quite some time.
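Continuing the reproduction snippet above, a minimal sketch of that workaround:
```python
ids = tokenizer("this <test1> that <test2> this")["input_ids"]
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
# `mask` now flags <test1> and <test2>, since they are in `tokenizer.all_special_ids`.
```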
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, I encountered the same problem here. Thanks @armancohan for the fast solutions. Should we open a MR to fix the issue? |
transformers | 16,937 | closed | Fixed broken link on pipelines.mdx | # What does this PR do?
When clicking on the task summary link, the reader was redirected to a 404 Not Found page.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-25-2022 23:24:08 | 04-25-2022 23:24:08 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16937). All of your documentation changes will be reflected on that endpoint.<|||||>Clicking on it in chrome links to the correct page. What is your browser/setup?<|||||>Hello @LysandreJik I'm using chrome version 101.0.4951.41 on Windows.
The URL I'm redirected to when clicking the link:
https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/task_summary<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,936 | closed | Adding tokens to `RobertaTokenizer` is fast, but loading the extended tokenizer from disk takes tens of minutes | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.10.0-0.bpo.9-amd64-x86_64-with-debian-10.12
- Python version: 3.7.3
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: false
- Using distributed or parallel set-up in script?: false
```
### Who can help?
@SaulLu @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I train a BPE tokenizer on a domain-specific dataset and save it as [`tokenizer-latex.json`](https://github.com/huggingface/transformers/files/8557562/tokenizer-latex.json.txt).
``` python
>>> from tokenizers import Tokenizer, normalizers, pre_tokenizers
>>> from tokenizers.models import BPE
>>> from tokenizers.trainers import BpeTrainer
>>>
>>> latex_model = BPE(unk_token='[UNK]')
>>> latex_tokenizer = Tokenizer(latex_model)
>>> latex_tokenizer.pre_tokenizer = pre_tokenizers.WhitespaceSplit()
>>> latex_tokenizer.normalizer = normalizers.Sequence([normalizers.Strip()])
>>> latex_tokenizer_trainer = BpeTrainer(special_tokens=['[UNK]'])
>>> latex_tokenizer.train(['dataset-latex.txt'], latex_tokenizer_trainer)
>>> latex_tokenizer.save('tokenizer-latex.json')
```
Then, I extend [the pre-trained `roberta-base` tokenizer][1] with 28,141 new tokens from the vocabulary of my BPE tokenizer and I save the result to the directory `./extended-roberta-base/`. This finishes in a matter of seconds:
``` python
>>> from tokenizers import Tokenizer
>>> from transformers import RobertaTokenizer
>>>
>>> latex_tokenizer = Tokenizer.from_file('tokenizer-latex.json')
>>>
>>> text_latex_tokenizer = RobertaTokenizer.from_pretrained('roberta-base', add_prefix_space=True)
>>> text_latex_tokenizer.add_tokens(list(latex_tokenizer.get_vocab()))
28141
>>> text_latex_tokenizer.save_pretrained('./extended-roberta-base/')
('./extended-roberta-base/tokenizer_config.json', './extended-roberta-base/special_tokens_map.json',
'./extended-roberta-base/vocab.json', './extended-roberta-base/merges.txt',
'./extended-roberta-base/added_tokens.json', './extended-roberta-base/tokenizer.json')
```
However, when I load the extended `roberta-base` tokenizer from the directory `./extended-roberta-base/`, the library constructs a trie (see https://github.com/huggingface/transformers/pull/13220) over the course of ca 20 minutes:
``` python
>>> from transformers import RobertaTokenizer
>>>
>>> text_latex_tokenizer = RobertaTokenizer.from_pretrained('./extended-roberta-base/')
^C
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
text_latex_tokenizer = RobertaTokenizer.from_pretrained('./extended-roberta-base/')
File "***/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1787, in from_pretrained
**kwargs,
File "***/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1971, in _from_pretrained
tokenizer.add_tokens(token, special_tokens=bool(token in special_tokens))
File "***/python3.7/site-packages/transformers/tokenization_utils_base.py", line 945, in add_tokens
return self._add_tokens(new_tokens, special_tokens=special_tokens)
File "***/python3.7/site-packages/transformers/tokenization_utils.py", line 444, in _add_tokens
self._create_trie(self.unique_no_split_tokens)
File "***/python3.7/site-packages/transformers/tokenization_utils.py", line 454, in _create_trie
trie.add(token)
File "***/python3.7/site-packages/transformers/tokenization_utils.py", line 87, in add
ref = ref[char]
KeyboardInterrupt
```
The time disparity leads me to believe that when `RobertaTokenizer.add_tokens()` is called, a trie is either not created or is created extremely fast, whereas when `RobertaTokenizer.from_pretrained()` is called, a trie is created (slowly). Using `RobertaTokenizerFast` instead of `RobertaTokenizer` produces similar results at a similar timescale.
[1]: https://huggingface.co/roberta-base
### Expected behavior
Both `add_tokens()` and `from_pretrained()` should take comparable amount of time. Either building the trie is important and cannot be sped up, in which case `add_tokens()` should also take roughly 20 minutes, or building the trie is unimportant or can be sped up, in which case `from_pretrained()` should finish in a matter of seconds.
| 04-25-2022 20:45:42 | 04-25-2022 20:45:42 | Hi, pretty sure this is because `add_tokens` and therefore the `trie` creation is done N times for all the N tokens, which is indeed excruciatingly slow (and completely uncessary).
I think we can create the `trie` only once, wdyt @SaulLu <|||||>Hi @Witiko,
Thanks for sharing this issue!
I share your analysis @Narsil ! When the `from_pretrained` method is called, the tokens are added 1 by 1 in this loop.
https://github.com/huggingface/transformers/blob/fa322474060beb3673cf5a3e39ccd3c8ad57ecd3/src/transformers/tokenization_utils_base.py#L1948-L1974
My memory may be faulty, but I had the impression that I had already read in an issue / PR that there could be a difficulty to circumvent to achieve this type of change - I can't find it back unfortunately. For the moment, I think that it is necessary to see in this loop that the order of addition has its importance and that we can alternate between addition of normal and special tokens.<|||||>We did it in tokenizers` since the `Trie` insertion order of added tokens should not be important (this is also currently the case in slow tokenizers)
https://github.com/huggingface/tokenizers/blob/main/tokenizers/src/tokenizer/serialization.rs#L172
There might be other things to deal with in the python code, but the `Trie` itself doesn't care about insertion order, so we can create it only once.<|||||>Yes I absolutely agree! It just seemed important to mention it because the code that generates the multiple `Trie` builds currently is code that is shared between the fast and python tokenizers. :smile: <|||||>Thank you for investigating. Should I try and open a PR, or are you planning to tackle this, @Narsil?<|||||>Hi @Witiko ,
I don't have a lot of bandwidth atm to handle this. If you can try and open a PR that would be awesome.
Feel free to ping me if you want early feedback (doesn't matter if PR is not ready).
Cheers,
Nicolas<|||||>Hello @Narsil,
neither do I, but I can take a stab at it sometime the next month. It seems to me that a simple fix might be to add a boolean parameter `_postpone_optimization` to `add_tokens()`, so that we can prevent the trie from being constructed in `from_pretrained()`. However, this does not solve the problem for users who would manually call `add_tokens()` with many small batches of tokens in their code. A more robust fix would be to construct the trie lazily at the point where is is needed.<|||||>> However, this does not solve the problem for users who would manually call add_tokens() with many small batches of tokens in their code.
`add_tokens` already accepts lists, so sending the maximum possible amount of tokens in one go is the way to go, laziness is not the solution to this problem here I think.<|||||>> `add_tokens` already accepts lists, so sending the maximum possible amount of tokens in one go is the way to go
The current code of `from_pretrained()` calls `add_tokens()` repeatedly with single tokens, so that it can persist the information about whether the token is special or not. Perhaps the way to go would be to first build a list of special and non-special tokens and then call `add_tokens()` once for special and once for non-special tokens?
> laziness is not the solution to this problem here I think.
I agree that laziness makes it more difficult to predict performance and reason about the code, especially in multiprocessing settings. Having `add_tokens()` that behaves optimally when you add tokens in bulk seems more straightforward.<|||||>@SaulLu @Narsil I opened PR #17119 that fixes this issue. I would appreciate your review at your earliest convenience.<|||||>Hey @Witiko I have seen your PR, but I am not sure I can do a proper review regarding the implications of this code, I did a small quality improvements suggestions in terms of pure code.
Pinging @SaulLu for visibility (If you don't have time I can look more into this btw, I just figured you would review faster than me)<|||||>@Narsil Thanks to your suggestions, the diff in #17119 against the current code is now tiny. Furthermore, the code causes a net speedup in loading tokenizers, so hopefully we can have the PR merged soon. π€<|||||>Thanks a lot for working on this fix.
On my side, I'm trying to look at your PR tomorrow. As this is a change that will impact all tokenizers, this is a contribution that requires a very attentive review on our part, that's why it can be a bit long. <|||||>@SaulLu Thank you. Your caution is appreciated. we wouldn't want to break tokenizers. π |
transformers | 16,935 | closed | Limit the use of PreTrainedModel.device | # What does this PR do?
I'm currently working on solutions to do model parallelism, offload weights to the CPU or the hard drive, and I've encountered some bugs linked to the way we use the `PreTrainedModel.device`: it grabs the first parameter of the model to infer a device for the whole model. This doesn't work when the model is:
- split on several devices and the first parameter grabbed happens to be on the wrong one
- not materialized because its parameters are offloaded on the CPU or the hard-drive.
So whenever it's possible, it would be great to rely on something else if we can, for instance some device where the inputs are. This PR does this for every use of this `device` attribute in modeling_utils and generation_utils, with the exception of some code where there are no inputs passed so we generate them and have to use something for the device.
If all works well, I plan to add all modeling files that make use of that attribute (when in the `dummy_inputs`, I'll leave the `self.device` but outside of it, will grab the device of any inputs we have). | 04-25-2022 17:40:45 | 04-25-2022 17:40:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,934 | closed | Fix Iterations for decoder | The current script works fine if the number of decoder layers = the number of encoder layers.
However, it will not work if the number of layers is not equal, like in t5-efficient models.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Models:
- t5: @patrickvonplaten, @patil-suraj
| 04-25-2022 17:06:24 | 04-25-2022 17:06:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,933 | closed | Fix RemBertTokenizerFast | # What does this PR do?
`RemBertTokenizer(Fast)` are similar to `AlbertTokenizer(Fast)`, the slow versions are based on `SentencePiece`.
Unlike `AlbertTokenizerFast`, the fast tokenizer `RemBertTokenizerFast` doesn't have
```python
self.can_save_slow_tokenizer = False if not self.vocab_file else True
```
And I got error when I want to call `save_pretrained()` after doing something like
```
tokenizer_fast.train_new_from_iterator(training_ds["text"], 1024)
```
(while working on the task for creating tiny random models/processor)
### Error message without this PR
```
File "/home/yih_dar_huggingface_co/transformers/create_dummy_models.py", line 457, in convert_processors
p.save_pretrained(output_folder)
File "/home/yih_dar_huggingface_co/transformers/src/transformers/tokenization_utils_base.py", line 2101, in save_pretrained
save_files = self._save_pretrained(
File "/home/yih_dar_huggingface_co/transformers/src/transformers/tokenization_utils_fast.py", line 591, in _save_pretrained
vocab_files = self.save_vocabulary(save_directory, filename_prefix=filename_prefix)
File "/home/yih_dar_huggingface_co/transformers/src/transformers/models/rembert/tokenization_rembert_fast.py", line 237, in save_vocabulary
if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file):
File "/home/yih_dar_huggingface_co/miniconda3/envs/py-3-9/lib/python3.9/posixpath.py", line 375, in abspath
path = os.fspath(path)
TypeError: expected str, bytes or os.PathLike object, not NoneType
``` | 04-25-2022 17:00:26 | 04-25-2022 17:00:26 | I could provide the full code sample to reproduce the issue without this PR if necessary.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Great to know the test is there (in common) π |
transformers | 16,932 | closed | CodeParrot data pretokenization | This PR adds code for data pretokenization of CodeParrot. In fact, it takes a long time to tokenize the data, especially for small models, so having a pretokenized dataset might improve the training speed. We also fix an error in the `README.md` and `scripts/initialize_model.py` inside the codeparrot repo.
cc @lvwerra | 04-25-2022 16:09:33 | 04-25-2022 16:09:33 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,931 | closed | ZeroShotClassificationPipeline not using GPU | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.13.0-37-generic-x86_64-with-glibc2.31
- Python version: 3.9.10
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
### Who can help?
Hello, @Narsil sorry to bother you once again....
When using a ZeroShotClassificationPipeline, it seems that a lot of preprocessing is done on CPU instead of GPU.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers.modeling_utils import PreTrainedModel
from transformers.models.auto.modeling_auto import AutoModelForSequenceClassification
from transformers.models.auto.tokenization_auto import AutoTokenizer
from transformers.pipelines.zero_shot_classification import ZeroShotClassificationPipeline
from transformers.tokenization_utils import PreTrainedTokenizer
import torch
import itertools
few_show_classification_model: PreTrainedModel = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")
few_show_classification_model = few_show_classification_model.to(torch.device("cuda"))
few_show_classification_tokenizer: PreTrainedTokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
classifier = ZeroShotClassificationPipeline(model=few_show_classification_model, tokenizer=few_show_classification_tokenizer, device=0, multi_label=False)
words = ["hello", "you", "I", "am", "beautiful", "and", "we", "like", "sugar"]
utterances = [" ".join(w) for w in list(itertools.permutations(words))[:4]]
contexts = [" ".join(w) for w in list(itertools.permutations(words))[:3000]]
classifier(utterances, contexts)
```
The model takes 2.9 GB of GPU RAM (plus a few small extras).
The GPU RAM does not change at all during inference.
The CPU RAM, however, gets used a lot (with a deliberately large number of contexts just for the sake of illustration):
in my case it goes from 5 GB to 22.6 GB while the GPU RAM stays the same.
### Expected behavior
I was expecting a CUDA out-of-memory error instead of a huge load on CPU RAM.
Maybe the model is computing on CPU because of a bad initialization?
Thanks a lot in advance.
Have a great day.
| 04-25-2022 15:38:49 | 04-25-2022 15:38:49 | @Ierezell ,
preprocessing will always happen on CPU so it's not entirely surprising. There's no way to make preprocessing happen on GPU (tokenization) afaik.
Here you're using 3k sentences X 3k labels so we're looking at 9M individual `input_ids` sequences that have to be generated.
Can you try doing this:
```python
from transformers.modeling_utils import PreTrainedModel
from transformers.models.auto.modeling_auto import AutoModelForSequenceClassification
from transformers.models.auto.tokenization_auto import AutoTokenizer
from transformers.pipelines.zero_shot_classification import ZeroShotClassificationPipeline
from transformers.tokenization_utils import PreTrainedTokenizer
import torch
import itertools
few_show_classification_model: PreTrainedModel = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")
few_show_classification_model = few_show_classification_model.to(torch.device("cuda"))
few_show_classification_tokenizer: PreTrainedTokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
classifier = ZeroShotClassificationPipeline(model=few_show_classification_model, tokenizer=few_show_classification_tokenizer, device=0, multi_label=False)
words = ["hello", "you", "I", "am", "beautiful", "and", "we", "like", "sugar"]
def utterances(words):
for w in itertools.permutations(words):
yield " ".join(w)
contexts = [" ".join(w) for w in list(itertools.permutations(words))[:3000]]
classifier(utterances(words), contexts)
```
Should be easier on your RAM, please note that `list(itertools.permutations)` is still creating `8!` (40k) objects.<|||||>Hello @Narsil,
Thanks for the fast reply :)
It was my guess but I'm happy to have the confirmation.
I just didn't think that pre-processing could take that much memory (the example is deliberately way too much, for sure).
As it's utterances X labels, the memory requirement can rise quite fast (in my case, 10 labels vs. 500).
Using a generator is indeed saving a good part of memory.
My fix was batching on the labels (contexts), roughly as sketched below.
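Something like this (a simplified sketch; merging/renormalizing the scores across label batches is left out):
```python
def classify_in_label_batches(classifier, texts, labels, label_batch_size=50):
    # Run the zero-shot pipeline on slices of the candidate labels so the
    # utterances-x-labels blow-up (and therefore the RAM usage) stays bounded.
    partial_results = []
    for start in range(0, len(labels), label_batch_size):
        partial_results.append(classifier(texts, labels[start : start + label_batch_size]))
    return partial_results
```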
Thanks again for your time and help.
Have a great day.
|
transformers | 16,930 | closed | [Generation] `length_penalty` means `beam_alpha` | https://github.com/huggingface/transformers/blob/508baf194313c397345af868202404e285494a28/src/transformers/generation_utils.py#L949-L952
https://github.com/huggingface/transformers/blob/508baf194313c397345af868202404e285494a28/src/transformers/generation_beam_search.py#L829
There are two issues about `length_penalty`:
1. Actually, `length_penalty=1` does **NOT** mean no penalty. It is no penalty when `length_penalty=0`, right?
2. This is different from the [`beam_alpha` paper](https://arxiv.org/pdf/1609.08144.pdf), why?
<img width="632" alt="image" src="https://user-images.githubusercontent.com/42370681/165103163-b5946237-b03e-4a24-bb3e-2122a9dacec2.png">
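For reference, writing the two out side by side (as I read the code line linked above and the paper; a sketch, not authoritative):
```latex
\text{transformers: } \mathrm{score}(Y) = \frac{\sum_t \log p(y_t \mid y_{<t}, x)}{|Y|^{\texttt{length\_penalty}}}
\qquad
\text{GNMT: } s(Y, x) = \frac{\sum_t \log p(y_t \mid y_{<t}, x)}{lp(Y)}, \quad
lp(Y) = \frac{(5 + |Y|)^{\alpha}}{(5 + 1)^{\alpha}}
```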
| 04-25-2022 14:05:28 | 04-25-2022 14:05:28 | See https://github.com/huggingface/transformers/issues/4915 https://github.com/huggingface/transformers/issues/4918#issuecomment-681985118 https://github.com/huggingface/transformers/issues/14768<|||||>cc @patrickvonplaten
https://github.com/huggingface/transformers/blame/508baf194313c397345af868202404e285494a28/src/transformers/generation_beam_search.py#L829<|||||>Hey @ShaneTian,
Good catch! I think that's exactly right - we should change the docs here to:
```py
length_penalty (`float`, *optional*, defaults to 1.0):
Exponential penalty to the length. 1.0 means that the beam score is penalized by the sequence length. 0.0 means no penalty. Set to values < 0.0 in order to encourage the
model to generate longer sequences, to a value > 0.0 in order to encourage the model to produce shorter
sequences.
```
Would you like to open a pull request for this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@ShaneTian note that the length penalty is different from the "alpha" penalty simply because we discovered it first ;-) Would be too difficult to change now though<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,929 | closed | Fix doc test quicktour dataset | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Makes sure that the `path` of the dataset is not returned.
See discussion: https://huggingface.slack.com/archives/C02CH2YP4EQ/p1650887822286239?thread_ts=1650816548.265789&cid=C02CH2YP4EQ
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-25-2022 12:47:51 | 04-25-2022 12:47:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,928 | closed | A bug in modeling_ibert.py | ```
class IBertEmbeddings(nn.Module):
def __init__(self, config):
super().__init__()
self.quant_mode = config.quant_mode
self.embedding_bit = 8
self.embedding_act_bit = 16
self.act_bit = 8
self.ln_input_bit = 22
self.ln_output_bit = 32
self.word_embeddings = QuantEmbedding(
config.vocab_size,
config.hidden_size,
padding_idx=config.pad_token_id,
weight_bit=self.embedding_bit,
quant_mode=self.quant_mode,
)
self.token_type_embeddings = QuantEmbedding(
config.type_vocab_size, config.hidden_size, weight_bit=self.embedding_bit, quant_mode=self.quant_mode
)
# position_ids (1, len position emb) is contiguous in memory and exported when serialized
self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)))
self.position_embedding_type = getattr(config, "position_embedding_type", "absolute")
# End copy
self.padding_idx = config.pad_token_id
self.position_embeddings = QuantEmbedding(
config.max_position_embeddings,
config.hidden_size,
padding_idx=self.padding_idx,
weight_bit=self.embedding_bit,
quant_mode=self.quant_mode,
)
# Integer-only addition between embeddings
self.embeddings_act1 = QuantAct(self.embedding_act_bit, quant_mode=self.quant_mode)
self.embeddings_act2 = QuantAct(self.embedding_act_bit, quant_mode=self.quant_mode)
# self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
# any TensorFlow checkpoint file
self.LayerNorm = IntLayerNorm(
config.hidden_size,
eps=config.layer_norm_eps,
output_bit=self.ln_output_bit,
quant_mode=self.quant_mode,
force_dequant=config.force_dequant,
)
self.output_activation = QuantAct(self.act_bit, quant_mode=self.quant_mode)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
def forward(
self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None, past_key_values_length=0
):
if position_ids is None:
if input_ids is not None:
# Create the position ids from the input token ids. Any padded tokens remain padded.
position_ids = create_position_ids_from_input_ids(
input_ids, self.padding_idx, past_key_values_length
).to(input_ids.device)
else:
position_ids = self.create_position_ids_from_inputs_embeds(inputs_embeds)
if input_ids is not None:
input_shape = input_ids.size()
else:
input_shape = inputs_embeds.size()[:-1]
if token_type_ids is None:
token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device)
if inputs_embeds is None:
inputs_embeds, inputs_embeds_scaling_factor = self.word_embeddings(input_ids)
else:
inputs_embeds_scaling_factor = None
token_type_embeddings, token_type_embeddings_scaling_factor = self.token_type_embeddings(token_type_ids)
embeddings, embeddings_scaling_factor = self.embeddings_act1(
inputs_embeds,
inputs_embeds_scaling_factor,
identity=token_type_embeddings,
identity_scaling_factor=token_type_embeddings_scaling_factor,
)
if self.position_embedding_type == "absolute":
position_embeddings, position_embeddings_scaling_factor = self.position_embeddings(position_ids)
embeddings, embeddings_scaling_factor = self.embeddings_act1(
embeddings,
embeddings_scaling_factor,
identity=position_embeddings,
identity_scaling_factor=position_embeddings_scaling_factor,
)
embeddings, embeddings_scaling_factor = self.LayerNorm(embeddings, embeddings_scaling_factor)
embeddings = self.dropout(embeddings)
embeddings, embeddings_scaling_factor = self.output_activation(embeddings, embeddings_scaling_factor)
return embeddings, embeddings_scaling_factor
def create_position_ids_from_inputs_embeds(self, inputs_embeds):
"""
We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids.
Args:
inputs_embeds: torch.Tensor
Returns: torch.Tensor
"""
input_shape = inputs_embeds.size()[:-1]
sequence_length = input_shape[1]
position_ids = torch.arange(
self.padding_idx + 1, sequence_length + self.padding_idx + 1, dtype=torch.long, device=inputs_embeds.device
)
return position_ids.unsqueeze(0).expand(input_shape)
```
In _modeling_ibert.py_, line 147 may use the wrong quantizer; it should be `self.embeddings_act2`, as sketched below.
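In other words, the second fused addition would use the second quantizer, something like this (a sketch of the suggested change, not a tested patch):
```python
# use embeddings_act2 for the second integer-only addition instead of reusing embeddings_act1
embeddings, embeddings_scaling_factor = self.embeddings_act2(
    embeddings,
    embeddings_scaling_factor,
    identity=position_embeddings,
    identity_scaling_factor=position_embeddings_scaling_factor,
)
```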
| 04-25-2022 12:47:04 | 04-25-2022 12:47:04 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,927 | closed | Fix wrong image conditional checking | Apparently, if an empty batch comes in, it is considered valid. However, line 138 then breaks since it tries to access element 0 of an empty batch.
| 04-25-2022 10:45:19 | 04-25-2022 10:45:19 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16927). All of your documentation changes will be reflected on that endpoint.<|||||>Hi,
Thanks for your PR. If we go ahead with this, then it should also be fixed for all other feature extractors, like ViT, BEiT, DeiT, etc. cc @patil-suraj <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Gently pinging @Charlyo, let me know if you want to finish this, otherwise happy to take over :) <|||||>Sorry guys! Feel free to take over @patil-suraj :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,926 | closed | Pytorch QA examples fix & clean-up (code dedup) | Thank you for the great library! I had to clean up QA examples, because of the duplicate pre- and post-processing code. However, while doing so I have encountered a number of issues that I had to fix. Please, see details below.
# What does this PR do?
1. refactoring: consolidating duplicate post-processing functions in a helper file (now shared between regular and no-trainer version)
2. Fixes evaluation errors popping up when you train/eval on squad v2 (one was newly encountered and one that was previously reported #15401 but not completely fixed).
3. Removes boolean arguments that don't use `store_true`. Please don't use these: **any** non-empty string is converted to `True` in this case, which is clearly not the desired behavior (and it creates a **lot** of confusion); see the small `argparse` sketch after this list.
4. all **no-trainer** test scripts are now saving metric values in the same way (with the right prefix ``eval_``), which is consistent with the **trainer**-based versions.
5. Adds the forgotten `model.eval()` in the **no-trainer** versions. This improved some results, but not all of them; please see the F1 scores and the discussion at the end.
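To illustrate point 3, a minimal sketch (the argument name is just for illustration, not taken verbatim from the scripts):
```python
import argparse

# Problematic: bool("False") is True, so "--version_2_with_negative False" still enables it
parser = argparse.ArgumentParser()
parser.add_argument("--version_2_with_negative", type=bool, default=False)
args = parser.parse_args(["--version_2_with_negative", "False"])
print(args.version_2_with_negative)  # True -- not what anyone expects

# Preferred: a store_true flag stays False unless it is explicitly passed
parser2 = argparse.ArgumentParser()
parser2.add_argument("--version_2_with_negative", action="store_true")
print(parser2.parse_args([]).version_2_with_negative)  # False
```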
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] You make sure to update the documentation with your changes? **I believe examples aren't covered by the documentation**
- [X] Did you write any new necessary tests? **I trained squad and squad v2 models and compared results (see the discussion below)**, but I am not sure if running more QA tests automatically will be feasible. Do note that the existing "unit-test" is very crude and does not permit detecting small regressions in model quality.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Perhaps, this can be of most interest for @sgugger, @patil-suraj.
## Comparing old and new performance + some potential issues
Some remaining issues:
1. Despite the fixes & improvements, there's still a discrepancy between no-trainer and original version for SQuAD v2 or the beam-search version.
2. In particular, for SQuAD v2 and the beam-search variant **without trainer**, both old and new numbers look very wrong to me.
Please note that to be able to run SQuAD v2 tests, **I had to apply utils_qa.py fixes to the old code as well**. Otherwise, it would have just failed:
The metric is F1; the exact-match scores show the same pattern.
| | previous | fixed |
|-----------------------------------|----------|-------|
| squad v1 | 88.4 | 88.4 |
| squad v1 (no trainer) | 86.7 | 88.5 |
| squad v1 (beam search) | 92.1 | 92.1 |
| squad v1 (beam search no trainer) | 90.2 | 91.0 |
| squad v2 (beam search) | 83.2 | 83.2 |
| squad v2 (beam search no trainer) | 4.9 | 50.1 | | 04-25-2022 10:37:31 | 04-25-2022 10:37:31 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your PR! The first point is not something we want, as we would like the user to see all the code in one file for the data preparation/basic postprocessing. The functions put in the `utils` module are a bit different in the sense we don't expect users to change that code, but adapting `prepare_train_features` to one's dataset for instance, is something we can expect. It comes with duplicate code, so some additional work on our side for maintenance, but that's okay since it's for the end user's benefit :-)
Changes 2 to 5 are very much welcome however. Would you mind just removing change 1 from your PR?<|||||>Hi @sgugger thank for a quick review. I can certainly revert these and retest/re-pull request.
BTW, do you have an idea why there's such a gap in performance for SQuAD v2 in the case of the beam-search version. There's also a small difference for v1. I would really love to know what's wrong here. Trainers are cool, but for some custom cases modifying a PyTorch training loop is so much easier.<|||||>There is probably another bug in the beam search version for squad V2. As this example is not used by a lot of people, we just didn't notice until now π¬ |
transformers | 16,925 | closed | Update build_pr_documentation.yml | Testing https://github.com/huggingface/doc-builder/pull/197
[don't merge] | 04-25-2022 10:19:09 | 04-25-2022 10:19:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,924 | closed | [BigScience176B] Model conversion from Megatron-LM to transformers | ### Feature request
Creating a thread here for the model conversion of the BigScience-176B model from Megatron-LM to the transformers library. I will summarize what I have done so far, the current status of the conversion procedure, as well as a summary of the 'discoveries' I have made and the small details we have to care about when dealing with the conversion. I did my work by forking @thomwolf 's fork. The tests have been done on the DGX machine (4 NVIDIA A100s).
## :cherry_blossom: Big picture
- Generate some samples with a recent checkpoint
- Testing the exactness of the logits / hidden-state values that we obtain with the same input, between the Megatron-LM model and the converted model. We use a small GPT2 trained on a dummy dataset (2 sentences). This model has been pushed to the Hub and is being used for integration tests.
- Apply these tests on a recent checkpoint of the 176B model to make sure about the robustness of the tests
## :paperclip: Main links:
- First PR: thomwolf/transformers#1
- WIP PR: thomwolf/transformers#2
- Final PR: #16514
- [The Small debug-GPT2 model](https://huggingface.co/bigscience/bigscience-small-testing)
## :hammer: Current status
- For now, all tests pass on the DGX's GPU (using different conda environments between the Megatron-LM model & the transformers model) with ```assertEqual```.
- The tests do not pass with ```assertEqual``` when running them on the CPU, but they pass with ```assertAlmostEqual```, with a tolerance of 0.05 for the logits after the ```LayerNorm``` on the embedding layer and a tolerance of ```1e-06``` on the final logits. Check the tests [here](https://github.com/thomwolf/transformers/blob/bigscience176b/tests/bigscience176b/test_embeddings_bigscience176b.py). This behavior of *non-exactness* seems to be expected, and we cannot do much about it according to pytorch/pytorch#76052
- Added simple reconstruction and encoding tests on the BigScience tokenizer
## :pushpin: Tips for conversion
+ Explicitly specifying the dtype of your modules when initializing them seems to help ensure exact reproducibility - added a `dtype` argument to the config file
+ Concatenating the weights from row-parallelized layers seems to return slightly different results; I made a reproducible script and raised an issue pytorch/pytorch#76232. The solution for now is to [manually aggregate the results across each TP rank](https://github.com/younesbelkada/transformers/blob/c83999f991137bd475d409a4b70f4903b256e608/src/transformers/models/bigscience176b/modeling_bigscience176b.py#L251) (see the small numerical sketch below). Needs further investigation for possible improvement of the conversion.
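A tiny illustration of the numerical issue (made-up shapes, not the conversion code itself): summing the per-rank partial matmuls is not bit-identical to one big matmul over the concatenated weight, even though the two are mathematically equal.
```python
import torch

torch.manual_seed(0)
x = torch.randn(4, 8)                              # activations
w0, w1 = torch.randn(4, 16), torch.randn(4, 16)    # two TP shards of a row-parallel weight

full = x @ torch.cat([w0, w1], dim=0)              # merged weight, single matmul
partial = x[:, :4] @ w0 + x[:, 4:] @ w1            # what the TP ranks compute, then sum

print(torch.equal(full, partial))                  # may be False (bit-level differences)
print(torch.allclose(full, partial))               # True within the default tolerance
```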
## :white_check_mark: Next steps
- [x] Fix integration tests on the PR thomwolf/transformers#2
- [x] Define which checkpoint to use for the next tests
- [x] Convert the model with the selected checkpoints and compare the hidden states values between the 2 models. -> fixed some issues in this new commit a4fa70c1a5042fdca7d0fbf26b0aad6ca99fdadc
- [x] `MixedFusedLayerNorm` and `FusedScaledSoftmax` seem to be replaceable respectively by `LayerNorm` and `Softmax` from `torch.nn`. Verify this assumption on the new checkpoints.
- [ ] Convert a sharded version of the large model and try the tests on that
cc @thomwolf @suzana-ilic @thomasw21 @stas00
### Motivation
The feature request is related to the [BigScience workshop](https://bigscience.huggingface.co/), where a large Language Model is currently being trained using [Megatron-LM](https://github.com/NVIDIA/Megatron-LM).
### Your contribution
Ultimately submitting a PR to add the BigScience-176B model to the transformers library - by ensuring the exactness of the operations between the converted model and the original trained model on Megatron-LM. | 04-25-2022 10:02:41 | 04-25-2022 10:02:41 | I will start by linking this thread to this PR. A possible next step and improvement could be to get some ideas of the equivalency tests that has been made by this PR bigscience-workshop/Megatron-DeepSpeed#121<|||||>Also I think the big picture is we want to generate ASAP. (perhaps even before checking exactitude of conversion :S)
<|||||>As a side note, I'm working on a solution to do model parallelism/offload while maximizing the GPU(s) memory/RAM available which should be useful to run this model on all kinds of setups (albeit more slowly). Should land in Accelerate in the coming weeks :-) <|||||>After running several tests on JZ using a small model (but way larger than the debug model):
+ I fixed issues regarding some operations that were not considered in the previous version of the model a4fa70c1a5042fdca7d0fbf26b0aad6ca99fdadc
+ There are still small discrepancies most likely due to `MixedFusedLayerNorm` used in Meg-DS. A workaround is to use the `apex.normalization.FusedLayerNorm` but needs apex to be installed.
But overall the model outputs the same logits with tiny discrepancies (`torch.testing.assert_allclose` passes, e.g., with `rtol=1e-7, atol=1e-02`), the exact same argmax for the same input, as well as the same summary metrics (min/max/mean). I still need to dig more to understand what causes these very small differences.
I will move soon to trying these tests with the large model and see how it will impact everything (maybe we will be able to figure out what causes this little changes)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing this issue since the whole discussion around the model conversion has been done in the WIP PR: #17202 |
transformers | 16,923 | closed | QA examples fixing & clean-up: | Thank you for the great library! I had to clean up QA examples, because of the duplicate pre- and post-processing code. However, while doing so I have encountered a number of issues that I had to fix. Please, see details below.
# What does this PR do?
1. refactoring: consolidating duplicate post-processing functions in a helper file (now shared between regular and no-trainer version)
2. Fixes evaluation errors popping up when you train/eval on squad v2-eval : utils_qa.py (one was newly encountered and one that was previously reported #15401).
3. Fixes SQuAD unit tests and ensures all boolean arguments use `store_true`. Please don't use boolean arguments that don't use `store_true`: `"False"`, `"false"`, or any other non-empty string is converted to `True` in this case, and this is clearly not the desired behavior.
4. all **no-trainer** test scripts are now saving metric values in the same way (with the right prefix ``eval_``). Previously, because of the bug described in item 3, the unit test was using SQuAD v2 metrics instead of SQuAD v1 metrics. v2 uses different metric name for the exact match. This was previously "fixed" at the level of the ``run_qa_no_trainer.py``. However, such a fix isn't necessary any more.
5. Adds forgotten model.eval() in the **no-trainer** versions. This fully fixed training of the **no-trainer**
variant for the regular squad QA model **without** beam search. In the case of **using beam-search** the gap decreased, but not fully. Yet it is small. Unfortunately, the beam-search SQuAD v2 version produces strange numbers (the older code is even worse), so this requires extra investigation IMHO. Please, see the F1 scores and the discussion below.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] You make sure to update the documentation with your changes? **I believe examples aren't covered by the documentation**
- [X] Did you write any new necessary tests? **I trained squad and squad v2 models and compared results (see the discussion below)**, but I am not sure if any new tests are needed. I also **fixed** a unit test so it uses the proper SQuAD (v1) metric.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Perhaps, this can be of most interest for @sgugger, @patil-suraj.
## Comparing old and new performance
Some remaining issues:
1. Despite the fixes & improvements, there's still a discrepancy between no-trainer and original version for SQuAD v2 or the beam-search version.
2. In particular, for SQuAD v2 and the beam-search variant **without trainer**, both old and new numbers look very wrong to me.
Please note that to be able to run SQuAD v2 tests, **I had to apply utils_qa.py fixes to the old code as well**. Otherwise, it would have just failed:
The metric is F1, the exact scores have the same pattern.
| | previous | fixed |
|-----------------------------------|----------|-------|
| squad v1 | 88.4 | 88.4 |
| squad v1 (no trainer) | 86.7 | 88.5 |
| squad v1 (beam search) | 92.1 | 92.1 |
| squad v1 (beam search no trainer) | 90.2 | 91.0 |
| squad v2 (beam search) | 83.2 | 83.2 |
| squad v2 (beam search no trainer) | 4.9 | 50.1 | | 04-25-2022 09:51:18 | 04-25-2022 09:51:18 | |
transformers | 16,922 | closed | Spanish translation of the file philosophy.mdx | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-25-2022 09:36:32 | 04-25-2022 09:36:32 | Spanish translation of the file philosophy.mdx #15947<|||||>Thank you @jkmg! Could you please add `philosophy` to [`transformers/docs/source/es/_toctree.yml`](https://github.com/huggingface/transformers/blob/main/docs/source/es/_toctree.yml)? As a reference, you can use the [new Translation](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md) guide (section "βοΈ Start translating"). This would allow the tests to pass.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@jkmg Thank you very much! Merged π€. Please let me know, through the #15947, if you wish to translate another part of the docs.
|
transformers | 16,921 | closed | QA examples fixing & clean-up: | Thank you for the great library! I had to clean up QA examples, because of the duplicate pre- and post-processing code. However, while doing so I have encountered a number of issues that I had to fix. Please, see details below.
# What does this PR do?
1. refactoring: consolidating duplicate post-processing functions in a helper file (now shared between regular and no-trainer version)
2. Fixes evaluation errors popping up when you train/eval on squad v2-eval : utils_qa.py (one was newly encountered and one that was previously reported #15401).
3. Fixes SQuAD unit tests and ensures all boolean arguments use `store_true`. Please don't use boolean arguments that don't use `store_true`: `"False"`, `"false"`, or any other non-empty string is converted to `True` in this case, and this is clearly not the desired behavior.
4. all **no-trainer** test scripts are now saving metric values in the same way (with the right prefix ``eval_``). Previously, because of the bug described in item 3, the unit test was using SQuAD v2 metrics instead of SQuAD v1 metrics. v2 uses different metric name for the exact match. This was previously "fixed" at the level of the ``run_qa_no_trainer.py``. However, such a fix isn't necessary any more.
5. Adds forgotten model.eval() in the **no-trainer** versions. This fully fixed training of the **no-trainer**
variant for the regular squad QA model **without** beam search. In the case of **using beam-search** the gap decreased, but not fully. Yet it is small. Unfortunately, the beam-search SQuAD v2 version produces strange numbers (the older code is even worse), so this requires extra investigation IMHO. Please, see the F1 scores and the discussion below.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] You make sure to update the documentation with your changes? **I believe examples aren't covered by the documentation**
- [X] Did you write any new necessary tests? **I trained squad and squad v2 models and compared results (see the discussion below)**, but I am not sure if any new tests are needed. I also **fixed** a unit test so it uses the proper SQuAD (v1) metric.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Perhaps, this can be of most interest for @sgugger, @patil-suraj.
## Comparing old and new performance
Some remaining issues:
1. Despite the fixes & improvements, there's still a discrepancy between no-trainer and original version for SQuAD v2 or the beam-search version.
2. In particular, for SQuAD v2 and the beam-search variant **without trainer**, both old and new numbers look very wrong to me.
Please note that to be able to run SQuAD v2 tests, **I had to apply utils_qa.py fixes to the old code as well**. Otherwise, it would have just failed:
The metric is F1, the exact scores have the same pattern.
| | previous | fixed |
|-----------------------------------|----------|-------|
| squad v1 | 88.4 | 88.4 |
| squad v1 (no trainer) | 86.7 | 88.5 |
| squad v1 (beam search) | 92.1 | 92.1 |
| squad v1 (beam search no trainer) | 90.2 | 91.0 |
| squad v2 (beam search) | 83.2 | 83.2 |
| squad v2 (beam search no trainer) | 4.9 | 50.1 | | 04-25-2022 09:13:48 | 04-25-2022 09:13:48 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,920 | closed | Fix `KeyError` when initialize the model with `ignore_mismatched_sizes=True` | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
`KeyError` is thrown when the model is initialized from pre-trained weights with `ignore_mismatched_sizes=True`. Reproduced as follows:
```python
>>> import transformers
>>> transformers.BertModel.from_pretrained("bert-base-cased", ignore_mismatched_sizes=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/shenyl/miniconda/envs/cuda111/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1882, in from_pretrained
model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_pretrained_model(
File "/home/shenyl/miniconda/envs/cuda111/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2003, in _load_pretrained_model
and state_dict[checkpoint_key].shape != model_state_dict[model_key].shape
KeyError: 'bert.embeddings.LayerNorm.weight'
```
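For context, a simplified sketch of the check involved (illustrative names, not the exact library code): the checkpoint's `state_dict` has to be indexed with the key exactly as it appears in the checkpoint, while the fixed key only exists on the model side.
```python
for checkpoint_key in original_loaded_keys:      # keys exactly as stored in the checkpoint
    model_key = _fix_key(checkpoint_key)         # key after the renaming fixes, i.e. what the model expects
    if (
        model_key in model_state_dict
        and state_dict[checkpoint_key].shape != model_state_dict[model_key].shape
    ):
        mismatched_keys.append(checkpoint_key)
```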
The cause is that the key modified by function `_fix_key` is not found in `state_dict`.
The solution is to use the original loaded keys when finding the mismatched keys. | 04-25-2022 08:23:50 | 04-25-2022 08:23:50 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,919 | closed | [TESTING] Update build_pr_documentation.yml | Important: not to be merged
Testing https://github.com/huggingface/doc-builder/pull/197 | 04-25-2022 08:02:44 | 04-25-2022 08:02:44 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,918 | closed | refactoring & fixing Pytorch QA examples | # What does this PR do?
This PR improves/fixes Pytorch QA examples in several ways:
1. extracted duplicating post-processing functions (now shared between regular and no-trainer version)
2. fixed squad v2-eval error in utils_qa.py (one was newly encountered and one that was previously reported.
3. added forgotten model.eval() in the "no-trainer" versions. This fully fixed training of the **no-trainer**
variant for the regular squad QA model. There might be still a small gap left for regular SQuAD (it might be just unlucky seed) and a big one for SQuAD v2. Please, see the numbers and the discussion below.
<!-- Remove if not applicable -->
Fixes #15401 (that was only partially fixed)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] **I believe examples aren't covered by the documentation** you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests? **I trained squad and squad v2 models and compared results (see the discussion below)**, but I am not sure if any new tests are needed.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Perhaps, this can be of most interest for @sgugger, @patil-suraj.
## Comparing old and new performance
Some remaining issues:
1. Despite the fixes & improvements, there's still a discrepancy between no-trainer and original version for SQuAD v2 or the beam-search version.
2. In particular, for SQuAD v2 and the beam-search variant **without trainer**, both old and new numbers look very wrong to me.
Please note that to be able to run SQuAD v2 tests, **I had to apply utils_qa.py fixes to the old code as well**. Otherwise, it would have just failed:
The metric is F1, the exact scores have the same pattern.
| | previous | fixed |
|-----------------------------------|----------|-------|
| squad v1 | 88.4 | 88.4 |
| squad v1 (no trainer) | 86.7 | 88.5 |
| squad v1 (beam search) | 92.1 | 92.1 |
| squad v1 (beam search no trainer) | 90.2 | 91.0 |
| squad v2 (beam search) | 83.2 | 83.2 |
| squad v2 (beam search no trainer) | 4.9 | 50.1 | | 04-25-2022 05:51:39 | 04-25-2022 05:51:39 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,917 | closed | refactoring & fixing Pytorch QA examples | # What does this PR do?
This PR improves/fixes Pytorch QA examples in several ways:
1. extracted duplicating post-processing functions (now shared between regular and no-trainer version)
2. added/fixed code to save statistics to some no-trainer versions (one was even buggy)
3. fixed squad v2-eval error in utils_qa.py (one was newly encountered and one that was previously reported.
4. added forgotten model.eval() in the "no-trainer" versions. This fully fixed training of the **no-trainer**
variant for the regular squad QA model. There might be still a small gap left for regular SQuAD (it might be just unlucky seed) and a big one for SQuAD v2. Please, see the numbers and the discussion below.
<!-- Remove if not applicable -->
Fixes #15401 (that was only partially fixed)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] **I believe examples aren't covered by the documentation** you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? **I trained squad and squad v2 models and compared results (see the discussion below)**, but I am not sure if any new tests are needed.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Perhaps, this can be of most interest for @sgugger, @patil-suraj.
## Comparing old and new performance
Some remaining issues:
1. Despite the fixes & improvements, there's still a discrepancy between no-trainer and original version for SQuAD v2 or the beam-search version.
2. In particular, for SQuAD v2 and the beam-search variant **without trainer**, both old and new numbers look very wrong to me.
Please note that to be able to run SQuAD v2 tests, **I had to apply utils_qa.py fixes to the old code as well**. Otherwise, it would have just failed:
The metric is F1, the exact scores have the same pattern.
| | previous | fixed |
|-----------------------------------|----------|-------|
| squad v1 | 88.4 | 88.4 |
| squad v1 (no trainer) | 86.7 | 88.5 |
| squad v1 (beam search) | 92.1 | 92.1 |
| squad v1 (beam search no trainer) | 90.2 | 91.0 |
| squad v2 (beam search) | 83.2 | 83.2 |
| squad v2 (beam search no trainer) | 4.9 | 50.1 | | 04-25-2022 05:04:47 | 04-25-2022 05:04:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,916 | closed | refactoring & fixing Pytorch QA examples | # What does this PR do?
This PR improves/fixes Pytorch QA examples in several ways:
1. extracted duplicating post-processing functions (now shared between regular and no-trainer version)
2. added/fixed code to save statistics to some no-trainer versions (one was even buggy)
3. fixed squad v2-eval error in utils_qa.py (one was newly encountered and one that was previously reported, it was fixed for regular squad QA model, but not for the model that uses beam search: #15401)
4. added forgotten model.eval() in the "no-trainer" versions.
<!-- Remove if not applicable -->
Fixes # 15401 (that was only partially fixed)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] **I believe examples aren't covered by the documentation** you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] **Examples do not have tests, however, I trained squad and squad v2 models and compare results (see the discussion below)** Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Perhaps, this can be of interest for @sgugger, @patil-suraj.
## Comparing old and new performance
Let me for simplicity just dump all the numbers here. Old means old example code, the new means the new one (refactored and with model.eval() added). As you can see in all the cases, the new code produces the same or better scores. **The only weird case is SQUAD v2** in the no-trainer mode.
Some remaining issues:
1. Frankly speaking, both old and new numbers look weird to me.
2. Despite the fixes & improvements, there's still a discrepancy between no-trainer and original version.
Also note that to be able to run SQuAD v2, **I had to apply utils_qa.py fixes to the old code as well**. Otherwise, it would have just failed:
```
./old/run_qa_beam_search/1/eval_results.json: "eval_f1": 92.13525592586892,
./old/run_qa_beam_search_no_trainer/1/eval_results.json: "f1": 90.18714704272388
./old/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_HasAns_f1": 85.72401519644451,
./old/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_NoAns_f1": 80.7569386038688,
./old/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_best_f1": 84.39435986584854,
./old/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_best_f1_thresh": -15.705571174621582,
./old/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_f1": 83.23692092011478,
./old/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "f1": 4.879995879559425,
./old/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "HasAns_f1": 5.23619957456295,
./old/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "NoAns_f1": 4.524810765349033,
./old/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "best_f1": 50.07346266505704,
./old/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "best_f1_thresh": -18.9681339263916
./old/run_qa/1/eval_results.json: "eval_f1": 88.3974945885421,
./old/run_qa_notrainer/1/eval_results.json: "eval_f1": 86.7048555821845,
./new/run_qa_beam_search/1/eval_results.json: "eval_f1": 92.13525592586892,
./new/run_qa_beam_search_no_trainer/1/eval_results.json: "f1": 91.04994418518018
./new/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_HasAns_f1": 85.72401519644451,
./new/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_NoAns_f1": 80.7569386038688,
./new/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_best_f1": 84.39435986584854,
./new/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_best_f1_thresh": -15.705571174621582,
./new/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_f1": 83.23692092011478,
./new/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "f1": 50.07159100480081,
./new/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "HasAns_f1": 0.0,
./new/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "NoAns_f1": 100.0,
./new/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "best_f1": 50.07159100480081,
./new/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "best_f1_thresh": 0.0
./new/run_qa/1/eval_results.json: "eval_f1": 88.3974945885421,
./new/run_qa_notrainer/1/eval_results.json: "f1": 88.45989105569917
``` | 04-25-2022 03:45:11 | 04-25-2022 03:45:11 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,915 | closed | Option to return output object from Trainer.evaluate | Added an option to return output object from Trainer.evaluate along with metrics. This can help in analyses which use predicted results of evaluation dataset. | 04-25-2022 03:34:20 | 04-25-2022 03:34:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Why not use the `predict` method in this case?<|||||>Thanks @sgugger. I didn't know predict can return metrics too. |
transformers | 16,914 | closed | LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking | ### Model description
LayoutLMv3 is the successor of the LayoutLM models. The models are specialized in multimodal document analysis tasks and achieve SOTA results on them.
The current [code](https://github.com/microsoft/unilm/blob/master/layoutlmv3/layoutlmft/models/layoutlmv3/modeling_layoutlmv3.py) for LayoutLMv3 is already written in the Hugging Face format.
Since the code is already close to the Hugging Face format, I am not sure whether integrating this model into this repo is strictly necessary, but having it available in the library itself would make it easier to use.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Model Implementation: https://github.com/microsoft/unilm/tree/master/layoutlmv3
Model Weights: https://huggingface.co/microsoft/layoutlmv3-base
Authors: Yupan Huang,Tengchao Lv, Lei Cui, Yutong Lu ,Furu Wei | 04-25-2022 02:04:22 | 04-25-2022 02:04:22 | cc @NielsRogge :) |
transformers | 16,913 | closed | Missing `f` prefix on f-strings fix | Fixes #16911 | 04-24-2022 21:24:53 | 04-24-2022 21:24:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,912 | closed | TF: XLA logits processors - minimum length, forced eos, and forced bos | # What does this PR do?
(Review after https://github.com/huggingface/transformers/pull/16899)
A few more XLA-compatible logits processors -- minimum length, forced eos, and forced bos. Only the first one needed changes, mostly to avoid needless retracing (it actually compiled without changes but would trigger a retrace at iteration, which would be super slow).
After this PR, the only remaining processors are the bad words and ngrams ones. | 04-24-2022 17:05:49 | 04-24-2022 17:05:49 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,911 | closed | Missing `f` prefix on f-strings | Some strings look like they're meant to be f-strings but are missing the `f` prefix, meaning variable interpolation won't happen.
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/utils/hub.py#L751
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/configuration_utils.py#L637
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/pipelines/audio_utils.py#L58
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/pipelines/audio_utils.py#L147
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/models/auto/feature_extraction_auto.py#L310
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/models/xglm/modeling_flax_xglm.py#L156
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/models/plbart/modeling_plbart.py#L1025
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/models/bert/convert_bert_original_tf2_checkpoint_to_pytorch.py#L132
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/models/bart/modeling_bart.py#L1053
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/models/prophetnet/modeling_prophetnet.py#L760
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/tests/extended/test_trainer_ext.py#L269
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/examples/research_projects/onnx/summarization/bart_onnx/generation_onnx.py#L642
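The pattern in question, in miniature (illustrative, not copied from the lines linked above):
```python
name = "config.json"
print("Could not locate {name}")    # missing f prefix -> prints the braces literally
print(f"Could not locate {name}")   # intended behaviour
```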
I found this issue automatically. I'm a bot. Beep Boop π¦. See other issues I found in your repo [here](https://codereview.doctor/huggingface/transformers) | 04-23-2022 22:15:02 | 04-23-2022 22:15:02 | |
transformers | 16,910 | closed | push_to_hub on custom tokenizer causing a flood of "deadlock" messages | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.31
- Python version: 3.9.10
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@SaulLu @Narsil @n1t0
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Reproduction code available here: [slides.ipynb](https://github.com/cakiki/huggingface-intro/blob/11db121d492762362bc5e1637950e2e269571a0d/slides.ipynb) if you scroll down to `π€ Tokenizers`
**TL;DR:**
I'm training a custom rust `WordPiece` tokenizer, wrapping that into a `PreTrainedTokenizerFast` and calling `.push_to_hub` on that. The push succeeds but I get the `The current process just got forked, after parallelism has already been used.` message, even when I don't use the tokenizer.
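Schematically, the workflow is the following (the corpus path and repo name here are placeholders, the linked notebook has the actual code):
```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers
from transformers import PreTrainedTokenizerFast

# train a bare Rust WordPiece tokenizer
tokenizer = Tokenizer(models.WordPiece(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
trainer = trainers.WordPieceTrainer(vocab_size=30_000, special_tokens=["[UNK]", "[PAD]"])
tokenizer.train(files=["corpus.txt"], trainer=trainer)  # placeholder corpus

# wrap it so it can be used/pushed like any other fast tokenizer
wrapped = PreTrainedTokenizerFast(tokenizer_object=tokenizer, unk_token="[UNK]", pad_token="[PAD]")
wrapped.push_to_hub("my-wordpiece-tokenizer")  # placeholder repo name; this push is where the warning/hang shows up
```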
I've tried:
setting the `TOKENIZERS_PARALLELISM` to false -> all jupyter notebook cells that need the internet hang (including `push_to_hub`)
setting the `TOKENIZERS_PARALLELISM` to true -> only `push_to_hub` hangs.
### Expected behavior
```shell
N/A
```
| 04-23-2022 19:59:14 | 04-23-2022 19:59:14 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,909 | closed | Mask Token Spacing in RobertaTokenizer | ### System Info
- `transformers` version: 4.18.0
- Platform: Linux-5.16.13-200.fc35.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.2
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import RobertaTokenizer, BertTokenizer
bert_tk = BertTokenizer.from_pretrained("bert-base-cased")
roberta_tk = RobertaTokenizer.from_pretrained("roberta-base")
enc_bert = bert_tk("Testing the spacing of " + bert_tk.pad_token + " and " + bert_tk.mask_token + " tokens.")
enc_roberta = roberta_tk("Testing the spacing of " + roberta_tk.pad_token + " and " + roberta_tk.mask_token + " tokens.")
dec_bert = bert_tk.decode(enc_bert.input_ids, skip_special_tokens=False)
dec_roberta = roberta_tk.decode(enc_roberta.input_ids, skip_special_tokens=False)
print("bert", dec_bert)
print("roberta", dec_roberta)
```
Output is:
```
bert [CLS] Testing the spacing of [PAD] and [MASK] tokens. [SEP]
roberta <s>Testing the spacing of <pad> and<mask> tokens.</s>
```
### Expected behavior
I expected Roberta's `<mask>` token to be surrounded by spaces as it is done with the `<pad>` token. In BERT, this is done the same way for both special tokens - why not in Roberta?
If someone confirms that this is indeed a bug and not expected behavior, I would be happy to try to get to the root cause myself.
| 04-23-2022 18:29:37 | 04-23-2022 18:29:37 | Hello @mpoemsl ,
Sorry for the late reply. I don't think it's really a bug but I'd be interested to know why you used the decoding feature with Roberta (and Bert). :smile: <|||||>Hi @SaulLu, thanks for your reply!
I have been working on a model that uses representations derived from Transformer language models (e.g. RoBERTa, BERT) and that is trained on annotations on word, token, and character level, so I had to do a lot of mapping between those granularities and was inspecting them with `decode`.
The `RobertaTokenizer` has shown a lot of behavior that to me was unexpected, so I am mostly sticking with BERT for now. Another example is the `add_prefix_space=True` behavior:
```python
from transformers import RobertaTokenizer
tk = RobertaTokenizer.from_pretrained("roberta-base", add_prefix_space=True)
enc = tk(["Testing", "spacing", "lorem", "ipsum", "abc"], return_tensors="pt", is_split_into_words=True)
dec = tk.batch_decode(enc.input_ids.T)
print(dec)
```
Output is:
```
['<s>', ' Testing', ' spacing', ' lore', 'm', ' ', 'ips', 'um', ' ab', 'c', '</s>']
```
What is surprising to me here is that it inserted a literal space token before `'ips'` and not a space-prefixed `' ips'`. It probably has to do with whether the space-prefixed token is in the vocab or not.
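(One way to check that hypothesis directly: `Ġ` is the marker RoBERTa's byte-level BPE uses for a leading space, so you can look for the space-prefixed piece in the vocab and inspect the raw pieces.)
```python
vocab = tk.get_vocab()
print("Ġips" in vocab, "ips" in vocab)  # is the space-prefixed variant actually in the vocab?
print(tk.convert_ids_to_tokens(tk(" ipsum", add_special_tokens=False).input_ids))
```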
I agree that those are just quirks of RoBERTa and not really bugs, but should this behavior perhaps be noted in the documentation somewhere? The [current docs on RobertaTokenizer](https://huggingface.co/docs/transformers/model_doc/roberta#transformers.RobertaTokenizer) do not mention the distinction of whether the space-prefixed version of a token is in the vocab and they do not mention the spacing behavior of special tokens.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,908 | closed | Name of LayerNorm parameter in RobertaLMHead. | As you know, current training code using `weight decay` usually detects target parameters based on their names.
e.g.)
```python
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
    {"params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
     "weight_decay": args.weight_decay},
    {"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
     "weight_decay": 0.0},
]
```
In the code below
https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/models/roberta/modeling_roberta.py#L696,
the LayerNorm module is defined as "layer_norm". Because of this "layer_norm" naming, weight decay will be applied to RobertaLMHead.layer_norm, which is different from the intent.
```python
class RobertaLMHead(nn.Module):
    """Roberta Head for masked language modeling."""

    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
        self.decoder = nn.Linear(config.hidden_size, config.vocab_size)
        self.bias = nn.Parameter(torch.zeros(config.vocab_size))
        self.decoder.bias = self.bias

    def forward(self, features, **kwargs):
        x = self.dense(features)
        x = gelu(x)
        x = self.layer_norm(x)
        # project back to size of vocabulary with bias
        x = self.decoder(x)
        return x

    def _tie_weights(self):
        # To tie those two weights if they get disconnected (on TPU or when the bias is resized)
        self.bias = self.decoder.bias
```
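For what it's worth, a sketch that builds the optimizer groups from module types instead of parameter names (so it does not depend on whether the attribute is called `LayerNorm` or `layer_norm`):
```python
import torch.nn as nn

def get_grouped_parameters(model, weight_decay):
    # parameters that should not be decayed: every bias, plus anything owned by a LayerNorm module
    no_decay_names = set()
    for module_name, module in model.named_modules():
        if isinstance(module, nn.LayerNorm):
            for param_name, _ in module.named_parameters(recurse=False):
                no_decay_names.add(f"{module_name}.{param_name}" if module_name else param_name)
    for param_name, _ in model.named_parameters():
        if param_name.endswith("bias"):
            no_decay_names.add(param_name)

    return [
        {"params": [p for n, p in model.named_parameters() if n not in no_decay_names],
         "weight_decay": weight_decay},
        {"params": [p for n, p in model.named_parameters() if n in no_decay_names],
         "weight_decay": 0.0},
    ]
```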
Is there any special reason?
I don't think it will cause a significant impact when training the model, but I wonder what the intention was.
| 04-23-2022 14:51:30 | 04-23-2022 14:51:30 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,907 | closed | [WIP] Enable reproducibility for distributed trainings | # What does this PR do?
This PR ensures reproducibility for distributed trainings by setting a seed for each worker in the dataloader and setting environment variables for CUDA.
This PR is motivated by [this issue](https://github.com/huggingface/transformers/issues/16549#).
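A rough sketch of the two ingredients (this is not the final API, just the standard PyTorch recipe the PR builds on):
```python
import os
import random

import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

def seed_worker(worker_id):
    # derive each dataloader worker's seed from the global torch seed
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

generator = torch.Generator()
generator.manual_seed(42)

dataset = TensorDataset(torch.arange(100).float())
loader = DataLoader(dataset, batch_size=8, num_workers=2, worker_init_fn=seed_worker, generator=generator)

# opt into deterministic CUDA kernels (comes with a speed penalty)
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
torch.use_deterministic_algorithms(True)
```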
## Who can review?
@saattrupdan @sgugger I am looking forward to your feedback | 04-23-2022 14:26:48 | 04-23-2022 14:26:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for working on this, that's an important feature! So as to not introduce a breaking change, and for clarity of the API, I'd personally vouch for not adding the `enable_determinism` flag to the `set_seed` method.
>
> From the title of the method I understand it should set the seed, and that's it. I don't think it should do anything else. However, the `enable_determinism_for_distributed_training` method likely needs the seed to be set in order to benefit from full determinism, so I'd even push to have the `set_seed` method called inside the `enable_determinism_for_distributed_training`, adding a `seed` argument to that last method.
>
> What do you think?
I like this idea. I can implement it after we reach a conclusion on it, however, it is not clear to me how to implement it. Could you point me to which parts of the code I need to change/pay attention not to break anything if we decide to go for this idea? <|||||>@sgugger Thanks for the pointers and sorry for not being so clear. I would like to know in which places `enable_full_determinism` should be called. Currently, `set_seed` is called several places in the codebase. I don't think these calls will be replaced with `enable_full_determinism`.
With the latest commits, I have already addressed your pointers. Now I am waiting your feedback for where to call `enable_full_determinism` in the codebase. It is not called any place in the codebase right now.<|||||>There can be an added flag in the `TrainingArguments` and we can call this function instead of `set_seed` in the `Trainer`. Otherwise it will be for the users to use this one instead of `set_seed` in their own scripts (you should make it accessible in the main init by the way!)<|||||>@sgugger I think I have addressed all your comments. Is there anything that needs to be done for this PR?<|||||>Is it normal that 3 tests fail suddenly after a commit in a docstring? I couldn't understand why tests are failing.<|||||>Those are just flaky, no link to your PR. Thanks again for all your work on this!<|||||>@sgugger @hasansalimkanmaz I had a question about this PR - why is it necessary to set `CUDA_LAUNCH_BLOCKING`? This disables asynchronous execution of CUDA programs, but the cuda/pytorch docs don't mention it necessary for deterministic training? I do use it to get the "true" stack trace when there are device-side asserts but was wondering what role it plays in deterministic training. Many thanks!
<|||||>@alexcoca It's required to make some CUDA algorithms deterministic if the CUDA version is older than 10.2. I suppose it could be replaced by a CUDA version check somehow, and only using it if it's an old version?<|||||>@saattrupdan I would go for this approach, because running the CUDA programs in asynchronous mode will definitely slow things down beyond belief. I implemented this PR myself without the `CUDA_LAUNCH_BLOCKING` setting and will report if I manage to preserve determinism.<|||||>I experimented with training a dialogue state tracking model on the SGD corpus starting from Google's v1.1 T5 (220M) paramaters. I allowed the model to train for roughly two epochs and evaluated task oriented performance every 2k steps (max train steps was 12k).
Ran 4 experiments: 2 in which I set the seed, and an additional 2 where I do roughly the same as `ensure_determinism` except setting `CUDA_LAUNCH_BLOCKING`. I also set `CUBLAS_WORKSPACE_CONFIG=':4096:8'`. Each experiment was trained on 2 A100-80GB with `cuda/11.4 openmpi/4.1.1/gcc-9.4.0-epagguv`, `pytorch 1.10` and transformers `4.19.2`. You can see below that I was able to reproduce the metrics in all runs and with no major performance hits. I guess that convolution benchmarking and non-det ops are less relevant for T5. With `4.18.0` the performance was wreaking havoc on the same seed, sign that the data ordering was the culprit.


I guess the moral of the story here is that one could:
- Check CUDA version to avoid running in blocking mode when not necessary
- potentially allow the user to specify which `CUBLAS_WORKSPACE_CONFIG` to use, as `:16:8` may impact performance (see [here](https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility)); a rough sketch of both points is below
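(A rough sketch of both points, assuming the helper ends up looking something like this:)
```python
import os
import torch

def _needs_cuda_launch_blocking():
    # CUDA_LAUNCH_BLOCKING is only needed for determinism on CUDA < 10.2
    if torch.version.cuda is None:
        return False
    major, minor = (int(v) for v in torch.version.cuda.split(".")[:2])
    return (major, minor) < (10, 2)

if _needs_cuda_launch_blocking():
    os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
# respect a value the user already exported instead of overriding it
os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":4096:8")
```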
@sgugger ?<|||||>Agreed for the first one. For the second one, we could avoid overriding an existing `CUBLAS_WORKSPACE_CONFIG` if it's already in the env? In all cases, it should be clearly stated in the doc of the flag that triggers the full reproducibility that it comes at a performance price.<|||||>Yes, I agree with the above! I'm at ACL next week but I'll try and open a small PR to address this the week after!
<|||||>Thanks, @alexcoca for noticing this and for your time. |
transformers | 16,906 | closed | Add missing whitespaces in RuntimeError message | # What does this PR do?
Fixes #16905
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
---
Add whitespace in the message
```python
>>> from transformers import pipeline
>>> pipeline(task=None, model=None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/.../venv/lib/python3.9/site-packages/transformers/pipelines/__init__.py", line 495, in pipeline
raise RuntimeError(
RuntimeError: Impossible to instantiate a pipeline without either a task or a model being specified. Please provide a task class or a model
```
| 04-23-2022 10:57:03 | 04-23-2022 10:57:03 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,905 | closed | Missing whitespaces at RuntimeError message | Trailing whitespaces are missing,
https://github.com/huggingface/transformers/blob/31ec2cb2badfbdd4c1ac9c6c9b8a74e974984206/src/transformers/pipelines/__init__.py#L494-L499
so the error message is a little hard to read.
```python
>>> from transformers import pipeline
>>> pipeline(task=None, model=None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/.../venv/lib/python3.9/site-packages/transformers/pipelines/__init__.py", line 495, in pipeline
raise RuntimeError(
RuntimeError: Impossible to instantiate a pipeline without either a task or a modelbeing specified.Please provide a task class or a model
```
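(For context, the missing spaces come from Python's implicit concatenation of adjacent string literals; a minimal illustration, not the actual source:)
```python
message = (
    "Impossible to instantiate a pipeline without either a task or a model"
    "being specified."  # no trailing space on the previous literal, so the words run together
)
print(message)  # ...a task or a modelbeing specified.
```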
- Python 3.9.4
- transformers 4.18.0 | 04-23-2022 10:49:28 | 04-23-2022 10:49:28 | |
transformers | 16,904 | closed | Source link of transformers.pipeline is broken | This is about a **documentation link error**.
There is an issue on the page https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
<img width="550" alt="image" src="https://user-images.githubusercontent.com/21273221/164890499-54a79a1a-12c7-4776-bcf7-c39bf6f3efc1.png">
**Procedure**
Click the link of `<source>`, then go to https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/pipelines.py#L372 but see 404 Page not found on GitHub.
**Correct link**
I found that the correct link is https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/pipelines/__init__.py#L372
---
It seems that the content is generated by
https://github.com/huggingface/transformers/blob/31ec2cb2badfbdd4c1ac9c6c9b8a74e974984206/docs/source/en/main_classes/pipelines.mdx#L123 | 04-23-2022 10:34:19 | 04-23-2022 10:34:19 | cc @sgugger @mishig25 <|||||>`doc-builder` doesn't like objects defined in `__init__`, will send a fix.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,903 | closed | T5-base memory usage for inference | ### System Info
```shell
Ubuntu 18.04 RTX 2080
```
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Just load t5 base from any example.
### Expected behavior
Is it normal that T5-base uses around 1000 MB of GPU memory in half precision (FP-16) for inference?
Many other models use roughly 50 percent of FP-32 model size for FP-16 which should be only 400-500 MB.
I read here that the model https://huggingface.co/google/t5-efficient-small-el16 is 350 MB in size, so it uses roughly 50% of that size (175 MB) when used for inference.
Is there anything to do to reduce the memory usage? T5-large, for example, also in FP-16, uses around 2 GB, which is relatively much better.
I expected memory usage to be 50% of model size for fp 16.
| 04-23-2022 09:54:34 | 04-23-2022 09:54:34 | @Oxi84,
Could you please provide us with a codesnippet that shows the error?<|||||>@patrickvonplaten
Here it is - it takes 1300 MB of GPU memory and the model size is just 850 MB. I tried all combinations, and 1300 MB is the minimum. For example, T5-large takes around 2000 MB, and the model size is 2.7 GB, so it is more than 3x larger.
I expected T5 base would not use more than 600 MB of GPU memory.
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer, T5TokenizerFast

model1b = T5ForConditionalGeneration.from_pretrained("iarfmoose/t5-base-question-generator", cache_dir="/root/Desktop/model_cache_tmp1/")
model1b.eval()
model1b.half()
model1b.to("cuda")
```
<|||||>The good thing is that when I load the base and large models together it takes 2.7 GB of memory, while base alone is 1.3 GB and large alone is 2.3 GB. Also, adding additional base models seems to take just 200 MB. I suppose there is some kind of general cache you have implemented for all T5 models together.
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer, T5TokenizerFast

model1ba = T5ForConditionalGeneration.from_pretrained("t5-base", cache_dir="/root/Desktop/model_cache_tmp1/")
model1ba.eval()
model1ba.half()
model1ba.to("cuda")

import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer, T5TokenizerFast

model1b = T5ForConditionalGeneration.from_pretrained("t5-large", cache_dir="/root/Desktop/model_cache_tmp1/")
model1b.eval()
model1b.half()
model1b.to("cuda")

input(11)
```
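(For anyone comparing numbers: a quick way to separate what the weights themselves take from the fixed CUDA context that `nvidia-smi` also counts. `memory_allocated` only tracks tensors, so the gap between it and `nvidia-smi` is mostly the CUDA/cuDNN context:)
```python
import torch
from transformers import T5ForConditionalGeneration

torch.cuda.init()
print("allocated before:", torch.cuda.memory_allocated() / 2**20, "MiB")

model = T5ForConditionalGeneration.from_pretrained("t5-base").half().eval().to("cuda")
print("allocated after: ", torch.cuda.memory_allocated() / 2**20, "MiB")  # roughly the fp16 weights
print("reserved:        ", torch.cuda.memory_reserved() / 2**20, "MiB")   # caching allocator pool
```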
<|||||>Hey @Oxi84,
Note that some GPU memory is always taken up by PyTorch itself. See thread here: https://github.com/huggingface/transformers/pull/16881#issuecomment-1106576937<|||||>Thanks.
It is an awesome idea to merge all the models into one Flask file/server; this saved me 1 GB of memory.
I use gc.collect() and torch.cuda.empty_cache() to remove things from memory after each inference, or after a certain number of batches if not after each one.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,902 | closed | migrate azure blob for beit checkpoints | ## Motivation
We are going to use a new blob account to store the checkpoints.
## Modification
Modify the azure blob storage URLs for BEiT checkpoints.
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-23-2022 05:09:41 | 04-23-2022 05:09:41 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,901 | closed | [WIP] Avoid BERT `attention_mask` from promoting dtype in self-attention `forward()` | ### Overview
~The `attention_mask` passed into `BertSelfAttention.forward()` is taken from the BERT model's first parameter dtype at **construction time**.~
<details>
<summary>Code Pointers</summary>
https://github.com/huggingface/transformers/blob/22fc93c4d9608fa9cd171b4f3044f8c756f86773/src/transformers/models/bert/modeling_bert.py#L985
https://github.com/huggingface/transformers/blob/22fc93c4d9608fa9cd171b4f3044f8c756f86773/src/transformers/modeling_utils.py#L655
https://github.com/huggingface/transformers/blob/22fc93c4d9608fa9cd171b4f3044f8c756f86773/src/transformers/modeling_utils.py#L556-L561
https://github.com/huggingface/transformers/blob/22fc93c4d9608fa9cd171b4f3044f8c756f86773/src/transformers/modeling_utils.py#L123-L125
</details>
However, if a user only casts down the input or parameter dtype(s) (e.g. to `torch.float16`) **after** construction time and before running `forward()`, the `extended_attention_mask` will still have dtype `torch.float32` rather than the reduced precision dtype.
Due to PyTorch [type promotion semantics](https://pytorch.org/docs/stable/tensor_attributes.html#type-promotion-doc), `attention_scores = attention_scores + attention_mask` will promote `attention_scores` to `torch.float32` and propagate through the remainder of the forward pass, defeating the intention of the reduced precision.
https://github.com/huggingface/transformers/blob/22fc93c4d9608fa9cd171b4f3044f8c756f86773/src/transformers/models/bert/modeling_bert.py#L343
In order to avoid this behavior, the model parameters must be cast to the reduced precision at construction time, which is restrictive. This PR ensures that the addition does not change the dtype.
**Before**
`attention_scores`: FP16
`attention_mask`: FP32
==> FP32
`attention_scores`: FP32
`attention_mask`: FP16
==> FP32
**After**
`attention_scores`: FP16
`attention_mask`: FP32 -> FP16
==> FP16
`attention_scores`: FP32
`attention_mask`: FP16 -> FP32
==> FP32
(Hence, we see that the reverse setting has no change in behavior, while for the desired setting, we prevent the unwanted type promotion.)
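(A tiny standalone illustration of the promotion rule, independent of the model code:)
```python
import torch

scores = torch.zeros(2, 2, dtype=torch.float16)
mask = torch.zeros(2, 2, dtype=torch.float32)

print((scores + mask).dtype)                   # torch.float32 -- the fp16 operand gets promoted
print((scores + mask.to(scores.dtype)).dtype)  # torch.float16 -- casting the mask keeps reduced precision
```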
### Test Plan
I ran `tests/bert/test_modeling_bert.py` locally with 4 GPUs and 68 passed and 11 skipped. Since this is a small change, direct inspection may be more valuable.
| 04-23-2022 01:52:06 | 04-23-2022 01:52:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,900 | closed | Add missing ckpt in config docs | # What does this PR do?
As discussed on Slack, I worked on the `Config` files to add missing information about checkpoints, or correct them.
- I tried to check the mentioned checkpoints are actually on the Hub
- also tried to make sure the checkpoints are for the target architecture
- I didn't verify the statement `Instantiating a configuration with the defaults will yield a similar configuration to that of the Speech2Text2 [mentioned checkpoint]`
- in particular, the hyperparameters like `hidden_dim`, `num_layers` might be different
- it says `similar`, so I think it is fine (..?)
@patrickvonplaten Could you take a look on the speech models?
@NielsRogge Could you take a look on the vision models? | 04-22-2022 20:46:14 | 04-22-2022 20:46:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for fixing all of those! It would be awesome to have some kind of quality script to check we don't introduce new faulty checkpoints.
Yes, I do have some (draft) check locally. I plan to add it in another PR (unless it's necessary to do so in this PR).<|||||>Thank you @NielsRogge I should try to use the correct names, as defined in `MODEL_NAMES_MAPPING`.<|||||>> Thanks a lot for this PR, awesome that this gets improved.
>
> Left some comments, just for consistency, I would always use the template:
>
> > "will yield a similar configuration of that of the - snake-cased model name - [checkpoint name](link) architecture".
I will add this to the check I currently have (locally, but will push to another PR), thanks!<|||||>Merge now. Thanks for the review.
With this PR, all configs are good except the following (which are expected, since those composite models don't have full default config arguments - they rely on the encoder and decoder configs.)
- DecisionTransformerConfig
- VisionEncoderDecoderConfig
- VisionTextDualEncoderConfig
- CLIPConfig
- SpeechEncoderDecoderConfig
- EncoderDecoderConfig
- RagConfig
|
transformers | 16,899 | closed | TF: XLA Logits Warpers | # What does this PR do?
This PR enables XLA on the logits warpers... which actually needed no changes. In essence, it adds XLA tests to ensure we don't regress. | 04-22-2022 18:28:56 | 04-22-2022 18:28:56 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@patrickvonplaten Sorry, I know you've already reviewed this, but I'm going to re-request your review. I realized the tests were much easier to understand (and with fewer lines) if they were parametrized, instead of having two tests (one for XLA, another for non-XLA) with shared code π
|
transformers | 16,898 | closed | ValueError: cannot find context for 'fork' when processor_with_lm.batch_decode(_logits) | ### System Info
```shell
## Environment info
- `transformers` version: 4.17.0
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.8.13
- PyTorch version (GPU?): 1.9.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
## To reproduce
- The model I am using (Wav2Vec2.0 Large XLS-R 53 English):
- Steps to reproduce the behavior:
1. I am [fine-tuning Wav2Vec with LM Head](https://huggingface.co/blog/fine-tune-wav2vec2-english) using WikiText to produce 5-grams LM. I downloaded the fine-tuned model dir locally and was able to perform inference on my audio `.wav` file(s)
2. Please find [here](https://drive.google.com/drive/folders/1IBUTglXLw4IX8uKC0qmGKKhkoCvc3s94?usp=sharing), model files, test audio file, and requirements.txt if needed to reproduce the problem
### Code snippet
```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM
from datasets import load_dataset
import soundfile as sf
from os import getcwd
from os.path import join as path_join  # assuming path_join refers to os.path.join
from pprint import pprint
model_name = "jonatasgrosman/wav2vec2-large-xlsr-53-english"
model = Wav2Vec2ForCTC.from_pretrained(model_name)
processor_path = path_join(getcwd(), "stt_assets", "stt_model")
processor = Wav2Vec2ProcessorWithLM.from_pretrained(processor_path)
dataset = load_dataset("timit_asr", split="test").shuffle().shuffle().select(range(100))
char_translations = str.maketrans({"-": " ", ",": "", ".": "", "?": ""})
def prepare_example(example):
    example["speech"], _ = sf.read(example["file"])
    example["text"] = example["text"].translate(char_translations)
    example["text"] = " ".join(example["text"].split())  # clean up whitespace
    example["text"] = example["text"].lower()
    return example
dataset = dataset.map(prepare_example, remove_columns=["file"])
pprint(dataset)
features = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**features).logits
# logits shape is torch.Size([100, 304, 33])
transcription = processor.batch_decode(logits)
# EXCEPTION IS RAISED in `processor.batch_decode()` ValueError: cannot find context for 'fork'
print(transcription)
```
### Expected behavior
```
What I am expecting is that I get a list of transcriptions from `processor.batch_decode()`
but I get this `ValueError: cannot find context for 'fork'` Exception. I am using Windows 11,
I have tried to research it and I guess it is something related to multiprocessing but I could
not really figure out how to solve it yet
```
| 04-22-2022 18:27:14 | 04-22-2022 18:27:14 | Related https://github.com/woven-planet/l5kit/issues/129<|||||>Hey @elsheikh21,
Let's try to narrow the bug further down :-)
Does the following work for you:
```python
from multiprocessing import get_context
pool = get_context("fork").Pool(num_processes)
pool.close()
```
?
<|||||>Hello @patrickvonplaten
I have tried to run
```python
from multiprocessing import get_context
num_processes = 8
pool = get_context("fork").Pool(num_processes)
pool.close()
```
and got the following traceback
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\AhmedElSheikh\AppData\Local\Programs\Python\Python38\lib\multiprocessing\context.py", line 239, in get_context
return super().get_context(method)
File "C:\Users\AhmedElSheikh\AppData\Local\Programs\Python\Python38\lib\multiprocessing\context.py", line 193, in get_context
raise ValueError('cannot find context for %r' % method) from None
ValueError: cannot find context for 'fork'
```
System Information
`Windows 11`
`Python 3.8.10`<|||||>> Related [woven-planet/l5kit#129](https://github.com/woven-planet/l5kit/issues/129)
I have read this thread, yet the error itself occurs when I call processor.batch_decode and I am working on the project not just to be used on my local device only<|||||>> Hello @patrickvonplaten
>
> I have tried to run
>
> ```python
> from multiprocessing import get_context
> num_processes = 8
> pool = get_context("fork").Pool(num_processes)
> pool.close()
> ```
>
> and got the following traceback
>
> ```
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> File "C:\Users\AhmedElSheikh\AppData\Local\Programs\Python\Python38\lib\multiprocessing\context.py", line 239, in get_context
> return super().get_context(method)
> File "C:\Users\AhmedElSheikh\AppData\Local\Programs\Python\Python38\lib\multiprocessing\context.py", line 193, in get_context
> raise ValueError('cannot find context for %r' % method) from None
> ValueError: cannot find context for 'fork'
> ```
>
> System Information `Windows 11` `Python 3.8.10`
This seems to be the error then.
Could you try to replace `"fork"` with `"spawn"`? <|||||>If `"spawn"` works then it might make most sense to just update `"fork"` to `"spawn"` <|||||>I have tried to run with `"spawn"` and it works fine, but in that case I will need to change the file `transformers.models.wav2vec2_with_lm.processing_wav2vec2_with_lm.py` and I guess that wont work when I run the same code on another machine, is there a way to force `"spawn"` when `"fork"` does not work?<|||||>Think we can just replace `"fork"` with `"spawn"` - do you want to open a PR to fix it? :-)<|||||>> Think we can just replace `"fork"` with `"spawn"` - do you want to open a PR to fix it? :-)
Yes, I would happily do that, I guess it would be something along those lines? please feel free to modify my approach. Otherwise I will start reading about collaborating and how to open PR
```python
try:
    pool = get_context("fork").Pool(num_processes)
except ValueError as exc:
    if "cannot find context for 'fork'" in str(exc):
        pool = get_context("spawn").Pool(num_processes)
        logging.info('Switching to "spawn" as the "fork" context is not found')
```<|||||>I think we can actually just change `"fork"` to `"spawn"` (no need for a try, ... expect IMO). According to https://stackoverflow.com/questions/64095876/multiprocessing-fork-vs-spawn and some other docs, `"spawn"` is safe and given that the child process is LM-boosted decoding (which is always slow), doing the switch should be fine<|||||>> I think we can actually just change `"fork"` to `"spawn"` (no need for a try, ... expect IMO). According to https://stackoverflow.com/questions/64095876/multiprocessing-fork-vs-spawn and some other docs, `"spawn"` is safe and given that the child process is LM-boosted decoding (which is always slow), doing the switch should be fine
Okay let us do it your way then, I have also created a custom dataset loader (from flac/wav audio files) and model finetuner, evaluator if those can be helpful for the community I would love to share them as well
For now I will open a PR for `spawn` and `fork`<|||||>Exactly same problem here, also trying to run this under Windows 10 and getting the same error, when in processing_wav2vec2_with_lm.py, line 316, gets "fork" from context.
But since I see it's already being fixed, I'll just thank and wait π <|||||>> Exactly same problem here, also trying to run this under Windows 10 and getting the same error, when in processing_wav2vec2_with_lm.py, line 316, gets "fork" from context. But since I see it's already being fixed, I'll just thank and wait π
as a quick fix you can replace "fork" with "spawn" in the line ` pool = get_context("fork").Pool(num_processes)`, file `transformers.models.wav2vec2_with_lm.processing_wav2vec2_with_lm.py`
<|||||>@ADD-eNavarro @elsheikh21 sorry I don't work with Windows usually and am a bit buried with other issues. Regarding the PR please lemme know if anything isn't clear, happy trying to be more precise - in short I think we should try to apply the exact same solution that was applied in `pyctcdecode`<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,897 | closed | Fix typos BigBird ONNX conversion | # What does this PR do?
I tried to convert one `BigBird` model to ONNX with the recent PR merged #16427
But it seems that there is a typo in the `src/transformers/onnx/features.py` file.
I also fixed the typo in the test_v2 file.
Here is the error I get while trying to convert `google/bigbird-roberta-base`.
```bash
$ python -m transformers.onnx --model=google/bigbird-roberta-base onnx/
> KeyError: "big-bird is not supported yet.
> Only ['albert', 'bart', 'mbart', 'bert', 'bigbird',
> 'ibert', 'camembert', 'distilbert', 'flaubert', 'marian', 'm2m-100', 'roberta', 't5',
> 'xlm-roberta', 'gpt2', 'gptj', 'gpt-neo', 'layoutlm', 'electra', 'vit', 'beit', 'blenderbot',
> 'blenderbot-small', 'data2vec-text'] are supported.
> If you want to support big-bird please propose a PR or open up an issue."
```
As you can see in the error `bigbird` should be `big-bird`.
ping @lewtun @LysandreJik
| 04-22-2022 18:26:49 | 04-22-2022 18:26:49 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,896 | closed | MobileBERT tokenizer tests | # What does this PR do?
This PR implements tests for MobileBERT. As MobileBERT uses a copy of the BERT tokenizer, the test inherits from BertTokenizationTest and also checks that the merge & vocab files for these two models are identical.
Contributes fixes to issue #16627
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
cc. @LysandreJik @SaulLu | 04-22-2022 16:49:42 | 04-22-2022 16:49:42 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Obviously - thanks!<|||||>Hi, @leondz
The main branch has recently merged a PR that changes test folders, like
```
tests/mobilebert -> tests/models/mobilebert
```
Could you follow the ideas shown in the instructions in [this](https://github.com/huggingface/transformers/pull/17008#issuecomment-1116059265) to incorporate the changes into your working branch. Thank you. (You might need to fix a few import places)
<|||||>> Hi, @leondz
>
> The main branch has recently merged a PR that changes test folders, like
>
> ```
> tests/mobilebert -> tests/models/mobilebert
> ```
>
> Could you follow the ideas shown in the instructions in [this](https://github.com/huggingface/transformers/pull/17008#issuecomment-1116059265) to incorporate the changes into your working branch. Thank you. (You might need to fix a few import places)
Thanks for this, it makes sense. By the way, `make fixup` seems to adjust content in /examples and /docs in a way that looks mistaken - out of scope for this PR but is that something to be looked at?
e.g.
```
--- a/docs/source/en/model_doc/bert-generation.mdx
+++ b/docs/source/en/model_doc/bert-generation.mdx
@@ -49,7 +49,7 @@ Usage:
>>> input_ids = tokenizer(
... "This is a long article to summarize", add_special_tokens=False, return_tensors="pt"
-... ).input_ids
+>>> ).input_ids
>>> labels = tokenizer("This is a short summary", return_tensors="pt").input_ids
```<|||||>Could you check this comment
https://github.com/huggingface/transformers/pull/17008#issuecomment-1115007653
and see if it works well? That's my first thought :-)<|||||>> By the way, `make fixup` seems to adjust content in /examples and /docs in a way that looks mistaken - out of scope for this PR but is that something to be looked at?
>
> e.g.
>
> ```
>
> --- a/docs/source/en/model_doc/bert-generation.mdx
> +++ b/docs/source/en/model_doc/bert-generation.mdx
> @@ -49,7 +49,7 @@ Usage:
>
> >>> input_ids = tokenizer(
> ... "This is a long article to summarize", add_special_tokens=False, return_tensors="pt"
> -... ).input_ids
> +>>> ).input_ids
> >>> labels = tokenizer("This is a short summary", return_tensors="pt").input_ids
>
> ```
OK this was fixed in https://github.com/huggingface/doc-builder/pull/207 :)
> Could you check this comment
>
> [#17008 (comment)](https://github.com/huggingface/transformers/pull/17008#issuecomment-1115007653)
>
> and see if it works well? That's my first thought :-)
Yes! All done :) |
transformers | 16,895 | closed | Datasets: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe' | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from datasets import load_dataset,Features,Value,ClassLabel
class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl","kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext","izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli","laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"]
features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')})
num_labels = features['label'].num_classes
data_files = { "train": "train.csv", "test": "test.csv" }
sentences = load_dataset("loretoparisi/tatoeba-sentences",
                         data_files=data_files,
                         delimiter='\t',
                         column_names=['label', 'text'],
                         features=features)
```
ERROR:
```
ClassLabel(num_classes=403, names=['cmn', 'deu', 'rus', 'fra', 'eng', 'jpn', 'spa', 'ita', 'kor', 'vie', 'nld', 'epo', 'por', 'tur', 'heb', 'hun', 'ell', 'ind', 'ara', 'arz', 'fin', 'bul', 'yue', 'swe', 'ukr', 'bel', 'que', 'ces', 'swh', 'nno', 'wuu', 'nob', 'zsm', 'est', 'kat', 'pol', 'lat', 'urd', 'sqi', 'isl', 'fry', 'afr', 'ron', 'fao', 'san', 'bre', 'tat', 'yid', 'uig', 'uzb', 'srp', 'qya', 'dan', 'pes', 'slk', 'eus', 'cycl', 'acm', 'tgl', 'lvs', 'kaz', 'hye', 'hin', 'lit', 'ben', 'cat', 'bos', 'hrv', 'tha', 'orv', 'cha', 'mon', 'lzh', 'scn', 'gle', 'mkd', 'slv', 'frm', 'glg', 'vol', 'ain', 'jbo', 'tok', 'ina', 'nds', 'mal', 'tlh', 'roh', 'ltz', 'oss', 'ido', 'gla', 'mlt', 'sco', 'ast', 'jav', 'oci', 'ile', 'ota', 'xal', 'tel', 'sjn', 'nov', 'khm', 'tpi', 'ang', 'aze', 'tgk', 'tuk', 'chv', 'hsb', 'dsb', 'bod', 'sme', 'cym', 'mri', 'ksh', 'kmr', 'ewe', 'kab', 'ber', 'tpw', 'udm', 'lld', 'pms', 'lad', 'grn', 'mlg', 'xho', 'pnb', 'grc', 'hat', 'lao', 'npi', 'cor', 'nah', 'avk', 'mar', 'guj', 'pan', 'kir', 'myv', 'prg', 'sux', 'crs', 'ckt', 'bak', 'zlm', 'hil', 'cbk', 'chr', 'nav', 'lkt', 'enm', 'arq', 'lin', 'abk', 'pcd', 'rom', 'gsw', 'tam', 'zul', 'awa', 'wln', 'amh', 'bar', 'hbo', 'mhr', 'bho', 'mrj', 'ckb', 'osx', 'pfl', 'mgm', 'sna', 'mah', 'hau', 'kan', 'nog', 'sin', 'glv', 'dng', 'kal', 'liv', 'vro', 'apc', 'jdt', 'fur', 'che', 'haw', 'yor', 'crh', 'pdc', 'ppl', 'kin', 'shs', 'mnw', 'tet', 'sah', 'kum', 'ngt', 'nya', 'pus', 'hif', 'mya', 'moh', 'wol', 'tir', 'ton', 'lzz', 'oar', 'lug', 'brx', 'non', 'mww', 'hak', 'nlv', 'ngu', 'bua', 'aym', 'vec', 'ibo', 'tkl', 'bam', 'kha', 'ceb', 'lou', 'fuc', 'smo', 'gag', 'lfn', 'arg', 'umb', 'tyv', 'kjh', 'oji', 'cyo', 'urh', 'kzj', 'pam', 'srd', 'lmo', 'swg', 'mdf', 'gil', 'snd', 'tso', 'sot', 'zza', 'tsn', 'pau', 'som', 'egl', 'ady', 'asm', 'ori', 'dtp', 'cho', 'max', 'kam', 'niu', 'sag', 'ilo', 'kaa', 'fuv', 'nch', 'hoc', 'iba', 'gbm', 'sun', 'war', 'mvv', 'pap', 'ary', 'kxi', 'csb', 'pag', 'cos', 'rif', 'kek', 'krc', 'aii', 'ban', 'ssw', 'tvl', 'mfe', 'tah', 'bvy', 'bcl', 'hnj', 'nau', 'nst', 'afb', 'quc', 'min', 'tmw', 'mad', 'bjn', 'mai', 'cjy', 'got', 'hsn', 'gan', 'tzl', 'dws', 'ldn', 'afh', 'sgs', 'krl', 'vep', 'rue', 'tly', 'mic', 'ext', 'izh', 'sma', 'jam', 'cmo', 'mwl', 'kpv', 'koi', 'bis', 'ike', 'run', 'evn', 'ryu', 'mnc', 'aoz', 'otk', 'kas', 'aln', 'akl', 'yua', 'shy', 'fkv', 'gos', 'fij', 'thv', 'zgh', 'gcf', 'cay', 'xmf', 'tig', 'div', 'lij', 'rap', 'hrx', 'cpi', 'tts', 'gaa', 'tmr', 'iii', 'ltg', 'bzt', 'syc', 'emx', 'gom', 'chg', 'osp', 'stq', 'frr', 'fro', 'nys', 'toi', 'new', 'phn', 'jpa', 'rel', 'drt', 'chn', 'pli', 'laa', 'bal', 'hdn', 'hax', 'mik', 'ajp', 'xqa', 'pal', 'crk', 'mni', 'lut', 'ayl', 'ood', 'sdh', 'ofs', 'nus', 'kiu', 'diq', 'qxq', 'alt', 'bfz', 'klj', 'mus', 'srn', 'guc', 'lim', 'zea', 'shi', 'mnr', 'bom', 'sat', 'szl'], id=None)
Value(dtype='string', id=None)
Using custom data configuration loretoparisi--tatoeba-sentences-7b2c5e991f398f39
Downloading and preparing dataset csv/loretoparisi--tatoeba-sentences to /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-7b2c5e991f398f39/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519...
Downloading data files: 100%
2/2 [00:18<00:00, 8.06s/it]
Downloading data: 100%
391M/391M [00:13<00:00, 35.3MB/s]
Downloading data: 100%
92.4M/92.4M [00:02<00:00, 36.5MB/s]
Failed to read file '/root/.cache/huggingface/datasets/downloads/933132df9905194ea9faeb30cabca8c49318795612f6495fcb941a290191dd5d' with error <class 'ValueError'>: invalid literal for int() with base 10: 'cmn'
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens()
TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
15 frames
/usr/local/lib/python3.7/dist-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens()
ValueError: invalid literal for int() with base 10: 'cmn'
```
while loading without `features` it loads without errors
```
sentences = load_dataset("loretoparisi/tatoeba-sentences",
data_files=data_files,
delimiter='\t',
column_names=['label', 'text']
)
```
but the `label` col seems to be wrong (without the `ClassLabel` object):
```
sentences['train'].features
{'label': Value(dtype='string', id=None),
'text': Value(dtype='string', id=None)}
```
The dataset was https://huggingface.co/datasets/loretoparisi/tatoeba-sentences
Dataset format is:
```
ces Nechci vΔdΔt, co je tam uvnitΕ.
ces Kdo o tom chce slyΕ‘et?
deu Tom sagte, er fΓΌhle sich nicht wohl.
ber Mel-iyi-d anida-t tura ?
hun Gondom lesz rΓ‘ rΓΆgtΓΆn.
ber Mel-iyi-d anida-tt tura ?
deu Ich will dich nicht reden hΓΆren.
```
### Expected behavior
```shell
correctly load train and test files.
```
| 04-22-2022 16:47:12 | 04-22-2022 16:47:12 | Hi @loretoparisi π this seems to be a `datasets` issue, I'd suggest opening an issue there π https://github.com/huggingface/datasets<|||||>Thank you opened!
https://github.com/huggingface/datasets/issues/4210 |
transformers | 16,894 | closed | Remove device parameter from create_extended_attention_mask_for_decoder | # What does this PR do?
This RP removes redundant `device` parameter from `create_extended_attention_mask_for_decoder` that may cause potential issues if passed `device` is not equal `attention_mask.device`, see line [`modeling_utils.py#L610`](https://github.com/huggingface/transformers/blob/6d90d76f5db344e333ec184cc6f414abe2aa6559/src/transformers/modeling_utils.py#L610). Explanation: tracing logic from line 610 to method signature:
`causal_mask.device` == `attention_mask.device` => `seq_ids.device` == `attention_mask.device` => `device` == `attention_mask.device`
@michaelbenayoun
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-22-2022 14:59:43 | 04-22-2022 14:59:43 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This seems legit for me, pinging @LysandreJik, @sgugger and @ydshieh to comment on this.
<|||||>LGTM, as it uses the device from the argument `attention_mask`.
https://github.com/huggingface/transformers/blob/5d59df5e880cba38b1f2aa69acb8e5db0d84841f/src/transformers/modeling_utils.py#L592-L594
Thank you for reducing the potential issue!
(Please wait the approvals from sgugger or LysandreJik before merge π )<|||||>@sgugger thanks for the code review! all comments have been addressed<|||||>Thanks! Pinging @LysandreJik for final review :-) |
transformers | 16,893 | closed | Make create_extended_attention_mask_for_decoder static method | # What does this PR do?
`create_extended_attention_mask_for_decoder` doesn't access `self` and can be a `@staticmethod`. This resolves some issues with `torch.fx` tracing for a PyTorch pipeline-parallelism project.
cc @michaelbenayoun
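For context, a simplified sketch of what the change amounts to — the real implementation in `modeling_utils.py` contains additional handling (e.g. for mismatched mask lengths), so treat this as illustrative only:
```python
import torch


class ModuleUtilsMixinSketch:
    # Before: an instance method that never touched `self`.
    # After: a @staticmethod, which torch.fx can trace without a bound instance.
    @staticmethod
    def create_extended_attention_mask_for_decoder(input_shape, attention_mask, device):
        batch_size, seq_length = input_shape
        seq_ids = torch.arange(seq_length, device=device)
        # Causal mask: position i may only attend to positions <= i.
        causal_mask = seq_ids[None, None, :].repeat(batch_size, seq_length, 1) <= seq_ids[None, :, None]
        causal_mask = causal_mask.to(attention_mask.dtype)
        # Combine with the padding mask and add a broadcastable head dimension.
        return causal_mask[:, None, :, :] * attention_mask[:, None, None, :]
```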
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-22-2022 14:26:40 | 04-22-2022 14:26:40 | _The documentation is not available anymore as the PR was closed or merged._<|||||>(just a comment)
Would it be possible to provide the code sample for the issue that occurs without this PR, or a link to the issue page?<|||||>Looks good to me once all the tests pass.
Pinging @sgugger for review!<|||||>> Would it be possible to provide the code sample for the issue that occurs without this PR, or a link to the issue page?
The project will be released and the repo will be opened soon<|||||>> Looks good to me once all the tests pass.
@michaelbenayoun @sgugger all tests passed<|||||>Thanks again for your contribution! |
transformers | 16,892 | closed | TF: XLA stable softmax | # What does this PR do?
As discussed in the thread about XLA problems (https://github.com/huggingface/transformers/issues/16838), this PR adds a stable wrapper for the softmax operation, and replaces `tf.nn.softmax` by the wrapped function.
This PR:
- Adds the wrapped softmax, named `stable_softmax`, in `tf_utils.py`. Its docstring includes why it is needed and why the new operation is valid;
- Adds tests to the wrapped softmax, including XLA tests;
- Replaces `tf.nn.softmax` by `stable_softmax` everywhere except in the doctests (I think it overcomplicates the examples, and no XLA should be needed there);
- Removes the `skipIf` for XLA tests, as they can now be successfully executed in a CPU.
Closes #16838 | 04-22-2022 14:09:16 | 04-22-2022 14:09:16 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This looks good to me! Do you think it would be better to change `stable_softmax` to only add the offset if we're running on CPU? It makes very little difference either way, but we could hide the complexity of that inside `stable_softmax` and keep our code paths entirely unchanged on GPU. I'm not certain, though - since it's such a small change maybe we can just do it everywhere.<|||||>> This looks good to me! Do you think it would be better to change `stable_softmax` to only add the offset if we're running on CPU? It makes very little difference either way, but we could hide the complexity of that inside `stable_softmax` and keep our code paths entirely unchanged on GPU. I'm not certain, though - since it's such a small change maybe we can just do it everywhere.
Good point! Hope this won't affect tests on GPU (at least not for PT/TF equivalence which use `1e-5`). Let's see!<|||||>@Rocketknight1 @ydshieh if you run the test and print the difference between `stable_softmax` and `tf.nn.softmax`, the difference is exactly `0.0` -- I don't think we need to worry about that :D<|||||>@gante With this, do we still have issues regarding sampling in `generate()`. Sorry, I didn't really follow that issue about sampling, but would like to know a bit more π <|||||>@ydshieh after this fix, the errors related to `generate()` are gone -- they were caused by the forward pass in the models, which in turn were caused by the issue this PR solves<|||||>(I might be completely wrong below)
I could imagine that we (will) have tests like:
- testing non-XLA and XLA `generate()` that use sampling
- even with this PR, the differences of output logits between these two might still be as large as, say, `1e-3`?
- if so, the sampling might give different sampling results ..?
- if not, what's the magnitude of the diff we get after this PR?
- testing PT and TF `generate()` that use sampling
- so same potential issue as above ..?
Thanks π <|||||>OK, I saw your previous comment
```
I've spun up an Nvidia T4 ( = no tf32 format) and got an error < 1e-5 for all cases
```<|||||>Based on the testing results, I'm happy for this to be merged now! If this is an XLA bug, though, we should make sure to revert our changes once none of the TF versions we support are affected by it anymore.
Should we add a TODO to the `masked_softmax` function or a reminder somewhere to make sure that we document why this change is here, and when it can be removed?<|||||>@Rocketknight1 added a TODO with instructions related to when to deprecate π |
transformers | 16,891 | closed | Minor fixes/improvements in `convert_file_size_to_int` | Fix `convert_file_size_to_int`'s docstring example and the GB str to int conversion (per [this comment](https://github.com/huggingface/datasets/pull/4190#discussion_r856022534) by @lhoestq). | 04-22-2022 14:03:51 | 04-22-2022 14:03:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,890 | closed | LED Model returns AlgorithmError when using SageMaker SMP training | ### System Info
```shell
using sagemaker
mpi_options = {
"enabled" : True,
"processes_per_host" : 8
}
smp_options = {
"enabled":True,
"parameters": {
"microbatches": 1,
"placement_strategy": "spread",
"pipeline": "interleaved",
"optimize": "memory",
"partitions": 2,
"ddp": True,
}
}
distribution={
"smdistributed": {"modelparallel": smp_options},
"mpi": mpi_options
}
hyperparameters={'epochs': 1,
'train_batch_size': 1,
'eval_batch_size': 1,
'model_name':HHousen/distil-led-large-cnn-16384,
'output_dir': 'bucket',
'warmup_steps': 25,
'checkpoint_s3_uri': 'bucket',
'logging_steps':100,
'evaluation_strategy':"steps",
'gradient_accumulation_steps':10
}
huggingface_estimator = HuggingFace(entry_point='trainer.py',
source_dir='./scripts',
instance_type='ml.p3.16xlarge',
instance_count=1,
role=role,
volume=100,
transformers_version='4.6.1',
pytorch_version='1.8.1',
py_version='py36',
hyperparameters=hyperparameters,
distribution=distribution)
```
### Who can help?
@ydshieh @sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Create huggingface estimator
2. training_args = Seq2SeqTrainingArguments(
predict_with_generate=True,
evaluation_strategy="steps",
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
fp16=True,
fp16_backend="apex",
output_dir=s3_bucket,
logging_steps=50,
warmup_steps=25,
gradient_accumulation_steps=10,
)
Error I get:
[1,0]<stderr>: File "/opt/conda/lib/python3.6/site-packages/smdistributed/modelparallel/torch/patches/tracing.py", line 68, in trace_forward
[1,0]<stderr>: raise e
[1,0]<stderr>: File "/opt/conda/lib/python3.6/site-packages/smdistributed/modelparallel/torch/patches/tracing.py", line 51, in trace_forward
[1,0]<stderr>: output = original_forward(self, *args, **kwargs)
[1,0]<stderr>: File "/opt/conda/lib/python3.6/site-packages/transformers/models/led/modeling_led.py", line 125, in forward
[1,0]<stderr>: return super().forward(positions)
[1,0]<stderr>: File "/opt/conda/lib/python3.6/site-packages/smdistributed/modelparallel/torch/patches/tracing.py", line 68, in trace_forward
[1,0]<stderr>: raise e
[1,0]<stderr>: File "/opt/conda/lib/python3.6/site-packages/smdistributed/modelparallel/torch/patches/tracing.py", line 51, in trace_forward
[1,0]<stderr>: output = original_forward(self, *args, **kwargs)
[1,0]<stderr>: File "/opt/conda/lib/python3.6/site-packages/transformers/models/led/modeling_led.py", line 121, in forward
[1,0]<stderr>: bsz, seq_len = input_ids_shape[:2]
[1,0]<stderr>:ValueError: not enough values to unpack (expected 2, got 1)
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun.real detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[41156,1],0]
Exit code: 1
--------------------------------------------------------------------------
### Expected behavior
```shell
Training on a sagemaker notebook p3dn.24xlarge using fairscale `simple` and these versions
transformers-4.16.2
torch-1.10.2
fairscale-0.4.5
py37
I can successfully train the LED model with my training data. I would expect the same outcome when training with the Hugging Face estimator and SageMaker SMP.
```
| 04-22-2022 13:26:44 | 04-22-2022 13:26:44 | cc @philschmid <|||||>I would also suggest @kanwari3 to
- try to use the same Python/PyTorch/transformers versions (and other libraries) on SageMaker that work locally (if possible)
- if the above doesn't work, try to use on local machine the same versions as those used on SageMaker, and see if you still get errors
So we have a better idea about if this is indeed a SageMaker issue or libraries issue<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>cc @philschmid , cc @ydshieh , cc @sgugger
Hi,
This is a follow up on this post with the same title. We are trying to fix the issue and are still getting the same error after trying out several fixes including matching the python, transformers, and pytorch versions according to the recommendations (3.8, 4.16.2, and 1.10.2, respectively):
-ValueError: not enough values to unpack (expected 2, got 1)
The error is in "modeling_led" within the transformers module, which expects a different input_ids shape. We tried unsqueezing the input_ids and attention_masks, but it didn't fix the error.
Update: we tried the following to unsqueeze the input tensors passed to "modeling_led" to resolve the above error:
def unsqueeze_col(example):
return {"input_ids": torch.unsqueeze(example["input_ids"], 0)}
pubmed_train = pubmed_train.map(unsqueeze_col)
Iβd greatly appreciate your feedback. Please let me know if you need any further information about the project. |
transformers | 16,889 | closed | [DocTests] Fix some doc tests | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes a typo in t5.mdx docs and data2vec_vision
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-22-2022 09:12:49 | 04-22-2022 09:12:49 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,888 | closed | KeyError: loss when pretraining using BertForPreTraining | ### System Info
```shell
- `transformers` version: 4.19.0.dev0
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): 2.7.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
==========================================
My dataset is in `.txt` format, and it look like this:
```
Sentence 1
Sentence 2
Sentence 3
Sentence 4
Sentence A
Sentence B
Sentence C
Sentence D
Sentence E
Sentence a
Sentence b
Sentence c
```
### Reproduction
1. Tokenize my own dataset using WordLevel Tokenizer
2. Do post pre-processing
3. Train the tokenizer
4. Load Dataset using LineByLineTextDataset
5. Define the configuration of BERT model using BertConfig
6. Create BertForPreTraining model
7. Define Data Collator using DataCollatorForLanguageModeling
8. Initialize Trainer
9. Do pre-training
=======================================================
Below are the code snippets from step 1 to 9.
### 1) Tokenize Dataset Using WordLevel Tokenizer
```
tokenizer = Tokenizer(WordLevel(unk_token="[UNK]"))
trainer = WordLevelTrainer(vocab_size=52_000,
min_frequency=1,
special_tokens=["[UNK]",
"[CLS]",
"[SEP]",
"[PAD]",
"[MASK]"])
tokenizer.pre_tokenizer = Whitespace()
```
### 2) Do Post Pre-Processing
```
tokenizer.post_processor = TemplateProcessing(
single="[CLS] $A [SEP]",
pair="[CLS] $A [SEP] $B:1 [SEP]:1",
special_tokens=[
("[CLS]", 1),
("[SEP]", 2),
],
)
```
### 3) Train Tokenizer
```
path = [str(x) for x in Path("path_to_text_corpus").glob("**/*.txt")]
tokenizer.train(path, trainer)
tokenizer.save("path_to_trained_tokenizer.json")
```
### 4) Load Dataset
```
dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path="path_to_corpus.txt",
block_size=128,
)
```
### 5) Define BERT Configuration
```
config = BertConfig(
vocab_size=50000,
hidden_size=768,
num_hidden_layers=12,
num_attention_heads=12,
intermediate_size=3072, hidden_act='gelu',
hidden_dropout_prob=0.1,
attention_probs_dropout_prob=0.1,
max_position_embeddings=512,
type_vocab_size=1,
initializer_range=0.02,
layer_norm_eps=1e-12,
pad_token_id=3,
gradient_checkpointing=False,
)
```
### 6) Create BERT Model for Pretraining
```
model = BertForPreTraining(config=config)
```
### 7) Define Data Collator
```
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm=True,
mlm_probability=0.15,
)
```
### 8) Initialize Trainer
```
training_args = TrainingArguments(
output_dir="path_to_pretrained_model",
overwrite_output_dir=True,
num_train_epochs=1,
per_device_train_batch_size=4,
save_steps=10000,
save_total_limit=2,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset,
)
```
### 9) Do Pre-Training
```
trainer.train()
```
### Expected behavior
I want to pretrain BERT like model with NSP and MLM, but when i run `trainer.train`, i got this error:
```shell
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<timed eval> in <module>
~/env/lib/python3.8/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1421 tr_loss_step = self.training_step(model, inputs)
1422 else:
-> 1423 tr_loss_step = self.training_step(model, inputs)
1424
1425 if (
~/env/lib/python3.8/site-packages/transformers/trainer.py in training_step(self, model, inputs)
2010
2011 with self.autocast_smart_context_manager():
-> 2012 loss = self.compute_loss(model, inputs)
2013
2014 if self.args.n_gpu > 1:
~/env/lib/python3.8/site-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
2052 else:
2053 # We don't use .loss here since the model may return tuples instead of ModelOutput.
-> 2054 loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0]
2055
2056 return (loss, outputs) if return_outputs else loss
~/env/lib/python3.8/site-packages/transformers/utils/generic.py in __getitem__(self, k)
217 if isinstance(k, str):
218 inner_dict = {k: v for (k, v) in self.items()}
--> 219 return inner_dict[k]
220 else:
221 return self.to_tuple()[k]
KeyError: 'loss'
```
```
| 04-22-2022 04:31:07 | 04-22-2022 04:31:07 | Please use the [forums](https://discuss.huggingface.co/) to debug your code. In this instance, you are not providing the model with the `next_sentence_label` it needs to compute the loss. |
transformers | 16,887 | closed | added deit onnx config | # What does this PR do?
Added DeiT OnnxConfig to make this model available for conversion
@ChainYo
https://github.com/huggingface/transformers/issues/16308
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-22-2022 01:33:52 | 04-22-2022 01:33:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> # What does this PR do?
> Added DeiT OnnxConfig to make this model available for conversion
>
> @ChainYo
> ## Who can review?
> Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
Also pinging @LysandreJik, @lewtun and @NielsRogge because he did the implementation of DeiT.
Btw Did you try to convert one `DeiT` model with your add ?
If so you could add the converted model to the [ONNXConfig for all](https://huggingface.co/OWG) organization, it would be awesome!<|||||>Thanks @lewtun , I just added those.
I'm trying to add a README to [ONNXConfigForAll](https://huggingface.co/OWG),
How does onnx work in ViT?
I tried the code below, but its not working
```
from transformers import ViTFeatureExtractor, ViTModel
import torch
from datasets import load_dataset
from onnxruntime import InferenceSession
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
inputs = feature_extractor(image, return_tensors="pt")
session = InferenceSession("onnx/model.onnx")
# ONNX Runtime expects NumPy arrays as input
outputs = session.run(output_names=["last_hidden_state"], input_feed=list(inputs))
```<|||||>> How does onnx work in ViT?
> I tried the code below, but its not working
Could you check on [netron.app](https://netron.app) what are the inputs of the onnx converted model and then check if the `input_feed` is correct. Btw it should be a `Dict` not a `List`.
<|||||>> > How does onnx work in ViT?
> > I tried the code below, but its not working
>
> Could you check on [netron.app](https://netron.app) what are the inputs of the onnx converted model and then check if the `input_feed` is correct. Btw it should be a `Dict` not a `List`.
And inputs itself is a dictionary

The input as seen from netron is pixel_values, I did try dict(inputs), inputs, list(inputs)

<|||||>I just figured it out. You can find it here [https://huggingface.co/OWG/DeiT](https://huggingface.co/OWG/DeiT) :-)<|||||>> I just figured it out. You can find it here https://huggingface.co/OWG/DeiT :-)
It seems really good! :hugs: <|||||>Thanks for the fix! |
transformers | 16,886 | closed | Allow saved_model export of TFCLIPModel in save_pretrained | # What does this PR do?
I apologize if this is out of scope. There were a few bugs in TFCLIPModel which prevented the model from being exported using the TensorFlow SavedModel format:
1) `_build_causal_attention_mask` makes use of `tf.constant` with a runtime dynamic value. It seems `shape_list` makes use of `tf.shape` which returns a symbolic tensor (inside autograph), which prevents the graph from being fully traced. `tf.constant` does not allow runtime dynamic values, but `tf.fill` does, so I replaced `tf.constant` with a ` tf.cast` and `tf.fill` combo. I don't even think `TFCLIPModel` would run inside a `tf.function` without this change because the autograph trace fails.
2) `TFCLIPTextModel` needs to override the `serving` default implementation. The default implementation expects `token_type_ids` which is not a valid input here.
3) `serving_output` for TFCLIPModel has some issue with tracing through nested dataclasses, which I can't seem to get right just quite yet. Ideally it should be as easy as calling `serving_output` on `text_model_output` and `vision_model_output` (since there is some `convert_to_tensor` stuff going on in each output). I was having problems with TensorFlow saying `TFBaseModelOutputWithPooling` not being a tensor, so I figured the tuple conversion would work, but it doesn't seem to be the fix.
I added tests for exporting each of `TFCLIPModel`, `TFTextModel` and `TFVisionModel` as saved models to verify individual components work and that it's the integration of both that's failing
cc @LysandreJik for TensorFlow changes
| 04-22-2022 00:28:48 | 04-22-2022 00:28:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Tagging @ydshieh -- this PR adds several tests<|||||>@gante No problem, glad to help now and if you have plans to improve graph/saved_model serialization in the future I will be glad to help then as well :D<|||||>@seanmor5 Thank you for this PR π !
@gante Let me take a look before merge.
I haven't checked yet, but from the author @seanmor5 's description (regarding `tf.constant` and dynamic shapes), it looks (almost) all the model won't be able to use `saved_model` and in graph model. However, I don't think this is the case as @gante is able to work with XLA.
Therefore I would like to check a bit more on my side π <|||||>> I haven't checked yet, but from the author @seanmor5 's description (regarding tf.constant and dynamic shapes), it looks (almost) all the model won't be able to use saved_model and in graph model. However, I don't think this is the case as @gante is able to work with XLA.
I think we will have to touch a significant number of models for XLA/Saved Model, to be honest π
<|||||>Hi, I could confirm the issue from `tf.constant()` with `symbolic tensor as shape`.
If I change the signature of `serving` to fixed shape, it works.
```
@tf.function(
input_signature=[
{
"input_ids": tf.TensorSpec((3, 5), tf.int32, name="input_ids"),
"attention_mask": tf.TensorSpec((3, 5), tf.int32, name="attention_mask"),
}
]
)
def serving(self, inputs):
```
I will check running the model in `tf.function` too.<|||||>Regarding `tf.function`, I am able to make the following code working. It works even with `jit_compile=True`.
@seanmor5 Could you elaborate a bit more your concern regarding `tf.function`?
### Code snippet
```python
from transformers import TFCLIPTextModel, TFCLIPVisionModel, TFCLIPModel, CLIPConfig
import os
import tensorflow as tf
ckpt = "openai/clip-vit-base-patch32"
config = CLIPConfig.from_pretrained(ckpt)
text_config = config.text_config
model = TFCLIPTextModel.from_config(text_config)
def get_inputs(batch_size, seq_len):
input_ids = tf.constant(1, shape=[batch_size, seq_len], dtype=tf.int32)
attention_mask = tf.constant(1, shape= [batch_size, seq_len], dtype=tf.int32)
inputs = {"input_ids": input_ids, "attention_mask": attention_mask}
return inputs
inputs_1 = get_inputs(3, 5)
inputs_2 = get_inputs(4, 7)
outputs = model(**inputs_1)
# print(outputs)
@tf.function
def foo(inputs):
outputs = model(**inputs)
return outputs
outputs = foo(inputs_1)
outputs = foo(inputs_2)
```<|||||>Sorry for being a bit picky, but I would prefer to get better context in order to decide such changes. In particular, if this issue only occurs during saving to saved_model, I think we can do some more research first to see if there is better solution.<|||||>@ydshieh No problem! You are right, I was assuming there might be issues with `tf.function`, but because the input shapes are static and known at trace time then it makes sense that it works. I think the issue is exclusive to `saved_model` because the input shapes might not be known and so the shape could be symoblic.
EDIT: This is a failing case for `tf.function` assuming a non-static sequence length. This is probably not really desirable behavior though, because of the limitations of dynamic shapes in XLA. So it's probably okay to ignore, but I'm just pointing out for due diligence :)
```
@tf.function(
input_signature=[tf.TensorSpec((3, None), dtype=tf.int32), tf.TensorSpec((3, None), dtype=tf.int32)],
jit_compile=True,
experimental_relax_shapes=True
)
def foo(input_ids, attn_mask):
outputs = model(input_ids=input_ids, attention_mask=attn_mask)
return outputs
inputs = [(tf.constant(1, shape=[x, y], dtype=tf.int32),
tf.constant(1, shape=[x, y], dtype=tf.int32))
for x, y in zip([3, 3, 3, 3, 3], [1, 2, 3, 4, 5])]
for inp in inputs:
print(foo(*inp))
```
I am open to exploring whatever other options you think might be better<|||||>OK. Maybe doing some more research is good. I will find time to get some more ideas.
I always feel that these limitations not easy to handle, but so far (my own) use cases could use a fixed shape (other than batch dim).<|||||>For context, we have been suggesting to users to pad and set the second dimension of the shape to the model's maximum sequence length, in this sort of situation. However, it's unoptimized, a manual process, and it doesn't work well in all situations (e.g. in auto-regressive text generation with models like GPT-2, the defined padded input length has to be smaller than the max sequence length, to allow new tokens to come in, but big enough to handle all prompts).<|||||>I understand, and that's what I will do although not the best ideally.
One question is: if we use `tf.fill` as suggested by @seanmor5 , are we able to run the failing case provided above.
We know from the PR that it will work for saved_model, but I would like to verify it also works for the above example.
(I have feeling that it won't work even with `tf.fill`, but need to verify)
<|||||>@ydshieh So I applied the patch with `tf.fill` and the function does run with an `input_signature=[tf.TensorSpec(3, None, dtype=tf.int32), tf.TensorSpec((3, None), dtype=tf.int32)`
One thing to note is that without the input signature to relax the sequence length constraint, the function is retraced which can be a performance hit. With `tf.fill`, I can verify the following is not retraced with the relaxed input signature:
```
is_not_retraced = True
@tf.function(
input_signature=[tf.TensorSpec((3, None), dtype=tf.int32), tf.TensorSpec((3, None), dtype=tf.int32)],
jit_compile=True,
experimental_relax_shapes=True
)
def foo(input_ids, attn_mask):
global compiles
compiles += 1
outputs = model(input_ids=input_ids, attention_mask=attn_mask)
return outputs
inputs = [(tf.constant(1, shape=[x, y], dtype=tf.int32),
tf.constant(1, shape=[x, y], dtype=tf.int32))
for x, y in zip([3, 3, 3, 3, 3], [1, 2, 3, 4, 5])]
prev_concrete_f = foo.get_concrete_function(*inp)
for inp in inputs:
concrete_f = foo.get_concrete_function(*inp)
is_not_retraced = is_not_retraced and concrete_f is prev_concrete_f
assert is_not_retraced
```
But this version is retraced without an input signature:
```
is_not_retraced = True
@tf.function
def foo(input_ids, attn_mask):
global compiles
compiles += 1
outputs = model(input_ids=input_ids, attention_mask=attn_mask)
return outputs
inputs = [(tf.constant(1, shape=[x, y], dtype=tf.int32),
tf.constant(1, shape=[x, y], dtype=tf.int32))
for x, y in zip([3, 3, 3, 3, 3], [1, 2, 3, 4, 5])]
prev_concrete_f = foo.get_concrete_function(*inp)
for inp in inputs:
concrete_f = foo.get_concrete_function(*inp)
is_not_retraced = is_not_retraced and concrete_f is prev_concrete_f
assert is_not_retraced
```
```
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
/var/folders/57/yg31bn915kg_s_tzht3by3r80000gp/T/ipykernel_24110/149582172.py in <module>
18 is_not_retraced = is_not_retraced and concrete_f is prev_concrete_f
19
---> 20 assert is_not_retraced
AssertionError:
```<|||||>@seanmor5 Thank you for all the effort providing the information. I will take a look (you know a lot about the TF graph thing π !<|||||>I could confirm all @seanmor5 states, and also find the TF doc [here](https://www.tensorflow.org/api_docs/python/tf/fill)
<img width="741" alt="Screenshot 2022-04-27 231948" src="https://user-images.githubusercontent.com/2521628/165633061-7b73272b-d1ac-43cb-9890-e145a982c645.png">
I start to think that's the design. Not sure why TF decides to do so, maybe there are reasons to separate the static constants and dynamic constants (performance consideration?).
I am in favor to approve. I will take some time to check the added tests, and think about it a bit more in the meantime.<|||||>@ydshieh Thank you! I've had to debug way too much TF code in my life so I've gotten use to it :)
So unfortunately the last thing that needs to be addressed is the failing `serving_output` test for the joint `TFCLIPModel`, and I'm not quite sure what the fix might be. Here is the stack trace of the failing test (cc @gante):
```
E ValueError: in user code:
E
E
E ValueError: Got a non-Tensor value TFBaseModelOutputWithPooling(last_hidden_state=<tf.Tensor 'StatefulPartitionedCall:6' shape=(None, None, 32) dtype=float32>, pooler_output=<tf.Tensor 'StatefulPartitionedCall:7' shape=(None, 32) dtype=float32>, hidden_states=<tf.Tensor 'StatefulPartitionedCall:5' shape=(6, None, None, 32) dtype=float32>, attentions=<tf.Tensor 'StatefulPartitionedCall:4' shape=(5, None, 4, None, None) dtype=float32>) for key 'text_model_output' in the output of the function __inference_serving_217811 used to generate the SavedModel signature 'serving_default'. Outputs for functions used as signatures must be a single Tensor, a sequence of Tensors, or a dictionary from string to Tensor.
```<|||||>@seanmor5 that exception is raised [here](https://github.com/tensorflow/tensorflow/blob/3848851f009efcc742d6dbd0f49510d1f3f78b13/tensorflow/python/saved_model/signature_serialization.py#L218) -- i.e. on the TF serving side, so outside our control.
It means we have to change something about our API if we want to support serving for all outputs. I'm calling in for second opinions: @sgugger, as he's more experienced in these situations, and @Rocketknight1, my fellow TF MLE.
___________________________
@sgugger @Rocketknight1 some context:
1. This PR attempts to fix serving for TF CLIP which, as you know, has an image and a text component;
2. If we want to output everything (i.e. attention and hidden layers for the vision and the text models), we have the existing `TFCLIPOutput` ([here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/modeling_tf_clip.py#L92)), which contains `tf.Tensor` and `TFBaseModelOutputWithPooling` members. Note: contrarily to other `ModelOutput` child classes, `TFCLIPOutput` can contains non-`tf.Tensor` members;
3. Our `serving_output` functions return classes inherited from `ModelOutput`, like `TFBaseModelOutputWithPooling` (e.g. [gpt2](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_tf_gpt2.py#L777), [bert](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_tf_bert.py#L1127)). We would like to do the same here;
4. TF raises an exception whenever we attempt to serve structures that do not contain tensors, sequence of tensors, or dictionary of tensors (see first link in this comment)... which is the case here, `TFCLIPOutput` does not fit that criteria (see why below);
5. @seanmor5 originally proposed to return `.to_tuple()` ([this one](https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/modeling_tf_clip.py#L122)) instead, and it works. However, in that case, the API would be different for this model (and across frameworks), and we would lose the ability to access fields by name.
A few additional notes:
1. Happy to move this to a separate issue, as it may warrant further discussion;
2. Any decision here would set a precedent for multimodal TF models;
3. More specifically, the exception will not be raised if we are serving a structure containing `CompositeTensor` ([code](https://github.com/tensorflow/tensorflow/blob/3848851f009efcc742d6dbd0f49510d1f3f78b13/tensorflow/python/framework/composite_tensor.py#L33)) members. Acording to its docstring, it can expand whatever `tf.nest` can expand, and if the leaves are `tf.Tensor`, we are all good. Looking at the docs of `tf.nest` ([here](https://www.tensorflow.org/api_docs/python/tf/nest)), we can see that it treats `@dataclass`-decorated structures as an atom. So while we can serve most `@dataclass`-decorated `ModelOutput` (its members are `tf.Tensor`), we cannot serve `@dataclass`-decorated `ModelOutput` containing other `@dataclass`-decorated `ModelOutput`, which would likely be our desired case for multimodal models.
Potential solutions:
1. Consider using `namedtuples`? It has a few [drawbacks](https://peps.python.org/pep-0557/#why-not-just-use-namedtuple)
2. Use dictionaries and, if needed, add syntactic sugar to access fields by name and by index?
3. Expand nested fields -- instead of `TFCLIPOutput` holding two `TFBaseModelOutputWithPooling`, it holds a tuple for each `TFBaseModelOutputWithPooling` or expands their attributes directly into `TFCLIPOutput` (e.g. `text_model_output` -> `text_model_output_attentions`, text_model_output_hidden_states`, ...)
4. ???<|||||>For more information, is it possible to convert the nested fields (here `text_model_output` and `vision_model_output` that are `TFBaseModelOutputWithPooling` and not tensors) to tuple inside the `serve` part only? Or would that change need to be done all the time?<|||||>I haven't tried TF serving yet (probably just once). For HF models, while `serving_output` returns things like `TFBaseModelOutputWithPooling`, what happens when we use the converted TF serving models? For example, if we call those TF serving models, is the output still be `TFBaseModelOutputWithPooling`? Or it is just a dictionary ..?
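A sketch of option 3 from the list of potential solutions above — flattening the nested members into a single flat dict of tensors inside `serving_output` (illustrative only; field names follow `TFCLIPOutput`):
```python
def serving_output(self, output):
    # TF serving accepts a dict whose leaves are plain tensors, so the nested
    # TFBaseModelOutputWithPooling members are expanded into flat keys.
    return {
        "logits_per_image": output.logits_per_image,
        "logits_per_text": output.logits_per_text,
        "text_embeds": output.text_embeds,
        "image_embeds": output.image_embeds,
        "text_last_hidden_state": output.text_model_output.last_hidden_state,
        "vision_last_hidden_state": output.vision_model_output.last_hidden_state,
    }
```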
<|||||>> For more information, is it possible to convert the nested fields (here `text_model_output` and `vision_model_output` that are `TFBaseModelOutputWithPooling` and not tensors) to tuple inside the `serve` part only? Or would that change need to be done all the time?
@sgugger Yes, it is possible, and it should solve the problem (`tuple` is supported by `tf.nest`). The only difference (to PT) is that we wouldn't be able to access the field by name, but that seems like a small price to pay for a simple solution.<|||||>> > For more information, is it possible to convert the nested fields (here `text_model_output` and `vision_model_output` that are `TFBaseModelOutputWithPooling` and not tensors) to tuple inside the `serve` part only? Or would that change need to be done all the time?
>
> @sgugger Yes, it is possible, and it should solve the problem (`tuple` is supported by `tf.nest`). The only difference (to PT) is that we wouldn't be able to access the field by name, but that seems like a small price to pay for a simple solution.
I **guess** @sgugger doesn't necessarily mean we should use `tuple` instead of `dict`, but just a question about where we should do the conversion (?).
I would much prefer using dictionary though. Maybe let us check what really gives as outputs when we use HF's TF serving model to make the decision?<|||||>> I haven't tried TF serving yet (probably just once). For HF models, while `serving_output` returns things like `TFBaseModelOutputWithPooling`, what happens when we use the converted TF serving models? For example, if we call those TF serving models, is the output still be `TFBaseModelOutputWithPooling`? Or it is just a dictionary ..?
@ydshieh Can confirm that the output of a loaded model using `tf.keras.models.load_model` is a `dict`, not a subclass of `ModelOutput`
(output types of a reloaded `TFCLIPTextModel`, after saving and reloading with `tf.keras.models.load_model`)
<img width="578" alt="Screenshot 2022-04-28 at 17 15 15" src="https://user-images.githubusercontent.com/12240844/165797819-7fcf28c2-c27f-44d3-896a-90794fec0c03.png">
______________________________
I experimented with converting the problematic variables to multiple formats:
- to `dict` with `dict()` and with `dataclass.asdict()`
- to `tuple` with `tuple()`, `.to_tuple()`
- using `tf.nest.flatten()` as the `CompositeTensor` docstring suggests
All of them result in the same exception. I also tried to look into documentation on how to cast into `CompositeTensor`, but there is none we can use (there is an [experimental function](https://www.tensorflow.org/probability/api_docs/python/tfp/experimental/as_composite) in `tensorflow-probability`, which is not our dependency atm, but it throws an exception related to the expected input object).
The only thing that seems to work is a flat serving structure, without nested components.
@seanmor5, I got the exact same exception when attempting your original solution, with `return output.to_tuple()`. The command I ran was `RUN_SLOW=1 py.test -vv tests/clip/test_modeling_tf_clip.py::TFCLIPModelTest::test_saved_model_creation_extended` -- were you also running this command?<|||||>Are those composite outputs really useful for the serving? Can't we just remove them entirely?<|||||>Probably. We can leave them as a TODO and wait for an issue :) @seanmor5 would you be okay with that? (I'm afraid we are hitting a hard problem for a feature we probably don't need)<|||||>Thank you for the work, @gante ! Surprised that `dict` is not working π’ . Adding to TODO is good for me. Meanwhile, I think the users might customize the format if they want to sever the model. We can even add a comment in the code around `serving_output.`
Could you share the code you use π ?<|||||>> Could you share the code you use π ?
I was using the code as is in this PR, and `RUN_SLOW=1 py.test -vv tests/clip/test_modeling_tf_clip.py::TFCLIPModelTest::test_saved_model_creation_extended` to debug :)<|||||>@gante Sorry for the confusion, `to_tuple` was not working for me either unfortunately, I had mentioned that in my original message....as you said I think the only solution is with a flat structure. I'm okay with forgoing this part of the PR as a TODO for later. Do you want me to revert back to the original implementation, or add an exception to use one of the individual `TFCLIPTextModel` or `TFCLIPVisionModel` instead? Both of those work fine<|||||>@seanmor5 no worries :) I'd say to revert the changes `serving_output`, and add a TODO pointing to this PR (so our future selves can refer to this discussion) <|||||>@gante @ydshieh @sgugger I've gone ahead and reverted the change and added in a TODO referencing this PR. Thanks for the feedback/discussions, I will continue to research to see if there are any better workarounds for the issue<|||||>Hi @seanmor5 @gante , I played a bit more, and so far not able to get it work neither.
During the process, I found [Extension types](https://www.tensorflow.org/guide/extension_type), but with it still get errors like
```
E tensorflow.python.saved_model.nested_structure_coder.NotEncodableError: No encoder for object MaskedTensor.Spec(values=TensorSpec(shape=(2, 3), dtype=tf.int32, name=None), mask=TensorSpec(shape=(2, 3), dtype=tf.bool, name=None)) of type <class 'transformers.models.clip.modeling_tf_clip.MaskedTensor.Spec'>.
```
In addition to the experimentation, I got the chance to look a bit the added tests, and found something to correct, like the following block won't work (if we are able to save/load model) anyway, because the outputs don't have these keys
https://github.com/huggingface/transformers/blob/a4abfa6ba3b7875a13538dbc2ddc4eb17dfcca8d/tests/clip/test_modeling_tf_clip.py#L602-L607
Considering the `saved_model` for `TFCLIPModel` is not working for now, maybe we can just remove this test (at the final version - once we really don't have a way to make it work).
Also, there is `loss` being `None` issue when I tried to save with `saved_model`. I had to manually remove the related lines to do the experimentation. You don't have such issue??
I will do code review more carefully later.
<|||||>@ydshieh You are right, I never noticed the issue because the test always failed at `save_pretrained`. I updated the test to remove the offending lines. Can you expand on the issue with `loss` being `None`?<|||||>> @ydshieh You are right, I never noticed the issue because the test always failed at `save_pretrained`. I updated the test to remove the offending lines. Can you expand on the issue with `loss` being `None`?
Sure, I will provide more info later.
Regarding the code you removed in the latest commit so far: actually I don't mean to say removing those blocks. Instead:
- if we are able to make saved_pretrained work -> we should update those testing blocks (to use the correct keys, etc.) in order to be able to test the loaded saved model
- if we are not able to make saved_pretrained work --> the whole `test_saved_model_creation_extended` should be removed for now.
So far, you can keep as it is in your latest commit. We will decide what to do at the end π
<|||||>@seanmor5 No need to make it clean for now :-) just leave it (because we need to change it at the end anyway).
Regarding the `loss`: if we do
- `return {"dummy": tf.constant([0])}` in `serving_output` (the `TODO` one), we get
```
E File "C:\Users\33611\Desktop\Project\transformers-ydshieh\src\transformers\models\clip\modeling_tf_clip.py", line 869, in call *
E if return_loss:
E
E ValueError: 'loss' is None at the end of the else branch.
```
- if we change a bit the line around ` if return_loss:` like
```
loss = 0.0 ## just for experimentation
# if return_loss:
# loss = clip_loss(logits_per_text)
```
it works.
That's why I think something more need to be changed to make things work (even if we solve the issue about nested structure).
<|||||>@ydshieh Yeah I just started messing around with ExtensionTypes and got the same error, it seems any change to make use of ExtensionTypes might end up being very in-depth. I also encountered the same `NotEncodableError` as well, I will try to do more research as to why that happens<|||||>Yeah. We could definitely leave that nested structure issue as a TODO, and merge this PR once I have a close review on the tests you added (and have a final review from some core maintainers). But go ahead if you would like to do some more research in the meantime.
<|||||>@ydshieh Thank you for the review! I believe I addressed all comments<|||||>@seanmor5 Thank you. I will take a look.
Regarding the tests:
- You can ignore `Model templates runner / run_tests_templates (pull_request) `.
- About `ci/circleci: check_code_quality `, you can run `make style` and `make quality` to fix. (After installing `pip install hf-doc-builder -U`)<|||||>@ydshieh Thank you! I was running black locally but it seemed for some reason it was not catching the formatting issues. I have fixed the issues, though it seems the code quality is still failing but from the `docs/source` path which I have not touched. <|||||>@seanmor5
Regarding the style, could you follow this comment, please?
https://github.com/huggingface/transformers/pull/17008#issuecomment-1116020201
And I just merged a (big) PR, so you will need to move a bit the test file, please see this comment:
https://github.com/huggingface/transformers/pull/17008#issuecomment-1116059265
Thank you so much!<|||||>@ydshieh Beautiful! That worked great!<|||||>Merged. Thank you again, @seanmor5 .
Let's see if we could find a way to make `TFCLIPModel` work. |
transformers | 16,885 | closed | Tensorflow to Onnx change batch and sequence size | ### Feature request
Add the ability to set the batch size and sequence length when converting a model to Onnx rather than it always defaulting to a batch size of 2 and sequence length of 8. Here is the example code that I have by importing `OnnxConfig` and setting it's `default_fixed_batch` and `default_fixed_sequence` before exporting the model.
```python
import tensorflow as tf
from transformers import (
DistilBertTokenizerFast,
TFAutoModelForSequenceClassification,
)
from transformers.onnx import export, OnnxConfig
from transformers.models.distilbert import DistilBertOnnxConfig
import onnxruntime
import numpy as np
from pathlib import Path
import tempfile
checkpoint = "distilbert-base-uncased"
tokenizer = DistilBertTokenizerFast.from_pretrained(checkpoint)
# you would load your trained model from the folder path here, but for demo purposes, we will load the transformer checkpoint
model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint)
onnx_config = DistilBertOnnxConfig(model.config, task="sequence-classification")
# current solution for setting the Onnx model's batch size and sequence length
OnnxConfig.default_fixed_batch = 1
OnnxConfig.default_fixed_sequence = 128
# create output folder/filename
save_folder = tempfile.mkdtemp()
onnx_model_path = Path(save_folder).joinpath("model.onnx")
# convert to onnx
onnx_input_names, onnx_output_names = export(tokenizer, model, onnx_config, onnx_config.default_onnx_opset, onnx_model_path)
# predict
session = onnxruntime.InferenceSession(str(onnx_model_path), providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'])
onnx_inputs = tokenizer(["test string"],
max_length=128,
truncation=True,
padding="max_length",
return_tensors="np")
onnx_inputs = {k: v.astype(np.int32) for k, v in onnx_inputs.items()}
outputs = session.run(None, input_feed=onnx_inputs)
```
### Motivation
To make it easier to set the batch size and sequence length when converting a model to Onnx. Rather than importing `OnnxConfig` and setting it's `default_fixed_batch` and `default_fixed_sequence` before exporting the model, I think the OnnxConfig class could have kwargs for batch size and sequence length.
### Your contribution
I could create a PR to change the `OnnxConfig` class signature to something like this:
```python
def __init__(self, config: "PretrainedConfig", task: str = "default", patching_specs: List[PatchingSpec] = None, batch_size: int = 2, sequence_length: int = 8):
```
And update the `default_sequence_length` and `default_batch_size` property functions to return the variable set from the `__init__` function | 04-22-2022 00:21:46 | 04-22-2022 00:21:46 | Hi @nyoungstudios , I don't get the goal of this feature because when you convert a model to onnx you want to set the `batch_size` to dynamic and the `sequence_length` needs to fit the model config. This way people can use the model with any `batch_size` and the right model `sequence_length`.
The `batch_size=2` and `sequence_length=8` are here for creating a dummy input which is required by ONNX for conversion. It allows onnx to understand the flow though the model layers.
The conversion is also automated with the `python -m transformers.onnx --model=distilbert-base-uncased feature=sequence-classification onnx/`.<|||||>@ChainYo maybe I am missing something, but the code sample I have calls the same functions as in the transformers.onnx package cli command. And trying to run an inference without matching the dimensions will return an error like this:
with this input
```python
onnx_inputs = tokenizer(["test string"],
truncation=True,
return_tensors="np")
onnx_inputs = {k: v.astype(np.int32) for k, v in onnx_inputs.items()}
print(onnx_inputs)
# {'attention_mask': array([[1, 1, 1, 1]], dtype=int32), 'input_ids': array([[ 101, 3231, 5164, 102]], dtype=int32)}
```
Returns this traceback
```python-traceback
InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: input_ids for the following indices
index: 0 Got: 1 Expected: 2
index: 1 Got: 4 Expected: 8
Please fix either the inputs or the model.
```<|||||>> @ChainYo maybe I am missing something, but the code sample I have calls the same functions as in the transformers.onnx package cli command.
It doesn't run the onnx checker function and the validating function to validate that the outputs are the same as the pytorch model.
Oh and another thing I just noticed, you are using a `base` model but with `sequence-classification` task. You need to finetune it or get an already fine-tuned model to be able to use it for sequence-classification.
<|||||>yes, I did also run this Onnx output model checker
```python
from transformers.utils import logging
logger = logging.get_logger("transformers.onnx") # pylint: disable=invalid-name
logger.setLevel(logging.INFO)
validate_model_outputs(onnx_config, tokenizer, model, onnx_model_path, onnx_output_names, onnx_config.atol_for_validation)
# Validating ONNX model...
# -[β] ONNX model output names match reference model ({'logits'})
# - Validating ONNX Model output "logits":
# -[β] (2, 2) matches (2, 2)
# -[β] all values close (atol: 1e-05)
```
But that isn't the problem. Also, I am using TensorFlow, not PyTorch. And I am just using the base model here as an example. I have fine-tuned the model to make predictions, but that doesn't seem to be relevant to the problem here.
The traceback shows that it is expecting a batch size of 2 and a sequence size of 8. Here I provided a batch size of 1 and a sequence size of 4.<|||||>> The traceback shows that it is expecting a batch size of 2 and a sequence size of 8. Here I provided a batch size of 1 and a sequence size of 4.
Oh man, I didn't see it was a feature request, I'm so sorry! I thought you were facing a bug while converting your model to ONNX :weary:.
It could be nice to add this, yes! Even if doing it via the command line is easier and works like a breeze.<|||||>No problem, I can work on a PR for this in my free time. I did try running this example but with PyTorch and was able to use the ONNX model without a fixed sequence length or batch size. Curious to know whether TensorFlow-to-ONNX supports dynamic sequence length and batch size like PyTorch-to-ONNX does? If not, I think I might try to explore whether that is an option before writing code to have TensorFlow-to-ONNX support different fixed sizes.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,884 | closed | [tracker] Sharding huge models process and current status | this is an Issue to track which pre-existing huge models (>11GB) need sharding, which have been completed and the code to do that.
### Why shard huge checkpoints?
Because it takes much less CPU memory to load a huge model of say 42GB, especially if you're loading concurrently in multiple processes.
Here is the breakdown for HF Transformers `from_pretrained` model loading with DDP. The example in each case uses a model of 30GB and 8 DDP processes:
- non-sharded model: `2 * model size * number of processes`. Example: `2*30*8=480GB`
- non-sharded model + `low_cpu_mem_usage=True`: `model size * number of processes`. Example: `30*8=240GB` (but it's slower)
- sharded model: `(size_of_largest_shard + model size) * number of processes`. Example: `(10+30)*8=320GB`
- sharded model + deepspeed zero 3: `size_of_largest_shard * number of processes`. Example: `10*8=80GB`
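A tiny helper to reproduce the arithmetic above (sizes in GB; the formulas simply restate the four cases listed):

```python
from typing import Optional


def peak_cpu_ram_gb(
    model_gb: float,
    n_procs: int,
    largest_shard_gb: Optional[float] = None,
    low_cpu_mem_usage: bool = False,
    zero3: bool = False,
) -> float:
    """Rough peak CPU RAM needed to load a model with DDP, per the breakdown above."""
    if largest_shard_gb is None:          # non-sharded checkpoint
        factor = 1 if low_cpu_mem_usage else 2
        return factor * model_gb * n_procs
    if zero3:                             # sharded model + deepspeed zero 3
        return largest_shard_gb * n_procs
    return (largest_shard_gb + model_gb) * n_procs  # sharded model, no zero 3


# 30GB model, 8 DDP processes, 10GB shards -> 480, 240, 320, 80 (GB)
print(peak_cpu_ram_gb(30, 8))
print(peak_cpu_ram_gb(30, 8, low_cpu_mem_usage=True))
print(peak_cpu_ram_gb(30, 8, largest_shard_gb=10))
print(peak_cpu_ram_gb(30, 8, largest_shard_gb=10, zero3=True))
```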
### Using sharded models
Here is an example of how to get the 42GB T0 model via multi-part sharded branch (about 9GB per shard here):
* Directly:
```
AutoModel.from_pretrained("bigscience/T0", revision="sharded")
```
* Via HF Trainer example scripts:
```
examples/pytorch/translation/run_translation.py \
--model_name_or_path bigscience/T0 --model_revision sharded ...
```
do note that I called these branches "sharded", but other users may call them anything they want, so check the model's available branches on the hub. E.g., here is the sharded branch of T0: https://huggingface.co/bigscience/T0/tree/sharded
And you can further re-shard them into even smaller shards, e.g. 5GB shards:
```
python -c 'from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B"); \
model.save_pretrained("t0-sharded", max_shard_size="5GB")'
```
### Infrastructure decisions
- [ ] need to decide how to tell the user about all these different branches. I proposed an automatic extraction in the "Use in Transformers" pop-up, e.g. it could say:
```
Other available branches: sharded, bf16, fp16
```
### Sharding progress
- [x] bigscience/T0
- [x] bigscience/T0_single_prompt
- [x] bigscience/T0p
- [x] bigscience/T0pp
- [x] t5-11b
- [x] google/byt5-xxl
- [x] google/mt5-xxl
- [x] google/t5-v1_1-xxl
- [x] allenai/unifiedqa-t5-11b
- [x] allenai/unifiedqa-v2-t5-11b-1363200
- [x] allenai/unifiedqa-v2-t5-11b-1251000
- [x] allenai/macaw-answer-11b
- [x] allenai/macaw-11b
- [x] EleutherAI/gpt-j-6B
- [x] facebook/xglm-7.5B
- [x] facebook/incoder-6B
- [x] facebook/m2m100-12B-last-ckpt
- [x] facebook/m2m100-12B-avg-10-ckpt
- [x] facebook/m2m100-12B-avg-5-ckpt
XXX: fill in more?
-----------------
Here is how each was sharded, `bigscience/T0` here and the rest below.
```
git lfs install
git clone https://huggingface.co/bigscience/T0
python -c "from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained('./T0'); \
model.save_pretrained('T0-sharded')"
mv T0-sharded/pytorch_model* T0
mv T0-sharded/config.json T0
cd T0
huggingface-cli lfs-enable-largefiles .
git checkout -b sharded
git rm pytorch_model.bin
git add pytorch_model*
git commit -am "add sharded checkpoint"
git push --set-upstream origin sharded
cd -
```
Verified that it downloaded the right version and the example evals just fine: Using `--model_name_or_path bigscience/T0 --model_revision sharded`
```
export BS=1; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 deepspeed \
--num_gpus=1 examples/pytorch/translation/run_translation.py \
--model_name_or_path bigscience/T0 --model_revision sharded --output_dir \
output_dir --adam_eps 1e-06 --evaluation_strategy=steps --do_eval \
--label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step \
--logging_steps 500 --max_source_length 128 --max_target_length 128 \
--overwrite_output_dir --per_device_eval_batch_size 1 --predict_with_generate \
--sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 \
--dataset_config ro-en --source_prefix 'translate English to Romanian: ' \
--val_max_target_length 128 --warmup_steps 50 --max_eval_samples 50 \
--deepspeed tests/deepspeed/ds_config_zero3.json --fp16 --skip_memory_metrics 0
```
The rest of the command line instructions follow:
<details>
<summary>Click to expand!</summary>
```
git clone https://huggingface.co/t5-11b
python -c "from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained('./t5-11b'); \
model.save_pretrained('t5-11b-sharded')"
mv t5-11b-sharded/pytorch_model* t5-11b
mv t5-11b-sharded/config.json t5-11b
cd t5-11b
huggingface-cli lfs-enable-largefiles .
git checkout -b sharded
git rm pytorch_model.bin
git rm tf_model.h5
git add pytorch_model*
git commit -am "add sharded checkpoint"
git push --set-upstream origin sharded
cd -
------------------
git clone https://huggingface.co/bigscience/T0_single_prompt
python -c "from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained('./T0_single_prompt'); \
model.save_pretrained('T0_single_prompt-sharded')"
mv T0_single_prompt-sharded/pytorch_model* T0_single_prompt
mv T0_single_prompt-sharded/config.json T0_single_prompt
cd T0_single_prompt
huggingface-cli lfs-enable-largefiles .
git checkout -b sharded
git rm pytorch_model.bin
git add pytorch_model*
git commit -am "add sharded checkpoint"
git push --set-upstream origin sharded
cd -
------------------
git clone https://huggingface.co/bigscience/T0p
python -c "from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained('./T0p'); \
model.save_pretrained('T0p-sharded')"
mv T0p-sharded/pytorch_model* T0p
mv T0p-sharded/config.json T0p
cd T0p
huggingface-cli lfs-enable-largefiles .
git checkout -b sharded
git rm pytorch_model.bin
git add pytorch_model*
git commit -am "add sharded checkpoint"
git push --set-upstream origin sharded
cd -
------------------
git clone https://huggingface.co/bigscience/T0pp
python -c "from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained('./T0pp'); \
model.save_pretrained('T0pp-sharded')"
mv T0pp-sharded/pytorch_model* T0pp
mv T0pp-sharded/config.json T0pp
cd T0pp
huggingface-cli lfs-enable-largefiles .
git checkout -b sharded
git rm pytorch_model.bin
git add pytorch_model*
git commit -am "add sharded checkpoint"
git push --set-upstream origin sharded
cd -
```
```
git clone https://huggingface.co/allenai/unifiedqa-t5-11b
git clone https://huggingface.co/allenai/unifiedqa-v2-t5-11b-1363200
git clone https://huggingface.co/allenai/unifiedqa-v2-t5-11b-1251000
git clone https://huggingface.co/allenai/macaw-answer-11b
git clone https://huggingface.co/allenai/macaw-11b
git clone https://huggingface.co/facebook/xglm-7.5B
git clone https://huggingface.co/facebook/incoder-6B
git clone https://huggingface.co/facebook/m2m100-12B-last-ckpt
git clone https://huggingface.co/facebook/m2m100-12B-avg-10-ckpt
git clone https://huggingface.co/facebook/m2m100-12B-avg-5-ckpt
### autogenerate the code for above models ###
perl -le '$q=chr(39); print qq[
python -c "from transformers import AutoModelForSeq2SeqLM; \\
model = AutoModelForSeq2SeqLM.from_pretrained($q./$_$q); \\
model.save_pretrained($q$_-sharded$q)"
mv $_-sharded/pytorch_model* $_
mv $_-sharded/config.json $_
cd $_
huggingface-cli lfs-enable-largefiles .
git lfs untrack '*.bin.*'
git checkout -b sharded
git rm pytorch_model.bin
git add pytorch_model*
git commit -am "add sharded checkpoint"
git push --set-upstream origin sharded
cd -
------------------
] for @ARGV' unifiedqa-t5-11b unifiedqa-v2-t5-11b-1363200 unifiedqa-v2-t5-11b-1251000 macaw-answer-11b macaw-11b xglm-7.5B incoder-6B m2m100-12B-last-ckpt m2m100-12B-avg-10-ckpt m2m100-12B-avg-5-ckpt
------------
python -c "from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained('./unifiedqa-t5-11b'); \
model.save_pretrained('unifiedqa-t5-11b-sharded')"
mv unifiedqa-t5-11b-sharded/pytorch_model* unifiedqa-t5-11b
mv unifiedqa-t5-11b-sharded/config.json unifiedqa-t5-11b
cd unifiedqa-t5-11b
huggingface-cli lfs-enable-largefiles .
git lfs untrack *.bin.*
git checkout -b sharded
git rm pytorch_model.bin
git add pytorch_model*
git commit -am "add sharded checkpoint"
git push --set-upstream origin sharded
cd -
------------------
python -c "from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained('./unifiedqa-v2-t5-11b-1363200'); \
model.save_pretrained('unifiedqa-v2-t5-11b-1363200-sharded')"
mv unifiedqa-v2-t5-11b-1363200-sharded/pytorch_model* unifiedqa-v2-t5-11b-1363200
mv unifiedqa-v2-t5-11b-1363200-sharded/config.json unifiedqa-v2-t5-11b-1363200
cd unifiedqa-v2-t5-11b-1363200
huggingface-cli lfs-enable-largefiles .
git lfs untrack *.bin.*
git checkout -b sharded
git rm pytorch_model.bin
git add pytorch_model*
git commit -am "add sharded checkpoint"
git push --set-upstream origin sharded
cd -
------------------
python -c "from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained('./unifiedqa-v2-t5-11b-1251000'); \
model.save_pretrained('unifiedqa-v2-t5-11b-1251000-sharded')"
mv unifiedqa-v2-t5-11b-1251000-sharded/pytorch_model* unifiedqa-v2-t5-11b-1251000
mv unifiedqa-v2-t5-11b-1251000-sharded/config.json unifiedqa-v2-t5-11b-1251000
cd unifiedqa-v2-t5-11b-1251000
huggingface-cli lfs-enable-largefiles .
git lfs untrack *.bin.*
git checkout -b sharded
git rm pytorch_model.bin
git add pytorch_model*
git commit -am "add sharded checkpoint"
git push --set-upstream origin sharded
cd -
------------------
python -c "from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained('./macaw-answer-11b'); \
model.save_pretrained('macaw-answer-11b-sharded')"
mv macaw-answer-11b-sharded/pytorch_model* macaw-answer-11b
mv macaw-answer-11b-sharded/config.json macaw-answer-11b
cd macaw-answer-11b
huggingface-cli lfs-enable-largefiles .
git lfs untrack *.bin.*
git checkout -b sharded
git rm pytorch_model.bin
git add pytorch_model*
git commit -am "add sharded checkpoint"
git push --set-upstream origin sharded
cd -
------------------
python -c "from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained('./macaw-11b'); \
model.save_pretrained('macaw-11b-sharded')"
mv macaw-11b-sharded/pytorch_model* macaw-11b
mv macaw-11b-sharded/config.json macaw-11b
cd macaw-11b
huggingface-cli lfs-enable-largefiles .
git lfs untrack *.bin.*
git checkout -b sharded
git rm pytorch_model.bin
git add pytorch_model*
git commit -am "add sharded checkpoint"
git push --set-upstream origin sharded
cd -
------------------
python -c "from transformers import AutoModelForCausalLM; \
model = AutoModelForCausalLM.from_pretrained('./xglm-7.5B'); \
model.save_pretrained('xglm-7.5B-sharded')"
mv xglm-7.5B-sharded/pytorch_model* xglm-7.5B
mv xglm-7.5B-sharded/config.json xglm-7.5B
cd xglm-7.5B
huggingface-cli lfs-enable-largefiles .
git lfs untrack *.bin.*
git checkout -b sharded
git rm pytorch_model.bin
git add pytorch_model*
git commit -am "add sharded checkpoint"
git push --set-upstream origin sharded
cd -
------------------
python -c "from transformers import AutoModelForCausalLM; \
model = AutoModelForCausalLM.from_pretrained('./incoder-6B'); \
model.save_pretrained('incoder-6B-sharded')"
mv incoder-6B-sharded/pytorch_model* incoder-6B
mv incoder-6B-sharded/config.json incoder-6B
cd incoder-6B
huggingface-cli lfs-enable-largefiles .
git lfs untrack *.bin.*
git checkout -b sharded
git rm pytorch_model.bin
git add pytorch_model*
git commit -am "add sharded checkpoint"
git push --set-upstream origin sharded
cd -
------------------
python -c "from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained('./m2m100-12B-last-ckpt'); \
model.save_pretrained('m2m100-12B-last-ckpt-sharded')"
mv m2m100-12B-last-ckpt-sharded/pytorch_model* m2m100-12B-last-ckpt
mv m2m100-12B-last-ckpt-sharded/config.json m2m100-12B-last-ckpt
cd m2m100-12B-last-ckpt
huggingface-cli lfs-enable-largefiles .
git lfs untrack *.bin.*
git checkout -b sharded
git rm pytorch_model.bin
git add pytorch_model*
git commit -am "add sharded checkpoint"
git push --set-upstream origin sharded
cd -
------------------
python -c "from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained('./m2m100-12B-avg-10-ckpt'); \
model.save_pretrained('m2m100-12B-avg-10-ckpt-sharded')"
mv m2m100-12B-avg-10-ckpt-sharded/pytorch_model* m2m100-12B-avg-10-ckpt
mv m2m100-12B-avg-10-ckpt-sharded/config.json m2m100-12B-avg-10-ckpt
cd m2m100-12B-avg-10-ckpt
huggingface-cli lfs-enable-largefiles .
git lfs untrack *.bin.*
git checkout -b sharded
git rm pytorch_model.bin
git add pytorch_model*
git commit -am "add sharded checkpoint"
git push --set-upstream origin sharded
cd -
------------------
python -c "from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained('./m2m100-12B-avg-5-ckpt'); \
model.save_pretrained('m2m100-12B-avg-5-ckpt-sharded')"
mv m2m100-12B-avg-5-ckpt-sharded/pytorch_model* m2m100-12B-avg-5-ckpt
mv m2m100-12B-avg-5-ckpt-sharded/config.json m2m100-12B-avg-5-ckpt
cd m2m100-12B-avg-5-ckpt
huggingface-cli lfs-enable-largefiles .
git lfs untrack *.bin.*
git checkout -b sharded
git rm pytorch_model.bin
git add pytorch_model*
git commit -am "add sharded checkpoint"
git push --set-upstream origin sharded
cd -
```
</details>
| 04-21-2022 21:23:35 | 04-21-2022 21:23:35 | Both of your commits (https://huggingface.co/bigscience/T0/commit/858cd92e88c9548d194f61259af965d1d1e916b7 and https://huggingface.co/t5-11b/commit/82929bfe90cbfc4e9a3dedf38bb967650ddb6ac2) looks good to me, @stas00.
One thing I realize just now is that the `pytorch_model.bin.index.json` index files are LFS-tracked even though they're small JSON files, because there is the `*.bin.*` pattern in .gitattributes (in all repos)
That's no huge deal though, cc @sgugger @LysandreJik @Pierrci, the only drawback is that we won't get nice diffs on them<|||||>OK, I will wait for a decision on `*.bin.*` as it is being discussed on slack and then adjust accordingly.<|||||>Why do you use the threshold size of >11GB?
I think we can do this only for >30GB models (30GB is the newly updated Cloudfront file size)<|||||>Because our default shard size is 10GB, and there are quite a few models that are 10.5GB, so no need to bother with those. That's why it's >11GB and not >10GB.
We need models to be sharded to smaller chunks not just due to Cloudfront limitations, but primarily because it's very expensive to load these large models cpu memory wise, especially in the DDP situation.
e.g. if you have 8 gpus and an unsharded model is 30GB you will need at least 480GB of CPU RAM to load it with the normal setup. (`2*30*8`)
So here is the breakdown for HF Transformers `from_pretrained` model loading with DDP. The example in each case uses a model of 30GB and 8 DDP processes:
- non-sharded model: `2 * model size * number of processes`. Example: `2*30*8=480GB`
- non-sharded model + `low_cpu_mem_usage=True`: `model size * number of processes`. Example: `30*8=240GB` (but it's slower)
- sharded model: `(size_of_largest_shard + model size) * number of processes`. Example: `(10+30)*8=320GB`
- sharded model + deepspeed zero 3: `size_of_largest_shard * number of processes`. Example: `10*8=80GB`
Does my math make sense?
We already have open Issues where users have difficulties loading the models because they don't have an insane amount of CPU memory available.
Note that even on JeanZay the A100 80GB nodes have *only* 512GB, so it'd be impossible to load huge 60GB+ models on those nodes using HF Transformers models w/o sharding, even though the GPUs are huge. There will be not enough CPU memory to do that.
<|||||>so here is what I did to move index files out of LFS:
```
git lfs untrack '*.bin.*'
git add --renormalize .
git commit -am 'Restore file contents that were previously in LFS'
```
courtesy of https://stackoverflow.com/a/54119191/9201239<|||||>OK, all 19 models on the list have been sharded and pushed - if there are more let me know.<|||||>Examples of commits restoring index files to non-LFS:
- https://huggingface.co/bigscience/T0/commit/6a981956e2d0601aff8b9b8caf76bdebdfedff29
- https://huggingface.co/t5-11b/commit/17eca4b4fcdc56f878d0518e928a17fa6d71ab8b
Example of commit on Eleuther-owned model:
- https://huggingface.co/EleutherAI/gpt-j-6B/commit/d2ea4ce5253728dd5541727d8ef209cf9b48530d<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>sorry to post it here..
How could I shard a checkpoint myself and load it?
Would this work?
```
python -c 'from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained("save_dir/mT5_finetuned/"); \
model.save_pretrained("save_dir/mT5_finetuned_sharded/", max_shard_size="9GB")
model2 = AutoModelForSeq2SeqLM.from_pretrained("save_dir/mT5_finetuned_sharded/") # load directly from sharded file folder
# move all config files to sharded file folder as well
```
<|||||>that looks mostly correct, @edchengg
just drop `revision="sharded"` - you'd only use that if you upload the saved sharded model checkpoint to a hub, into a branch called "sharded" (instead of "main").
for the local filesystem, there is no revision.<|||||>> * (size_of_largest_shard + model size) * number of processes
Hi @stas00, thanks so much for this amazing work. Apologies for the following naive question, I'm trying to learn :). You mention `number_of_processes` in your calculation, and I had the curiosity to briefly skim through the `from_pretrained` call https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/modeling_utils.py#L2544, yet I could not see anything to indicate that the loading happens in multiple processes? So I presume `number_of_processes` here refers to multiple processes where a `from_pretrained` call may be made (such as when doing DDP)? <|||||>Yes, of course, @alexcoca. As you proposed:
When you do DDP you run `n_procs == n_gpus` so each of these processes calls `from_pretrained` and thus each of them needs to load a copy of the model. Hence you need `model_size_in_bytes * n_procs` cpu memory to load a model.
The only exception to this is Deepspeed ZeRO stage3 which has a feature called `zero.Init` which immediately shards the model across gpus and frees up the memory on cpu. So it uses much less cpu memory during the loading process.
Sometimes one has a lot of gpu memory but little cpu memory, in that case you could work around the issue by staggering the loading, so that say only one rank loads at a time, then moves the model onto the gpu and frees up the memory for other ranks to load next.<|||||>That's fantastic, thanks so much for your reply. I assume that this feature gets called whenever we use the Deepspeed ZeRO integration (stage 3) for training with the `Trainer`? <|||||>With HF Trainer it's automatic, but if you want to use it w/ your own Trainer, it's 2 lines of code:
https://huggingface.co/docs/transformers/main/main_classes/deepspeed#nontrainer-deepspeed-integration
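(For reference, a minimal sketch of those two lines. The import path shown is the one used around this era, `transformers.deepspeed`; in more recent versions `HfDeepSpeedConfig` lives under `transformers.integrations`. The config dict here is illustrative only, a real job needs the full ZeRO stage-3 settings from the linked docs:)

```python
from transformers import AutoModelForSeq2SeqLM
from transformers.deepspeed import HfDeepSpeedConfig  # newer versions: transformers.integrations

# illustrative only: a real config needs the complete ZeRO-3 settings
ds_config = {"zero_optimization": {"stage": 3}, "train_micro_batch_size_per_gpu": 1}

dschf = HfDeepSpeedConfig(ds_config)  # must be created *before* from_pretrained and kept alive
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0", revision="sharded")
```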
<|||||>Hi @stas00 Thank you for this work. Is there a sharded version of [Flan-T5-XL](https://huggingface.co/google/flan-t5-xl)? I see [this](ybelkada/flan-t5-xl-sharded-bf16) but unsure what the original source model was.<|||||>as you can see https://huggingface.co/google/flan-t5-xl is already sharded: https://huggingface.co/google/flan-t5-xl/tree/main
all new models that are being added are automatically sharded (unless the user overrides `save_pretrained`s defaults)
<|||||>Thank you @stas00. Interestingly, Flan-T5-XL, as is, cannot load into a free Colab T4 GPU (out of RAM), while [this sharded variant does](https://github.com/huggingface/transformers/issues/ybelkada/flan-t5-xl-sharded-bf16).
Comparing https://huggingface.co/google/flan-t5-xl/blob/main/pytorch_model.bin.index.json and https://huggingface.co/ybelkada/flan-t5-xl-sharded-bf16/blob/main/pytorch_model.bin.index.json,
the former has
```
"total_size": 11925413888
```
while the latter has
```
"total_size": 5962706944
```
Does https://huggingface.co/google/flan-t5-xl/tree/main contain the correct "official" model weights from the original Flan-T5 checkpoints?
And could one shard it so that it's loadable on a Colab T4?<|||||>as you can probably see one of them is saved in bf16 and the other in fp32, so the former is half the size of the latter.
> Does https://huggingface.co/google/flan-t5-xl/tree/main contain the correct "official" model weights from the original Flan-T5 checkpoints?
I'd say open a new issue to discuss the specifics of this model. This issue is not the place to discuss it I think.<|||||>Ok, thanks. |
transformers | 16,883 | closed | Integrating R3M Models into Transformers | # What does this PR do?
This PR integrates the new R3M visual encoder model into transformers. See this [issue](https://github.com/huggingface/transformers/issues/16403).
In its current state, the new model/config can be loaded, trained, and pre-trained models located [here](https://huggingface.co/surajnair) can be loaded using `AutoModel.from_pretrained` and behave as expected.
However, the model is simply a visual encoder on a ResNet backbone, taking a batch of images, passing them through the pre-trained ResNet18/34/50, and returning a batch of embeddings. Therefore most of the testing infrastructure fails (as this is not an NLP model). There are also probably boilerplate versions of the model (e.g. CausalLM) which are not applicable.
<!-- Remove if not applicable -->
Fixes # 16403
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? - still a WIP.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@edbeeching @LysandreJik have been helping me so far.
| 04-21-2022 20:42:38 | 04-21-2022 20:42:38 | Thanks a lot for your contribution!
@NielsRogge, could you do a first pass on this model?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,882 | closed | Documentation: Spanish translation of fast_tokenizers.mdx | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #15947
Translation to Spanish of fast_tokenizers.mdx
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? #15947
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-21-2022 19:32:41 | 04-21-2022 19:32:41 | Thank you @jloayza10! Could you please add `fast_tokenizers` to [`transformers/docs/source/es/_toctree.yml`](https://github.com/huggingface/transformers/blob/main/docs/source/es/_toctree.yml)? As a reference, you can use the [new Translation](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md) guide (section "βοΈ Start translating"). This would allow the tests to pass.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @omarespejel , I added `fast_tokenizers` to the _toctree.yml file.
There is still a `check_code_quality` check that is failing, are there further modifications on my part to perform to address this or is this ok?<|||||>Thank you very much @jloayza10! This was great. Please let me know if you would like to translate another file π€ |
transformers | 16,881 | closed | Fix PyTorch RAG tests GPU OOM | # What does this PR do?
Fix PyTorch RAG tests GPU OOM.
The GPU OOM
```
E tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[32,5,16,300,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:GatherV2]
```
could be found in https://github.com/huggingface/transformers/runs/6100697349?check_suite_focus=true
## Results
- Without this PR, after the PyTorch RAG test, torch occupies about `9.5 GB` GPU memory. There are 10 TF RAG tests failed.
- With this PR, all PT/TF RAG tests pass without GPU OOM | 04-21-2022 18:38:34 | 04-21-2022 18:38:34 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Also cc @patil-suraj and @stas00 to see if they have suggestions<|||||>Good for me!<|||||>> What cache is this emptying since the model is not deleted? I think this is a symptom there is a memory leak in the model, which would need to be fixed.
From the documentations [torch.cuda.empty_cache](https://pytorch.org/docs/stable/generated/torch.cuda.empty_cache.html) and [Memory management](https://pytorch.org/docs/stable/notes/cuda.html#memory-management), in particular
```
PyTorch uses a caching memory allocator to speed up memory allocations.
This allows fast memory deallocation without device synchronizations.
However, the unused memory managed by the allocator will still show as if used in nvidia-smi.
```
and
```
Calling empty_cache() releases all unused cached memory from PyTorch so that those can be used by other GPU applications.
```
my understanding is: `PyTorch` will keep the allocated GPU memory for later use in order to reduce the number of memory allocations - the goal is to speed up some (memory) operations.
This doesn't mean that GPU memory is leaked - PyTorch still controls it. But for other applications (say `TensorFlow` or `nvidia-smi`), it means that GPU memory is not available. Using `empty_cache()` will release it.
Of course, the memory occupied by currently live tensors won't be released.
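(A small, self-contained illustration of the difference: `memory_allocated` tracks live tensors, while `memory_reserved` tracks the cache that other applications cannot see as free until `empty_cache()` is called. Requires a CUDA-enabled PyTorch.)

```python
import gc

import torch

x = torch.empty(1024, 1024, 256, device="cuda")  # ~1GB of float32
print(torch.cuda.memory_allocated(), torch.cuda.memory_reserved())

del x
gc.collect()
print(torch.cuda.memory_allocated(), torch.cuda.memory_reserved())  # allocated drops, reserved stays

torch.cuda.empty_cache()
print(torch.cuda.memory_allocated(), torch.cuda.memory_reserved())  # reserved memory is released back to the driver
```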
<|||||>if you're trying to use multiple programs concurrently accessing the same GPU (regardless if they all are pytorch, or mixed framework), `torch.cuda.empty_cache` is a definite help and is OK to use as long as it's inside the test only and not in `transformers`
But why, when the pytorch test finishes, does torch still have allocated tensors? Why not free them?
Often `import gc; gc.collect()` is needed to force garbage collection immediately. I'm not sure if this is the case.
<|||||>@stas00
My words in the previous comment might be a bit confusing: we don't have the issue of `the pytorch test finishes the torch still has allocated tensors`. I am just mentioning a general fact (which is quite trivial) that is mentioned in the PyTorch docs I linked.
`empty_cache()` helps, but not completely. There is still some GPU memory occupied even after we leave the testing methods, while still in the same Python process (for example, entering the TF testing module, which is launched together with the PT testing).
There are some discussions, like this one
https://discuss.pytorch.org/t/pytorch-do-not-clear-gpu-memory-when-return-to-another-function/125944/4<|||||>when you do `torch.ones(1)` it allocates 1-2GB of cuda kernels on the gpu and they remain allocated unless the program is shutdown.
In such a case the solution is not to run the program inside `pytest` but to use an external process. Once an external process finishes 100% of gpu memory is returned. (Except the tests are then much slower because it has to launch an external program)
I created a special framework for running external programs
https://github.com/huggingface/transformers/blob/6d90d76f5db344e333ec184cc6f414abe2aa6559/src/transformers/testing_utils.py#L1536
You can see it extensively used in the deepspeed and extended tests.<|||||>Yeah, I know this approach, but wasn't very sure how not use it in a good way with testing.
Maybe we can discuss this!
By the way: `torch.ones(1) it allocates 1-2GB of cuda kernels` --> I tried it and it seems a correct statement.
I am really surprised (and not very happy) that there is no way to free these kinds of memory allocation. <|||||>Chances are is that there was no need for that until now and nobody asked for it. If I may propose you could create a feature request at pytorch asking for a feature that releases the cuda env completely. It's very possible that there is a C++ API to do that and it just wasn't made available in python.
The use case can be for example this exact situation, where the same program needs to alternate between different frameworks in the same run and needs to be able to access all of gpu's memory.
Does tf give the memory fully back when it's done and the process is still running?<|||||>If the goal is to recover as much memory as possible, shouldn't we delete the model before calling the `empty_cache` function?<|||||>There have been some requests in torch GH page without response
https://github.com/pytorch/pytorch/issues/28829
(this one is on 2019/10)
Same situation for TF: not fully giving back GPU memory, and the requests are always without response<|||||>> If the goal is to recover as much memory as possible, shouldn't we delete the model before calling the empty_cache function?
@sgugger You are right! I tried it and just like you said.
Maybe I can just implement `tearDownModule()` which calls `empty_cache()`, so we don't need to `del models` + `empty_cache()` in all testing methods ..?
I am going to try this and see how it goes.
(tried with toy examples, and works as expected)<|||||>Note that the `tearDown` is only called at the end of all tests of the module, so won't do the same thing you implemented (clean up at the end of each test).<|||||>Implement `tearDown` for `unittest.TestCase` subclass (and make sure to call its `super`) - this one will be called at the end of each test.
and before `empty_cache` it often helps to call `gc.collect()` to make it deterministic.<|||||>OK, I can do that.
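(A minimal sketch of that suggestion; the class name is hypothetical and this is not the actual RAG test file:)

```python
import gc
import unittest

import torch


class RagPtTestsSketch(unittest.TestCase):
    def tearDown(self):
        super().tearDown()
        # force collection of any model still referenced by the test, then
        # return the cached blocks so TF tests in the same process can use the GPU
        gc.collect()
        torch.cuda.empty_cache()
```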
But I feel that while we are in the PyTorch test itself, we don't need to call `empty_cache()` --> because the occupied cache will be managed by `torch` and will be assigned to subsequent torch operations if they require GPU.
This `empty_cache()` is mainly for other applications to use `GPU`, like TF for example, in the same process.
And since the TF tests are in other modules, `tearDownModule()` in PT test module should be enough.
But again, I can go for `tearDown()`<|||||>Of course, we are discussing this particular test. I wasn't suggesting to do it to all tests.
The reason I suggested `gc.collect` before `empty_cache` is because when you free the model it's not guaranteed it'll be immediately freed due to how python's GC works. So if you want a reliable deterministic memory release inside a long running `pytest` process, `gc.collect` followed by `empty_cache` is how you make things deterministic.<|||||>@sgugger @stas00
With the suggestions, all TF RAG tests pass now on GPU! π₯ Thank you!<|||||>Unrelated to this PR, but since you work a lot with tests (thank you!), in case you're not aware of it, awhile ago I have developed:
https://github.com/huggingface/transformers/blob/6d90d76f5db344e333ec184cc6f414abe2aa6559/src/transformers/testing_utils.py#L998
which extends `unittest.TestCase` with various handy features - like automatic removal of temp dirs, accessors to file paths and many others. It's extensively documented in the module itself and also in https://huggingface.co/docs/transformers/testing
You don't need to do anything about it, other than perhaps I hope it'll save you time in the future.<|||||>Thank you, @stas00 . Maybe I can play with it, and at some point have a discussion with other team members to see if to use it by default!<|||||>And of course please feel free to extend it if there are other features that can be re-used.<|||||>Would like to have @sgugger and/or @LysandreJik opinion before merge :-) <|||||>Merge now - we should have 13 test failure fewer (if this PR also works well on multiple GPUs too) |
transformers | 16,880 | closed | Changes in create_optimizer to support tensor parallelism with SMP | # What does this PR do?
- Changes in `create_optimizer` to support tensor parallelism with SMP. For SMP, the optimizer is created from the wrapped model (the SMP model).
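A rough sketch of the idea. The `smp` calls follow the SageMaker model parallel library's documented usage, but treat the exact wiring and the tiny placeholder model as illustrative rather than the actual Trainer code:

```python
import smdistributed.modelparallel.torch as smp
import torch.nn as nn
from torch.optim import AdamW

smp.init()                                       # assumes an SMP-configured SageMaker training job
model = smp.DistributedModel(nn.Linear(16, 16))  # wrap the model first
optimizer = AdamW(model.parameters(), lr=5e-5)   # build the optimizer from the *wrapped* model's parameters
optimizer = smp.DistributedOptimizer(optimizer)
```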
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-21-2022 17:22:27 | 04-21-2022 17:22:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,879 | closed | TF: XLA repetition penalty | # What does this PR do?
This PR adds our first XLA-compatible TF logit processor, as well as corresponding tests. Since this is the first of a series of small (but similar) PRs, I'd like to request a more thorough review, so the remaining ones are quick.
More specifically, this PR makes three changes:
1. Rewrites the TF repetition penalty processor so as to be XLA-compatible;
2. Adds XLA tests for the processor;
3. Since the test mentioned in 2. was a near copy/paste of the non-XLA test, I've decided to split the test into three parts to improve code reuse and reduce errors from ad hoc edits (as the first and last part can be reused in the two versions of the test, XLA and non-XLA)
- get inputs
- run the processor
- check the output | 04-21-2022 15:56:01 | 04-21-2022 15:56:01 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thinking about it more, a multiplicative logit penalty really doesn't work, right? Even if we use the reciprocal when the logit is negative, the scale of the penalty depends on the logit's distance from 0. For example, a logit in the range -0.1 to +0.1 will barely be moved by the penalty term, but such logits usually have quite a high probability of being chosen, because most logits are large and negative.<|||||>(merging as the main goal was to port to XLA but, by all means, continue the discussion :) ) |
transformers | 16,878 | closed | Fix doctest list | # What does this PR do?
We have `docs/source/en/model_doc/t5v1_1.mdx` in `documentation_tests.txt`, but the file is actually `docs/source/en/model_doc/t5v1.1.mdx`.
This cause doctest fail.
```
ERROR: file or directory not found: docs/source/en/model_doc/t5v1_1.mdx
```
(https://github.com/huggingface/transformers/runs/6104280137?check_suite_focus=true) | 04-21-2022 15:19:11 | 04-21-2022 15:19:11 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,877 | closed | Quick tour_ AutoModel Introduction | ### System Info
```shell
- `transformers` version: 3.4.0
- Platform: Linux-4.15.0-144-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoModelForSequenceClassification
model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
pt_model = AutoModelForSequenceClassification.from_pretrained(model_name,return_dict=True)
pt_outputs = pt_model(**pt_batch)
from torch import nn
pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)
print(pt_predictions)
### Expected behavior
```shell
from transformers import AutoModelForSequenceClassification
model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
pt_model = AutoModelForSequenceClassification.from_pretrained(model_name)
pt_outputs = pt_model(**pt_batch)
from torch import nn
pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)
print(pt_predictions)
I think pt_outputs.logits is wrong, since from_pretrained(model_name) is called without return_dict=True.
```
| 04-21-2022 14:59:58 | 04-21-2022 14:59:58 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,876 | closed | Replace deprecated logger.warn with warning | From [Python docs](https://docs.python.org/3/library/logging.html#logging.Logger.warning):
_There is an obsolete method `warn` which is functionally identical to `warning`. As `warn` is deprecated, please do not use it - use `warning` instead._
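A minimal before/after for the change (standard library `logging` only):

```python
import logging

logger = logging.getLogger(__name__)

logger.warn("obsolete spelling - avoid")  # deprecated alias of warning()
logger.warning("preferred spelling")      # functionally identical, not deprecated
```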
| 04-21-2022 14:07:45 | 04-21-2022 14:07:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,875 | closed | [WIP] Add Jukebox model | This is a draft pull request.
# What does this PR do?
This PR will progressively add the [Jukebox](https://openai.com/blog/jukebox/) model to the hub.
It is linked to [#16870](https://github.com/huggingface/transformers/issues/16870).
# Currently planned steps (WIP)
- [x] Create template files with `transformers-cli add-new-model-like`
- [x] `src/transformers/tokenization_jukebox.py`
- [x] `src/transformers/test_tokenization_jukebox.py`
- [x] `src/transformers/configuration_jukebox.py`
- [x] `src/transformers/modeling_jukebox.py`
- [ ] `src/transformers/configuration_jukebox.py`
- [ ] `docs/source/model_doc/jukebox.rst`
- [ ] `src/transformers/tokenization_jukebox_fast.py` (will most probably use WordLevel tokenizer). Also requires to implement a converter function `class JukeboxConverter(Converter):`
| 04-21-2022 13:30:35 | 04-21-2022 13:30:35 | Tokenizer and corresponding test should be done. Lacking some detailed description and also probably something about the arguments in the init that are not data but I don't remember if I should create setters (@patrickvonplaten would love to have your review)<|||||>Cool nice to see much progress here!
Feel free to also add a file that shows how you compare OpenAI's original to the current (HF) implementation<|||||>What happened to the git commit history here?<|||||>I rebased instead of merging π€ Will create a new PR to replace that one <|||||>See followup in #17826 |
transformers | 16,874 | closed | Use ACT2FN to fetch ReLU activation in the T5 model | - all activations should be fetched through ACT2FN
- it returns ReLU as `nn.Module`, which allows attaching hooks on the activation function and prints it to stdout when `print(model)` | 04-21-2022 13:17:05 | 04-21-2022 13:17:05 | _The documentation is not available anymore as the PR was closed or merged._ |
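(A short illustration of the two points above; a hedged sketch that assumes `ACT2FN["relu"]` resolves to an `nn.Module` instance, which is exactly what this PR relies on:)

```python
import torch
from torch import nn
from transformers.activations import ACT2FN

act = ACT2FN["relu"]
print(isinstance(act, nn.Module))  # True under the assumption above: it shows up in print(model) and accepts hooks


def log_activation(module, inputs, output):
    print("ReLU output stats:", output.min().item(), output.max().item())


act.register_forward_hook(log_activation)
act(torch.randn(2, 4))
```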
transformers | 16,873 | closed | The bare ViLT Model not working with CUDA ? |
@NielsRogge - Tried in the stock Colab notebook with the latest version of Transformers. Probably the `to(device)` is not handled uniformly for Text Embeddings or Image Embeddings?
What I am trying to do: I need to extract the raw hidden_state or pooler_output for large batches of image + text pairs. Use it as a feature for training my custom model.
```python
from transformers import ViltProcessor, ViltModel
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
vilt_model = ViltModel.from_pretrained("dandelin/vilt-b32-mlm")
vilt_processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm")
vilt_model.to(device) # Added this ? As it is slow in CPU
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "hello world"
inputs = vilt_processor(image, text, return_tensors="pt")
outputs = vilt_model(**inputs)
```
Throws
```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)
```
| 04-21-2022 13:09:52 | 04-21-2022 13:09:52 | Hey! You're getting this error because your model is on GPU while your inputs are not. Could you try casting your inputs to GPU?
```py
inputs = {k:v.to('cuda') for k,v in inputs.items()}
```<|||||>Thanks, I was assuming ViltProcessor will do it for me. Closing |
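(For completeness, a sketch of the repro with both the model and the inputs moved to the same device, which is the key point from the reply above:)

```python
import requests
import torch
from PIL import Image
from transformers import ViltProcessor, ViltModel

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm")
model = ViltModel.from_pretrained("dandelin/vilt-b32-mlm").to(device)

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
inputs = processor(image, "hello world", return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}  # move every input tensor, not just the model

with torch.no_grad():
    outputs = model(**inputs)
print(outputs.pooler_output.shape)
```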
transformers | 16,872 | closed | Return input_ids in ImageGPT feature extractor | # What does this PR do?
This goes along with the deprecation of `pixel_values` to the profit of `input_ids` in the `ImageGPT` models. | 04-21-2022 12:46:10 | 04-21-2022 12:46:10 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,871 | closed | Enabling `imageGPT` auto feature extractor. | # What does this PR do?
Attempts to supersede https://github.com/huggingface/transformers/pull/16869
With less specific overrides. Thanks @ydshieh - you definitely couldn't have guessed the options I modified here :)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 04-21-2022 10:41:47 | 04-21-2022 10:41:47 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger For `ImageGPT`, we don't want to perform padding (otherwise, it fails with `None` as padding value).
As you mentioned, `ImageGPTFeatureExactor` outputs `pixel_values` (despite it is not a very good name). So @Narsil 's change
https://github.com/huggingface/transformers/blob/104ee5b3fe1b164d71dec6efd8ddd00a381d13ba/src/transformers/pipelines/base.py#L78-L81
will skip padding. (Although the comment could be more specific regarding the exceptional case)<|||||>
>
> will skip padding. (Although the comment could be more specific regarding the exceptional case)
I added this `# ImageGPT actually will use B, SEQ_LEN not tensor of shape 4` to specify there's an edge case, do you think we could phrase this better ?<|||||>> > will skip padding. (Although the comment could be more specific regarding the exceptional case)
>
> I added this `# ImageGPT actually will use B, SEQ_LEN not tensor of shape 4` to specify there's an edge case, do you think we could phrase this better ?
This is good enough for me π <|||||>Merging #16872 so you can rebase on it and make sure any fix works with that change.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @sgugger ,
Lost track of this one, is it ok to merge ?<|||||>Good for me as long as the tests passing are on a rebased main. Maybe @NielsRogge can have a look too to confirm it's good to go? |
transformers | 16,870 | closed | OpenAI's Jukebox for music generation | ### Model description
**[Jukebox](https://openai.com/blog/jukebox/)** is an autoregressive model for music generation published by OpenAI in 2020. It is based on a [hierarchical VQ-VAE](https://arxiv.org/abs/1906.00446) and Scalable Transformers (based on [Sparse Transformers](https://arxiv.org/abs/1904.10509) ) to create long music samples that can be conditioned on Genres, Artists, Timing and Lyrics.
3 sampling strategies will be made available :
- Ancestral sampling : tokens are generated in an autoregressive fashion and are then upsampled
- Windowed sampling : in order to generate long sequences, samples are repeatedly produced from overlapping windows using the previous codes as context.
- Primed sampling : a continuation of a previous audio is obtained using the VQ-VAE encoding of the audio as initial tokens for the ancestral sampling
The generated tokens are then passed through the VQ-VAE decoder to obtain the final audio.
The lyric conditional information is obtained using a Lyric Transformer model.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
The code is available at https://github.com/openai/jukebox, and weights are available at https://openaipublic.azureedge.net/jukebox/models/.
Authors :
- [prafullasd](https://github.com/prafullasd)
- [heewooj](https://github.com/heewooj)
- [jongwook](https://github.com/jongwook)
- [mcleavey](https://github.com/mcleavey)
| 04-21-2022 09:48:01 | 04-21-2022 09:48:01 | |
transformers | 16,869 | closed | Add ImageGPT to mappings | # What does this PR do?
Similar to #16857, but only for `ImageGPT`, together with the necessary to make the (pipeline) tests pass.
| 04-21-2022 07:59:50 | 04-21-2022 07:59:50 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Closed - in favor of #16871 |
transformers | 16,868 | closed | DeBerta unable to load from cache | ### System Info
```shell
Transformers=4.18.0
Pytorch=1.3.0a0+ee77ccb
Ipython=6.2.1
Python=3.6.9
the transformers-cli is unfortunately not accurate--I am on a networked file system with a base conda environment containing torch on other large libraries installed on every machine and transformers installed in a virtualenv in the filesystem.
```
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
In [1]: import transformers
In [2]: from transformers import AutoConfig, AutoModel
In [3]: config = AutoConfig.from_pretrained('microsoft/deberta-base', cache_dir='path-to-uploaded-transformers-cache')
In [6]: model = AutoModel.from_pretrained('microsoft/deberta-base', config=config, cache_dir='path-to-uploaded-transformers-cache')
ValueError Traceback (most recent call last)
/state/partition1/llgrid/pkg/anaconda/anaconda3-2020a/lib/python3.6/tarfile.py in nti(s)
188 s = nts(s, "ascii", "strict")
--> 189 n = int(s.strip() or "0", 8)
190 except ValueError:
ValueError: invalid literal for int() with base 8: 'v2\nq\x02((X'
```
### Expected behavior
I have a shared filesystem where I would like to use the DeBERTa model.
The filesystem does not allow file locking because of how the network is set up, so I download all models locally and then upload them.
I have downloaded the DeBERTa model to a cache dir; the output looks like this:
```shell
me@node:~$ ls transformers_cache/
05056f257c8d2b63ad16fd26f847c9ab9ee34e33cdfad926e132be824b237869.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
05056f257c8d2b63ad16fd26f847c9ab9ee34e33cdfad926e132be824b237869.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.json
05056f257c8d2b63ad16fd26f847c9ab9ee34e33cdfad926e132be824b237869.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.lock
c2bc27a1c7529c177696ff76b1e74cba8667be14e202359f20f9114e407f43e2.a39abb1c6179fb264c2db685f9a056b7cb8d4bc48d729888d292a2280debf8e2
c2bc27a1c7529c177696ff76b1e74cba8667be14e202359f20f9114e407f43e2.a39abb1c6179fb264c2db685f9a056b7cb8d4bc48d729888d292a2280debf8e2.json
c2bc27a1c7529c177696ff76b1e74cba8667be14e202359f20f9114e407f43e2.a39abb1c6179fb264c2db685f9a056b7cb8d4bc48d729888d292a2280debf8e2.lock
ce0ac094af27cf80bbf403595a6d47f1fc632981bf1d4c5bf69968568cbea410.e8ad27cc324bb0dc448d4d95f63e48f72688fb318a4c4c3f623485621b0b515c
ce0ac094af27cf80bbf403595a6d47f1fc632981bf1d4c5bf69968568cbea410.e8ad27cc324bb0dc448d4d95f63e48f72688fb318a4c4c3f623485621b0b515c.json
ce0ac094af27cf80bbf403595a6d47f1fc632981bf1d4c5bf69968568cbea410.e8ad27cc324bb0dc448d4d95f63e48f72688fb318a4c4c3f623485621b0b515c.lock
dataset-metadata.json
dde0725208c11536042f6f416c538792d44a2d57d1ae399bbd1bc5867e02c465.0a3ec262cb3d4f634c72ce55f2766bb88771e6499b2512830e2e63bf19dbf97a
dde0725208c11536042f6f416c538792d44a2d57d1ae399bbd1bc5867e02c465.0a3ec262cb3d4f634c72ce55f2766bb88771e6499b2512830e2e63bf19dbf97a.json
dde0725208c11536042f6f416c538792d44a2d57d1ae399bbd1bc5867e02c465.0a3ec262cb3d4f634c72ce55f2766bb88771e6499b2512830e2e63bf19dbf97a.lock
e313266bff73867debdfa78c78a9a4966d5e78281ac4ed7048c178b16a37eba7.fb501413b9cef9cef6babdc543bb4153cbec58d52bce077647efba3e3f14ccf3
e313266bff73867debdfa78c78a9a4966d5e78281ac4ed7048c178b16a37eba7.fb501413b9cef9cef6babdc543bb4153cbec58d52bce077647efba3e3f14ccf3.json
e313266bff73867debdfa78c78a9a4966d5e78281ac4ed7048c178b16a37eba7.fb501413b9cef9cef6babdc543bb4153cbec58d52bce077647efba3e3f14ccf3.lock
```
However, when I try to load from the cache, I get the header error listed above.
This error is not reproduced on my local machine (macOS).
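One possible workaround (an untested sketch for this setup): instead of pointing `cache_dir` at the hashed cache files, save the model to a plain directory on a machine with internet access, copy that directory to the shared filesystem, and load it from the path directly. The directory below is hypothetical.
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

local_dir = "/shared/fs/models/deberta-base"  # hypothetical directory on the shared filesystem

# On a machine with internet access:
AutoConfig.from_pretrained("microsoft/deberta-base").save_pretrained(local_dir)
AutoTokenizer.from_pretrained("microsoft/deberta-base").save_pretrained(local_dir)
AutoModel.from_pretrained("microsoft/deberta-base").save_pretrained(local_dir)

# On the cluster, with no network access or file locks needed:
model = AutoModel.from_pretrained(local_dir, local_files_only=True)
```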
| 04-20-2022 23:38:29 | 04-20-2022 23:38:29 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,867 | closed | Fix a bug in run_qa_no_trainer.py | # What does this PR do?
Fixes a bug in `run_qa_no_trainer.py`: the `eval_metric` dictionary produced by the [squad metric](https://huggingface.co/metrics/squad) does not contain an `'exact'` key,
so a `KeyError: 'exact'` is raised during execution.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
| 04-20-2022 23:23:34 | 04-20-2022 23:23:34 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16867). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @dreamgonfly I have a more comprehensive PR that fixes this issue. Turns out that saving of the metrics is tricky, because the unit tests (which fail in your case) actually use SQuAD v2 metric and they have a different name for the exact matching score. Naturally, you were running this code from the command line and it was using the SQuAD v1 metric. This is why your tests are failing, but the command line results are fine (it took me a few hours of head scratching to figure this out):
https://github.com/huggingface/transformers/pull/16958 |
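For readers hitting the same mismatch, a small illustrative sketch (not the actual patch in the PR above): the SQuAD v1 metric reports the exact-match score under the `exact_match` key, while the SQuAD v2 metric reports it under `exact`, so code that needs to handle both can look the key up defensively.
```python
# Illustrative only; key names follow the SQuAD / SQuAD v2 metrics.
def exact_match_score(eval_metric: dict) -> float:
    key = "exact" if "exact" in eval_metric else "exact_match"
    return eval_metric[key]

print(exact_match_score({"exact_match": 81.2, "f1": 88.6}))  # SQuAD v1 style output
print(exact_match_score({"exact": 74.1, "f1": 77.8}))        # SQuAD v2 style output
```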
transformers | 16,866 | closed | TF: rework XLA generate tests | # What does this PR do?
In the light of recent findings (https://github.com/huggingface/transformers/issues/16838), this PR reworks existing XLA generate tests. The following key changes were made:
1. Added a `@unittest.skipIf` on XLA generate tests, to skip when no GPU is present;
2. Rework XLA `sample` tests -- due to the minor numerical differences that arise when we use XLA (and that we can't control), the sampling step will gather different samples even when we use the same seed. The only thing we can properly test is whether a) we can seed them and b) the results are sensible;
3. Adds at least one XLA test where the batch size is > 1 and the inputs have different lengths, so we can confirm that masking works (GPT-2 is not working π , added a TODO);
4. Removes redundant tests (we had tests outside the integration tests that were testing the same thing). | 04-20-2022 21:24:23 | 04-20-2022 21:24:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@patrickvonplaten reintroduced the fast tests, will merge as soon as CI gets to green |
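As a rough illustration of the skip pattern mentioned in point 1 (not the exact decorator used in the transformers test suite), a test class can be skipped when no GPU is visible to TensorFlow:
```python
import unittest
import tensorflow as tf

@unittest.skipIf(len(tf.config.list_physical_devices("GPU")) == 0, "XLA generate tests require a GPU")
class XLAGenerateTests(unittest.TestCase):
    def test_xla_greedy_generate(self):
        self.assertTrue(True)  # placeholder body for the sketch
```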
transformers | 16,865 | closed | Fix multiproc metrics in no_trainer examples | # Fixes a bug in the `no_trainer` scripts involving metric evaluation
## What does this add?
Fixes up the metric evaluation in most of the `no_trainer` scripts when multi-processing is involved. The final batch gets duplicated in those situations, so the metrics are slightly higher (or lower) than their actual values.
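For context, a hedged sketch of the kind of truncation involved (not the literal diff in this PR): when the last evaluation batch is padded so it divides evenly across processes, the gathered tensors contain duplicated samples that can be dropped before metrics are computed.
```python
import torch

def drop_duplicated_tail(gathered: torch.Tensor, dataset_len: int, samples_seen: int) -> torch.Tensor:
    # Keep only samples that belong to the dataset; anything beyond `dataset_len`
    # is a duplicate added to make the final batch divisible across processes.
    remaining = dataset_len - samples_seen
    return gathered[:remaining] if remaining < gathered.shape[0] else gathered

# Toy check: 10 real samples, last gathered batch padded to 4 items.
last_batch = torch.arange(4)
print(drop_duplicated_tail(last_batch, dataset_len=10, samples_seen=8))  # tensor([0, 1])
```
In the scripts, this would be applied to the output of `accelerator.gather` on the final batch of the evaluation loop.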
## Who is it for?
Users of the `no_trainer` scripts | 04-20-2022 20:53:38 | 04-20-2022 20:53:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,864 | closed | Fix custom init sorting script | # What does this PR do?
The custom init sort script was failing when there is a comment between the test for a framework and the imports (see the diff in the main init). This PR fixes that. | 04-20-2022 20:35:01 | 04-20-2022 20:35:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,863 | closed | Module import takes too long | ### Feature request
Make
```python
import transformers
```
faster
### Motivation
When I import the transformers package, it always takes > 5 seconds on my MacBook Pro machine.
I'm not sure if it's just me but how can I make this faster?
This can get a bit annoying when my debug script is pretty fast to run.
MWE to illustrate the problem
```python
from datetime import datetime
start = datetime.now()
import transformers
print(datetime.now() - start)
```
### Your contribution
NA | 04-20-2022 19:05:20 | 04-20-2022 19:05:20 | Hey @StefanHeng, it should not take that long. What version of `transformers` do you have installed?<|||||>Interesting.
I have python version `3.9.7` and `transformers` version 4.18.0 in a conda environment. <|||||>You may have tools that you do not directly need that slow down your instantiation: are you using torch, tensorflow or flax? If you're only using one, can you try to uninstall the other two if they're installed and checking if it speeds up the install?<|||||>I'm using `torch`. Now that you mentioned it, I do also have `tensorflow` installed in the environment too.
Because I use `torch` for the architectures but I need the `tensorboard` under `tensorflow` for visualizations. <|||||>When I uninstall `tensorflow`, I'm seeing ~1 seconds for the `transformers` import. Nice!
but is it possible to speed up loading 2 backends? cos I do need the `tensorboard` functionalities under `tensorflow`... <|||||>Unfortunately, it isn't `transformers` taking all that time, it's likely `tensorflow`. Loading `tensorflow` on its own will likely take most of the time that you saw when you had both frameworks installed, as we load a tiny subset of the items when doing `import transformers`. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
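One possible setup, assuming the only TensorFlow feature needed is TensorBoard logging (untested for this environment): the standalone `tensorboard` package is enough for PyTorch's `SummaryWriter`, which avoids paying the full `tensorflow` import cost.
```python
# pip install tensorboard   (no full tensorflow install required)
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/import-demo")
for step in range(10):
    writer.add_scalar("loss", 1.0 / (step + 1), step)
writer.close()
```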
transformers | 16,862 | closed | Language model example hanging | ### System Info
```shell
Transformer: 4.17
PyTorch: 1.10.2
Public Docker image I used:
763104351884.dkr.ecr.us-west-2.amazonaws.com/huggingface-pytorch-training:1.10.2-transformers4.17.0-gpu-py38-cu113-ubuntu20.04-v1.0
Hardware:
1 AWS EC2 P4D instance, 8 A100 GPUs
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Just run the example using the command from the README:
https://github.com/huggingface/transformers/tree/v4.17.0/examples/pytorch/language-modeling
```
python run_mlm.py \
--model_name_or_path roberta-base \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 8 \
--do_train \
--do_eval \
--output_dir /tmp/test-mlm
```
It hangs after it finishes downloading the data.
```
root/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-7f695945617
eacf7.arrow
Grouping texts in chunks of 512: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
ββββββββββββββββββββββββββββββββββ| 4/4 [00:00<00:00, 14.70ba/s]
04/20/2022 18:15:03 - INFO - datasets.utils.file_utils - https://raw.githubusercontent.com/huggingface/datasets/1.18.4/metrics/accuracy/accuracy.py
not found in cache or force_download set to True, downloading to /root/.cache/huggingface/datasets/downloads/tmp0f76eppv
Downloading: 3.19kB [00:00, 3.66MB/s]
```
### Expected behavior
```shell
run training as expected
```
| 04-20-2022 18:35:19 | 04-20-2022 18:35:19 | Hello @roywei,
The example scripts are always tested and maintained with the corresponding `transformers` version.
I couldn't reproduce your error! Here is how I ran the script
```python
import sagemaker
from sagemaker.huggingface import HuggingFace
# gets role for executing training job
role = sagemaker.get_execution_role()
hyperparameters = {
'model_name_or_path':'roberta-base',
'output_dir':'/opt/ml/model',
'dataset_name':'wikitext',
'dataset_config_name': 'wikitext-2-raw-v1',
'per_device_train_batch_size': 8,
'per_device_eval_batch_size': 8,
'do_train': True,
'do_eval': True
# add your remaining hyperparameters
# more info here https://github.com/huggingface/transformers/tree/v4.17.0/examples/pytorch/language-modeling
}
# git configuration to download our fine-tuning script
git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.17.0'}
# creates Hugging Face estimator
huggingface_estimator = HuggingFace(
entry_point='run_mlm.py',
source_dir='./examples/pytorch/language-modeling',
instance_type='ml.p3.2xlarge',
instance_count=1,
role=role,
git_config=git_config,
transformers_version='4.17.0',
pytorch_version='1.10.2',
py_version='py38',
hyperparameters = hyperparameters
)
# starting the train job
huggingface_estimator.fit()
```
P.S. Is there a reason why you don't use Amazon SageMaker? <|||||>Hi I can confirm on single node it's working as expected. I'd like to rule out some possibilities on 2 node run, so trying to run it with DDP + NCCL on 2 nodes.
I added the following env var in `run_mlm.py` to help with pytorch ddp launch
```
os.environ["LOCAL_RANK"] = os.environ["OMPI_COMM_WORLD_LOCAL_RANK"]
os.environ["WORLD_SIZE"] = os.environ["OMPI_COMM_WORLD_SIZE"]
os.environ["RANK"] = os.environ["OMPI_COMM_WORLD_RANK"]
os.environ["MASTER_ADDR"]="algo-1"
os.environ["MASTER_PORT"]="12345"
```
also added distribution in estimator API
```
distribution = {"mpi":{"enabled":True}}
huggingface_estimator = HuggingFace(
entry_point='run_mlm.py',
source_dir='./examples/pytorch/language-modeling',
instance_type='ml.p3.2xlarge',
instance_count=1,
role=role,
git_config=git_config,
transformers_version='4.17.0',
pytorch_version='1.10.2',
py_version='py38',
distribution=distribution,
hyperparameters = hyperparameters
)
```
Above will produce a hang, let me know if I did anything wrong. Thanks!
<|||||>I think the key difference is if I use the distributed launcher, the job hanged at beginning of the training, if I specify the distribution and num of nodes to 1, it would still hang.
```
mpirun -N xxx python run_mlm.py
```
If I do not specify distribution arg, it will be ran using `python run_mlm.py` directly and it works fine on 1 node.<|||||>Hi, I've found the issue, it's from our user's modification of the script, that's setting `preprocessing_num_workers` to a very large number, leading to the hang. Removing that makes both NCCL and SMDDP works. The defaults in this example is working fine as well.
Thanks for all the help @philschmid .
Closing!<|||||>Hey @roywei,
not sure if you knew this but all `examples/` scripts, including `run_mlm.py` support distributed training (data & model parallelism) out of the box through the `Trainer`, here is an example notebook for Data parallelism
* https://github.com/huggingface/notebooks/blob/main/sagemaker/03_distributed_training_data_parallelism/sagemaker-notebook.ipynb<|||||>Thanks a lot! I will pass this as a reference to our user. |
transformers | 16,861 | closed | Add onnx config for RoFormer | # What does this PR do?
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- #16308
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- ~[ ] Did you write any new necessary tests?~
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-20-2022 18:13:40 | 04-20-2022 18:13:40 | Hey nice PR! Did you try to convert one `RoFormer` model with your add ?
If so you could add the converted model to the [ONNXConfig for all](https://huggingface.co/OWG) organization, it would be awesome!<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> Hey nice PR! Did you try to convert one `RoFormer` model with your add ? If so you could add the converted model to the [ONNXConfig for all](https://huggingface.co/OWG) organization, it would be awesome!
```
python -m transformers.onnx -m junnyu/roformer_chinese_sim_char_ft_small onnx/model.onnx
Validating ONNX model...
-[β] ONNX model output names match reference model ({'last_hidden_state'})
- Validating ONNX Model output "last_hidden_state":
-[β] (2, 8, 384) matches (2, 8, 384)
-[β] all values close (atol: 1e-05)
All good, model saved at: onnx/model.onnx
```
:) <|||||>It seems that there is 2x `sequence-classification` for features
> All good, model saved at: onnx/model.onnx
Nice news!<|||||>> Thanks for adding RoFormer @skrsna and thank you to @ChainYo for the review!
>
> I agree with @ChainYo's suggestions and would also like to see this included in the slow tests if possible. Could you add a roformer checkpoint to `text_onnx_v2.py` please, e.g. maybe this works well: https://huggingface.co/junnyu/roformer_chinese_base
>
> Then please run
>
> ```
> RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -s -k "roformer"
> ```
Thanks for the review @ChainYo and @lewtun, I added roformer to the onnx test case
```
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_65_roformer_causal_lm If you want to use `RoFormerForCausalLM` as a standalone, add `is_decoder=True.`
PASSED
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_66_roformer_default PASSED
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_67_roformer_masked_lm PASSED
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_68_roformer_multiple_choice PASSED
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_69_roformer_question_answering PASSED
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_70_roformer_sequence_classification PASSED
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_71_roformer_token_classification PASSED
``` |
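As a rough usage sketch (untested, and not part of this PR), the exported `model.onnx` can then be loaded with ONNX Runtime; the checkpoint name matches the conversion command above and the input text is illustrative:
```python
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_sim_char_ft_small")
session = ort.InferenceSession("onnx/model.onnx")

inputs = tokenizer("今天天气非常好", return_tensors="np")
outputs = session.run(None, dict(inputs))
print(outputs[0].shape)  # (batch_size, sequence_length, hidden_size)
```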
transformers | 16,860 | closed | [docs] fix url | Fix broken link
Fixes: https://github.com/huggingface/transformers/issues/16854
@sgugger | 04-20-2022 17:12:46 | 04-20-2022 17:12:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,859 | closed | Add OnnxConfig for ConvBERT | # What does this PR do?
I have added OnnxConfig for ConvBERT model. I have checked all features available (which are the same as DistilBERT), so the features list seems to be good.
## Who can review?
Models:
- albert, bert, xlm: @LysandreJik
| 04-20-2022 17:12:27 | 04-20-2022 17:12:27 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The conversion is working as demonstrated by the fact I successfully converted a base model and uploaded it on the hub.
Check the model [here](https://huggingface.co/OWG/convbert-base-spanish)
Here is the command I have used:
```bash
$ python -m transformers.onnx --model=mrm8488/convbert-base-spanish --feature=default onnx/
> Validating ONNX model...
> -[β] ONNX model output names match reference model ({'last_hidden_state'})
> - Validating ONNX Model output "last_hidden_state":
> -[β] (2, 8, 768) matches (2, 8, 768)
> -[β] all values close (atol: 1e-05)
> All good, model saved at: onnx/model.onnx
```<|||||>> The conversion is working as demonstrated by the fact I successfully converted a base model and uploaded it on the hub.
>
> Check the model [here](https://huggingface.co/OWG/convbert-base-spanish)
Awesome job on pushing this to the Hub! We have a PR in `optimum` coming soon that will allow you to also download these checkpoints directly as ONNX model objects: https://github.com/huggingface/optimum/pull/113<|||||>> Awesome job on pushing this to the Hub! We have a PR in `optimum` coming soon that will allow you to also download these checkpoints directly as ONNX model objects: [huggingface/optimum#113](https://github.com/huggingface/optimum/pull/113)
I dreamed about this and wanted to give it a shot if I had time (spoiler: not in the past months) but someone did it!!!
Btw the tests seem to pass!<|||||>I saw on another onnx config contribution that I could add the model to onnx tests, is it mandatory ?
Check : #16887<|||||>> I saw on another onnx config contribution that I could add the model to onnx tests, is it mandatory ? Check : #16887
Ah yes, well spotted! You can pick one of the checkpoints from here and add it to the test: https://huggingface.co/models?search=convbert
This will ensure this export is tested as part of our daily CI with the slow tests :)<|||||>Thanks for including the test case - looks great! |
transformers | 16,858 | closed | Checkpoints are NOT saved in GCS (Google Cloud Storage) | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@patrickvonplaten @patil-suraj @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
I am working with BART model in Google Colab:
```
from google.colab import auth
auth.authenticate_user()
project_id = "my_project_id"
!gcloud config set project {project_id}
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-base')
....
training_args = Seq2SeqTrainingArguments(output_dir='gs://my_bucket/models/')
```
### Expected behavior
```shell
It is supposed to save the checkpoints in GCS (Google Cloud Storage). However, it is saving the checkpoints in my local.
In fact, I can see my files if I type:
!gsutil ls gs://my_bucket/
```
But I cannot save my checkpoints.
Do you have any idea how I can fix it?
Thank you in advance.
| 04-20-2022 15:53:44 | 04-20-2022 15:53:44 | Hi @JessicaLopezEspejel π Most file writing functions, like `torch.save`, don't support writing to/reading from GCS out of the box. There are a few workarounds:
- [edit] See @mikcnt comment [here](https://github.com/huggingface/transformers/issues/16858#issuecomment-1114924716)
- Use TensorFlow, whose native read/write functions support GCS when properly implemented (see [this](https://www.tensorflow.org/api_docs/python/tf/io/gfile/GFile)) ;)
- Wrap your existing code so as to call read from/write to GCS externally;
- After running your code, manually copy files around with `gsutil -m cp -r ...` ([docs](https://cloud.google.com/storage/docs/gsutil/commands/cp))
Also see this [similar StackOverflow question](https://stackoverflow.com/questions/57898998/is-it-possible-to-load-a-pretrained-pytorch-model-from-a-gcs-bucket-url-without).<|||||>Hello @gante
I see, thank you very much. I will try one of the workarounds you propose. :)
<|||||>Hello! I just wanted to clarify a little detail about what @gante said: `torch.save` actually supports writing to GCS! You just need to change the way you use it a bit :) I gave a complete explanation on how to do that [here](https://stackoverflow.com/questions/69100496/not-able-to-save-model-to-gs-bucket-using-torch-save).<|||||>Hello @mikcnt, thank you so much! I will try it. <|||||>@mikcnt thanks for the clarification π (gonna post-edit my comment, for future reference) |
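For reference, an untested sketch of the approach @mikcnt describes: stream the checkpoint through a GCS file object (via `gcsfs`) instead of passing a `gs://` path straight to `torch.save`. The bucket, path and project name below are illustrative, and valid Google Cloud credentials are assumed.
```python
import gcsfs
import torch

fs = gcsfs.GCSFileSystem(project="my_project_id")   # illustrative project
state = {"weights": torch.zeros(2, 2)}               # stand-in for model.state_dict()

with fs.open("gs://my_bucket/models/checkpoint.pt", "wb") as f:
    torch.save(state, f)

with fs.open("gs://my_bucket/models/checkpoint.pt", "rb") as f:
    restored = torch.load(f)
```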
transformers | 16,857 | closed | Add missing entries in mappings | # What does this PR do?
Add missing entries in mappings: `TOKENIZER_MAPPING`, `FEATURE_EXTRACTOR_MAPPING`, `PROCESSOR_MAPPING`
The remaining models that don't have any `tokenizer/feature_extractor/processor` are:
- EncoderDecoder
- VisionEncoderDecoder
- SpeechEncoderDecoder
- DecisionTransformer
which are expected to have no processor. | 04-20-2022 15:15:25 | 04-20-2022 15:15:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> * The `ImageGPT` changes should probably go in their own PR
Without the changes, the `test_pipeline` tests would fail for `ImageGPT`. Previously, `ImageGPT` was not used in pipeline testing because no tokenizer/feature_extractor could be found for it.
I can add `("imagegpt", "ImageGPTFeatureExtractor")` in another PR instead of this PR, and make the necessary changes along with it. Let me know if this is necessary π <|||||>Yes, I think ImageGPT should be added in a separate PR since it requires additional changes.<|||||>The changes on ImageGPT are now on a separate PR: #16869 |
transformers | 16,856 | closed | How to convert the bert model in tf1 ckpt format which converted by the pytorch version to tf1/tf2 SavedModel? | ### System Info
```shell
- `transformers` version: 4.2.1
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-debian-buster-sid
- Python version: 3.6.13
- PyTorch version (GPU?): 1.10.0+cu102 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: <yes>
- Using distributed or parallel set-up in script?: <no>
```
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Follow the official demo `convert_bert_pytorch_checkpoint_to_original_tf.py` to **convert the PyTorch BERT model to a TensorFlow 1 ckpt model**.
2. Then try to **convert bert.ckpt to the SavedModel format**:
```python
def convert_tf1_ckpt_to_saved_model(ckpt_dir):
    # ckpt to SavedModel
    saver = tf.compat.v1.train.import_meta_graph(ckpt_dir + "/bert.ckpt.meta")
    input_ids = tf.compat.v1.placeholder(tf.int32, [None, 64], name='input_ids')
    attention_mask = tf.compat.v1.placeholder(tf.int32, [None, 64], name='attention_mask')
    inputs = {"input_ids": input_ids, "attention_mask": attention_mask}
    session = tf.compat.v1.Session()
    saver.restore(session, ckpt_dir + "/bert.ckpt")
    output = tf.compat.v1.placeholder(tf.float32, [None, 64], name='output')
    # output = session.run(output, feed_dict=inputs)
    tf.compat.v1.saved_model.simple_save(session, ckpt_dir + "/savedmodel",
                                         inputs=inputs,
                                         outputs={"output": output})
```
The code in step 2 is probably wrong because **the inputs and outputs are not specified correctly**. I followed this script [https://stackoverflow.com/questions/44251328/tensorflow-print-all-placeholder-variable-names-from-meta-graph](url) to **get the inputs and outputs of the meta graph**, but it did not work.
3. Loading the SavedModel-format BERT, I got an error:
`tf_model = tf.saved_model.load("./savedmodel")`
error info:
tensorflow.python.ops.op_selector.UnliftableError: A SavedModel signature needs an input for each placeholder the signature's outputs use. An output for signature 'serving_default' depends on a placeholder which is not an input (i.e. the placeholder is not fed a value).
Unable to lift tensor <tf.Tensor 'output:0' shape=(None, 64) dtype=float32> because it depends transitively on placeholder <tf.Operation 'output' type=Placeholder> via at least one path, e.g.: output (Placeholder)
### Expected behavior
```shell
The bert model in tf1 ckpt format converted by the pytorch version can be correctly converted to tf1/tf2 SavedModel format, and the SavedModel version model can be loaded and deployed correctly.
```
| 04-20-2022 13:26:59 | 04-20-2022 13:26:59 | cc @Rocketknight1 @gante <|||||>Hi @catqaq π Thank you for posting a detailed description. Before I dig into the technical problem, I'd like to ask you a more general question.
Is it your final goal to obtain a TF `SavedModel` containing `BERT`? If you don't have requirements regarding how it's done, I believe you can also achieve it from our `from_pretrained` model method. See the following example
```python
# save the model with native TF functions and a signature ready for the maximum length
import tensorflow as tf
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained('bert-base-uncased')
text = "This is a test input."
encoded_input = tokenizer(text, return_tensors='tf', padding="max_length")
model._saved_model_inputs_spec = None
model._set_save_spec(dict(encoded_input)) # <-- resizes the expected input to max model length
tf.saved_model.save(model, '/tmp/bert')
# load the saved model with native TF functions
model_tf = tf.saved_model.load("/tmp/bert")
# confirm that they both work and have the same output
hf_model_output = model(encoded_input)
tf_model_output = model_tf(encoded_input)
assert tf.experimental.numpy.allclose(
hf_model_output["last_hidden_state"],
tf_model_output["last_hidden_state"],
atol=1e-5
)
```
Let me know if this solves your problem :)<|||||>@gante Hi, thanks for your reply. **I've trained a series of models based on bert by pytorch**. So, **my final goal is to convert the pretrained pytorch version bert model to tensorflow savedmodel for deployment**. So, i tried to follow your demo(convert_bert_pytorch_checkpoint_to_original_tf.py) to convert my pytorch model to tensorflow1 ckpt model. But ckpt model is not convenient to deploy. <|||||>@catqaq I see. That script is a bit outdated, and TF 1 is indeed inconvenient. The following script should work for your case:
```python
# save the model with native TF functions and a signature ready for the maximum length
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModel
model_path = "/path/to/your/pytorch/model/saved/locally"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = TFAutoModel.from_pretrained(model_path, from_pt=True)
text = "This is a test input."
encoded_input = tokenizer(text, return_tensors='tf', padding="max_length")
model._saved_model_inputs_spec = None
model._set_save_spec(dict(encoded_input)) # <-- resizes the expected input to max model length
tf.saved_model.save(model, '/tmp/bert')
# load the saved model with native TF functions
model_tf = tf.saved_model.load("/tmp/bert")
# confirm that they both work and have the same output
hf_model_output = model(encoded_input)
tf_model_output = model_tf(encoded_input)
assert tf.experimental.numpy.allclose(
hf_model_output["last_hidden_state"],
tf_model_output["last_hidden_state"],
atol=1e-5
)
```
Notice the `model_path` variable, which should be your local pytorch model (or the path to the pytorch model on the hub) and the `from_pt` argument in `from_pretrained`. This script should load your PT model, convert it to TF 2, and store it in the saved model format.<|||||>@gante Thanks! It works well for original pytorch bert model. But i also have some **customized models based on bert with task related layers**. I found that from_pt related to load_pytorch_checkpoint_in_tf2_model function. So i followed this function to directly convert customized pytorch bert style model. In this case, do I have to rewrite the model (especially the task related parts) with tf2 and then load the pytorch model weights?<|||||>@catqaq if you have task-specific layers that are not defined in the `transformers` repository, I'm afraid you'll also have to create a custom TF2 model architecture, as you mentioned above -- tools like our `from_pt` work because we have the architecture definition in both frameworks, in such a way that we can cross-load weights :)
Since the issue you're having is not a bug in the repository, but rather a challenging use-case, I'd like to suggest using [our forum](https://discuss.huggingface.co/).<|||||>@gante Yep, i will to create a custom TF2 model architecture as close to pytorch version as possible. Thanks for your advice.<|||||>Awesome. I'm closing this issue, but feel free to reopen if you run into new `transformers`-related problems π <|||||>@gante Hi, I rewrite my model in tf2 and convert pt_model to tf_model sucessfully. But the difference between the pt_model and tf_model is huge. At first I thought there was something wrong with my tf_model implementation, but I **double-checked the mapping of every variable and made sure there was nothing wrong**. So, i tried to check the original bert model and **the difference between pt_bert and tf_bert(or tf_bert_from_pt) is also huge**!
Here is the code:
```python
from transformers import AutoTokenizer, TFAutoModel, AutoModel
import torch
import numpy

bert_path = "./models/pretrained_models/bert-base-chinese"
pt_bert = AutoModel.from_pretrained(bert_path)
tf_bert = TFAutoModel.from_pretrained(bert_path)
tf_bert_from_pt = TFAutoModel.from_pretrained(bert_path, from_pt=True)

def compare_tf_pt_model_consistency(pt_model, tf_model, pt_inputs=None, tf_inputs=None, tokenizer=None, texts=None,
                                    threshold=2e-2):
    pt_inputs = pt_model.dummy_inputs if pt_inputs is None else pt_inputs
    tf_inputs = tf_model.dummy_inputs if tf_inputs is None else tf_inputs
    tf_outputs = tf_model(tf_inputs, training=False)  # build the network
    pt_model.eval()  # added: must be in eval mode
    with torch.no_grad():
        pt_outputs = pt_model(**pt_inputs)
    np_pt = pt_outputs[0].numpy()
    np_tf = tf_outputs[0].numpy()
    diff = numpy.amax(numpy.abs(np_pt - np_tf))
    print("Max absolute difference between models outputs {}".format(diff))
    assert diff <= threshold, "Error, model absolute difference is >{}: {}".format(threshold, diff)

compare_tf_pt_model_consistency(pt_bert, tf_bert_from_pt, pt_inputs=None, tf_inputs=None)
compare_tf_pt_model_consistency(pt_bert, tf_bert, pt_inputs=None, tf_inputs=None)
```
Then I got the error caused by the huge diff between `pt_bert` and `tf_bert_from_pt`/`tf_bert`:
AssertionError: Error, model absolute difference is >0.02: **13.102652549743652**
If we change the model to **bert-base-uncased**, the diff will be 6.858981132507324, also huge!
AssertionError: Error, model absolute difference is >0.02: 6.858981132507324
And tf_model.summary() seems a bit strange with Output Shape=multiple:
tf_bert.summary()
Model: "tf_bert_model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
bert (TFBertMainLayer) multiple 102267648
=================================================================
Total params: 102,267,648
Trainable params: 102,267,648
Non-trainable params: 0<|||||>Hi @catqaq π I'm afraid I'm unable to directly help you with your custom model -- as per our "[issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md#the-github-issues)", we simply don't have the capacity to support custom code :)
I can, however, tell you what I would do in your situation. TF-PT mismatches often arise in two places:
1. weights that were not properly loaded π check for warnings in the console
2. non-trainable parameters that have different default values in PT and TF π add a debugger inside your model and check where the variables start diverging<|||||>@gante Hi, the point now is that not only does my custom model have a difference between tf_model and pt_model, **the original bert model also has a huge difference between tf_model and pt_model**. As mention above, **there is a diff up to 13.102652549743652 for bert-base-chinese and 6.858981132507324 for bert-base-uncased**. Maybe it is similar to https://github.com/huggingface/transformers/issues/3386<|||||>@catqaq -- that indeed seems like a relevant bug, thanks for pointing that out! I will have a look and let you know of my findings.<|||||>@catqaq the weights and the conversion script seem fine. Have a look at the script below:
```python
import numpy as np
from transformers import AutoModel, AutoTokenizer, TFAutoModel
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model_pt = AutoModel.from_pretrained(model_name)
model_tf = TFAutoModel.from_pretrained(model_name)
model_tf_from_pt = TFAutoModel.from_pretrained(model_name, from_pt=True)
text = "Replace me by any text you'd like."
encoded_input_pt = tokenizer([text], return_tensors="pt")
encoded_input_tf = tokenizer([text], return_tensors="tf")
output_pt = model_pt(**encoded_input_pt)["last_hidden_state"].detach().numpy()
output_tf = model_tf(**encoded_input_tf)["last_hidden_state"].numpy()
output_tf_from_pt = model_tf_from_pt(**encoded_input_tf)["last_hidden_state"].numpy()
print("TF - PT max diff:", np.max(np.abs(output_pt - output_tf)))
print("TF - TF (loaded from PT) max diff:", np.max(np.abs(output_tf - output_tf_from_pt)))
```
The output for this script on `main` is
```
TF - PT max diff: 9.059906e-06
TF - TF (loaded from PT) max diff: 0.0
```
Replacing `model_name` to `"bert-base-chinese"` and `text` to `"θΏζ―δΈδΈͺιζΊζ΅θ―"`, the output is
```
TF - PT max diff: 8.583069e-06
TF - TF (loaded from PT) max diff: 0.0
```<|||||>@gante Hi, thanks for the experimental verification. What caused our experimental results to be completely different? Transformers version? I use 4.2.1, which version of your transformers?<|||||>Hi @catqaq -- I'm testing on the current version of transformers' code, which is the unreleased `4.19.0.dev0`. Perhaps it is an error related to an older version, but the latest release (`4.18.0`) should also behave correctly. My advice here would be to update your version of transformers, since we don't fix older versions :)<|||||>@gante Thanks! Our production environment uses an older version, and I didn't find a similar problem in the issues, so at first I thought the diff was caused by incorrect parameter loading. But **after checking all the parameters one by one, I started to suspect that it was a version problem**. **Then I started trying to use the new version of the conversion code, and the results were the same as the old version of the conversion code**. Therefore, I guess that there may be **a little problem with the old version of TFBertModel, which may be similar to the situation of TFALBERT**https://github.com/huggingface/transformers/issues/3386.
I will continue the experiment tomorrow:
1. Experiment in the environment of the new version to determine that the diff is caused by the old version. This might be a good opportunity to upgrade our transformers!
2. Take a look at the **differences in the implementation of TFBertModel between the old and new versions**, and hope to find out **the source of the huge diff of the old version**. If you have time, any help would be greatly appreciated.<|||||>Hi@gante, It works! In transformers 4.18, max absolute difference between models (tf_custom_model_from_pt and pt_custom_model) outputs is 1.7881393432617188e-07. Then I compared TFBert implementations in 4.2.1 and 4.18. For
TFBertModel and TFBertMainLayer, the most obvious difference is that new version removed **input_processing** function. Could this be the reason?
<|||||>@catqaq probably not -- the `input_processing` got replaced by the `@unpack_inputs` decorator on top of `call()`. They have the same purpose, but the decorator results in cleaner code and in direct variable use (as opposed to `inputs[variable_name]`)<|||||>Possibly something got changed in the tool that converts PT weights into TF. I can't confirm, as that predates my existence at HuggingFace :)<|||||>@gante Yep, i didn't notice the @unpack_inputs decorator which plays a similar role as input_processing at first. But as mentioned above, new version convertion scripts combined with old transformers leads to the same results as old convertion scripts. Anyway, let's put this question aside for now. One last question is that for my custom model based on bert, is it necessary to use @unpack_inputs decorator before `call()`?<|||||>It is not :) The decorator does things like converting deprecated arguments into the new arguments, unpacking tensor dictionaries, etc. If you pass the expected inputs with the correct type, it shouldn't be needed.<|||||>@gante Got it. thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,855 | closed | `offset_mapping` is strange for non-ascii token | ### System Info
```shell
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
```
### Who can help?
@SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Use BertTokenizerFast
2. Tokenize the text:
```python
text = "The Dungan Revolt ( 1862β77 ) or Tongzhi Hui Revolt ( , Xiao'erjing : ΨͺΩΩΨΩΩΩ Ψ¨ΩΩΩΨ§/ΩΩΩΩΨ§ , ) or Hui ( Muslim ) Minorities War was a mainly ethnic and religious war fought in 19th-century western China , mostly during the reign of the Tongzhi Emperor ( r. 1861β75 ) of the Qing dynasty ."
outputs = self.tokenizer(text,
                         padding="max_length",
                         truncation=False,
                         max_length=self.config.max_seq_length,
                         return_offsets_mapping=True)
```
3. The `offset_mapping` misses some subtokens, for example the subtoken at character position 79.
<img width="83" alt="ζͺε±2022-04-20 δΈε8 46 55" src="https://user-images.githubusercontent.com/39556019/164233684-192d1c32-0f67-4198-8e59-e60426fce32d.png">
### Expected behavior
```shell
the offset_mapping should be:
[...
(70, 71),
(71, 72),
(73, 74),
(76, 77),
(78, 80), # this line ****
(81, 82),
(83, 84),
... ]
```
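For anyone reproducing this, a small self-contained way to line tokens up with the offsets the tokenizer reports (checkpoint and text are illustrative):
```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
text = "Xiao'erjing 1862-77"
enc = tokenizer(text, return_offsets_mapping=True)
for token, (start, end) in zip(enc.tokens(), enc["offset_mapping"]):
    print(f"{token!r:>12} -> ({start:2d}, {end:2d}) -> {text[start:end]!r}")
```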
| 04-20-2022 12:48:20 | 04-20-2022 12:48:20 | The environment info is:
- `transformers` version: 4.18.0
- Platform: Linux-4.15.0-136-generic-x86_64-with-glibc2.27
- Python version: 3.9.12
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in> |
transformers | 16,854 | closed | Docs link to deepspeed infinity redirect to a 404 | ### System Info
```shell
Docs related
```
### Who can help?
@stas00 @sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Go to https://huggingface.co/docs/transformers/performance
2. Click link with "DeepSpeed-Infinity" text
3. Get a page that says "The documentation page DEEPSPEED doesn't exist in v4.18.0, but exists on the main version. Click here to redirect to the main version of the documentation."
4. Redirects to https://huggingface.co/docs/transformers/main/en/deepspeed, which is 404
### Expected behavior
```shell
Ideally this should not link to an error.
```
| 04-20-2022 11:58:58 | 04-20-2022 11:58:58 | Thanks a lot for the report, Omar - fixed here: https://github.com/huggingface/transformers/pull/16860 |