repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 18,759 | closed | [WIP] testing graphql doc deletion | null | 08-25-2022 08:02:43 | 08-25-2022 08:02:43 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18759). All of your documentation changes will be reflected on that endpoint. |
transformers | 18,758 | closed | Add MSN checkpoints to ViT | ### Feature request
It's been a while since Meta released the [MSN](https://github.com/facebookresearch/msn) (Masked Siamese Networks) checkpoints. MSN is known to perform well in few-shot settings, and the few-shot regime is important for real-world applications. They have only released the encoder weights.
I have worked on a [Colab Notebook](https://colab.research.google.com/gist/sayakpaul/51556ce57facea02041cf0c0f1c3fed3/scratchpad.ipynb) that converts the released MSN checkpoints into HF format and performs assertions to ensure the converted weights can be loaded into HF's ViT classes.
This issue proposes adding the MSN checkpoints to be compatible with HF's ViT classes.
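For illustration, a minimal sketch of how a converted checkpoint could be loaded and sanity-checked (the local path is hypothetical, standing in for the output of the conversion notebook):
```python
import torch
from transformers import ViTModel

# Load the converted MSN encoder weights into the standard ViT class.
# "./vit-msn-converted" is a placeholder directory produced by the conversion notebook.
model = ViTModel.from_pretrained("./vit-msn-converted")

pixel_values = torch.randn(1, 3, 224, 224)  # dummy batch
with torch.no_grad():
    outputs = model(pixel_values=pixel_values)
print(outputs.last_hidden_state.shape)  # (batch, num_patches + 1, hidden_size)
```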
### Motivation
MSN is particularly useful in few-shot learning regimes.
### Your contribution
Can contribute to the checkpoints and other related utilities.
@NielsRogge could you assign this issue to me? | 08-25-2022 07:22:05 | 08-25-2022 07:22:05 | |
transformers | 18,757 | closed | AdamW algorithm is not the same as in the referenced paper | ### System Info
Not relevant
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The HuggingFace implementation of AdamW is not the same as the algorithm from the paper [Decoupled Weight Decay Regularization](https://arxiv.org/abs/1711.05101), as claimed in the documentation.
It is easy to show this by mathematical proof, if it is not obvious by inspection. The HuggingFace AdamW has been deprecated in any case.
But this is still a bug which "caught" me. The issue is referenced, but not as a bug, in the closed issue [#3407](https://github.com/huggingface/transformers/issues/3407).
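For reference, the decoupled update from the paper (Loshchilov & Hutter) can be written as below, where $\eta_t$ is the schedule multiplier, $\alpha$ the learning rate and $\lambda$ the weight decay coefficient; the key point is that the decay term is applied directly to the previous parameters and is not folded into the Adam gradient step:
```latex
\theta_t \leftarrow \theta_{t-1} - \eta_t \left( \frac{\alpha\, \hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon} + \lambda\, \theta_{t-1} \right)
```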
### Expected behavior
The PyTorch implementation matches the algorithm in the paper. | 08-25-2022 07:18:24 | 08-25-2022 07:18:24 | Hey @quantitative-technologies, we're in the process of removing the AdamW implementation, as you have seen it is deprecated. It will be removed in v5. |
transformers | 18,756 | closed | Automodel "from_config" fails when config is loaded from PretrainedConfig.from_dict(config) | ### System Info
If the config is saved to a dict, it cannot be converted back to the correct config class?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import transformers
from transformers import AutoModelForSeq2SeqLM

vocab_size = 128  # placeholder value, only needed to build the config
bart_config = transformers.BartConfig(vocab_size=vocab_size, d_model=64, encoder_ffn_dim=64, encoder_layers=2, decoder_ffn_dim=64, decoder_layers=2)
config = bart_config.to_dict()
AutoModelForSeq2SeqLM.from_config(transformers.PretrainedConfig.from_dict(config))
```
This ends up in the following error. What is the alternative to loading autoconfig from a dict?
```text
ValueError: Unrecognized configuration class <class 'transformers.configuration_utils.PretrainedConfig'> for this kind of AutoModel: AutoModelForSeq2SeqLM.
Model type should be one of BartConfig, BigBirdPegasusConfig, BlenderbotConfig, BlenderbotSmallConfig, EncoderDecoderConfig, FSMTConfig, LEDConfig, LongT5Config, M2M100Config, MarianConfig, MBartConfig, MT5Config, PegasusConfig, PLBartConfig, ProphetNetConfig, T5Config, XLMProphetNetConfig.
```
### Expected behavior
`transformers.PretrainedConfig.from_dict(config)` should return the correct config type | 08-25-2022 01:44:10 | 08-25-2022 01:44:10 | Hi @elangovana 👋 `PretrainedConfig` is a base class, which is not supposed to work on its own.
Since you know what class you are storing in the example, you can simply do
```python
from transformers import AutoModelForSeq2SeqLM, BartConfig
vocab_size = 128
bart_config = BartConfig(
vocab_size=vocab_size, d_model=64, encoder_ffn_dim=64, encoder_layers=2, decoder_ffn_dim=64, decoder_layers=2
)
config = bart_config.to_dict()
model = AutoModelForSeq2SeqLM.from_config(BartConfig.from_dict(config))
```
Alternatively, if you don't want to rely on a specific config class at load time, you can also do
```python
from transformers import AutoConfig, AutoModelForSeq2SeqLM, BartConfig
vocab_size = 128
bart_config = BartConfig(
vocab_size=vocab_size, d_model=64, encoder_ffn_dim=64, encoder_layers=2, decoder_ffn_dim=64, decoder_layers=2
)
bart_config.save_pretrained("/tmp/")
model = AutoModelForSeq2SeqLM.from_config(AutoConfig.from_pretrained("/tmp"))
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,755 | closed | streamlining 'checkpointing_steps' parsing | # What does this PR do?
As discussed in #18720, changing the `checkpointing_steps` parsing logic in all PyTorch examples with no trainers.
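For context, `--checkpointing_steps` in the no-trainer scripts accepts either an integer number of steps or the string `"epoch"`; a simplified sketch of the kind of parsing being streamlined (not the exact diff):
```python
# args.checkpointing_steps comes in as a string from argparse (or None).
checkpointing_steps = args.checkpointing_steps
if checkpointing_steps is not None and checkpointing_steps.isdigit():
    checkpointing_steps = int(checkpointing_steps)
# otherwise it stays as None or as the literal string "epoch"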
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
@muellerzr | 08-24-2022 21:47:38 | 08-24-2022 21:47:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,754 | closed | Pass the original mask tensor when using ds-inference at BLOOM architecture | # What does this PR do?
This resolves DeepSpeed-Inference integration for the BLOOM model. There is no need to convert `attention_mask` to `causal_mask` when using DeepSpeed to run inference. There is [this PR](https://github.com/microsoft/DeepSpeed/pull/2217) on the DeepSpeed side that will add a `ds_inference` attribute to the transformer network, and the mask remains intact in that case. Note that this PR will not change the flow on the modeling side and will only use a different mask when one calls `deepspeed.init_inference` on the created model.
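A rough sketch of the behaviour described above (hypothetical and simplified; the `ds_inference` attribute comes from the DeepSpeed PR, and the helper call is abbreviated):
```python
# inside BloomModel.forward (sketch, not the actual diff)
if getattr(self, "ds_inference", False):
    # DeepSpeed-Inference kernels handle causality themselves,
    # so the original attention_mask is passed through unchanged.
    causal_mask = attention_mask
else:
    causal_mask = self._prepare_attn_mask(attention_mask, input_shape, past_key_values_length)
```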
cc: @stas00
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-24-2022 18:08:32 | 08-24-2022 18:08:32 | I confirm that this solves the problem of DS-Inference, which broke after this commit: https://github.com/huggingface/transformers/commit/b69a62d579ded31d490e0c9a01bf9ecb81cb9b65
cc: @thomasw21, @younesbelkada
and also @LysandreJik - I hope it's OK if we sneak in a bit of `ds-inference` code into the BLOOM modeling code.
Unless of course you can think of how to make this configurable. Perhaps a flag in the config file? That way other frameworks could choose that as well.
Thanks.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Hum I'm not too familiar with how DS works in this case, but would something like `self._prepare_attn_mask = lambda attention_mask, *args, **kwargs: attention_mask` work? This would live inside DS instead of the modeling?<|||||>That's probably a sufficient hack.
Reza, will this work for your needs?<|||||>@stas00 Will try, if it works, I am happy to use this instead.
Thanks @thomasw21 <|||||>Okay, I have made this change on the DeepSpeed side [(here)](https://github.com/microsoft/DeepSpeed/pull/2217/commits/82a37d6d4393d9ed76a80dcb392c51e48188fd14#diff-fdbdbba9c01d8349399cfadeba4cce1c5a34fc15c38ec44a335e1340c11b13a4R177) as suggested. So, I am gonna close this PR as it not needed anymore.
Thanks,
Reza |
transformers | 18,753 | closed | Fixing OPT fast tokenizer option. | # What does this PR do?
Fixes the relevant issues:
https://huggingface.co/wjmcat/opt-350m-paddle/discussions/1
https://huggingface.slack.com/archives/C01N44FJDHT/p1653511495183519 (internal link)
https://github.com/huggingface/transformers/pull/17088#discussion_r871246439
Basically, the OPT tokenizer is a GPT2 tokenizer that adds a BOS token at the start
of the tokens.
Lots of back and forth at the time, but the truth is that the `ByteLevel(trim_offsets=False)`
post_processor actually doesn't do anything, so we can just replace it with a simple
`TemplateProcessing` processor and everything works correctly.
This PR fixes the biggest culprit (missing BOS token on the fast tokenizer version).
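A quick way to sanity-check the fix (a sketch; `facebook/opt-350m` is just an example checkpoint):
```python
from transformers import AutoTokenizer

slow = AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=False)
fast = AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=True)

text = "Hello world"
slow_ids = slow(text).input_ids
fast_ids = fast(text).input_ids
print(slow_ids, fast_ids)
# both should now start with the BOS token id and agree with each other
assert slow_ids == fast_ids
```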
Call to witness on other issues I might have missed.
@SauLu
@patrickvonplaten
@mishig
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-24-2022 15:39:23 | 08-24-2022 15:39:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Good point!
Indeed, I had not noticed at all that `ByteLevel(trim_offsets=False)` was not doing anything. As `trim_offset` is not an argument in the `__init__` either, I don't see any problem that your solution could cause! :blush: |
transformers | 18,752 | closed | CLI: Improved error control and updated hub requirement | # What does this PR do?
This PR adds three things to the `pt-to-tf` CLI:
- Updates the hub minimum version requirement, so that all users can adequately use this CLI (the new `0.9.0` is a requirement to open a PR with a non-god token)
- Adds more flexible admissible error control (some models were failing with errors like 6e-5, which was annoying. The `5e-5` is a magic number anyways)
- Improves the commit message, in case the user fiddles with the admissible error | 08-24-2022 15:37:29 | 08-24-2022 15:37:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,751 | closed | run finetune_rag.py -- error OSError: Not enough disk space. Needed: 139.13 GiB (download: 66.09 GiB, generated: 73.03 GiB, post-processed: Unknown size) | ### System Info
faiss-cpu 1.7.2
datasets 2.4.0
psutil 5.7.0
torch 1.12.1+cu113
ray 2.0.0
pytorch-lightning 1.7.2
transformers 4.21.1
GitPython 3.1.27
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
!python3 /content/drive/MyDrive/colab/rag/finetune_rag.py \
--data_dir /content/drive/MyDrive/colab/rag/data \
--output_dir /content/drive/MyDrive/colab/rag/output_data \
--model_name_or_path facebook/rag-token-base \
--model_type rag_sequence \
--fp16 \
--gpus 8 \
--profile \
--do_train \
--do_predict \
--n_val -1 \
--train_batch_size 8 \
--eval_batch_size 1 \
--max_source_length 128 \
--max_target_length 25 \
--val_max_target_length 25 \
--test_max_target_length 25 \
--label_smoothing 0.1 \
--dropout 0.1 \
--attention_dropout 0.1 \
--weight_decay 0.001 \
--adam_epsilon 1e-08 \
--max_grad_norm 0.1 \
--lr_scheduler polynomial \
--learning_rate 3e-05 \
--num_train_epochs 100 \
--warmup_steps 500 \
--gradient_accumulation_steps 1
OSError: Not enough disk space. Needed: 139.13 GiB (download: 66.09 GiB, generated: 73.03 GiB, post-processed: Unknown size)
### Expected behavior
Why does RAG need 139 GiB? | 08-24-2022 15:22:00 | 08-24-2022 15:22:00 | Hi @wangyu185 👋 RAG is a retrieval-augmented model, meaning that it queries some database. It needs to have the database in memory. Please check the [model card](https://huggingface.co/facebook/rag-token-nq) and the [documentation](https://huggingface.co/docs/transformers/model_doc/rag) for more info.
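For local experimentation without downloading the full index, the RAG documentation shows that a dummy retrieval dataset can be used, e.g. (sketch):
```python
from transformers import RagRetriever, RagTokenizer, RagTokenForGeneration

# Loads a tiny dummy wiki index instead of the full one (for testing only).
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
)
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
```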
As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,750 | closed | Revert to rescale and safely handle flag in owlvit config | # What does this PR do?
Reverts the `rescale_image` method name back to `rescale`. This keeps the method in line with other method names such as `normalize`.
This undoes the renaming in #18677 done to address failing tests caused by `rescale` key being in OWL-ViT config and the introduction of a `rescale` method in #18499.
The OWL-ViT config has [since been updated](https://huggingface.co/google/owlvit-base-patch32/commit/17740e19dde58d657d21b970ead1cce0ea40f4da). As the model has been released, we rename the key in the config in case `rescale` is there for backwards compatibility.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 08-24-2022 15:09:39 | 08-24-2022 15:09:39 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,749 | closed | [Wav2vec2 + LM Test] Improve wav2vec2 with lm tests and make torch version dependent for now | Trying to correct potentially flaky test:
```
tests/models/wav2vec2_with_lm/test_processor_wav2vec2_with_lm.py::Wav2Vec2ProcessorWithLMTest::test_word_time_stamp_integration
```
The test actually always passed for me locally.
I've now changed list to tensor as list comparison with float numbers seems brittle. @ydshieh - just to better understand, is this test consistently failing or was it flaky? | 08-24-2022 14:55:48 | 08-24-2022 14:55:48 | It fails consistently since PT 1.12.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Note after some more debugging the reason seems to be in `datasets`. E.g. this part of the test:
```python
import datasets
from datasets import load_dataset

ds = load_dataset("common_voice", "en", split="train", streaming=True)
ds = ds.cast_column("audio", datasets.Audio(sampling_rate=16_000))
ds_iter = iter(ds)
sample = next(ds_iter)
print(sample["audio"]["array"])
```
yields different results between `torchaudio=0.11.0` and `torchaudio=0.12.1`. Opening an issue in Datasets.<|||||>@ydshieh good for a second review |
transformers | 18,748 | closed | Add image-guided object detection support to OWL-ViT | Hi,
The [OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit) model is an open-vocabulary model that can be used for both zero-shot text-guided (supported) and one-shot image-guided (not supported) object detection.
It'd be great to add support for one-shot object detection to `OwlViTForObjectDetection` such that users can query images with an image of the target object instead of using text queries - e.g. using an image of a butterfly to search for all butterfly instances in the target image. See an example below.
<img width="989" alt="Screenshot 2022-08-24 at 17 16 28" src="https://user-images.githubusercontent.com/8944735/186441941-7278676e-aecb-4c7d-b1d5-df4fb444becb.png">
To do this, we would just need to compute and use the `OwlViTModel` (alias to CLIP) embeddings of the query images instead of the text query embeddings within `OwlViTForObjectDetection.forward()`, which would take the target image + either text queries or image queries as input. Similarly, `OwlViTProcessor` would be updated to preprocess sets of (image, text) and (image, query_image).
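A rough sketch of how the proposed API could look from the user side (hypothetical: the `query_image` / `query_pixel_values` arguments do not exist yet and the names follow the suggestion above):
```python
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

target_image = Image.open("scene.jpg")      # image to search in (placeholder path)
query_image = Image.open("butterfly.jpg")   # exemplar of the object to find (placeholder path)

# `query_image` is the proposed new processor argument; it would produce `query_pixel_values`.
inputs = processor(images=target_image, query_image=query_image, return_tensors="pt")
outputs = model(
    pixel_values=inputs["pixel_values"],
    query_pixel_values=inputs["query_pixel_values"],
)
```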
@sgugger @NielsRogge @amyeroberts @LysandreJik what do you think about this? Would this be something we would like to support? | 08-24-2022 14:22:06 | 08-24-2022 14:22:06 | I think it would be a great addition, especially as it doesn't seem to be too much work to add. I'm guessing for the processor, and your description, the call signature would look something like this:
`def __call__(self, text=None, query_image=None, images=None, padding="max_length", return_tensors="np", **kwargs):`
and then we check there's at most one of `text` or `query_image`? <|||||>@amyeroberts exactly, it'd be pretty straightforward to implement. Based on the paper, image-guided detection is also less sensitive in terms of the probability threshold<|||||>Sounds good! <|||||>Hi @amyeroberts @alaradirik, I'm happy to take this up!<|||||>@unography that would be great! You can ping me if you need any help or have questions. You can also find the relevant details in the appendix of the OWL-ViT [paper](https://arxiv.org/abs/2205.06230).
<|||||>@alaradirik sure!
just to confirm the high-level changes -
1. `OwlViTProcessor` takes `query_image` as an additional param, and returns a dict like - `{pixel_values: ..., query_pixel_values: ...`
2. `OwlViTForObjectDetection.forward` takes this `query_pixel_values` as additional param
3. `image_image_embedder`, similar to `image_text_embedder`, takes this query values and returns `query_embeds`, and then we do detection on this
Does this seem correct?<|||||>@unography that seems correct. The `image_image_embedder()` method would be almost the same as the `image_text_embedder()` but would compute `query_image_embeds `instead of `text_embeds`.
However, there will be some changes to the `image_text_embedder()` method as calling the `OwlViTModel.get_text_features` and `OwlViTModel.get_image_features` within `OwlViTForObjectDetectionModel `causes memory leaks. This will be fixed in this [PR](https://github.com/huggingface/transformers/pull/18734), so it'd be great if you could wait until it is merged.<|||||>@alaradirik sure, will wait for it to get merged before proceeding with this<|||||>Hi @unography, just wanted to give you an update, the memory leak issue is fixed with this merged [PR](https://github.com/huggingface/transformers/pull/18734).
You can go ahead working on this issue if you want :)<|||||>sure, will do, thanks for informing!<|||||>Hey! I had a question about image-guided object detection. I've tested it and it works very well, but when a desired object isn't in the image, it will try to create bounding boxes just about anywhere, with a fairly high confidence score. do you have any idea to counter it? |
transformers | 18,747 | closed | Bloom 176B with deepspeed-inference: Cuda illegal memory access | ### System Info
- `transformers` version: 4.21.1
- Platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.9.0
- PyTorch version (GPU?): 1.10.2+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes, deepspeed
### Who can help?
@stas00 @patrickvonplaten @patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am running `bigscience/bloom` on a distributed cluster of 4 nodes of 4x40GB A100 each using deepspeed-inference. I am using a modified version of this script here https://github.com/bigscience-workshop/Megatron-DeepSpeed/blob/main/scripts/inference/bloom-ds-inference.py
At batch size 1, everything is fine for 18 batches (more or less reproducible, once it cleared 19), then I get the error below.
Not sure whether to submit this here, to deepspeed, or to https://github.com/bigscience-workshop/Megatron-DeepSpeed
This repo has the best visibility, but I might submit to all three and refer back to this. I'm also running on Determined AI and Kubernetes, which could be relevant.
Here are some key parts of the code:
```python
with deepspeed.OnDevice(dtype=torch.float16, device="meta"):
    model = AutoModelForCausalLM.from_pretrained(
        model_args.model_name_or_path,
        torch_dtype=torch.bfloat16
    )

model = deepspeed.init_inference(
    model,
    mp_size=training_args.world_size,
    dtype=torch.float16,
    checkpoint=checkpoint_filename,
    replace_with_kernel_inject=True
)

data_loader = DataLoader(
    dataset,
    sampler=SequentialSampler(dataset),
    batch_size=training_args.per_device_eval_batch_size,
    collate_fn=collate,
    drop_last=False,
    num_workers=0,
    pin_memory=False,
)

for step, batch in enumerate(data_loader):
    outputs = model.generate(
        batch["input_ids"].to(torch.cuda.current_device()),
        max_new_tokens=150,
        do_sample=True
    )
```
And here is the error, which occurs after 18-19 examples:
```
!!!! kernel execution error. (m: 3584, n: 609, k: 14336, error: 13)
!!!! kernel execution error. (m: 14336, n: 609, k: 3584, error: 13)
Traceback (most recent call last):
File "./ds_inference_core.py", line 186, in <module>
main(core_context, trial)
File "./ds_inference_core.py", line 132, in main
outputs = model.generate(
File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/generation_utils.py", line 1326, in generate
return self.sample(
File "/opt/conda/lib/python3.8/site-packages/transformers/generation_utils.py", line 1944, in sample
outputs = self(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/inference/engine.py", line 521, in forward
outputs = self.model_orig_fwd(*inputs, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/bloom/modeling_bloom.py", line 821, in forward
transformer_outputs = self.transformer(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/bloom/modeling_bloom.py", line 709, in forward
outputs = block(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/ops/transformer/inference/transformer_inference.py", line 842, in forward
output = self.mlp(attention_output, input, inp_norm, self.attention.attn_ob)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/ops/transformer/inference/transformer_inference.py", line 712, in forward
return DeepSpeedMLPFunction.apply(input,
File "/opt/conda/lib/python3.8/site-packages/deepspeed/ops/transformer/inference/transformer_inference.py", line 646, in forward
dist.all_reduce(output, group=mp_group)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/comm/comm.py", line 126, in log_wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/comm/comm.py", line 513, in all_reduce
return cdb.all_reduce(tensor, op, group, async_op)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/comm/torch.py", line 45, in all_reduce
return torch.distributed.all_reduce(tensor=tensor,
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1287, in all_reduce
work = group.allreduce([tensor], opts)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Fatal Python error: Aborted
Thread 0x00007fbdd7fff700 (most recent call first):
File "/opt/conda/lib/python3.8/socket.py", line 669 in readinto
File "/opt/conda/lib/python3.8/http/client.py", line 277 in _read_status
File "/opt/conda/lib/python3.8/http/client.py", line 316 in begin
File "/opt/conda/lib/python3.8/http/client.py", line 1348 in getresponse
File "/opt/conda/lib/python3.8/site-packages/urllib3/connectionpool.py", line 444 in _make_request
File "/opt/conda/lib/python3.8/site-packages/urllib3/connectionpool.py", line 703 in urlopen
File "/opt/conda/lib/python3.8/site-packages/requests/adapters.py", line 489 in send
File "/opt/conda/lib/python3.8/site-packages/requests/sessions.py", line 701 in send
File "/opt/conda/lib/python3.8/site-packages/requests/sessions.py", line 587 in request
File "/run/determined/pythonuserbase/lib/python3.8/site-packages/determined/common/requests.py", line 41 in request
File "/run/determined/pythonuserbase/lib/python3.8/site-packages/determined/common/api/request.py", line 129 in do_request
File "/run/determined/pythonuserbase/lib/python3.8/site-packages/determined/common/experimental/session.py", line 65 in _do_request
File "/run/determined/pythonuserbase/lib/python3.8/site-packages/determined/common/experimental/session.py", line 86 in get
File "/run/determined/pythonuserbase/lib/python3.8/site-packages/determined/core/_preempt.py", line 57 in _get_preemption
File "/run/determined/pythonuserbase/lib/python3.8/site-packages/determined/core/_preempt.py", line 89 in run
File "/opt/conda/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/opt/conda/lib/python3.8/threading.py", line 890 in _bootstrap
[2022-08-24 10:39:18] [b57bcf88] [rank=0]
Thread 0x00007fbde3fff700 (most recent call first):
File "/opt/conda/lib/python3.8/threading.py", line 306 in wait
File "/opt/conda/lib/python3.8/queue.py", line 179 in get
File "/opt/conda/lib/python3.8/site-packages/tensorboard/summary/writer/event_file_writer.py", line 227 in run
File "/opt/conda/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/opt/conda/lib/python3.8/threading.py", line 890 in _bootstrap
[2022-08-24 10:39:18] [b57bcf88] [rank=0]
Current thread 0x00007fc2a776e6c0 (most recent call first):
<no Python frame>
```
### Expected behavior
I expect the inference to complete successfully :) | 08-24-2022 14:21:14 | 08-24-2022 14:21:14 | Maybe related: bigscience-workshop/Megatron-Deepspeed#324<|||||>You probably want to open Deepspeed-Inference-related issues at https://github.com/microsoft/DeepSpeed since these aren't related to `transformers` other than that `transformers` are used from inside Deepspeed-Inference.
That project contains custom cuda kernels so it's very difficult to debug and when you file the issue you want to tag @RezaYazdaniAminabadi who wrote them.
Please help him out by providing a minimal reproducible example and he will surely fix the problem.
But I'm closing this Issue since there is nothing we can do here. |
transformers | 18,746 | closed | Add FP32 cast in ConvNext LayerNorm to prevent rounding errors with FP16 input | # What does this PR do?
The `ConvNextLayerNorm` isn't stable if the inputs are in FP16: https://github.com/huggingface/transformers/blob/main/src/transformers/models/convnext/modeling_convnext.py#L111
If the inputs are FP16 then the whole calculation will be done in FP16:
```
u = x.mean(1, keepdim=True)
s = (x - u).pow(2).mean(1, keepdim=True)
x = (x - u) / torch.sqrt(s + self.eps)
```
The mean of squares will suffer rounding errors. This PR just adds a cast to `float` before the calculation and a cast back to the input type afterwards, following the precedent in other models that have their own layernorms, e.g. https://github.com/huggingface/transformers/blob/main/src/transformers/models/deberta/modeling_deberta.py#L263
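A simplified sketch of the kind of change described (not the exact diff; the affine weight/bias application is omitted):
```python
import torch

def stable_channels_first_layer_norm(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Upcast to FP32 so the mean-of-squares does not lose precision in FP16,
    # then cast the normalized result back to the original dtype.
    input_dtype = x.dtype
    x = x.float()
    u = x.mean(1, keepdim=True)
    s = (x - u).pow(2).mean(1, keepdim=True)
    x = (x - u) / torch.sqrt(s + eps)
    return x.to(input_dtype)
```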
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-24-2022 13:21:56 | 08-24-2022 13:21:56 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Pingng @ydshieh here as there seems to be a CI issue<|||||>@jimypbr
Could you follow the instruction here to refresh the CircleCI permission?
https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions- |
transformers | 18,745 | closed | Cannot import ORTModelForSeq2SeqLM | ### System Info
- `transformers` version: 4.21.1
- Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.31
- Python version: 3.9.5
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU? no): 1.12.1+cu102 (False)
- Tensorflow version (GPU? no): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No, only CPU. But it doesn't work even with GPU enabled
- Using distributed or parallel set-up in script?: No
@deepspeed
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. install python3.10
2. install `pip install optimum[onnxruntime]` (or `pip install optimum[deepspeed]`)
3. run any script with the foollowing import:
`from optimum.onnxruntime import ORTModelForSeq2SeqLM`
It fails with error:
```
from optimum.onnxruntime import ORTModelForSeq2SeqLM
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "CodeBERT/venv_3.9_onxx/lib/python3.9/site-packages/optimum/onnxruntime/__init__.py", line 62, in <module>
from .trainer import ORTTrainer
File "CodeBERT/venv_3.9_onxx/lib/python3.9/site-packages/optimum/onnxruntime/trainer.py", line 47, in <module>
from transformers.deepspeed import deepspeed_init, deepspeed_reinit, is_deepspeed_zero3_enabled
ImportError: cannot import name 'deepspeed_reinit' from 'transformers.deepspeed' (CodeBERT/venv_3.9_onxx/lib/python3.9/site-packages/tr
ansformers/deepspeed.py)
```
4. Run
```
model = ORTModelForSeq2SeqLM.from_pretrained(model_id, from_transformers=True)
```
It fails with the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'ORTModelForSeq2SeqLM' is not defined
```
### Expected behavior
Import should be done correctly and it is possible to use it | 08-24-2022 09:04:04 | 08-24-2022 09:04:04 | Hi @lyriccoder! It looks like you are not using the latest release of Optimum. Could you try to upgrade it with `pip install --upgrade optimum[onnxruntime]` and let me know if it works? |
transformers | 18,744 | closed | Update perf_infer_gpu_many.mdx | Fixes doc link typo
related to https://github.com/huggingface/doc-builder/issues/282 | 08-24-2022 08:18:22 | 08-24-2022 08:18:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>link works as expected in https://moon-ci-docs.huggingface.co/docs/transformers/pr_18744/en/perf_infer_gpu_many |
transformers | 18,742 | closed | Bump nbconvert from 6.3.0 to 6.5.1 in /examples/research_projects/lxmert | Bumps [nbconvert](https://github.com/jupyter/nbconvert) from 6.3.0 to 6.5.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/jupyter/nbconvert/releases">nbconvert's releases</a>.</em></p>
<blockquote>
<h2>Release 6.5.1</h2>
<p>No release notes provided.</p>
<h2>6.5.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Drop dependency on testpath. by <a href="https://github.com/anntzer"><code>@anntzer</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1723">jupyter/nbconvert#1723</a></li>
<li>Adopt pre-commit by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1744">jupyter/nbconvert#1744</a></li>
<li>Add pytest settings and handle warnings by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1745">jupyter/nbconvert#1745</a></li>
<li>Apply Autoformatters by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1746">jupyter/nbconvert#1746</a></li>
<li>Add git-blame-ignore-revs by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1748">jupyter/nbconvert#1748</a></li>
<li>Update flake8 config by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1749">jupyter/nbconvert#1749</a></li>
<li>support bleach 5, add packaging and tinycss2 dependencies by <a href="https://github.com/bollwyvl"><code>@bollwyvl</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1755">jupyter/nbconvert#1755</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a href="https://github.com/pre-commit-ci"><code>@pre-commit-ci</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1752">jupyter/nbconvert#1752</a></li>
<li>update cli example by <a href="https://github.com/leahecole"><code>@leahecole</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1753">jupyter/nbconvert#1753</a></li>
<li>Clean up pre-commit by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1757">jupyter/nbconvert#1757</a></li>
<li>Clean up workflows by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1750">jupyter/nbconvert#1750</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/pre-commit-ci"><code>@pre-commit-ci</code></a> made their first contribution in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1752">jupyter/nbconvert#1752</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/jupyter/nbconvert/compare/6.4.5...6.5">https://github.com/jupyter/nbconvert/compare/6.4.5...6.5</a></p>
<h2>6.4.3</h2>
<h2>What's Changed</h2>
<ul>
<li>Add section to <code>customizing</code> showing how to use template inheritance by <a href="https://github.com/stefanv"><code>@stefanv</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1719">jupyter/nbconvert#1719</a></li>
<li>Remove ipython genutils by <a href="https://github.com/rgs258"><code>@rgs258</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1727">jupyter/nbconvert#1727</a></li>
<li>Update changelog for 6.4.3 by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1728">jupyter/nbconvert#1728</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/stefanv"><code>@stefanv</code></a> made their first contribution in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1719">jupyter/nbconvert#1719</a></li>
<li><a href="https://github.com/rgs258"><code>@rgs258</code></a> made their first contribution in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1727">jupyter/nbconvert#1727</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/jupyter/nbconvert/compare/6.4.2...6.4.3">https://github.com/jupyter/nbconvert/compare/6.4.2...6.4.3</a></p>
<h2>6.4.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Optionally speed up validation by <a href="https://github.com/gwincr11"><code>@gwincr11</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1672">jupyter/nbconvert#1672</a></li>
<li>Adding missing div compared to JupyterLab DOM structure by <a href="https://github.com/SylvainCorlay"><code>@SylvainCorlay</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1678">jupyter/nbconvert#1678</a></li>
<li>Allow passing extra args to code highlighter by <a href="https://github.com/yuvipanda"><code>@yuvipanda</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1683">jupyter/nbconvert#1683</a></li>
<li>Prevent page breaks in outputs when printing by <a href="https://github.com/SylvainCorlay"><code>@SylvainCorlay</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1679">jupyter/nbconvert#1679</a></li>
<li>Add collapsers to template by <a href="https://github.com/SylvainCorlay"><code>@SylvainCorlay</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1689">jupyter/nbconvert#1689</a></li>
<li>Fix recent pandoc latex tables by adding calc and array (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1536">#1536</a>, <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1566">#1566</a>) by <a href="https://github.com/cgevans"><code>@cgevans</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1686">jupyter/nbconvert#1686</a></li>
<li>Add an invalid notebook error by <a href="https://github.com/gwincr11"><code>@gwincr11</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1675">jupyter/nbconvert#1675</a></li>
<li>Fix typos in execute.py by <a href="https://github.com/TylerAnderson22"><code>@TylerAnderson22</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1692">jupyter/nbconvert#1692</a></li>
<li>Modernize latex greek math handling (partially fixes <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1673">#1673</a>) by <a href="https://github.com/cgevans"><code>@cgevans</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1687">jupyter/nbconvert#1687</a></li>
<li>Fix use of deprecated API and update test matrix by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1696">jupyter/nbconvert#1696</a></li>
<li>Update nbconvert_library.ipynb by <a href="https://github.com/letterphile"><code>@letterphile</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1695">jupyter/nbconvert#1695</a></li>
<li>Changelog for 6.4 by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1697">jupyter/nbconvert#1697</a></li>
</ul>
<h2>New Contributors</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/jupyter/nbconvert/commit/7471b75a506b2fec776613e50e4f2234b97f3c8e"><code>7471b75</code></a> Release 6.5.1</li>
<li><a href="https://github.com/jupyter/nbconvert/commit/c1943e0e9fd0ad6abd7d8dae380474cca4b04a31"><code>c1943e0</code></a> Fix pre-commit</li>
<li><a href="https://github.com/jupyter/nbconvert/commit/8685e9378086e8d82a0df92505fe386095f929ad"><code>8685e93</code></a> Fix tests</li>
<li><a href="https://github.com/jupyter/nbconvert/commit/0abf2906bc6c7170c8d70bc0df6995d21c5aeaf1"><code>0abf290</code></a> Run black and prettier</li>
<li><a href="https://github.com/jupyter/nbconvert/commit/418d545ae596d95f5ea82d141c68fd1abc99f1a6"><code>418d545</code></a> Run test on 6.x branch</li>
<li><a href="https://github.com/jupyter/nbconvert/commit/bef65d7ab2a469b01e4aa25f44c0f20326f7c7c5"><code>bef65d7</code></a> Convert input to string prior to escape HTML</li>
<li><a href="https://github.com/jupyter/nbconvert/commit/0818628718c4a5d3ddd671fbd4881bf176e7d6e2"><code>0818628</code></a> Check input type before escaping</li>
<li><a href="https://github.com/jupyter/nbconvert/commit/b206470f9ecd71b006a37dd1298dd3d9e3dd46dd"><code>b206470</code></a> GHSL-2021-1017, GHSL-2021-1020, GHSL-2021-1021</li>
<li><a href="https://github.com/jupyter/nbconvert/commit/a03cbb8a8d04d47aefec51e7b1b816045682aed5"><code>a03cbb8</code></a> GHSL-2021-1026, GHSL-2021-1025</li>
<li><a href="https://github.com/jupyter/nbconvert/commit/48fe71eb3335caf4e03166e56e0d16efcfbeaf44"><code>48fe71e</code></a> GHSL-2021-1024</li>
<li>Additional commits viewable in <a href="https://github.com/jupyter/nbconvert/compare/6.3.0...6.5.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 08-23-2022 19:23:09 | 08-23-2022 19:23:09 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18742). All of your documentation changes will be reflected on that endpoint. |
transformers | 18,741 | closed | Bump nbconvert from 6.3.0 to 6.5.1 in /examples/research_projects/visual_bert | Bumps [nbconvert](https://github.com/jupyter/nbconvert) from 6.3.0 to 6.5.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/jupyter/nbconvert/releases">nbconvert's releases</a>.</em></p>
<blockquote>
<h2>Release 6.5.1</h2>
<p>No release notes provided.</p>
<h2>6.5.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Drop dependency on testpath. by <a href="https://github.com/anntzer"><code>@anntzer</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1723">jupyter/nbconvert#1723</a></li>
<li>Adopt pre-commit by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1744">jupyter/nbconvert#1744</a></li>
<li>Add pytest settings and handle warnings by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1745">jupyter/nbconvert#1745</a></li>
<li>Apply Autoformatters by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1746">jupyter/nbconvert#1746</a></li>
<li>Add git-blame-ignore-revs by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1748">jupyter/nbconvert#1748</a></li>
<li>Update flake8 config by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1749">jupyter/nbconvert#1749</a></li>
<li>support bleach 5, add packaging and tinycss2 dependencies by <a href="https://github.com/bollwyvl"><code>@bollwyvl</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1755">jupyter/nbconvert#1755</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a href="https://github.com/pre-commit-ci"><code>@pre-commit-ci</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1752">jupyter/nbconvert#1752</a></li>
<li>update cli example by <a href="https://github.com/leahecole"><code>@leahecole</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1753">jupyter/nbconvert#1753</a></li>
<li>Clean up pre-commit by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1757">jupyter/nbconvert#1757</a></li>
<li>Clean up workflows by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1750">jupyter/nbconvert#1750</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/pre-commit-ci"><code>@pre-commit-ci</code></a> made their first contribution in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1752">jupyter/nbconvert#1752</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/jupyter/nbconvert/compare/6.4.5...6.5">https://github.com/jupyter/nbconvert/compare/6.4.5...6.5</a></p>
<h2>6.4.3</h2>
<h2>What's Changed</h2>
<ul>
<li>Add section to <code>customizing</code> showing how to use template inheritance by <a href="https://github.com/stefanv"><code>@stefanv</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1719">jupyter/nbconvert#1719</a></li>
<li>Remove ipython genutils by <a href="https://github.com/rgs258"><code>@rgs258</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1727">jupyter/nbconvert#1727</a></li>
<li>Update changelog for 6.4.3 by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1728">jupyter/nbconvert#1728</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/stefanv"><code>@stefanv</code></a> made their first contribution in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1719">jupyter/nbconvert#1719</a></li>
<li><a href="https://github.com/rgs258"><code>@rgs258</code></a> made their first contribution in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1727">jupyter/nbconvert#1727</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/jupyter/nbconvert/compare/6.4.2...6.4.3">https://github.com/jupyter/nbconvert/compare/6.4.2...6.4.3</a></p>
<h2>6.4.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Optionally speed up validation by <a href="https://github.com/gwincr11"><code>@gwincr11</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1672">jupyter/nbconvert#1672</a></li>
<li>Adding missing div compared to JupyterLab DOM structure by <a href="https://github.com/SylvainCorlay"><code>@SylvainCorlay</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1678">jupyter/nbconvert#1678</a></li>
<li>Allow passing extra args to code highlighter by <a href="https://github.com/yuvipanda"><code>@yuvipanda</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1683">jupyter/nbconvert#1683</a></li>
<li>Prevent page breaks in outputs when printing by <a href="https://github.com/SylvainCorlay"><code>@SylvainCorlay</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1679">jupyter/nbconvert#1679</a></li>
<li>Add collapsers to template by <a href="https://github.com/SylvainCorlay"><code>@SylvainCorlay</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1689">jupyter/nbconvert#1689</a></li>
<li>Fix recent pandoc latex tables by adding calc and array (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1536">#1536</a>, <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1566">#1566</a>) by <a href="https://github.com/cgevans"><code>@cgevans</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1686">jupyter/nbconvert#1686</a></li>
<li>Add an invalid notebook error by <a href="https://github.com/gwincr11"><code>@gwincr11</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1675">jupyter/nbconvert#1675</a></li>
<li>Fix typos in execute.py by <a href="https://github.com/TylerAnderson22"><code>@TylerAnderson22</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1692">jupyter/nbconvert#1692</a></li>
<li>Modernize latex greek math handling (partially fixes <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1673">#1673</a>) by <a href="https://github.com/cgevans"><code>@cgevans</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1687">jupyter/nbconvert#1687</a></li>
<li>Fix use of deprecated API and update test matrix by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1696">jupyter/nbconvert#1696</a></li>
<li>Update nbconvert_library.ipynb by <a href="https://github.com/letterphile"><code>@letterphile</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1695">jupyter/nbconvert#1695</a></li>
<li>Changelog for 6.4 by <a href="https://github.com/blink1073"><code>@blink1073</code></a> in <a href="https://github-redirect.dependabot.com/jupyter/nbconvert/pull/1697">jupyter/nbconvert#1697</a></li>
</ul>
<h2>New Contributors</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/jupyter/nbconvert/commit/7471b75a506b2fec776613e50e4f2234b97f3c8e"><code>7471b75</code></a> Release 6.5.1</li>
<li><a href="https://github.com/jupyter/nbconvert/commit/c1943e0e9fd0ad6abd7d8dae380474cca4b04a31"><code>c1943e0</code></a> Fix pre-commit</li>
<li><a href="https://github.com/jupyter/nbconvert/commit/8685e9378086e8d82a0df92505fe386095f929ad"><code>8685e93</code></a> Fix tests</li>
<li><a href="https://github.com/jupyter/nbconvert/commit/0abf2906bc6c7170c8d70bc0df6995d21c5aeaf1"><code>0abf290</code></a> Run black and prettier</li>
<li><a href="https://github.com/jupyter/nbconvert/commit/418d545ae596d95f5ea82d141c68fd1abc99f1a6"><code>418d545</code></a> Run test on 6.x branch</li>
<li><a href="https://github.com/jupyter/nbconvert/commit/bef65d7ab2a469b01e4aa25f44c0f20326f7c7c5"><code>bef65d7</code></a> Convert input to string prior to escape HTML</li>
<li><a href="https://github.com/jupyter/nbconvert/commit/0818628718c4a5d3ddd671fbd4881bf176e7d6e2"><code>0818628</code></a> Check input type before escaping</li>
<li><a href="https://github.com/jupyter/nbconvert/commit/b206470f9ecd71b006a37dd1298dd3d9e3dd46dd"><code>b206470</code></a> GHSL-2021-1017, GHSL-2021-1020, GHSL-2021-1021</li>
<li><a href="https://github.com/jupyter/nbconvert/commit/a03cbb8a8d04d47aefec51e7b1b816045682aed5"><code>a03cbb8</code></a> GHSL-2021-1026, GHSL-2021-1025</li>
<li><a href="https://github.com/jupyter/nbconvert/commit/48fe71eb3335caf4e03166e56e0d16efcfbeaf44"><code>48fe71e</code></a> GHSL-2021-1024</li>
<li>Additional commits viewable in <a href="https://github.com/jupyter/nbconvert/compare/6.3.0...6.5.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 08-23-2022 19:22:51 | 08-23-2022 19:22:51 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18741). All of your documentation changes will be reflected on that endpoint. |
transformers | 18,740 | closed | Progress bars shown despite disable_tqdm=True in Trainer | ### System Info
- `transformers` version: 4.19.2
- Platform: CentOS Linux 7.9.2009
- Python version: 3.10.4
- Huggingface_hub version: 0.2.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm running HuggingFace Trainer with `TrainingArguments(disable_tqdm=True, ...)` for fine-tuning the EleutherAI/gpt-j-6B model but there are still progress bars displayed, see screenshot.
```
training_args = TrainingArguments(
disable_tqdm=True,
output_dir='./checkpoints',
save_total_limit=10,
logging_dir='/content/logs',
num_train_epochs=config["N_EPOCHS"],
evaluation_strategy='epoch',
save_strategy='steps',
save_steps=30,
logging_steps=10,
overwrite_output_dir=True,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
gradient_accumulation_steps=4,
eval_accumulation_steps=4,
gradient_checkpointing=True,
max_grad_norm=0.5,
lr_scheduler_type="cosine",
learning_rate=1e-4,
warmup_ratio=0.05,
weight_decay=0.1,
fp16_full_eval=True,
fp16=True,
fp16_opt_level='O1',
report_to=['tensorboard']
)
```

### Expected behavior
No progress bars displayed anymore.
| 08-23-2022 16:37:59 | 08-23-2022 16:37:59 | This question has been answered on the [forum](https://discuss.huggingface.co/t/progress-bars-shown-despite-disable-tqdm-true-in-trainer/22003/2?u=nielsr); therefore closing this issue. |
transformers | 18,739 | closed | fixed docstring typos | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #18721
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
@ydshieh | 08-23-2022 15:16:35 | 08-23-2022 15:16:35 | Hi, @JadeKim042386 . I am sorry that I was wrong - the 35 files changes comes from the fact you fixed more occurrences rather than just the initial ones!
I will review this PR now.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18739). All of your documentation changes will be reflected on that endpoint.<|||||>LGTM, but the change
from
```
kwargs: (*optional*) Remaining dictionary ...
```
to
```
kwargs (*optional*) Remaining dictionary ...
```
should be
```
kwargs (*optional*): Remaining dictionary ...
```
in my opinion. WDYT?<|||||>@ydshieh
Oh, the colon was missing!
I think it would be better to put a colon~
<|||||>@LysandreJik The number of changed files is large, but it just removes the extra `:` from some docstrings.
change
```
encoder_layerdrop: (`float`, *optional*, defaults to 0.0):
```
to
```
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
```
I checked on the documentation page, and indeed there are missing arguments due to this fact. |
transformers | 18,738 | closed | Aggregation strategy does not respect original utterance | ### System Info
- `transformers` version: 4.21.1
- Platform: Linux-5.19.2-arch1-1-x86_64-with-glibc2.36
- Python version: 3.9.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@Narsil
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce :
1. Finetune a model for token classification.
2. Get the results as follow with `AggregationStrategy.SIMPLE`:
Input : `"Hi, can you book me flight ticket ? Also, will there be some snacks available on the plane ? Many thanks."`
```json
[
{"entity_group": "lbl1", "score": 0.0, "word": "Hi", "start": 0, "end": 2},
{"entity_group": "lbl2", "score": 0.0, "word": ",", "start": 2, "end": 3},
{"entity_group": "lbl3", "score": 0.0, "word": "can you book me flight ticket?", "start": 4, "end": 35},
{"entity_group": "lbl2", "score": 0.0, "word": "Also", "start": 36, "end": 40},
{"entity_group": "lbl3", "score": 0.0, "word": ", will there be some snacks available on the plane?", "start": 40, "end": 92},
{"entity_group": "lbl1", "score": 0.0, "word": "Many thanks.", "start": 93, "end": 105}
]
```
Note that the `start` and `end` of the Aggregated Output are good ! But the "word" is not.
3. Do the same thing *without* aggregation and get :
```jsonc
[
//...
{'entity': 'lbl3', 'score': 0.0, 'index': 3, 'word': 'can', 'start': 4, 'end': 7},
{'entity': 'lbl3', 'score': 0.0, 'index': 4, 'word': 'you', 'start': 8, 'end': 11},
{'entity': 'lbl3', 'score': 0.0, 'index': 5, 'word': 'book', 'start': 12, 'end': 16},
{'entity': 'lbl3', 'score': 0.0, 'index': 6, 'word': 'me', 'start': 17, 'end': 19},
{'entity': 'lbl3', 'score': 0.0, 'index': 7, 'word': 'flight', 'start': 20, 'end': 26},
{'entity': 'lbl3', 'score': 0.0, 'index': 8, 'word': 'ticket', 'start': 27, 'end': 33},
{'entity': 'lbl3', 'score': 0.0, 'index': 9, 'word': '?', 'start': 34, 'end': 35},
//...
]
```
This sentence with the index is well extracted as `input[27:33]="ticket"` + `input[33:34] = " "` + `input[34:35]="?"` = `"ticket ?"`
### Expected behavior
Having the returned `word` span matches exactly the original utterance as we could use indexes and text span in the rest of our code.
Thanks in advance,
Have a great day :) | 08-23-2022 14:59:32 | 08-23-2022 14:59:32 | HI @ierezell ,
What you are asking for does make sense, but it's probably not going to happen since it would be a backward incompatibility.
Pinging core maintainers to get advice on this @sgugger @LysandreJik.
`"word"` was an output since the creation of the pipeline and corresponds to the tokens being decoded: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/token_classification.py#L409
This means that during decoding anything can happen to the string and there is no reason for it to be like the original string.
What we could do is use the offsets instead to look up in the original string what was actually there, but that would break existing pipelines relying on this code to actually use `decode`.
- The non-breaking solution would be to add a new key `"original_word"` (or a better name) that does the indexing for you. But that's now 2 different strings in the returned results, and it's not super obvious which one is better (they're going to be the same most of the time).
- Do nothing (current solution).
- Break backward compatibility, that's probably for v5.
There are a lot of quirks in the pipelines like this which are not huge changes, but that we still can't do because it would mean breaking user code.
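To make the offsets idea concrete, here is a minimal sketch of the lookup users can already do today (the default model and the output keys below are just an illustration, not a proposed API change):
```python
from transformers import pipeline

# any token-classification model works the same way; using the default one here
ner = pipeline("token-classification", aggregation_strategy="simple")

text = "Hi, can you book me flight ticket ?"
for group in ner(text):
    # slice the original string with the returned offsets instead of trusting `word`
    original_span = text[group["start"]:group["end"]]
    print(group["entity_group"], repr(original_span))
```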
<|||||>Hello @Narsil,
I understood the underlying problem as I checked the code before opening the issue and saw the "tokenizer problem".
It's not a "huge" problem as anyone can still reconstruct the "original" spans with starts and stops as they're "correct".
However, maybe a little bit more doc on this behavior can help users avoid these misconceptions about the returned data.
For the solutions :
1. Would be best for my use case, but for 90% of the other users it's too much trouble for the team to maintain, and 2. is enough, as start and stop, used correctly, do what I need.
2. In my opinion the best, but as said above may be documenting that to tell the users to use start and stop to get spans instead of innocently relying on the "word" if their application needs to do processing with the original utterance.
3. Would totally solve the issue but for sure let's keep that for later as it's breaking.
I totally understand that you need to keep a stable API, and I understand the underlying problem. I just wanted to flag that innocent people like me can misuse the returned data.
Thanks for the information and help :)
You can close the issue if you want, as 2. is enough and you're aware of the use case.
Have a great day<|||||>Thanks for understanding, I'll let core maintainers conclude on this.
For the documentation, would this be better/enough ?
https://github.com/huggingface/transformers/pull/18763
|
transformers | 18,737 | closed | Random change to test doc building | null | 08-23-2022 14:55:03 | 08-23-2022 14:55:03 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18737). All of your documentation changes will be reflected on that endpoint. |
transformers | 18,736 | closed | Random change to test doc building | null | 08-23-2022 14:49:44 | 08-23-2022 14:49:44 | |
transformers | 18,735 | closed | fixed docstring typos | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #18721
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). | 08-23-2022 14:41:03 | 08-23-2022 14:41:03 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18735). All of your documentation changes will be reflected on that endpoint. |
transformers | 18,734 | closed | Owlvit memory leak fix | # What does this PR do?
- Fixes a memory leak in calls to the `OwlViTForObjectDetection` model. The memory leak was due to making calls to `OwlViTModel`'s `get_image_features` and `get_text_features` methods. Replacing the calls to these methods with a single call to the `forward` method fixed the issue.
- Fixes typos regarding expected input shapes, this is discussed in another [PR](https://github.com/huggingface/transformers/pull/18450).
- Eliminates calling `self.owlvit()` within `OwlViTForObjectDetection.forward()` twice when `output_hidden_states` is set to True.
Memory usage profile after the fix:
```
import requests
from PIL import Image
from tqdm import trange
import torch
from transformers import OwlViTModel, OwlViTForObjectDetection, OwlViTProcessor
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")
processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
text_prompts = ["a photo of a cat", "a photo of a dog"]
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=[text_prompts], images=image, return_tensors="pt")
for i in range(50):
with torch.no_grad():
_ = model(**inputs)
```

Fixes #18629
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
This was discussed in this [issue](https://github.com/huggingface/transformers/issues/18629).
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| 08-23-2022 13:05:29 | 08-23-2022 13:05:29 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> I don't have any problem with the PR but I'd like the review of @amyeroberts, @NielsRogge and/or @sgugger as well if possible.
>
> Do you have insights on why using `get_text_features` and `get_image_features` resulted in memory leaks? It seems to me like the previous usage wasn't wrong, so I wonder if these methods shouldn't be fixed as well to prevent the leakage.
Thank you @LysandreJik! `OwlViTModel.forward()` already includes the code of `OwlViTModel.get_text_features` and `OwlViTModel.get_image_features`. I was able to replicate the issue with a toy example with two PyTorch models, where one model object is set as the other one's attribute. Calling a non-forward function of the attribute model causes memory leaks. <|||||>> Could you post a profile before the change for comparison?
Thank you @amyeroberts! I changed the last hidden state variable names. I'm also including a snapshot of memory usage (without using `torch.no_grad()`) before the fix:

|
transformers | 18,733 | closed | CLI: Don't check the model head when there is no model head | # What does this PR do?
As the title says -- when there is no model head, don't check it :)
When `architectures` is not defined, it is implied that there is no model head in the weights. | 08-23-2022 12:03:26 | 08-23-2022 12:03:26 | cc @ChrisFugl (related to #18678 )<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18733). All of your documentation changes will be reflected on that endpoint.<|||||>OK, so it was empty dict, and we can't take max of the values, right? LGTM, thank you @gante <|||||>nit: I would probably check if the dict is empty or not, instead of check `architectures` is None.
For example, `BertModel` has `pooler_output`.<|||||>@ydshieh that is a great point!
Going to change it<|||||>@ydshieh added your suggestion (plus a check -- a model specified head should have outputs beyond `hidden_...`) |
transformers | 18,732 | closed | Llm qa tests | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adding tests for DQA
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 08-23-2022 10:28:33 | 08-23-2022 10:28:33 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18732). All of your documentation changes will be reflected on that endpoint. |
transformers | 18,731 | closed | Update image segmentation pipeline test | # What does this PR do?
The image segmentation pipeline tests - `tests/pipelines/test_pipelines_image_segmentation.py` - were failing after the merging of #18499 (49e44b216b2559e34e945d5dcdbbe2238859e29b). This was due to the difference in rescaling.
Previously the images were rescaled by `image = image / 255`. In the new commit, a `rescale` method was added, and images were rescaled using `image = image * scale`. This was known to cause small differences in the processed images (see
[PR comment](https://github.com/huggingface/transformers/pull/18499#discussion_r940347575)).
Testing locally, changing the `rescale` method to divide by a scale factor (255) resulted in the tests passing. It was therefore decided after discussion with @ydshieh the test values could be updated, as there was no logic difference between the commits.
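As a quick illustration of why multiplying by a scale factor and dividing by 255 can differ (this snippet is just for intuition and assumes the new code effectively uses `scale = 1 / 255`; it is not taken from the PR):
```python
import numpy as np

pixels = np.arange(256, dtype=np.float32)
divided = pixels / 255
scaled = pixels * np.float32(1 / 255)

# the two results can disagree by a few ULPs for some pixel values
print("values that differ:", np.flatnonzero(divided != scaled))
print("max abs difference:", np.abs(divided - scaled).max())
```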
## Comparing masks
The same code as in the segmentation pipeline was run to compare the output masks.
```
#git_hash = "86d0b26d6c5566506b1be55002654b48d9c19ffe"
git_hash = "49e44b216b2559e34e945d5dcdbbe2238859e29b"
from transformers import pipeline
import numpy as np
model_id = "facebook/detr-resnet-50-panoptic"
image_segmenter = pipeline("image-segmentation", model=model_id)
outputs = image_segmenter("http://images.cocodataset.org/val2017/000000039769.jpg")
for i, o in enumerate(outputs):
img = o['mask']
np.save(f"img_{git_hash}_{i}", img)
img.save(f"img_{git_hash}_{i}.png")
```
Mask generate for output 0 on commit hash 86d0b26d6c5566506b1be55002654b48d9c19ffe (parent commit)

Mask generate for output 0 on commit hash 49e44b216b2559e34e945d5dcdbbe2238859e29b

Checking the pixel values, there was a single pixel difference
```
>>> git_hash_1 = "86d0b26d6c5566506b1be55002654b48d9c19ffe"
>>> git_hash_2 = "49e44b216b2559e34e945d5dcdbbe2238859e29b"
>>>
>>> def max_diff(a, b):
>>> return np.amax(np.abs(a - b))
>>>
>>> for i in range(6):
>>> print(f"\nComparing mask {i}")
>>> img_orig = f"img_{git_hash_1}_{i}"
>>> img_new = f"img_{git_hash_2}_{i}"
>>> arr_orig = np.load(img_orig + ".npy")
>>> arr_new = np.load(img_new + ".npy")
>>> print(f"Max difference: ", max_diff(arr_orig, arr_new))
>>>
>>> m = arr_orig != arr_new
>>> # image shape is 480 x 640
>>> x = np.arange(480 * 640).reshape(480, 640)
>>> print("No. pixels different: ", len(x[m]))
```
Output:
```
Comparing mask 0
Max difference: 255
No. pixels different: 1
Comparing mask 1
Max difference: 0
No. pixels different: 0
Comparing mask 2
Max difference: 0
No. pixels different: 0
Comparing mask 3
Max difference: 0
No. pixels different: 0
Comparing mask 4
Max difference: 0
No. pixels different: 0
Comparing mask 5
Max difference: 0
No. pixels different: 0
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 08-23-2022 10:26:31 | 08-23-2022 10:26:31 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts Let's merge this PR?<|||||>Amy is on vacation ;-) |
transformers | 18,730 | closed | The training loss(logging steps) will drop suddenly after each epoch? Help me plz! Orz | ### System Info
transformers version: 4.17.0
Python version: 3.7.0
torch version: 1.10.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
CLIP (https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text).
I have implemented a Dataset to train, but I have found that after each epoch the training loss drops suddenly. The Dataset overrides three methods (__init__, __getitem__ and __len__) and I couldn't figure out the reason for the above phenomenon.
I think the data is shuffled properly (checked) and the learning_rate drops smoothly (observed).
I would appreciate it if you could spare some time to help me.
The picture is drawn according to the trainer_state.json
<img width="1108" alt="image" src="https://user-images.githubusercontent.com/27990344/186142292-6c3d4a56-9c9e-45b2-a139-1668b995e59b.png">
### Expected behavior
Figure out the reason. | 08-23-2022 09:34:38 | 08-23-2022 09:34:38 | And i also found that the loss(last step) of each epoch may drop suddenly when the size of dataset isn't an integer multiple of batch size. Because the clip_loss is up to batch and the last batch will be duplicate unless "dataloader_drop_last" is set.<|||||>@ydshieh<|||||>For my experience, it could be that the loss is calculated as the average of losses in the steps in each epoch.
For example, suppose there are 3 epochs, each epoch has 1000 steps.
In the 1st epoch, the loss is calculated as the average among the steps that have been done.
When the 2nd epoch starts, it resets the loss. The loss in the 1st step in the 2nd epoch is not the average of all previous steps (in the 1st epoch) and the current step, but just a loss over the single batch.
If this is the case, the loss picture you shared is normal.
Could you check if this is the case in your training? You might have to read the source code (along with your provided training arguments) to verify.
Otherwise, could you share your training arguments, so we can have a look? Thanks.<|||||>Thanks. The Trainer does reset the loss in function "_maybe_log_save_evaluate".
But I still don't understand this phenomenon, because i get smooth loss curve when i train BERT with Trainer and Dataset.
Anyway, I'll figured it out myself, thank a lot!


<|||||>>
Hello, i have read the source code carefully(line 2025 in https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py) but found the loss is reset at every log step rather than the start of every epoch.
Here is my loss curve of training BERT(https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling).
<img width="1064" alt="image" src="https://user-images.githubusercontent.com/27990344/186614028-1c85e2ed-32ab-4bbb-bd78-a16a21401848.png">
I think the loss of BERT and CLIP should be both smooth or both not when using Trainer.
My training arguments are as follows:
<img width="686" alt="image" src="https://user-images.githubusercontent.com/27990344/186614185-6e8c947a-9330-4610-883f-3e09373eb91e.png">
<|||||>Hi @lchwhut Do you use the same training arguments for both BERT and CLIP (other than the training script and datasets of course)? Could you also share the one you used for BERT training?<|||||>HI @ydshieh Almost the same, here is training arguments for BERT:
<img width="632" alt="image" src="https://user-images.githubusercontent.com/27990344/186615950-7422505d-60a2-4618-8a52-80aab17257f0.png">
I also trained CLIP according to readme(https://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/README.md) and obtained unsmooth loss curve(But the loss of BERT is smooth). Could you spare some time to train CLIP and check the loss in trainer_state.json? Thanks<|||||>>
It seems if i train BERT demo(run_mlm.py https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) without any modification(other than dataset) i will get smooth loss curve.
But if i train CLIP demo(run_clip.py https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text) without any modification i will get unsmooth loss curve.<|||||>@lchwhut It is not clear to me what could cause this difference. I see you log on each step while training CLIP. And the training loss is reset once the log is performed.
Do you have the logged values for each step? Could you see at which steps the loss value dropped down? Do those steps correspond to the end of epochs?
I would suggest you to ask the question on the [forum](https://discuss.huggingface.co/) to see if someone have faced the same issue and figured out the cause.<|||||>I found you mentioned earlier:
> And i also found that the loss(last step) of each epoch may drop suddenly when the size of dataset isn't an integer multiple of batch size. Because the clip_loss is up to batch and the last batch will be duplicate unless "dataloader_drop_last" is set.
Could you explain a bit what this means `the last batch will be duplicate`? Usually the `Trainer` class or the `datasets` won't duplicate the examples.
<|||||>>
I specified logging_steps=1 and dataloader_drop_last=True while training CLIP on a small dataset and it's clear to see i have trained 10 epochs according to the loss curve. The loss curve was almost the same when i trained CLIP demo without modification yesterday.
<img width="1035" alt="image" src="https://user-images.githubusercontent.com/27990344/186835607-4ba7fb3d-3de1-4aa2-bce3-b30a4655f8b9.png">
I checked the logged values for each step and found the loss value(dataloader_drop_last=True) dropped significantly at the first step of each epoch, details are below:
<img width="782" alt="image" src="https://user-images.githubusercontent.com/27990344/186835652-936e250f-ae77-4c77-ac76-29367a5c70e4.png"><|||||>>
Forget what i said about "the last batch will be duplicate". Really sorry about that.
Maybe if i specify dataloader_drop_last=False, the number of samples in the last batch will be not equal to batch_size(less than).
In this case, the clip_loss will drop significantly because it is greatly affected by batch size.<|||||>Hi, I think this probably depends on the dataset (or more precisely, the way you use the dataset), rather than an issue in the Trainer class or the training script.
It would be better to post it on the [forum](https://discuss.huggingface.co/).
(Also, as you mentioned you overrides some dataset methods, it would be nice to share what changes you have done there. Otherwise, try to see if you can reproduce the same situation with the original training script `run_clip.py`.)
<|||||>>
Hello, I posted it on the forum(https://discuss.huggingface.co/t/the-training-loss-logging-steps-will-drop-suddenly-after-each-epoch-help-me-plz-orz/22129/2)
Actully, I have trained CLIP without any modification(original training script run_clip.py) and obtained unsmooth loss curve. Can you try to train CLIP following the readme(https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text)? I think you will also reproduce the same situation.
For your convenience, i post my loss curve of CLIP demo(original training script run_clip.py)


I am glad to share my code:
<img width="1148" alt="image" src="https://user-images.githubusercontent.com/27990344/186872852-59a5143e-3bfc-49cd-b53e-d5dbf455025c.png">
<|||||>I've encountered the same thing. My hypothesis is the drop in loss on epoch boundaries is an indication that your model is memorizing the training data. Interesting, it's not necessarily overfitting, as in my case, the validation loss continued to drop, just not as fast as the training loss.
My reasoning is as follows:
Consider training step $i$. The loss will be lower for step $i+1$ iff what the model has learned in step $i$ generalizes and helps it make a better prediction for step $i+1$. This happens early on during training, which is why your loss decreases in the middle of the first 2 epochs.
But, once the model has learned all the generalizable information it can, what it has learned in step $i$ will not help with step $i+1$. When this happens, the loss will plateau. It will still memorize the data, so when it sees the same data point again in the next epoch, the loss will be lower. This is why there is a sharp drop in loss at the start of each epoch.
I also hypothesize that if the loss is increasing in the middle of an epoch that would indicate that the learning from step $i$ hurts the prediction for step $i+1$ and is an indication of overfitting.
I would suggest looking at the validation loss to make sure you aren't overfitting.<|||||>>
Thanks for your sharing.
I still don't understand what you said about `your model is memorizing the training data`(or more precisely, why this cause sharp drop in loss `only` at the start of each epoch).
I think if there is `only` a sharp drop in loss at the start of each epoch, this means the model only memorizes the data at the start of each epoch? But the Trainer shuffles the data every epochs, so i can't figure it out. Could you explain mor about that?
Another phenomenon (`the train loss of BERT demo drop smoothly` and both CLIP and BERT utilize `Trainer` and `load_dataset`) I found made me think that the sharp drop in loss at the start of each epoch `may be related to the calculation of the CLIP_LOSS`.
As you can see above: the loss of last step of each epoch would drop more significantly(sometime could be zero when the bs of last step is 1) when the size of dataset is not an integer multiple of batch_size, because the CLIP_LOSS is up to batch_size.
In my opinion, there could be a variable (or sth else) that changes along with the epochs. That means the variable become smaller when new epoch comes, and then multiply with loss.
I am checking the calculation of the CLIP_LOSS now (line 386: https://github.com/huggingface/transformers/blob/main/src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py)<|||||>I don't think it's related to CLIP as I've seen this happen with multiple models. Here's the training loss with OPT-350M. OPT-1.3B and GPT-neo-125M also had this behavior. The larger the model, the larger the loss drops were. Unfortunately, I no longer have the tensorboard logs for those runs. I've also seen this to a lesser degree with a large MLP and my own pytorch training loop, so I don't think it's an issue with HF transformers.

The model is learning every at step, not just the start of the epoch. Consider having a dataset of: a, b, c and learning from one data point doesn't help improve the predictions for the others (Ex If learning from data point a doesn't help improve the prediction on data points b and c). Let's look at what training a model on this dataset would look like.
For each epoch the data is shuffled, but it will go through all the data before repeating. So for 4 epochs, an example training session could look like this:
| Step | Epoch | Data Point | Number of times the model has seen the data point | Loss |
|------|-------|------------|---------------------------------------------------|------|
| 1 | 1 | c | 0 | 5 |
| 2 | 1 | b | 0 | 5 |
| 3 | 1 | a | 0 | 5 |
| 4 | 2 | b | 1 | 4 |
| 5 | 2 | a | 1 | 4 |
| 6 | 2 | c | 1 | 4 |
| 7 | 3 | c | 2 | 3 |
| 8 | 3 | b | 2 | 3 |
| 9 | 3 | a | 2 | 3 |
| 10 | 4 | a | 3 | 2 |
| 11 | 4 | b | 3 | 2 |
| 12 | 4 | c | 3 | 2 |
Notice that during any single epoch, since one data point doesn't improve the predictions for others, the loss stays the same. But as the model trains, it remembers the correct prediction for each data point, so that when it sees it again (which happens in the next epoch) it will produce a better prediction for that specific data point. So the loss for any given data point is correlated with how many times it has seen that specific data point which increments at the start of each epoch.
As for why it's not happening with your BERT model, perhaps the model is too small, you have sufficient data to prevent memorization, or the dataset doesn't have this property.
I'll point out again that this is my best guess to why this is happening and I haven't done any experimentation to confirm that this is the reason. You could try training by sampling your dataset with replacement so that a single data point could appear multiple times in the same epoch. I would expect that the drop in loss at epoch starts wouldn't be visible, although the memorization would still occur.<|||||>> I don't think it's related to CLIP as I've seen this happen with multiple models. Here's the training loss with OPT-350M. OPT-1.3B and GPT-neo-125M also had this behavior. The larger the model, the larger the loss drops were. Unfortunately, I no longer have the tensorboard logs for those runs. I've also seen this to a lesser degree with a large MLP and my own pytorch training loop, so I don't think it's an issue with HF transformers.
>
> 
>
> The model is learning every at step, not just the start of the epoch. Consider having a dataset of: a, b, c and learning from one data point doesn't help improve the predictions for the others (Ex If learning from data point a doesn't help improve the prediction on data points b and c). Let's look at what training a model on this dataset would look like.
>
> For each epoch the data is shuffled, but it will go through all the data before repeating. So for 4 epochs, an example training session could look like this:
>
> | Step | Epoch | Data Point | Number of times the model has seen the data point | Loss |
> |------|-------|------------|---------------------------------------------------|------|
> | 1 | 1 | c | 0 | 5 |
> | 2 | 1 | b | 0 | 5 |
> | 3 | 1 | a | 0 | 5 |
> | 4 | 2 | b | 1 | 4 |
> | 5 | 2 | a | 1 | 4 |
> | 6 | 2 | c | 1 | 4 |
> | 7 | 3 | c | 2 | 3 |
> | 8 | 3 | b | 2 | 3 |
> | 9 | 3 | a | 2 | 3 |
> | 10 | 4 | a | 3 | 2 |
> | 11 | 4 | b | 3 | 2 |
> | 12 | 4 | c | 3 | 2 |
> Notice that during any single epoch, since one data point doesn't improve the predictions for others, the loss stays the same. But as the model trains, it remembers the correct prediction for each data point, so that when it sees it again (which happens in the next epoch) it will produce a better prediction for that specific data point. So the loss for any given data point is correlated with how many times it has seen that specific data point which increments at the start of each epoch.
>
> As for why it's not happening with your BERT model, perhaps the model is too small, you have sufficient data to prevent memorization, or the dataset doesn't have this property.
>
> I'll point out again that this is my best guess to why this is happening and I haven't done any experimentation to confirm that this is the reason. You could try training by sampling your dataset with replacement so that a single data point could appear multiple times in the same epoch. I would expect that the drop in loss at epoch starts wouldn't be visible, although the memorization would still occur.
@n9Mtq4 Thank you so much for taking the time to explain in detail! If this issue happens with multiple models and your own PyTorch training loop, I think your reasoning is very plausible!!!
The HF BERT demo doesn't reproduce the same situation; I guess it's because the `data collator` HF uses `randomly masks the tokens every epoch`. This means the HF BERT can hardly see the same data in a new epoch.
<img width="873" alt="image" src="https://user-images.githubusercontent.com/27990344/187142506-03f7737b-f926-4cfe-ab28-048d91be3fa8.png">
@ydshieh <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, I also meet the same problem. How did you solve this? Thanks.<|||||>Hi, have you solved this problem? Thanks.<|||||>> Hi, have you solved this problem? Thanks.
Yes. I think the main cause is **data representation**. Specifically speaking, your current representation of your data is not good, so the model cannot learn data distribution correctly. Last time, I successfully found another way to represent my data, then this problem was solved.
How to modify the data representation? I think it's closely related to your own problem.<|||||>I've seen the same issue when training the 7B model with Stanford Alpaca:
<img width="461" alt="image" src="https://user-images.githubusercontent.com/5069709/233764190-84559ae6-ed20-4250-9ab0-3a860b596f27.png">
|
transformers | 18,729 | closed | typo in docstring | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #18721
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). | 08-23-2022 09:22:26 | 08-23-2022 09:22:26 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18729). All of your documentation changes will be reflected on that endpoint.<|||||>@JadeKim042386 Thank you for the fix! I could find there are 15 same occurrences in `transformers`. Could you replace all of them by the correct one? Thank you!<|||||>@JadeKim042386 The commit history is now not clean - there are 35 files changed.
May I know if you use `git merge` or `git rebase` command? What's the exact command used?
Anyway, we need a clean status to review and approve the PR 🙏 .<|||||>@ydshieh sorry, I was looking for the build failed problem. I used `git commit --amend`.
Now, I will make clean status.<|||||>No problem. Thank you :-)<|||||>@ydshieh
There were total 35 files of the same occurrences in `transformers`.
Can I get a review and approval now? 😄<|||||>Hi, we can't merge a PR in such status. Considering the change could be quickly replicated: search -> replace, could you maybe
```bash
git checkout main
git pull upstream main
git checkout -b new_fix # a new branch to work this
```
and open a new PR instead of this one. Sorry to see this happens.<|||||>@ydshieh No problem~ 😎 I quickly open new PR. |
transformers | 18,728 | closed | Fix incorrect comments about atten mask for pytorch backend | # What does this PR do?
I found these comments do not match the actual behavior. In these comments we use `-10000.0` to mask out elements before `softmax` but the actual value used is `torch.finfo(dtype).min`.
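For reference, a tiny snippet showing the dtype minimums that are actually used (nothing model-specific, just `torch.finfo`):
```python
import torch

for dtype in (torch.float32, torch.float16, torch.bfloat16):
    print(dtype, torch.finfo(dtype).min)
# these values are far more negative than the -10000.0 mentioned in the old comments
```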
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-23-2022 08:57:15 | 08-23-2022 08:57:15 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@lygztq Great catch. This inconsistency was introduced in (my) PR #17306, where the docstrings were not updated - my bad.<|||||>I think it's more clear to change from `... the smallest value ...` to `... the dtype's smallest value ...`. WDYT?<|||||>> I think it's more clear to change from `... the smallest value ...` to `... the dtype's smallest value ...`. WDYT?
I agree, this is better. |
transformers | 18,727 | closed | Unpin detectron2 | # What does this PR do?
Unpin `detectron2` that are introduced in #18701 and #18680, as the issue was fixed on its side
https://github.com/facebookresearch/detectron2/commit/3cfda4704cb3b23255f29d683cf05cbaf10daaa4
The test `run_tests_layoutlmv2_and_v3` is OK now:
https://app.circleci.com/pipelines/github/huggingface/transformers/46171/workflows/c82c09a9-f6f7-4e21-8732-6681083960ff/jobs/541901 | 08-23-2022 08:05:52 | 08-23-2022 08:05:52 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18727). All of your documentation changes will be reflected on that endpoint. |
transformers | 18,726 | closed | KeyError 'overflow_to_sample_mapping' when using LayoutXLM with regular Tokenizer + return_overflowing_tokens | ### System Info
(Running from Colab)
- `transformers` version: 4.21.1
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.8.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes, but not relevant here
- Using distributed or parallel set-up in script?: No
### Who can help?
- Introduced when adding [LayoutLMv3](https://github.com/huggingface/transformers/pull/17060) @NielsRogge , Tokenizer related @SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
View this short notebook on Colab: https://colab.research.google.com/drive/1ETpz8UP42r7HjRg4VUkC7L8ou10qY3bQ?usp=sharing
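For readers without Colab access, a rough sketch of what the notebook does (the checkpoint name, words/boxes and kwargs below are my assumptions, not copied from the notebook):
```python
from PIL import Image
from transformers import LayoutLMv2FeatureExtractor, LayoutXLMProcessor, LayoutXLMTokenizer

# slow (non-fast) tokenizer on purpose - this is the combination that fails
feature_extractor = LayoutLMv2FeatureExtractor(apply_ocr=False)
tokenizer = LayoutXLMTokenizer.from_pretrained("microsoft/layoutxlm-base")
processor = LayoutXLMProcessor(feature_extractor, tokenizer)

image = Image.new("RGB", (1000, 1000), color="white")
words = ["hello", "world"]
boxes = [[0, 0, 10, 10], [10, 10, 20, 20]]
word_labels = [0, 1]

processor_kwargs = dict(truncation=True, return_overflowing_tokens=True, return_tensors="pt")
encoded = processor(image, words, boxes=boxes, word_labels=word_labels, **processor_kwargs)
# -> raises KeyError: 'overflow_to_sample_mapping'
```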
### Expected behavior
Expected:
Processor extracts features from and tokenizes the input successfully.
Output:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[<ipython-input-6-2f428d9c1568>](https://localhost:8080/#) in <module>
----> 1 encoded = processor(image, words, boxes=boxes, word_labels=word_labels, **processor_kwargs)
[/usr/local/lib/python3.7/dist-packages/transformers/models/layoutxlm/processing_layoutxlm.py](https://localhost:8080/#) in __call__(self, images, text, text_pair, boxes, word_labels, add_special_tokens, padding, truncation, max_length, stride, pad_to_multiple_of, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, return_tensors, **kwargs)
124 images = features.pop("pixel_values")
125 if return_overflowing_tokens is True:
--> 126 images = self.get_overflowing_images(images, encoded_inputs["overflow_to_sample_mapping"])
127 encoded_inputs["image"] = images
128
[/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in __getitem__(self, item)
234 """
235 if isinstance(item, str):
--> 236 return self.data[item]
237 elif self._encodings is not None:
238 return self._encodings[item]
KeyError: 'overflow_to_sample_mapping'
``` | 08-23-2022 07:02:22 | 08-23-2022 07:02:22 | Please note that this probably also happens with LMv2 if using this combination of non fast processor + return_overflowing_tokens, not sure about LMv3
<|||||>Hi,
We probably need to update `LayoutXLMProcessor` similar to how it was done for LayoutLMv2 in #17092.
Would you be able to open a PR for this?<|||||>Absolutely!<|||||>Just stumbled over exactly the same bug. Thanks for looking into that! 😁 |
transformers | 18,725 | closed | Error Attention Mask Size for Customized Encoder-Decoder Architecture Using Multimodal Encoder | ### System Info
- `transformers` version: 4.21.1
- Platform: Linux-5.4.56.bsk.10-amd64-x86_64-with-debian-10.11
- Python version: 3.7.3
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.10.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
EncoderDecoder @patrickvonplaten and Multimodal @NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I was following the content (https://huggingface.co/blog/warm-starting-encoder-decoder) here to leverage the encoder-decoder model.
```python3
## code below is standard
from transformers import EncoderDecoderModel
import torch
model = EncoderDecoderModel.from_encoder_decoder_pretrained("microsoft/layoutlmv3-base", "roberta-base")
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.bos_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.vocab_size = model.config.encoder.vocab_size
model.config.max_length = 100
model.config.no_repeat_ngram_size = 3
model.config.early_stopping = True
model.config.num_beams = 1
### starting to test
batch_size = 2
input_len = 20
output_len = 30
vocab_size = 100 # just example number
input_ids = torch.randint(0, vocab_size, (batch_size, input_len))
bbox = torch.tensor([[[0,0,0,0] for _ in range(input_len)] for _ in range(batch_size)])
attention_mask = torch.ones((batch_size, input_len))
labels = torch.randint(0, vocab_size, (batch_size, output_len))
pixel_values = torch.randn((batch_size, 3, 224, 224))
model(input_ids=input_ids, bbox=bbox, attention_mask=attention_mask, pixel_values=pixel_values,labels=labels)
```
__Error Message__:
```bash
Traceback (most recent call last):
File "/Users/allanjie/projects/LayoutLMv3-DocVQA/demo.py", line 73, in <module>
model(input_ids=input_ids, bbox=bbox, attention_mask=attention_mask, pixel_values=pixel_values,labels=labels)
File "/Users/allanjie/anaconda3/envs/general/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/allanjie/anaconda3/envs/general/lib/python3.9/site-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py", line 516, in forward
decoder_outputs = self.decoder(
File "/Users/allanjie/anaconda3/envs/general/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/allanjie/anaconda3/envs/general/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 971, in forward
outputs = self.roberta(
File "/Users/allanjie/anaconda3/envs/general/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/allanjie/anaconda3/envs/general/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 848, in forward
encoder_outputs = self.encoder(
File "/Users/allanjie/anaconda3/envs/general/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/allanjie/anaconda3/envs/general/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 524, in forward
layer_outputs = layer_module(
File "/Users/allanjie/anaconda3/envs/general/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/allanjie/anaconda3/envs/general/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 435, in forward
cross_attention_outputs = self.crossattention(
File "/Users/allanjie/anaconda3/envs/general/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/allanjie/anaconda3/envs/general/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 336, in forward
self_outputs = self.self(
File "/Users/allanjie/anaconda3/envs/general/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/allanjie/anaconda3/envs/general/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 259, in forward
attention_scores = attention_scores + attention_mask
RuntimeError: The size of tensor a (217) must match the size of tensor b (20) at non-singleton dimension 3
```
### Expected behavior
Thus, I think the reason for the error is actually the mismatch between the `attention_mask` and the actual attention_mask size.
The attention mask we provide for multimodal models such as LayoutLMv3 usually only covers the textual component:
```
attention_mask: batch_size x input_text_length
```
While the actual encoder_hidden_states are
```
encoder_hidden_states: batch_size x (input_text_length + visual token length) x hidden_size
```
Usually, `visual token length` is around 144.
__My Solution__:
What I did was create a customized `EncoderDecoder` class and modify the `forward` function to something like this:
```python3
visual_attention_mask = torch.ones(
(batch_size, seq_len - attention_mask.size(1)), dtype=torch.long, device=encoder_hidden_states.device
)
updated_attention_mask = torch.cat([attention_mask, visual_attention_mask], dim=1)
# Decode
decoder_outputs = self.decoder(
input_ids=decoder_input_ids,
attention_mask=decoder_attention_mask,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=updated_attention_mask, ## originally, it should be just attention mask.
```
Similarly, in `prepare_inputs_for_generation`:
```python3
def prepare_inputs_for_generation(
self, input_ids, past=None, attention_mask=None, use_cache=None, encoder_outputs=None, **kwargs
):
decoder_inputs = self.decoder.prepare_inputs_for_generation(input_ids, past=past)
decoder_attention_mask = decoder_inputs["attention_mask"] if "attention_mask" in decoder_inputs else None
batch_size, seq_len, _ = encoder_outputs[0].size()
visual_attention_mask = torch.ones(
(batch_size, seq_len - attention_mask.size(1)), dtype=torch.long, device=encoder_outputs[0].device
)
updated_attention_mask = torch.cat([attention_mask, visual_attention_mask], dim=1)
input_dict = {
"attention_mask": updated_attention_mask,
"decoder_attention_mask": decoder_attention_mask,
"decoder_input_ids": decoder_inputs["input_ids"],
"encoder_outputs": encoder_outputs,
"past_key_values": decoder_inputs["past_key_values"],
"use_cache": use_cache,
}
return input_dict
```
__Suggested Modification__:
Because different models have different designs, we cannot assume that the above modification is compatible with every model.
But I do suggest adding an `assertion` statement between the encoder and decoder so users can check this.
```python
assert encoder_outputs[0].size(1) == attention_mask.size(1), (
    "`attention_mask` should have the same sequence length as the encoder hidden states"
)
```
This would better help users understand what's wrong.
| 08-23-2022 03:00:06 | 08-23-2022 03:00:06 | I see what you mean. However, if we add this error message to `modeling_encoder_decoder.py`, then users will update the attention mask of the encoder to make sure it has the same sequence length as the encoder final hidden states.
However, for a model like LayoutLMv3, the `attention_mask` should just contain values for the text sequence length. It will result in an error in case you're providing an attention mask that has the same sequence length as the final hidden states.
Hence, I don't think adding this error message is possible, and it might be better to just do as you did - forking the library and tweaking the code.<|||||>I see, I get your concern. Yeah, maybe that's the only way for now. |
transformers | 18,724 | closed | add timesformer model | ### Feature request
Add [TimeSformer, ICML 2021](https://arxiv.org/abs/2102.05095) model to HuggingFace transformers library.
Source code: https://github.com/facebookresearch/TimeSformer
Weight files: https://github.com/facebookresearch/TimeSformer#model-zoo
Alternative implementation: https://github.com/lucidrains/TimeSformer-pytorch
### Motivation
[TimeSformer, ICML 2021](https://arxiv.org/abs/2102.05095) is a transformer-based video understanding model from META AI, and all recent video classification/action recognition papers compare their proposal with this model.
As a Ph.D. candidate researching video transformers, I want to add TimeSformer support to my [video-transformers](https://github.com/fcakyon/video-transformers) package, but there is no easy way of doing that. It would be super easy if HuggingFace included it.
It would be an excellent addition to the `transformers` library considering it has 400+ citations and is an essential benchmark model.
### Your contribution
As a Ph.D. candidate working on video transformers, I would like to work on adding this architecture to the HuggingFace library.
I want to start a PR but don't know where to start. Should I take the VideoMAE model as a reference since it has similar input/output to TimeSformer?
### Open source status
- [x] The model implementation is available
- [x] The model weights are available | 08-22-2022 19:54:28 | 08-22-2022 19:54:28 | cc @NielsRogge!<|||||>Hi,
Sure, let me know your email address, then we can set up a slack channel for easier communication. <|||||>> Hi,
>
> Sure, let me know your email address, then we can set up a slack channel for easier communication.
I have sent you my email address via a Discord message 👍 |
transformers | 18,723 | closed | Add Trainer to quicktour | This PR makes some edits to the `pipeline` section to focus less on all the tasks (and their definitions) it is capable of. Users probably only need a general representative idea of what it can do, and then they're more interested in diving into how to use the pipeline.
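For reference, a minimal sketch of the kind of pipeline usage the section now centers on (the exact snippet in the docs may differ):
```python
from transformers import pipeline

# The pipeline picks a sensible default model for the task if none is specified.
classifier = pipeline("sentiment-analysis")
print(classifier("We are very happy to show you the 🤗 Transformers library."))
```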
I also added a brief section on the `Trainer` here about the basic parameters it accepts and a small explanation of how to customize the training loop behavior to keep the quick tour short. I think the `Trainer` is pretty important to include since a lot of users use it for training, and we also use it in our finetune guides. | 08-22-2022 18:09:40 | 08-22-2022 18:09:40 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,722 | closed | Fix GLUE MNLI evaluation when using `max_eval_samples` | # What does this PR do?
Even when selecting `max_eval_samples` the script will always also evaluate on the full mismatched validation set. This PR fixes this by sampling the mismatched split the same way the matched split is sampled. | 08-22-2022 15:44:30 | 08-22-2022 15:44:30 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,721 | closed | Typo in TrOCR documentation | decoder_layerdrop was hidden in [TrOCR documentation](https://huggingface.co/docs/transformers/v4.21.1/en/model_doc/trocr#transformers.TrOCRConfig). 😂

I think it should be revised as follows.
- **init_std** (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- **decoder_layerdrop** — (float, optional, defaults to 0.0): The LayerDrop probability for the decoder. See the [LayerDrop paper](see https://arxiv.org/abs/1909.11556) for more details. | 08-22-2022 15:29:07 | 08-22-2022 15:29:07 | I found typo in TrOCRConfig docstring~
https://github.com/huggingface/transformers/blob/84beb8a49bf137a88d1b29ab3a85ba0a3cd097d5/src/transformers/models/trocr/configuration_trocr.py#L70
It is revised as follows (remove the ':'). #18721
```python
decoder_layerdrop (`float`, *optional*, defaults to 0.0):
``` |
transformers | 18,720 | closed | examples/run_summarization_no_trainer: fixed incorrect param to hasattr | # What does this PR do?
Fixes a small bug in `examples/run_summarization_no_trainer.py` which resulted in the script not checkpointing models even if the correct argument was passed from CLI.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
@sgugger @patil-suraj | 08-22-2022 15:08:30 | 08-22-2022 15:08:30 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18720). All of your documentation changes will be reflected on that endpoint.<|||||>Would you like to take a look at this, @muellerzr?<|||||>@rahular this isn't quite the right solution. What this needs to check here is whether or not the checkpointing steps passed were either `"epoch"` or a digit. So what this currently does is ignore if epoch was actually passed. With this knowledge in mind do you want to give it another go? I'll be able to give a more detailed review if you're stuck etc in a few hours 😄 <|||||>Hi @muellerzr, `checkpointing_steps = args.checkpointing_steps` should already take care of the `epoch` case, if I'm not mistaken. Here's the flow in comments.
```
if hasattr(args, "checkpointing_steps"): # check if `args` has `checkpointing_steps`
checkpointing_steps = args.checkpointing_steps # stores whatever was passed as a local variable (including `epoch`)
if args.checkpointing_steps.isdigit(): # check if the passed argument is a digit
checkpointing_steps = int(args.checkpointing_steps) # overwrite the local variable with the digit after casting
else: # local variable is `None` if `args` does not have `checkpointing_steps`
checkpointing_steps = None
```
Let me know what you think.<|||||>@rahular good point! I think in that case something like the following chain may be how we want to do this:
```python
checkpointing_steps = args.checkpointing_steps
if checkpointing_steps is not None and checkpointing_steps.isdigit():
checkpointing_steps = int(checkpointing_steps)
```
Since it's always present (it's part of the arg parser) and we really care if it's not None or if it's a digit. Does this make sense to you as well?
(And the default is `None` anyways)<|||||>@muellerzr makes sense! Have simplified it and pushed.<|||||>Also make sure to run `make style; make quality` from the base directory of your fork, to fix the style failure<|||||>@muellerzr Sure I can take a look at other trainers as well!<|||||>@muellerzr let's close this PR and I will update the other trainers and create a new request. |
transformers | 18,719 | closed | `QuestionAnsweringPipeline`: different behaviors when with `handle_impossible_answer=True` | ### System Info
- `transformers` version: 4.1.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.1
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to use a Question Answering model on a question-context pair which is unanswerable. I would expect `""` (empty string) as the predicted output answer.
I am calling a `QuestionAnsweringPipeline` object on this sample with `handle_impossible_answer=True`, as recommended in the docs.
I tested two different models for Question Answering, `deepset/roberta-base-squad2` and `deepset/minilm-uncased-squad2`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline
question = "What is the capital of Belarus?"
context = "It rained yesterday."
for model_ckpt in ["deepset/minilm-uncased-squad2", "deepset/roberta-base-squad2"]:
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
model = AutoModelForQuestionAnswering.from_pretrained(model_ckpt)
pipe = pipeline("question-answering", model=model, tokenizer=tokenizer)
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
pipe_answer = pipe(question=question, context=context, handle_impossible_answer=True)
print(f"-- Model: {model_ckpt}")
print(f"-- answer: {pipe_answer}")
print(f"TOKEN START END")
print("-----------------------------")
for tok, st_, en_ in zip([tokenizer.decode(i) for i in inputs.input_ids[0]], outputs.start_logits[0], outputs.end_logits[0]):
print(f"{tok:<15s} {st_:+.1f} {en_:+.1f}")
print()
```
The output I obtained is the following.
```text
-- Model: deepset/minilm-uncased-squad2
-- answer: {'score': 0.99960857629776, 'start': 0, 'end': 0, 'answer': ''}
TOKEN START END
-----------------------------
[CLS] +4.7 +4.8
what -5.2 -4.8
is -5.8 -5.8
the -4.5 -5.6
capital -5.6 -5.8
of -5.6 -6.2
belarus -1.6 -2.0
? -6.6 -5.6
[SEP] +4.7 +4.8
it -5.4 -6.2
rained -5.9 -5.6
yesterday -5.5 -4.8
. -6.1 -3.9
[SEP] +4.7 +4.8
-- Model: deepset/roberta-base-squad2
-- answer: {'score': 0.37058913707733154, 'start': 0, 'end': 20, 'answer': 'It rained yesterday.'}
TOKEN START END
-----------------------------
<s> +1.1 +1.5
What -7.1 -7.4
is -9.1 -8.0
the -8.3 -8.8
capital -7.9 -6.5
of -8.9 -8.2
Belarus -6.3 -5.0
? -9.4 -7.7
</s> -9.3 -8.3
</s> -9.4 -8.3
It -6.8 -8.6
r -7.5 -8.6
ained -8.5 -6.9
yesterday -7.9 -6.7
. -7.6 -3.6
</s> -9.0 -8.5
```
### Expected behavior
Both models give very low scores to each token in the context for both `start` and `end` of the span, and since I passed `handle_impossible_answer=True` I would therefore expect the predicted output answer to be `""` (unanswerable) in both cases.
However (see output above) this is only true for `deepset/minilm-uncased-squad2`, while `deepset/roberta-base-squad2`surprisingly gives a non-empty answer. | 08-22-2022 14:44:03 | 08-22-2022 14:44:03 | Note: the bug is referred to `transformers 4.1.1`, but on the latest release `4.21.1` (thanks @EmileDel for noticing this!) the behavior seems consistent (see below). I don't know what has changed, but maybe this Issue can be closed?
```text
-- Model: deepset/minilm-uncased-squad2
-- answer: {'score': 0.9996085166931152, 'start': 0, 'end': 0, 'answer': ''}
TOKEN START END
-----------------------------
[CLS] +4.7 +4.8
what -5.2 -4.8
is -5.8 -5.8
the -4.5 -5.6
capital -5.6 -5.8
of -5.6 -6.2
belarus -1.6 -2.0
? -6.6 -5.6
[SEP] +4.7 +4.8
it -5.4 -6.2
rained -5.9 -5.6
yesterday -5.5 -4.8
. -6.1 -3.9
[SEP] +4.7 +4.8
-- Model: deepset/roberta-base-squad2
-- answer: {'score': 0.9927088618278503, 'start': 0, 'end': 0, 'answer': ''}
TOKEN START END
-----------------------------
<s> +1.1 +1.5
What -7.1 -7.4
is -9.1 -8.0
the -8.3 -8.8
capital -7.9 -6.5
of -8.9 -8.2
Belarus -6.3 -5.0
? -9.4 -7.7
</s> -9.3 -8.3
</s> -9.4 -8.3
It -6.8 -8.6
r -7.5 -8.6
ained -8.5 -6.9
yesterday -7.9 -6.7
. -7.6 -3.6
</s> -9.0 -8.5
``` <|||||>cc @Narsil <|||||>Hi @FrancescoCasalegno ,
I can't find the PR associated with this, but yes, early on the pipelines were roughly hardcoded and sometimes wouldn't work on some models because of differences in tokenizers for instance. I won't say everything is fixed, but we caught a bunch of these, and now behavior should be more consistent across models so hopefully you can easily swap out models and get a consistent behavior even when the underlying models are quite different.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,718 | closed | Fix typo: s/pre_layrnorm/pre_layernorm | # What does this PR do?
Fixes typo in models/clip/modeling_clip.py
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-22-2022 14:17:16 | 08-22-2022 14:17:16 | _The documentation is not available anymore as the PR was closed or merged._<|||||>plz help :cry:<|||||>Hi @Silviase - thanks for opening this PR and contributing to improving the codebase 💪
Unfortunately, we can't update the layers like this, as it changes their weight names. This means that previous checkpoints cannot be loaded into the model. For example, if I checkout your branch and run
```
from transformers import AutoModel
model_checkpoint = "openai/clip-vit-base-patch32"
model = AutoModel.from_pretrained(model_checkpoint)
```
I get the following output:
```
Some weights of the model checkpoint at openai/clip-vit-base-patch32 were not used when initializing CLIPModel: ['vision_model.pre_layrnorm.bias', 'vision_model.pre_layrnorm.weight']
- This IS expected if you are initializing CLIPModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of CLIPModel were not initialized from the model checkpoint at openai/clip-vit-base-patch32 and are newly initialized: ['vision_model.pre_layernorm.bias', 'vision_model.pre_layernorm.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
@NielsRogge - do you have any suggestions?<|||||>Hi,
Welcome to the world of software engineering! We indeed can't change this due to backwards compatibility, as it would break any existing CLIP checkpoints.
cc @patil-suraj <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing this PR as it's not possible to fix this. |
transformers | 18,717 | closed | fix `pipeline_tutorial.mdx` doctest | # What does this PR do?
Fix [doctest failure](https://github.com/huggingface/transformers/runs/7942633458?check_suite_focus=true) due to the incorrect expected values.
(It's probably just because the previous values were obtained in a different environment or on different hardware) | 08-22-2022 13:09:44 | 08-22-2022 13:09:44 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18717). All of your documentation changes will be reflected on that endpoint. |
transformers | 18,716 | closed | [LayoutLM] Add clarification to docs | # What does this PR do?
This PR clarifies that PDFs must be converted to Pillow images for LayoutLM.
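For context, a minimal sketch of the conversion being clarified (assuming the third-party `pdf2image` package, which also requires poppler; the file name is just a placeholder):
```python
from pdf2image import convert_from_path

# Each PDF page becomes one PIL.Image; convert to RGB before passing it to the processor.
pages = convert_from_path("document.pdf")
image = pages[0].convert("RGB")
```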
Fixes #18113 | 08-22-2022 13:04:57 | 08-22-2022 13:04:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,715 | closed | Use bfloat16 data type for embeddings and masks when using with PyTorch amp for bfloat16 data type | ### Motivation
Add an attribute `use_torch_bfloat16_embeddings` in `PretrainedConfig` to indicate whether the bfloat16 data type is used for embeddings and masks, and convert the data type of embeddings and masks to bfloat16 accordingly.
This will reduce the number of data type conversions between float and bfloat16 when running models with `torch.cpu.amp.autocast(dtype=torch.bfloat16)` and improve performance with little accuracy regression. This is because models contain many residual modules, which result in data type promotion by binary operations implemented by TensorIterator in PyTorch.
For example: `out = tensor1 + tensor2`
If the data type of `tensor1` is float and `tensor2` is bfloat16, PyTorch will convert `tensor2` to float and produce a float output. When running models with amp for bfloat16, this conversion results in additional `to` operations, which reduces performance.
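A minimal, standalone sketch of the promotion behavior described above (plain PyTorch, no transformers code involved):
```python
import torch

a = torch.randn(4, 4, dtype=torch.float32)   # e.g. embeddings kept in float
b = torch.randn(4, 4, dtype=torch.bfloat16)  # e.g. an activation produced under autocast

# Mixed-dtype addition promotes to float32, which is where the extra `to` casts come from.
print((a + b).dtype)                         # torch.float32

# Keeping both operands in bfloat16 avoids the implicit cast.
print((a.to(torch.bfloat16) + b).dtype)      # torch.bfloat16
```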
### Testing
- Number of `to` operations
Model | wo/ bf16 embedding and masks| w/ bf16 embedding and masks
-- | -- | --
albert | 22 | 11
bert | 49 | 10
bart | 65 | 38
gpt2 | 56 | 29
distilbert | 40 | 19
roberta | 54 | 15
- Accuracy testing
Model | fp32 | amp bf16 | amp bf16 w/ bf16 embedding
-- | -- | -- | --
causal-language-modeling+xlm-roberta-base (loss) | 26.3895 | 26.5273 | 26.442
masked-language-modeling+bert-base-cased | 0.4819 | 0.4818 | 0.4819
masked-language-modeling+distilbert-base-cased | 0.3143 | 0.3158 | 0.3152
multiple-choice+distilbert-base-cased | 0.246 | 0.2461 | 0.2454
multiple-choice+google-electra-base-discriminator | 0.1193 | 0.1194 | 0.1201
text-classification+google-electra-base-generator | 0.6901 | 0.6838 | 0.6838
token-classification+google-electra-base-generator | 0.0414 | 0.0411 | 0.041
token-classification+gpt2 | 0.0379 | 0.0379 | 0.0379
albert | 0.453431373 | 0.428921569 | 0.446078431
distilbert | 0.681372549 | 0.681372549 | 0.681372549
roberta | 0.683823529 | 0.683823529 | 0.683823529
xlm-roberta | 0.637254902 | 0.637254902 | 0.639705882
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-22-2022 12:55:42 | 08-22-2022 12:55:42 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18715). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,714 | closed | [DETR] Add num_channels attribute | # What does this PR do?
This PR adds a num_channels attribute to DetrConfig, making it possible to use the model on greyscale images (i.e. num_channels = 1).
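For illustration, a minimal sketch of what this enables (untested; a model built this way is randomly initialized, not pretrained):
```python
import torch
from transformers import DetrConfig, DetrForObjectDetection

# A DETR model whose backbone accepts single-channel (greyscale) images.
config = DetrConfig(num_channels=1)
model = DetrForObjectDetection(config)

pixel_values = torch.randn(1, 1, 800, 800)  # (batch, channels, height, width)
outputs = model(pixel_values=pixel_values)
```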
Fixes #14875 | 08-22-2022 12:48:34 | 08-22-2022 12:48:34 | _The documentation is not available anymore as the PR was closed or merged._<|||||>In my previous experience with DETR I have also faced issues with the feature extractor; does it support 1-channel input now? (I had to do something like that: https://github.com/huggingface/transformers/pull/14933/files#diff-826e0e0303aa4a12e527884e4dc548fe8debb137905260e42ec8b4331c9c2c63R594) @NielsRogge
|
transformers | 18,713 | closed | Add Italian translation for `add_new_model.mdx` | # What does this PR do?
Italian translation of `add_new_model.mdx`
Related to issue: #17459
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/issues/17459
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@mfumanelli @nickprock | 08-22-2022 12:07:56 | 08-22-2022 12:07:56 | Hi @Steboss89, it is a very long document, well done! At the moment I have read up to line 339.
It's ok but:
row 75: esssere --> essere
row 91: Similmente al modello --> Analogamente al modello
And the phrase "I files del modello devono esssere il più self-contained possibile cosicché quando leggi il codice di uno specifico modello, idealmente,
avresti da vedere solo il corrispettivo file `modeling_....py`." is unclear.<|||||>Hey @Steboss89! Thank you for your translation! And thank you, @nickprock for your kind review 🙏.
@Steboss89, additionally to what @nickprock mentioned, to pass the quality tests, make sure you run `make quality` on the main page of your directory. [Here are the instructions](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md).<|||||>It's actually `make style` ;-) <|||||>Hi @Steboss89.
row 426: arhcitettura --> architettura
row 446: che `init()` di tutte le componenti funziona correttamente. --> che `init()` di tutte le componenti funzioni correttamente.
row 454: Di solito é abbastanza fare una copia di uno script gia esistente e adattarlo al vostro caso. --> Di solito basta fare una copia di uno script gia esistente e adattarlo al vostro caso.
row 628: dobbiamo assicurarci che il vostro lavoro sia correttamente testato --> bisogna assicurarsi che il vostro lavoro sia correttamente testato
row 648: `BrandNewBertModelTester`/`BrandNewBertModelTest`
row 714: ttuto --> tutto
row 726: A volte capita che mancano delle informazioni --> A volte capita che manchino delle informazioni<|||||>Sorry for not coming back with corrections and tests earlier
I'll get through the quality checks as well as the corrections suggested
Thanks a million @nickprock @sgugger and @omarespejel
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @nickprock sorry again for such a long delay. Here is the corrected version of the documentation.
@sgugger and @omarespejel I can see from the quality check that these files:
```
['src/transformers/benchmark/benchmark_utils.py', 'src/transformers/testing_utils.py', 'src/transformers/trainer_utils.py', 'src/transformers/modelcard.py', 'src/transformers/utils/notebook.py', 'src/transformers/models/auto/auto_factory.py', 'src/transformers/models/transfo_xl/modeling_transfo_xl.py', 'src/transformers/models/xlm/tokenization_xlm.py', 'src/transformers/models/tapex/tokenization_tapex.py', 'src/transformers/models/perceiver/modeling_perceiver.py', 'src/transformers/models/flaubert/tokenization_flaubert.py', 'src/transformers/models/fsmt/tokenization_fsmt.py', 'src/transformers/generation_utils.py', 'src/transformers/generation_flax_utils.py', 'src/transformers/generation_tf_utils.py', 'src/transformers/trainer_pt_utils.py']
```
need restyling. How can I fix this? Thanks <|||||>Your PR is a bit old now, so you will first need to rebase your branch on main (or start a fresh PR, whatever is easiest).<|||||>@sgugger Thanks very much
I'll rebase my branch with current main :) <|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,712 | closed | TFSegformer fail to convert to onnx | Thanks for this repo !
### Error Description
1) train a tensorflow segformer model
2) save the checkpoint
3) try to convert it to onnx via
`python -m transformers.onnx --model CHECKPOINT_PATH OUT_PATH`
### Error:
```
2022-08-22 12:46:58.488185: I tensorflow/stream_executor/cuda/cuda_dnn.cc:368] Loaded cuDNN version 8400
2022-08-22 12:47:00.216906: E tensorflow/stream_executor/cuda/cuda_blas.cc:197] failed to set new cublas math mode: CUBLAS_STATUS_INVALID_VALUE
2022-08-22 12:47:00.216958: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at matmul_op_impl.h:442 : INTERNAL: Failed initializing math mode
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.8/dist-packages/transformers/onnx/__main__.py", line 107, in <module>
main()
File "/usr/local/lib/python3.8/dist-packages/transformers/onnx/__main__.py", line 73, in main
model = FeaturesManager.get_model_from_feature(
File "/usr/local/lib/python3.8/dist-packages/transformers/onnx/features.py", line 579, in get_model_from_feature
model = model_class.from_pretrained(model, from_tf=True, cache_dir=cache_dir)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/auto_factory.py", line 463, in from_pretrained
return model_class.from_pretrained(
File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 2194, in from_pretrained
model, loading_info = load_tf2_checkpoint_in_pytorch_model(
File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_tf_pytorch_utils.py", line 343, in load_tf2_checkpoint_in_pytorch_model
tf_model(tf_inputs, training=False) # Make sure model is built
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_tf_utils.py", line 407, in run_call_with_unpacked_inputs
return func(self, **unpacked_inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/segformer/modeling_tf_segformer.py", line 630, in call
outputs = self.segformer(
File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_tf_utils.py", line 407, in run_call_with_unpacked_inputs
return func(self, **unpacked_inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/segformer/modeling_tf_segformer.py", line 488, in call
encoder_outputs = self.encoder(
File "/usr/local/lib/python3.8/dist-packages/transformers/models/segformer/modeling_tf_segformer.py", line 422, in call
layer_outputs = blk(
File "/usr/local/lib/python3.8/dist-packages/transformers/models/segformer/modeling_tf_segformer.py", line 324, in call
self_attention_outputs = self.attention(
File "/usr/local/lib/python3.8/dist-packages/transformers/models/segformer/modeling_tf_segformer.py", line 232, in call
self_outputs = self.self(hidden_states, height, width, output_attentions)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/segformer/modeling_tf_segformer.py", line 161, in call
query_layer = self.transpose_for_scores(self.query(hidden_states))
tensorflow.python.framework.errors_impl.InternalError: Exception encountered when calling layer "query" (type Dense).
Failed initializing math mode [Op:MatMul]
Call arguments received:
• inputs=tf.Tensor(shape=(3, 16384, 32), dtype=float32)
```
### System Info
Ubuntu 20.04, Python 3.9, TF 2.9.1, Nvidia Titan
Who can help?
@sayakpaul @NielsRogge | 08-22-2022 10:50:58 | 08-22-2022 10:50:58 | Not sure what's missing, but here's a notebook that shows the ONNX conversion (using `tf2onnx`): https://github.com/deep-diver/segformer-tf-transformers/blob/main/notebooks/TFSegFormer_ONNX.ipynb. <|||||>I can confirm this approach is working, thanks for the quick answer |
transformers | 18,711 | closed | Removing warning of model type for `microsoft/tapex-base-finetuned-wtq` and friends. | # What does this PR do?
Fixes warning mentioned in https://github.com/huggingface/transformers/issues/18665
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 08-22-2022 09:10:02 | 08-22-2022 09:10:02 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18711). All of your documentation changes will be reflected on that endpoint. |
transformers | 18,710 | closed | Use bfloat16 data type for embeddings and masks when using with PyTorch amp for bfloat16 data type |
Add an attribute `use_torch_bfloat16_embeddings` in `PretrainedConfig` to indicate whether the bfloat16 data type should be used for embeddings and masks, and convert the data type of embeddings and masks to bfloat16 when running with the PyTorch bfloat16 data type.
This will reduce the number of data type conversions between float and bfloat16 when running models with `autocast(dtype=torch.bfloat16)` and improve performance with almost no accuracy regression. This is because models contain many residual modules, which result in data type promotion by binary operations implemented by TensorIterator in PyTorch.
The number of `to` calls before:
The number of `to` calls with this PR:
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-22-2022 08:12:21 | 08-22-2022 08:12:21 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18710). All of your documentation changes will be reflected on that endpoint. |
transformers | 18,709 | closed | The phenomenon of being suddenly killed during RAG evaluation processing | ### System Info
Platform : Ubuntu 18.04.5
Python version : 3.8.8
pytorch-lightning version : 1.3.1
torch version : 1.12.1
### Who can help?
@patrickvonplaten Hello, an error occurs during the RAG evaluation; please help me.
The process is suddenly killed without any error message.

The script was executed as follows.
python examples/research_projects/rag/eval_rag.py \
--model_name_or_path facebook/rag-sequence-base \
--model_type rag_sequence \
--evaluation_set output/biencoder-nq-dev.questions \
--gold_data_path output/biencoder-nq-dev.pages \
--predictions_path output/retrieval_preds.tsv \
--eval_mode retrieval \
--k 1
Looking at the code, it occurs when getting the model.

### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
run evaluation script after fine tuning
python examples/research_projects/rag/eval_rag.py \
--model_name_or_path facebook/rag-sequence-base \
--model_type rag_sequence \
--evaluation_set output/biencoder-nq-dev.questions \
--gold_data_path output/biencoder-nq-dev.pages \
--predictions_path output/retrieval_preds.tsv \
--eval_mode retrieval \
--k 1
### Expected behavior
Evaluation should be executed normally and the result should be saved as a tsv file. | 08-22-2022 06:31:18 | 08-22-2022 06:31:18 | This is very likely an out of memory error, unfortunately<|||||>Dear @LysandreJik ,
Thank you for your comment.
I am trying again, starting from fine-tuning.
When I run it with ray, I get the following error:

The script was run as below.
python examples/research_projects/rag/finetune_rag.py \
--data_dir '/home/junhokim/workspace/transformers/examples/research_projects/rag/finetune_data' \
--output_dir './finetune_rag_token' \
--model_name_or_path facebook/rag-token-base \
--model_type rag_token \
--do_train \
--do_predict \
--train_batch_size 1 \
--accumulate_grad_batches 1 \
--num_retrieval_workers 4 \
--max_epochs 3 \
--distributed_retriever ray \
--fp16 \
--gpus 2
cuda version : 11.6
torch version : 1.12.1+cu116
pytorch-lightning version : 1.5.10
torchmetrics version : 0.9.3
transformers version : 4.21.1
How can I fix the error?
Best Regards,
Junho Kim
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Any update on this @Juno82?
Getting similar behaviour. <|||||>Same error: the process is killed after downloading the model successfully.
sagemaker-user@studio$ python main.py \
> --model hf-causal \
> --model_args pretrained=EleutherAI/gpt-j-6B \
> --tasks lambada_*,hellaswag \
> --device cuda:0
Selected Tasks: ['hellaswag', 'lambada_openai', 'lambada_openai_cloze', 'lambada_openai_mt_de', 'lambada_openai_mt_en', 'lambada_openai_mt_es', 'lambada_openai_mt_fr', 'lambada_openai_mt_it', 'lambada_standard', 'lambada_standard_cloze']
Device not specified
Cuda Available? False
Downloading (…)lve/main/config.json: 100%|█████████████████████████████████████████████████████████| 930/930 [00:00<00:00, 61.1kB/s]
Downloading pytorch_model.bin: 100%|███████████████████████████████████████████████████████████| 24.2G/24.2G [05:56<00:00, 68.0MB/s]
**Killed**
Any suggestions for using the LM-Evaluation-harness framework? |
transformers | 18,708 | closed | Adding a documentation to save the best checkpoint during the training in summarization example project | ### Feature request
I am working with the [latest summarization example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization). But after the training loop ends, I can't find an easy way to tell which saved checkpoint is the best.
### Motivation
It would be easy to keep a separate checkpoint for the best one.
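A rough sketch of the kind of `trainer_arguments` setting I have in mind (assuming the example's `Seq2SeqTrainingArguments`; the metric name is just an example and must match a reported eval metric):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="out",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,     # reload the best checkpoint when training ends
    metric_for_best_model="rouge1",  # example metric name
    greater_is_better=True,
    save_total_limit=2,              # keep only a few checkpoints on disk
)
```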
### Your contribution
If there's already a way, please let me know, so that we can document it. Else I can contribute to this feature. I think we can use trainer_arguments. | 08-21-2022 21:53:46 | 08-21-2022 21:53:46 | Hi @shamanez 👋 As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,707 | closed | Make TFNoRepeatNGramLogitsProcessor XLA compatible | ### Feature request
make TFNoRepeatNGramLogitsProcessor XLA compatible
### Motivation
We would like to use the BART model for summarization tasks. We are excited about the recent announcement of fast text generation with tensorflow; however, we realized that the [TFNoRepeatNGramLogitsProcessor](https://github.com/huggingface/transformers/blob/fd9aa82b07d9b844a21f18f1622de5ca104f25bd/src/transformers/generation_tf_logits_process.py#L385) is not XLA compatible therefore we cannot compile the generate method into a TF graph yet, which makes it hard to meet our throughput requirement.
### Your contribution
@gante Since you mentioned challenges in making this compatible with XLA, do you have an estimate on how long it will take for implementation? I can chat with the team to see how we would like to proceed, as this is a hard blocker for us.
If we are open to not use the TFNoRepeatNGramLogitsProcessor, could you suggest some hack to avoid compiling this part? | 08-21-2022 18:09:30 | 08-21-2022 18:09:30 | Hi @mizhazha 👋
Yeah, it's unfortunate that we weren't able to implement it with XLA. Sadly, using XLA is very binary -- either the entire function (`.generate()`) and all its inner calls can be compiled, or it is not XLA-compatible. If it is not a hard requirement for your application, the easiest solution is not to use it for now :D (i.e. don't set the `no_repeat_ngram_size` argument)
I've spent some time a few months ago trying to make it XLA-compatible, and I've had a few ideas to try to reimplement the processor with XLA capabilities. I will give it a go this week, maybe it can be solved quickly.
If I give no reply until the following Monday, feel free to ping me to brainstorm possible alternatives.<|||||>Hi @mizhazha 👋
I have attempted to explore alternatives in [this PR](https://github.com/huggingface/transformers/pull/18769). It seems like the simple alternatives are compute and memory heavy, so not worth merging (running without XLA is faster). I still think that efficient data structures might enable an efficient implementation, but we don't have the resources to explore those alternatives.
If your team still wants to use XLA and `no_repeat_ngram_size`, a viable alternative can be to apply XLA to the model forward pass only (as opposed to the whole `generate` method):
```python
import time
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM
# 1. Load model and tokenizer
model_name = "distilgpt2"
# remember: decoder-only models need left-padding
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="left")
tokenizer.pad_token = tokenizer.eos_token
model = TFAutoModelForCausalLM.from_pretrained(model_name)
# 2. Prepare tokenization and generation arguments -- don't forget padding to avoid retracing!
tokenization_kwargs = {"pad_to_multiple_of": 32, "padding": True, "return_tensors": "tf"}
generation_kwargs = {"max_new_tokens": 32, "no_repeat_ngram_size": 5}
# 3. Replace the forward pass by the XLA equivalent
model.call = tf.function(model.call, jit_compile=True)
# 4. Generate! Remember -- the first call will be slow, but all subsequent calls will be fast if you've done things right.
input_prompts = [f"The best thing about {country} is" for country in ["Spain", "Japan", "Angola"]]
for input_prompt in input_prompts:
tokenized_inputs = tokenizer([input_prompt], **tokenization_kwargs)
start = time.time_ns()
generated_text = model.generate(**tokenized_inputs, **generation_kwargs)
end = time.time_ns()
decoded_text = tokenizer.decode(generated_text[0], skip_special_tokens=True)
print(f"Original prompt -- {input_prompt}")
print(f"Generated -- {decoded_text}")
print(f"Execution time -- {(end - start) / 1e6:.1f} ms\n")
```<|||||>Hi @gante, thank you so much for trying to make this operator XLA compatible! Given that the generate method is currently the time bottleneck, applying XLA to the forward pass only won't help much. We will see if we can figure out a solution for an efficient implementation.
I hope to get clarification on this statement 'If it is not a hard requirement for your application, the easiest solution is not to use it for now :D (i.e. don't set the no_repeat_ngram_size argument)' --> Even if we don't set this no_repeat_ngram_size argument, all the inner calls still need to be compiled, so we still cannot compile the generate method, correct? I tried setting the argument to None and it still cannot compile. <|||||>Hey @mizhazha 👋
If `no_repeat_ngram_size` is not set (or if it is set to `None`), then it will not be part of the code seen by the compiler, and XLA compilation should work.
What is the error you're seeing?<|||||>@gante Here is what I have run:
````
from transformers import BartTokenizerFast, TFBartForConditionalGeneration
tokenizer = BartTokenizerFast.from_pretrained('facebook/bart-base')
xla_generate = tf.function(TFBartForConditionalGeneration.from_pretrained('facebook/bart-base').generate, jit_compile=True)
tokenized_input = tokenizer("I have a white shiba", return_tensors="tf")
xla_generate(**tokenized_input)
````
Error message:
````
File "/home/jmi/miniconda3/envs/hf_tf29/lib/python3.7/site-packages/transformers/generation_tf_logits_process.py", line 427, in {}call{} * raise NotImplementedError("TFNoRepeatNGramLogitsProcessor is only implemented for eager execution.") NotImplementedError: TFNoRepeatNGramLogitsProcessor is only implemented for eager execution.
````<|||||>@mizhazha ah, that is because `generate` defaults are inherited from the model config file, and the model you are using sets this option ([here](https://huggingface.co/facebook/bart-base/blob/main/config.json#L44)). This inheritance is poorly documented at the moment, and we are working to improve it (https://github.com/huggingface/transformers/issues/18655).
Try setting `no_repeat_ngram_size=0` :) (`no_repeat_ngram_size=None` or not passing the argument causes the default from the model config to be used)<|||||>@gante Ah that works!! Thanks for the pointers and the help to unblock!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @gante, I am still facing the below XLA-compatibility error for the [hugging-face summarization code](https://github.com/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb).
```
_logits_process.py", line 428, in __call__ *
raise NotImplementedError("TFNoRepeatNGramLogitsProcessor is only implemented for eager execution.")
NotImplementedError: TFNoRepeatNGramLogitsProcessor is only implemented for eager execution.
```
I have an Apple M1 Pro and I have installed the latest transformers git repo (`pip install --upgrade git+https://github.com/huggingface/transformers.git`). I read your above comments but don't know where to add this in the summarization code.<|||||>Hey @kirtinikam 👋 This feature is not available on TF XLA and it's not in our plans to implement it. The solution is to turn it off, i.e. to call `.generate()` (or the pipeline) with `no_repeat_ngram_size=0`<|||||>Hi @gante, I followed your suggestion and passed `no_repeat_ngram_size=0` in the `generate()` method. But I am still seeing XLA compilation errors on the Apple M1 Pro system.
```
File "/Users/summarizer/venv/lib/python3.9/site-packages/tensorflow/compiler/tf2xla/ops/gen_xla_ops.py", line 1040, in xla_dynamic_slice
return xla_dynamic_slice_eager_fallback(
File "/Users/summarizer/venv/lib/python3.9/site-packages/tensorflow/compiler/tf2xla/ops/gen_xla_ops.py", line 1092, in xla_dynamic_slice_eager_fallback
_result = _execute.execute(b"XlaDynamicSlice", 1, inputs=_inputs_flat,
tensorflow.python.framework.errors_impl.UnimplementedError: Exception encountered when calling layer "SelfAttention" (type TFT5Attention).
Could not find compiler for platform METAL: NOT_FOUND: could not find registered compiler for platform METAL -- check target linkage [Op:XlaDynamicSlice]
```
I tried to set no_repeat_ngram_size in both ways: (1) pass the no_repeat_ngram_size=0 parameter to the `generate()` method, or (2) pass it in the kwargs dictionary `generation_kwargs = {"no_repeat_ngram_size": 0}`, but got the same error.
Can you please take a look at my code below.
```
def evaluate(self):
print(f"perform evaluation")
self.load_model('./models/my_model')
# generation_kwargs = {"no_repeat_ngram_size": 0}
all_preds = []
all_labels = []
for batch, labels in tqdm(self.generation_dataset):
predictions = self.model.generate(input_ids=batch["input_ids"],
attention_mask=batch["attention_mask"],
max_new_tokens=32, no_repeat_ngram_size=0)
decoded_preds = self.tokenizer.batch_decode(predictions, skip_special_tokens=True)
labels = labels.numpy()
labels = np.where(labels != -100, labels, self.tokenizer.pad_token_id)
decoded_labels = self.tokenizer.batch_decode(labels, skip_special_tokens=True)
decoded_preds = ["\n".join(sent_tokenize(pred.strip())) for pred in decoded_preds]
decoded_labels = ["\n".join(sent_tokenize(label.strip())) for label in decoded_labels]
all_preds.extend(decoded_preds)
all_labels.extend(decoded_labels)
result = self.metric.compute(
predictions=all_preds, references=all_labels, use_stemmer=True
)
result = {key: value.mid.fmeasure * 100 for key, value in result.items()}
print({k: round(v, 4) for k, v in result.items()})
```
Any suggestions on this?<|||||>Hey @kirtinikam -- that seems like a macOS issue. Check [this link](https://developer.apple.com/forums/thread/709569) :) |
transformers | 18,706 | closed | update no trainer scripts to remove check for main process while initiating trackers | # What does this PR do?
This PR updates the no_trainer examples with tracking enabled to remove the check for `is_main_process` when initializing trackers via `accelerator.init_trackers()`, as this method automatically initializes the trackers on the main process only. This behavior was merged in Accelerate in this [PR](https://github.com/huggingface/accelerate/pull/642).
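For context, a rough sketch of the simplification this enables in the example scripts (the project name and config values below are illustrative, not the exact ones used in every script):
```python
from accelerate import Accelerator

accelerator = Accelerator(log_with="all")
experiment_config = {"learning_rate": 5e-5, "num_train_epochs": 3}  # illustrative config

# Before: the scripts guarded the call manually
# if accelerator.is_main_process:
#     accelerator.init_trackers("example_no_trainer", experiment_config)

# After accelerate#642, init_trackers() only runs on the main process itself,
# so the guard can simply be dropped:
accelerator.init_trackers("example_no_trainer", experiment_config)
```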
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@muellerzr @sgugger | 08-21-2022 03:10:57 | 08-21-2022 03:10:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,705 | closed | Extend `tokenizer.pad` with `offset_mapping` | # What does this PR do?
Extend the `tokenizer.pad` method (more precisely, `tokenizer._pad`) with support for padding `offset_mapping` with `(0, 0)`.
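A minimal sketch of the intended usage (the checkpoint and sentences are only illustrative, and the `offset_mapping` handling shown is the behaviour proposed here, not the current one):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Fast tokenizers can return character offsets per token
encodings = [
    tokenizer("short text", return_offsets_mapping=True),
    tokenizer("a somewhat longer example sentence", return_offsets_mapping=True),
]

# With this change, pad() also pads "offset_mapping" with (0, 0) so it stays
# aligned with the padded input_ids
batch = tokenizer.pad(encodings, padding=True, return_tensors="pt")
print(batch["input_ids"].shape, batch["offset_mapping"].shape)
```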
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [Padding offsets mapping via tokenizer.pad #18681
](https://github.com/huggingface/transformers/issues/18681)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-20-2022 18:26:25 | 08-20-2022 18:26:25 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18705). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @vad13irt. Could you explain a bit why this change is needed, and what would be its usage? With a short code snippet might be very helpful. Thank you!
<|||||>> Hi @vad13irt. Could you explain a bit why this change is needed, and what would be its usage? With a short code snippet might be very helpful. Thank you!
Hello, @ydshieh! Thank you for your question.
Why is it needed?
Sometimes there is a need to convert token-wise predictions to char-wise predictions: you should create a zero array with the length of the input text, iterate over all tokens and their corresponding spans, and then change the indices in the zero array to get char-wise predictions; or compute confidence scores of token-wise predictions, select the most confident candidates, and then compute some losses or metrics.
For example, it is done in this [great repository](https://github.com/affjljoo3581/Feedback-Prize-Competition/blob/master/src/lightning/module.py) for one of the Kaggle competitions.
Let's take a look:
```py
def validation_epoch_end(self, outputs: List[Tuple[torch.Tensor, ...]]):
loss, texts, logits, labels, offset_mapping = map(list, zip(*outputs))
texts = sum(texts, [])
# Decode the NER-tags with beam-search algorithm.
preds, pred_probs = ner_beam_search_decode(
concat_tensors_with_padding(logits, padding=0).float().log_softmax(dim=-1),
self.id2label,
self.config.model.decoding.beam_size,
)
labels = concat_tensors_with_padding(labels, padding=-100)
offset_mapping = concat_tensors_with_padding(offset_mapping, padding=0)
preds, pred_probs = preds.cpu().numpy(), pred_probs.cpu().numpy()
labels, offset_mapping = labels.cpu().numpy(), offset_mapping.cpu().numpy()
# Collect the NER entities for predictions and labels to calculate the F1 score.
pred_entities_list, label_entities_list = [], []
for text, preds, pred_probs, labels, offset_mapping in zip(
texts, preds, pred_probs, labels, offset_mapping
):
valid_mask = offset_mapping[..., 1] > 0
preds, pred_probs = preds[valid_mask], pred_probs[valid_mask]
labels, offset_mapping = labels[valid_mask], offset_mapping[valid_mask]
# Extract the NER entities from BIO-naming tags. Note that the
# low-confidence or too-short entities will be dropped.
pred_entities, pred_entity_probs = extract_entities_from_ner_tags(
[self.id2label[x] for x in preds], offset_mapping, pred_probs
)
pred_entities = convert_offsets_to_word_indices(text, pred_entities)
pred_entities = [
(entity, a, b)
for (entity, a, b), prob in zip(pred_entities, pred_entity_probs)
if b - a + 1 >= self.minimum_lengths[entity]
and prob >= self.minimum_probs[entity]
]
pred_entities_list.append(pred_entities)
# Of course, we will extract the entities for labels.
label_entities, _ = extract_entities_from_ner_tags(
[self.id2label[x] for x in labels], offset_mapping
)
label_entities = convert_offsets_to_word_indices(text, label_entities)
label_entities_list.append(label_entities)
# Calculate the macro-F1 score for NER entities.
f1 = ner_entity_macro_f1_score(pred_entities_list, label_entities_list)
self.log("val/loss", torch.stack(loss).mean())
self.log("val/f1_score", f1)
```
In order to get the matrix representation of `offset_mapping`, the user should pad them with this line of code, thereby avoiding having to write extra code to build a non-special-tokens mask, although similar behavior can be achieved using `special_tokens_mask`.
```py
offset_mapping = concat_tensors_with_padding(offset_mapping, padding=0)
```
Then, he took the non-special tokens to further get the predicted entities and their confidence scores, e.g. the mean of char-wise predictions (please refer to this [code snippet](https://github.com/affjljoo3581/Feedback-Prize-Competition/blob/034427117cc8a3e1dd63401b3519fc28e3f18830/src/utils/ner_utils.py#L31)). The entities are selected with corresponding thresholds. Then, he computed the macro F1 score.
If you have some questions, please tag me (@vad13irt). <|||||>cc @SaulLu :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,704 | closed | [RAG] - TypeError: init_retrieval() missing 1 required positional argument: 'distributed_port' for PL==1.5.10 | ### System Info
```
Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
absl-py 1.2.0 pypi_0 pypi
aiohttp 3.8.1 pypi_0 pypi
aiosignal 1.2.0 pypi_0 pypi
asttokens 2.0.5 pyhd3eb1b0_0
async-timeout 4.0.2 pypi_0 pypi
attrs 22.1.0 pypi_0 pypi
backcall 0.2.0 pyhd3eb1b0_0
blas 1.0 mkl
bzip2 1.0.8 h7b6447c_0
ca-certificates 2022.07.19 h06a4308_0
cachetools 5.2.0 pypi_0 pypi
certifi 2022.6.15 py38h06a4308_0
charset-normalizer 2.1.0 pypi_0 pypi
click 8.0.4 pypi_0 pypi
cudatoolkit 11.3.1 h9edb442_10 conda-forge
datasets 2.4.0 pypi_0 pypi
debugpy 1.5.1 py38h295c915_0
decorator 5.1.1 pyhd3eb1b0_0
dill 0.3.5.1 pypi_0 pypi
distlib 0.3.5 pypi_0 pypi
entrypoints 0.4 py38h06a4308_0
executing 0.8.3 pyhd3eb1b0_0
faiss-cpu 1.7.2 pypi_0 pypi
ffmpeg 4.2.2 h20bf706_0
filelock 3.8.0 pypi_0 pypi
freetype 2.11.0 h70c0345_0
frozenlist 1.3.1 pypi_0 pypi
fsspec 2022.7.1 pypi_0 pypi
future 0.18.2 pypi_0 pypi
giflib 5.2.1 h7b6447c_0
gitdb 4.0.9 pypi_0 pypi
gitpython 3.1.27 pypi_0 pypi
gmp 6.2.1 h295c915_3
gnutls 3.6.15 he1e5248_0
google-auth 2.10.0 pypi_0 pypi
google-auth-oauthlib 0.4.6 pypi_0 pypi
grpcio 1.43.0 pypi_0 pypi
huggingface-hub 0.8.1 pypi_0 pypi
idna 3.3 pypi_0 pypi
importlib-metadata 4.12.0 pypi_0 pypi
importlib-resources 5.9.0 pypi_0 pypi
intel-openmp 2021.4.0 h06a4308_3561
ipykernel 6.9.1 py38h06a4308_0
ipython 8.4.0 py38h06a4308_0
jedi 0.18.1 py38h06a4308_1
jpeg 9e h7f8727e_0
jsonschema 4.10.0 pypi_0 pypi
jupyter_client 7.2.2 py38h06a4308_0
jupyter_core 4.10.0 py38h06a4308_0
lame 3.100 h7b6447c_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.38 h1181459_1
lerc 3.0 h295c915_0
libdeflate 1.8 h7f8727e_5
libedit 3.1.20210910 h7f8727e_0
libffi 3.3 he6710b0_2
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libidn2 2.3.2 h7f8727e_0
libopus 1.3.1 h7b6447c_0
libpng 1.6.37 hbc83047_0
libsodium 1.0.18 h7b6447c_0
libstdcxx-ng 11.2.0 h1234567_1
libtasn1 4.16.0 h27cfd23_0
libtiff 4.4.0 hecacb30_0
libunistring 0.9.10 h27cfd23_0
libuv 1.40.0 h7b6447c_0
libvpx 1.7.0 h439df22_0
libwebp 1.2.2 h55f646e_0
libwebp-base 1.2.2 h7f8727e_0
lz4-c 1.9.3 h295c915_1
markdown 3.4.1 pypi_0 pypi
markupsafe 2.1.1 pypi_0 pypi
matplotlib-inline 0.1.2 pyhd3eb1b0_2
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py38h7f8727e_0
mkl_fft 1.3.1 py38hd3c417c_0
mkl_random 1.2.2 py38h51133e4_0
msgpack 1.0.4 pypi_0 pypi
multidict 6.0.2 pypi_0 pypi
multiprocess 0.70.13 pypi_0 pypi
ncurses 6.3 h5eee18b_3
nest-asyncio 1.5.5 py38h06a4308_0
nettle 3.7.3 hbbd107a_1
numpy 1.23.1 py38h6c91a56_0
numpy-base 1.23.1 py38ha15fc14_0
oauthlib 3.2.0 pypi_0 pypi
openh264 2.1.1 h4ff587b_0
openssl 1.1.1q h7f8727e_0
packaging 21.3 pypi_0 pypi
pandas 1.4.3 pypi_0 pypi
parso 0.8.3 pyhd3eb1b0_0
pexpect 4.8.0 pyhd3eb1b0_3
pickleshare 0.7.5 pyhd3eb1b0_1003
pillow 9.2.0 py38hace64e9_1
pip 22.1.2 py38h06a4308_0
pkgutil-resolve-name 1.3.10 pypi_0 pypi
platformdirs 2.5.2 pypi_0 pypi
prompt-toolkit 3.0.20 pyhd3eb1b0_0
protobuf 3.19.4 pypi_0 pypi
psutil 5.9.1 pypi_0 pypi
ptyprocess 0.7.0 pyhd3eb1b0_2
pure_eval 0.2.2 pyhd3eb1b0_0
pyarrow 9.0.0 pypi_0 pypi
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pydeprecate 0.3.1 pypi_0 pypi
pygments 2.11.2 pyhd3eb1b0_0
pyparsing 3.0.9 pypi_0 pypi
pyrsistent 0.18.1 pypi_0 pypi
python 3.8.13 h12debd9_0
python-dateutil 2.8.2 pyhd3eb1b0_0
pytorch 1.10.1 py3.8_cuda11.3_cudnn8.2.0_0 pytorch
pytorch-lightning 1.5.10 pypi_0 pypi
pytorch-mutex 1.0 cuda pytorch
pytz 2022.2.1 pypi_0 pypi
pyyaml 6.0 pypi_0 pypi
pyzmq 23.2.0 py38h6a678d5_0
ray 1.13.0 pypi_0 pypi
readline 8.1.2 h7f8727e_1
regex 2022.7.25 pypi_0 pypi
requests 2.28.1 pypi_0 pypi
requests-oauthlib 1.3.1 pypi_0 pypi
responses 0.18.0 pypi_0 pypi
rsa 4.9 pypi_0 pypi
setuptools 59.5.0 pypi_0 pypi
six 1.16.0 pyhd3eb1b0_1
smmap 5.0.0 pypi_0 pypi
sqlite 3.39.2 h5082296_0
stack_data 0.2.0 pyhd3eb1b0_0
tensorboard 2.10.0 pypi_0 pypi
tensorboard-data-server 0.6.1 pypi_0 pypi
tensorboard-plugin-wit 1.8.1 pypi_0 pypi
tk 8.6.12 h1ccaba5_0
tokenizers 0.12.1 pypi_0 pypi
torchaudio 0.10.1 py38_cu113 pytorch
torchmetrics 0.6.0 pypi_0 pypi
torchvision 0.11.2 py38_cu113 pytorch
tornado 6.1 py38h27cfd23_0
tqdm 4.64.0 pypi_0 pypi
traitlets 5.1.1 pyhd3eb1b0_0
transformers 4.21.1 pypi_0 pypi
typing_extensions 4.3.0 py38h06a4308_0
urllib3 1.26.11 pypi_0 pypi
virtualenv 20.16.3 pypi_0 pypi
wcwidth 0.2.5 pyhd3eb1b0_0
werkzeug 2.2.2 pypi_0 pypi
wheel 0.37.1 pyhd3eb1b0_0
x264 1!157.20191217 h7b6447c_0
xxhash 3.0.0 pypi_0 pypi
xz 5.2.5 h7f8727e_1
yarl 1.8.1 pypi_0 pypi
zeromq 4.3.4 h2531618_0
zipp 3.8.1 pypi_0 pypi
zlib 1.2.12 h7f8727e_2
zstd 1.5.2 ha4553b6_0
```
### Who can help?
@patrickvonplaten @lhoestq
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, I'm trying to run the fine-tuning script of RAG with ```pytorch-lightning==1.5.10```.
Below is my training command.
```
python examples/research_projects/rag/finetune_rag.py \
--data_dir '/home/keonwookim/transformers/examples/research_projects/rag/finetune_data' \
--output_dir './finetune_rag_token' \
--model_name_or_path facebook/rag-token-base \
--model_type rag_token \
--do_train \
--do_predict \
--train_batch_size 1 \
--accumulate_grad_batches 1 \
--num_retrieval_workers 4 \
--max_epochs 3 \
--fp16 \
--gpus 2
```
However, I've been struggling with the following error for days.
```
INFO:torch.distributed.distributed_c10d:Rank 1: Completed store-based barrier for key:store_based_barrier_key:1 with 2 nodes.
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [2,3]
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [2,3]
/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/callbacks/model_checkpoint.py:631: UserWarning: Checkpoint directory /home/keonwookim/transformers/finetune_rag_token exists and is not empty.
rank_zero_warn(f"Checkpoint directory {dirpath} exists and is not empty.")
Traceback (most recent call last):
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1199, in _run
self._dispatch()
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1279, in _dispatch
self.training_type_plugin.start_training(self)
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 202, in start_training
self._results = trainer.run_stage()
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1289, in run_stage
return self._run_train()
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1311, in _run_train
self._run_sanity_check(self.lightning_module)
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1368, in _run_sanity_check
self.call_hook("on_sanity_check_start")
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1495, in call_hook
callback_fx(*args, **kwargs)
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/callback_hook.py", line 78, in on_sanity_check_start
callback.on_sanity_check_start(self, self.lightning_module)
File "/home/keonwookim/transformers/examples/research_projects/rag/lightning_base.py", line 275, in on_sanity_check_start
pl_module.model.rag.retriever.init_retrieval() # better to use hook functions.
TypeError: init_retrieval() missing 1 required positional argument: 'distributed_port'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "examples/research_projects/rag/finetune_rag.py", line 649, in <module>
main(args)
File "examples/research_projects/rag/finetune_rag.py", line 613, in main
trainer: pl.Trainer = generic_train(
File "/home/keonwookim/transformers/examples/research_projects/rag/lightning_base.py", line 405, in generic_train
trainer.fit(model)
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 740, in fit
self._call_and_handle_interrupt(
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 698, in _call_and_handle_interrupt
self.training_type_plugin.reconciliate_processes(traceback.format_exc())
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 533, in reconciliate_processes
raise DeadlockDetectedException(f"DeadLock detected from rank: {self.global_rank} \n {trace}")
pytorch_lightning.utilities.exceptions.DeadlockDetectedException: DeadLock detected from rank: 0
Traceback (most recent call last):
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1199, in _run
self._dispatch()
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1279, in _dispatch
self.training_type_plugin.start_training(self)
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 202, in start_training
self._results = trainer.run_stage()
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1289, in run_stage
return self._run_train()
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1311, in _run_train
self._run_sanity_check(self.lightning_module)
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1368, in _run_sanity_check
self.call_hook("on_sanity_check_start")
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1495, in call_hook
callback_fx(*args, **kwargs)
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/callback_hook.py", line 78, in on_sanity_check_start
callback.on_sanity_check_start(self, self.lightning_module)
File "/home/keonwookim/transformers/examples/research_projects/rag/lightning_base.py", line 275, in on_sanity_check_start
pl_module.model.rag.retriever.init_retrieval() # better to use hook functions.
TypeError: init_retrieval() missing 1 required positional argument: 'distributed_port'
```
In my opinion, it seems that ```InitCallback()``` in ```lightning_base.py``` is used, which is meant for the Ray distributed_retriever.
### Expected behavior
Maybe `CustomDDP()` should be used for training, since it passes `self.distributed_port` as an argument to `init_retrieval()`. | 08-20-2022 09:09:47 | 08-20-2022 09:09:47 | Hi kaiiwoo, I think you may try to change the code below
https://github.com/huggingface/transformers/blob/30992ef0d911bdeca425969d210771118a5cd1ac/examples/research_projects/rag/lightning_base.py#L275
to
`pl_module.model.rag.retriever.init_retrieval(pl_module.hparams.distributed_port)`<|||||>Hi, @aRyBernAlTEglOTRO
thanks for your suggestion,
but I've tried that before; it takes a very long time to initialize the distributed retriever and finally ends with a socket error.
```
distributed_backend=nccl
All distributed processes registered. Starting with 2 processes
----------------------------------------------------------------------------------------------------
INFO:torch.distributed.distributed_c10d:Rank 1: Completed store-based barrier for key:store_based_barrier_key:1 with 2 nodes.
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [2,3]
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [2,3]
/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/callbacks/model_checkpoint.py:631: UserWarning: Checkpoint directory /home/keonwookim/transformers/finetune_rag_token exists and is not empty.
rank_zero_warn(f"Checkpoint directory {dirpath} exists and is not empty.")
INFO:distributed_pytorch_retriever:initializing retrieval
INFO:distributed_pytorch_retriever:dist initialized
Traceback (most recent call last):
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1199, in _run
self._dispatch()
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1279, in _dispatch
self.training_type_plugin.start_training(self)
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 202, in start_training
self._results = trainer.run_stage()
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1289, in run_stage
return self._run_train()
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1311, in _run_train
self._run_sanity_check(self.lightning_module)
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1368, in _run_sanity_check
self.call_hook("on_sanity_check_start")
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1495, in call_hook
callback_fx(*args, **kwargs)
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/callback_hook.py", line 78, in on_sanity_check_start
callback.on_sanity_check_start(self, self.lightning_module)
File "/home/keonwookim/transformers/examples/research_projects/rag/lightning_base.py", line 275, in on_sanity_check_start
pl_module.model.rag.retriever.init_retrieval(pl_module.hparams.distributed_port) # better to use hook functions.
File "/home/keonwookim/transformers/examples/research_projects/rag/distributed_pytorch_retriever.py", line 66, in init_retrieval
self.process_group = dist.new_group(ranks=None, backend="gloo")
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2898, in new_group
pg = _new_process_group_helper(
File "/home/keonwookim/anaconda3/envs/odqa/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 684, in _new_process_group_helper
pg = ProcessGroupGloo(prefix_store, rank, world_size, timeout=timeout)
RuntimeError: Socket Timeout
```<|||||>Hi @kaiiwoo, I think there is no problem with the code, so just ignore the method I mentioned earlier. I think you forgot to pass the parameter `--distributed_retriever ray` when running `examples/research_projects/rag/finetune_rag.py`, so maybe you can try that. Compared to the original `CustomDDP`, the `InitCallback` can currently only work when the `distributed_retriever` is set to `ray`.<|||||>hi, I am the person who revamped the RAG code base with the latest PL. Please follow @aRyBernAlTEglOTRO's instructions. The current version only works with Ray.
https://github.com/shamanez/transformers/blob/main/examples/research_projects/rag/lightning_base.py#L396
The main reason is that the latest PyTorch Lightning doesn't encourage us to use plugins. So please use Ray as the default distributed retriever.
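Concretely, a sketch of the adjusted command from the original report (only the `--distributed_retriever ray` flag is new; some other flags from the report are omitted here for brevity):
```
python examples/research_projects/rag/finetune_rag.py \
    --data_dir <path-to-finetune-data> \
    --output_dir ./finetune_rag_token \
    --model_name_or_path facebook/rag-token-base \
    --model_type rag_token \
    --distributed_retriever ray \
    --num_retrieval_workers 4 \
    --do_train \
    --do_predict \
    --train_batch_size 1 \
    --gpus 2
```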
@patrickvonplaten Shall we update the README ?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>We don't maintain the research folders actively. Feel free to open a PR though if you want @shamanez - happy to review and merge<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,703 | closed | 'topk_cpu not implemented for half' when using topk with bitsandbytes 8-bit quant | ### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.10
- Python version: 3.8.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@patrickvonplaten @Lysandre
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When running the below example code, I get `RuntimeError: "topk_cpu" not implemented for 'Half'`. I'm using device_map="auto" and the latest public version of bitsandbytes along with `load_in_8bit=True`. It works fine when using greedy decoding instead of top-k/top-p sampling.
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", device_map="auto", load_in_8bit=True)
# the fast tokenizer currently does not work correctly
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=False)
prompt = "Hello, I am conscious and"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generated_ids = model.generate(
input_ids,
do_sample=True,
max_length=200,
top_k=50,
top_p=0.95,
num_return_sequences=3)
result = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(result)
```
### Expected behavior
Inference should progress correctly and result should be printed to console. | 08-20-2022 04:24:20 | 08-20-2022 04:24:20 | Running `input_ids.cuda()` before feeding it into model.generate() resolves this error, but that isn't possible when using this equivalent pipeline route:
```
import pprint
from transformers import pipeline, logging
logging.set_verbosity_info()
name = "facebook/opt-30b"
text = "test prompt"
pipe = pipeline(model=name, model_kwargs= {"device_map": "auto", "load_in_8bit": True})
result = pipe(text, do_sample=True,
max_length=200,
top_k=50,
top_p=0.95,
num_return_sequences=3)
pprint.pprint(result)
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hmm, I sadly won't have time to look into this anytime soon. cc @gante @ArthurZucker here maybe<|||||>FYI, I was able to work around this by passing `device=0` to `pipeline()`. Obviously not an ideal solution, but acceptable when also using `device_map={"": 0}`.<|||||>Added to the list of `generate` tasks -- @ArthurZucker lmk if you'd be interested in checking this issue!<|||||>Hey, gonna mark this as closed as #19468 fixes it!
Placing the `input_ids` on `"cuda"` solves the issue. A warning was added ! <|||||>(this should not be closed as it is not documented :D )<|||||>cc @younesbelkada I think you fixed this a long time ago no? It was bits and bytes issues (I might be wrong as it seems someone had the same problem) but have seen similar issue <|||||>No this is not a `bitsandbytes` issue as you can also reproduce it with `float16` models. To reproduce you can just run:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
MAX_NEW_TOKENS = 128
model_name = 'gpt2'
text = """
Q: On average Joe throws 25 punches per minute. A fight lasts 5 rounds of 3 minutes.
How many punches did he throw?\n
A: Let’s think step by step.\n"""
tokenizer = AutoTokenizer.from_pretrained(model_name)
input_ids = tokenizer(text, return_tensors="pt").input_ids
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map='auto',
torch_dtype=torch.float16
)
generated_ids = model.generate(input_ids, max_length=len(input_ids[0])+25, do_sample=True, top_p=0.7)
print(tokenizer.decode(generated_ids[0]))
```
from: https://github.com/huggingface/transformers/pull/19468
This happens in edge cases where users pass a tensor that is on CPU to a model that has been converted to `float16`.
This is because some operations such as `top_p` are called in a cpu half tensor since `accelerate` sets the output on the same device as the input. <|||||>Given how convenient the `pipeline` utilities are, it would be great if the `input_ids.to(0)` workaround could be included there, so that code like [in the comment above](https://github.com/huggingface/transformers/issues/18703#issuecomment-1221235096) would run out of the box. I found this issue after trying to run a very similar script for FlanT5.<|||||>Hi @steve-marmalade
Thanks for the message!
Indeed, this is something we are trying to fix in https://github.com/huggingface/transformers/pull/21479 <|||||>I think that the following should be supported.
```python
import pprint
from transformers import pipeline, logging
name = "facebook/opt-30b"
text = "test prompt"
pipe = pipeline(model=name, model_kwargs= {"device_map": "auto", "load_in_8bit": True}, device = 0)
result = pipe(text, do_sample=True,max_length=200, top_k=50, top_p=0.95, num_return_sequences=3)
```
<|||||>@ArthurZucker , not really as forcing `device=0` and `device_map="auto"` will lead to some unexpected behaviors that are described in #21479
Also as stated by @Narsil :
> Imo using device_map and device should be an error (ambiguous intent)<|||||>Yes, but I mean for small models, withou `device_map="auto"` should work <|||||>Hi @younesbelkada , that's awesome you are already working on it :superhero:
I will follow along on #21479
<|||||>Closing as #21479 was merged |
transformers | 18,702 | closed | Adds package and requirement spec output to version check exception | # What does this PR do?
It's difficult to understand what package is affected when `got_ver` here comes back None, so output the requirement and the package. The requirement probably contains the package but let's output both for good measure. This will hopefully be enough for the user to understand that something's wrong with the package if they've already installed it.
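For illustration, a self-contained sketch of the improved message (the exact wording and placement in `utils/versions.py` may differ from the final diff; the requirement string below is just an example):
```python
def compare_versions_sketch(got_ver, want_ver, requirement, pkg):
    # Mirrors the shape of the None-version check targeted by this PR
    if got_ver is None or want_ver is None:
        raise ValueError(
            f"Unable to compare versions for {requirement}: need={want_ver} found={got_ver}."
            f" This is unusual; consider reinstalling {pkg}."
        )


try:
    compare_versions_sketch(None, "1.17", "numpy>=1.17", "numpy")
except ValueError as err:
    print(err)  # the message now names both the requirement spec and the package
```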
Non-exhaustive references for this problem aside from my own encounter:
* https://stackoverflow.com/questions/70151167/valueerror-got-ver-is-none-when-importing-tensorflow
* https://discuss.huggingface.co/t/valueerror-got-ver-is-none/17465
* https://github.com/UKPLab/sentence-transformers/issues/1186
* https://github.com/huggingface/transformers/issues/13356
I speculate that the root of the error comes from a conflict of conda-managed and pip-managed Python packages but I've not yet proven this.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Tagging @stas00 because I think that's who `git blame` says touched that area of the code last.
| 08-19-2022 22:39:24 | 08-19-2022 22:39:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This looks like a good improvement to me! Looks good to you, @stas00?<|||||>Looks like the test failures are transient package retrieval errors. I don't seem to have permission to rerun them myself.<|||||>You can always cheat ;)
```
# to push an empty commit - e.g. to force a CI rebuild - when there is no diff to push
git commit --allow-empty -m "Trigger CI"
git push
```<|||||>It looks like the tests are failing still because of pip install slowness… I've pushed a blank commit up once, I'll let a maintainer take it from here.<|||||>Could you rebase your branch on main? It may be due to a new release of a dep we have pinned since then (thinking of TensorFlow 2.10)<|||||>Rebase completed, but that code quality check still timed out.<|||||>All worked this time - thank you for your patience, @colindean!<|||||>
Thank you! I'm happy to help. I hope somebody else will be helped by this fix!
|
transformers | 18,701 | closed | [Hotfix] pin detectron2 5aeb252 to avoid test fix | # What does this PR do?
Very hot fix. Similar to #18680 | 08-19-2022 21:04:28 | 08-19-2022 21:04:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot! |
transformers | 18,700 | closed | Add minor doc-string change to include hp_name param in hyperparameter_search | This is a very minor change. It seems like one of the parameters of the hyperparameter_search method hasn't been documented yet. :slightly_smiling_face: | 08-19-2022 21:00:45 | 08-19-2022 21:00:45 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18700). All of your documentation changes will be reflected on that endpoint.<|||||>Just noticed, that the kwargs- argument is also not displayed correctly. Probably because of the missing type-information:

<|||||>Better, but still parsing issues. Maybe because of the missing white-space.

<|||||>Yes, that did the trick. Now it looks alright.

|
transformers | 18,699 | closed | Temp fix for broken detectron2 import | # What does this PR do?
Detectron2 is currently broken on master as can be seen here:
- https://github.com/facebookresearch/detectron2/issues/4489
- https://github.com/facebookresearch/detectron2/issues/4487
In order not to affect all kinds of tests, we add a temporary fix here so that only `LayoutLMv2`, which depends on `detectron2`, will fail; other tests that just import the layoutlmv2 module won't be affected, and the CircleCI stays green.
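A minimal sketch of the general guarding pattern (illustrative only; the actual diff lives in the LayoutLMv2 modeling code):
```python
# Soft-fail import guard for an optional dependency
try:
    import detectron2  # noqa: F401

    DETECTRON2_AVAILABLE = True
except Exception:
    # Any import-time breakage (like the current upstream issue) is deferred:
    # only code that actually needs detectron2 will raise later
    DETECTRON2_AVAILABLE = False
```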
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-19-2022 20:49:55 | 08-19-2022 20:49:55 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @patrickvonplaten . Please review the thread at https://github.com/facebookresearch/detectron2/issues/4489 and let me know if the landed change means we can now revert this temporary fix. Thanks. |
transformers | 18,698 | closed | device_map="auto" fails in big_modeling.py | ### System Info
Ubuntu 20.04 Linux on a Ryzen 7 3900 CPU with 32GB RAM, an Nvidia RTX 3070 GPU, and an M.2 SSD with plenty of free space.
Latest versions of MKL, CPU-only PyTorch, transformers, and accelerate in a freshly created venv.
### Who can help?
@LysandreJik, @Narsil
Sorry no person specified for Transformers or Accelerate libs.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Clone bloom-560m data with git into a directory. In this example the directory is /media/Data/ai/bloom-560.
Run the following python file:
```
from transformers import AutoTokenizer, AutoModel, AutoModelForCausalLM
import torch
from time import time
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("/media/Data/ai/bloom/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("/media/Data/ai/bloom/bloom-560m",device_map="auto",torch_dtype=torch.float32)
pipe = pipeline('text-generation',model=model, tokenizer=tokenizer, torch_dtype=torch.float32)
def local_inf(prompt, temperature=0.7, top_p=None, max_new_tokens=32, repetition_penalty=None, do_sample=False, num_return_sequences=1):
response = pipe(f"{prompt}",
temperature = temperature, # 0 to 1
top_p = top_p, # None, 0-1
max_new_tokens = max_new_tokens, # up to 2047 theoretically
return_full_text = False, # include prompt or not.
repetition_penalty = repetition_penalty, # None, 0-100 (penalty for repeat tokens.
do_sample = do_sample, # True: use sampling, False: Greedy decoding.
num_return_sequences = num_return_sequences)
return print(prompt + response[0]['generated_text']), response[0]['generated_text']
inp = """# Use OpenCV in Python"""
t = time()
resp = local_inf(inp, max_new_tokens=64)
delta = time() - t
print("Inference took %0.2f s." % delta)
```
An error shown below results.
```
File "/home/luk/dev/ai/bloom-560-cpu-testing/test1.py", line 12, in <module>
model = AutoModelForCausalLM.from_pretrained("/media/Data/ai/bloom/bloom-560m",device_map="auto",torch_dtype=torch.float32)
File "/home/luk/dev/.env_mlk_2022/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/home/luk/dev/.env_mlk_2022/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2179, in from_pretrained
dispatch_model(model, device_map=device_map, offload_dir=offload_folder)
File "/home/luk/dev/.env_mlk_2022/lib/python3.8/site-packages/accelerate/big_modeling.py", line 215, in dispatch_model
main_device = [d for d in device_map.values() if d not in ["cpu", "disk"]][0]
IndexError: list index out of range
```
### Expected behavior
The code is expected to run and perform inference. However, it only runs when the workaround described below is implemented.
Additional information below:
This seems to happen because line 215 in accelerate/big_modeling.py doesn't want to select cpu as the main device when set to auto. This is the relevant bit:
```
if main_device is None:
main_device = [d for d in device_map.values() if d not in ["cpu", "disk"]][0]
```
While transformers/modeling_utils.py line 2179 calls it like this:
```
if device_map is not None:
dispatch_model(model, device_map=device_map, offload_dir=offload_folder)
```
With no obvious way to specify the main_device.
The problem can be worked around by temporarily changing line 2179 of modeling_utils.py to:
`dispatch_model(model, device_map=device_map, offload_dir=offload_folder,main_device='cpu')`
Or line 215 in big_modeling.py can be changed to:
`main_device = [d for d in device_map.values() if d not in ["disk"]][0]`
"disk" shouldn't be the main device, but the cpu seems to be a perfectly acceptable alternative in absence of GPU. | 08-19-2022 17:31:42 | 08-19-2022 17:31:42 | Hi,
which version of `transformers` + `accelerate` are you using ?
<|||||>cc @muellerzr, have you seen something similar in the past?<|||||>transformers version 4.21.1
accelerate version 0.12.0<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>There is no support for using the CPU as a main device in Accelerate yet. If you want to use the model on CPU, just don't specify `device_map="auto"`.
Not quite sure why your GPU is not visible to torch since you mention having an RTX 3070, but that's the crux of the issue here. Maybe it does not have enough RAM available to host the largest layer of the model?<|||||>Thank you for the reply and the info that CPU is not supported as the main device. Yes, I do have an RTX GPU. However, it is not visible because I wanted to run on CPU only. There is value in being able to use the CPU as the main device, since it usually has a much larger contiguous RAM region available.
Would not setting device_map="auto" stop offloading data to disk altogether? In any case, hopefully running on CPU as the main device is supported in the future; in the meantime one can use the workaround as specified, or not use device_map="auto" as you mentioned.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Note that we just merged support for `device_map="auto"` to work in a CPU-only env. Disk offload when executing on CPU might not work yet, but if your model fits into RAM, you won't have this error anymore (requires an install from source of Accelerate).<|||||>`device_map="auto"` -> `device_map={"": "cpu"}`
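i.e., for the original CPU-only setup, something along these lines (path and dtype taken from the report):
```python
import torch
from transformers import AutoModelForCausalLM

# Pin the whole model to the CPU explicitly instead of using device_map="auto"
model = AutoModelForCausalLM.from_pretrained(
    "/media/Data/ai/bloom/bloom-560m", device_map={"": "cpu"}, torch_dtype=torch.float32
)
```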
transformers | 18,697 | closed | [VisionEncoderDecoder] Add gradient checkpointing | # What does this PR do?
@fgbelidji can you check whether this works?
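For reference, a minimal usage sketch once this lands (the checkpoint names are just examples):
```python
from transformers import VisionEncoderDecoderModel

model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "bert-base-uncased"
)
# With this PR the call is forwarded to both the encoder and the decoder
model.gradient_checkpointing_enable()
```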
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 08-19-2022 16:35:37 | 08-19-2022 16:35:37 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@patrickvonplaten this works, thanks! <|||||>Tests added - good to merge cc @NielsRogge FYI |
transformers | 18,696 | closed | Generate: add missing `**model_kwargs` in sample tests | # What does this PR do?
One call of the `sample`-related tests was missing `**model_kwargs`, which MAY explain the random failures we were seeing.
I've run all `test_sample_generate_dict_output` tests 100x with no failures. Before this change, it was failing once every ~10 calls of `py.test tests/ -k test_sample_generate_dict_output`. | 08-19-2022 14:10:15 | 08-19-2022 14:10:15 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh when the attention mask is not passed, it is inferred from the input ([here](https://github.com/huggingface/transformers/blob/e54a1b49aa6268c484625c6374f952f318914743/src/transformers/generation_utils.py#L490)).
Depending on the pad and eos tokens, the mask inferred from the random input tokens may have issues. This automatic attention mask is a big source of issues in general, especially for the tests :(
Passing the mask explicitly is always preferred! |
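As an illustration of the preferred pattern (the model name is just an example):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello world", return_tensors="pt")
# Passing attention_mask explicitly avoids relying on the mask that generate()
# infers from the pad/eos tokens, which can be wrong for random test inputs
outputs = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    do_sample=True,
    max_new_tokens=10,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```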
transformers | 18,695 | closed | Add a use_parallel_residual argument to control the residual computing way | # What does this PR do?
Add a `gpt_j_residual` argument to control how the residual is computed. The default value is `False`, which is
consistent with https://github.com/EleutherAI/gpt-neox/blob/main/megatron/model/transformer.py#L592. This also makes it easier to convert models trained with [gpt-neox](https://github.com/EleutherAI/gpt-neox) into the Hugging Face format.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@LysandreJik @patrickvonplaten
| 08-19-2022 11:28:58 | 08-19-2022 11:28:58 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @NinedayWang !
If someone could review this, that would be great.
This PR will also allow loading our PolyCoder model in `transformers` (https://arxiv.org/pdf/2202.13169.pdf)<|||||>Thanks a lot for the PR @NinedayWang,
I'm however not 100% sure we want that as we don't try to make Transformer models be very configurable generally.
Are there already pretrained checkpoints that would be useful for the community with `gpt_j_residual=True`?<|||||>Hi @patrickvonplaten! I appreciate your perspective, but I think in this case supporting the variation is warranted. The default of nearly all [training configurations in the NeoX toolkit](https://github.com/EleutherAI/gpt-neox/tree/main/configs) is to have this flag set to `False`. Only the 20B configuration uses that residual. So I expect that supporting this variation will make deploying new models trained with the NeoX toolkit easier for a lot of folks. Given how costly these models are to train, we are not planning to create a new variant using this residual.<|||||>Sorry I don't fully follow here:
- The GPT-NeoX checkpoint can be loaded with this architecture: https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt_neox/modeling_gpt_neox.py
- So supporting this flag is only for GPT-NeoX, but we already have this checkpoint in Transfomers: https://huggingface.co/EleutherAI/gpt-neox-20b<|||||>Ah, yes let me clarify. [GPT-NeoX](https://github.com/EleutherAI/gpt-neox/) is a *toolkit* that can be (and is actively) used to train GPT-style models. It supports a broad range of model sizes, and has a few other hyper-parameters to vary the architecture in other ways, like that `gpt_j_residual` flag.
Now **Neox-20B** is a specific, 20B parameter *model* trained with this toolkit. It largely uses the same configuration that other models trained with GPT-NeoX would, with the notable exception of the aforementioned residual flag: that flag is set to `False` in all configurations by default, but was turned to `True` by the authors of the 20B model. As such, other models trained with the GPT-NeoX toolkit are unlikely to have this flag enabled.
So for HuggingFace/transformers to support most other models trained with the NeoX toolkit, including [PolyCoder](https://github.com/VHellendoorn/Code-LMs/), we could either add multiple other `modeling_gpt_neox_MODELNAME.py*` style architectures, or make the basic `modeling_gpt_neox.py` architecture a bit more flexible. The latter seems more reasonable to me, but if the HF community prefers the former, that could work for us too.
Hope this clarifies things!
-Vincent<|||||>> Ah, yes let me clarify. [GPT-NeoX](https://github.com/EleutherAI/gpt-neox/) is a _toolkit_ that can be (and is actively) used to train GPT-style models. It supports a broad range of model sizes, and has a few other hyper-parameters to vary the architecture in other ways, like that `gpt_j_residual` flag.
>
> Now **Neox-20B** is a specific, 20B parameter _model_ trained with this toolkit. It largely uses the same configuration that other models trained with GPT-NeoX would, with the notable exception of the aforementioned residual flag: that flag is set to `False` in all configurations by default, but was turned to `True` by the authors of the 20B model. As such, other models trained with the GPT-NeoX toolkit are unlikely to have this flag enabled.
>
> So for HuggingFace/transformers to support most other models trained with the NeoX toolkit, including [PolyCoder](https://github.com/VHellendoorn/Code-LMs/), we could either add multiple other `modeling_gpt_neox_MODELNAME.py*` style architectures, or make the basic `modeling_gpt_neox.py` architecture a bit more flexible. The latter seems more reasonable to me, but if the HF community prefers the former, that could work for us too.
>
> Hope this clarifies things! -Vincent
Hey @VHellendoorn,
Thanks for clarifying! Putting @LysandreJik and @sgugger in cc here. Given the "single-file" policy of Transformers (see post [here](https://huggingface.co/blog/transformers-design-philosophy)), I think we would indeed prefer to add a new file such as `modeling_poly_coder.py` if the architecture is too different to existing architectures, such as gpt_j or gpt_neox_20b.
Also just one more question, if PolyCoder follows the same architecture than GPT-NeoX, couldn't we just load it with https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt_neox/modeling_gpt_neox.py ?
We're definitely more than happy though to help get Polycoder added to Transformers (cc @lvwerra as well)<|||||>Hi @patrickvonplaten,
Thanks, yes that would work for us too. The reason we can't load PolyCoder with that architecture file is precisely because `modeling_gpt_neox.py` hard-codes the assumption that `gpt_j_residual` is set to `True`. Hence the change in this PR, which makes that a configurable boolean. If we add a special `modeling_polycoder.py` file, it will just be identical to the `modeling_gpt_neox.py` one *except* for using the "normal" residual branch that most other models trained with GPT-NeoX will tend to use. So a slightly weird consequence of splitting the architectures across two files would be that most new models trained with GPT-NeoX will have to be loaded with the polycoder architecture, instead of the neox one. This PR would avoid such duplication by making that a togglable boolean instead.
-Vincent
<|||||>As @patrickvonplaten mentioned, Transformers is not a modular toolkit. It's therefore not surprising that one toolkit class such as GPT-Neo-X in EleutherAI is split in several different classes in Transformers (exactly like BART from fairseq is split in multiple classes here).<|||||>Thanks for your reply @patrickvonplaten @sgugger.
Let me do some explanation. [GPT-NeoX](https://github.com/EleutherAI/gpt-neox/) supports two different residual computing ways using `gpt_j_residual` configuration:
https://github.com/EleutherAI/gpt-neox/blob/main/megatron/model/transformer.py#L592
And the default value is `gpt_j_residual=False`:
https://github.com/EleutherAI/gpt-neox/blob/main/megatron/neox_arguments/neox_args.py#L311
```
gpt_j_residual: bool = False
"""
If false, we use the conventional residual path:
x = x + attn(ln1(x))
x = x + mlp(ln2(x))
Otherwise, we use the residual path from GPT-J, which offers a slight speedup:
x = ln(x)
x = x + attn(x) + mlp(x)
"""
```
As @VHellendoorn said, [gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) is a special case of specifying `gpt_j_residual=True` in the [20B.yml](https://github.com/EleutherAI/gpt-neox/blob/main/configs/20B.yml#L29) config file, most models trained with GPT-NeoX use the default value of False, such as [small.yml](https://github.com/EleutherAI/gpt-neox/blob/main/configs/small.yml), [medium.yml](https://github.com/EleutherAI/gpt-neox/blob/main/configs/medium.yml), [large.yml](https://github.com/EleutherAI/gpt-neox/blob/main/configs/large.yml), [2-7B.yml](https://github.com/EleutherAI/gpt-neox/blob/main/configs/2-7B.yml) and so on. However, `modeling_gpt_neox.py` is an implementation that assumes `gpt_j_residual=True`, so we cannot use `modeling_gpt_neox.py` to load PolyCoder, even though PolyCoder actually follows the same architecture as GPT-NeoX.
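For reference, a minimal PyTorch-style sketch of the two residual paths described in the docstring above (`attn`, `mlp`, `ln1`, `ln2` stand for the block's sub-modules; this is a simplification, not the actual GPT-NeoX code):
```python
def block_forward(x, attn, mlp, ln1, ln2, gpt_j_residual=False):
    if gpt_j_residual:
        # GPT-J-style "parallel" residual: one LayerNorm, attention and MLP applied to the same input
        h = ln1(x)
        return x + attn(h) + mlp(h)
    # conventional residual path (the default in most GPT-NeoX configs)
    x = x + attn(ln1(x))
    x = x + mlp(ln2(x))
    return x
```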
I've read about the "single-file" policy, but I think GPT-NeoX is a bit special. If we load `gpt-neox-20b` with `model_type=gpt_neox`, but `gpt-neox-2.7b` or `gpt-neox-0.4b` with `model_type=polycoder`, it can be confusing and people need more time to figure out which model_type is suitable.<|||||>Hey @NinedayWang,
Thanks for the explanation. Sorry some more questions to clarify: Why is it called `gpt_j_residual`? Could this be changed to another name? I don't fully understand the relation to GPT-J here.
If we have half the gpt-neox checkpoints using one residual architecture and gpt-neox-20b another architecture I'm actually not too against trying to fit it in one file.<|||||>> Hey @NinedayWang,
>
> Thanks for the explanation. Sorry some more questions to clarify: Why is it called `gpt_j_residual`? Could this be changed to another name? I don't fully understand the relation to GPT-J here.
>
> If we have half the gpt-neox checkpoints using one residual architecture and gpt-neox-20b another architecture I'm actually not too against trying to fit it in one file.
Thanks a lot! The name `gpt_j_residual` comes from the developers of GPT-NeoX. Actually, the unconventional residual architecture in GPT-NeoX is inherited from [GPT-J](https://github.com/huggingface/transformers/blob/4157e3cd7e2bb5a7be6dc065a3e20c49cc1300ab/src/transformers/models/gptj/modeling_gptj.py#L322). For clarity and to be the same as the original GPT-NeoX, I think it is better to keep the name `gpt_j_residual`.<|||||>> > Hey @NinedayWang,
> > Thanks for the explanation. Sorry some more questions to clarify: Why is it called `gpt_j_residual`? Could this be changed to another name? I don't fully understand the relation to GPT-J here.
> > If we have half the gpt-neox checkpoints using one residual architecture and gpt-neox-20b another architecture I'm actually not too against trying to fit it in one file.
>
> Thanks a lot! The name `gpt_j_residual` comes from the developers of GPT-NeoX. Actually, the unconventional residual architecture in GPT-NeoX is inherited from [GPT-J](https://github.com/huggingface/transformers/blob/4157e3cd7e2bb5a7be6dc065a3e20c49cc1300ab/src/transformers/models/gptj/modeling_gptj.py#L322). For clarity and to be the same as the original GPT-NeoX, I think it is better to keep the name `gpt_j_residual`.
Is this essentially the "parallel" residual computation that allows the model to be tensor-parallelized better (especially for TPUs) - e.g. the same architecture that was used in PALM: https://arxiv.org/abs/2204.02311 ?<|||||>> > > Hey @NinedayWang,
> > > Thanks for the explanation. Sorry some more questions to clarify: Why is it called `gpt_j_residual`? Could this be changed to another name? I don't fully understand the relation to GPT-J here.
> > > If we have half the gpt-neox checkpoints using one residual architecture and gpt-neox-20b another architecture I'm actually not too against trying to fit it in one file.
> >
> >
> > Thanks a lot! The name `gpt_j_residual` comes from the developers of GPT-NeoX. Actually, the unconventional residual architecture in GPT-NeoX is inherited from [GPT-J](https://github.com/huggingface/transformers/blob/4157e3cd7e2bb5a7be6dc065a3e20c49cc1300ab/src/transformers/models/gptj/modeling_gptj.py#L322). For clarity and to be the same as the original GPT-NeoX, I think it is better to keep the name `gpt_j_residual`.
>
> Is this essentially the "parallel" residual computation that allows the model to be tensor-parallelized better (especially for TPUs) - e.g. the same architecture that was used in PALM: https://arxiv.org/abs/2204.02311 ?
Yes, it's the same "parallel" architecture as PALM, which provides faster training speed when training large-scale models. |
transformers | 18,694 | closed | Type cast before normalize | # What does this PR do?
This shows the changes to VideoMAE - casting the images to numpy arrays before normalizing. This resolves issues when the return type isn't as expected if flags like `do_normalize` are false. The changes here are representative of the changes for all vision model feature extractors.
Once this has been approved, all model changes will be merged into this one for a final review before merging. A new PR was opened as I couldn't switch to the base repo (huggingface/transformers.git).
First PR introducing changes to the transforms to enable this: https://github.com/huggingface/transformers/pull/18499#event-7208520060
Details below copied from this PR (for easy reference):
Other model PRs to be merged in:
- https://github.com/amyeroberts/transformers/pull/12
- https://github.com/amyeroberts/transformers/pull/22
- https://github.com/amyeroberts/transformers/pull/13
- https://github.com/amyeroberts/transformers/pull/14
- https://github.com/amyeroberts/transformers/pull/1
- https://github.com/amyeroberts/transformers/pull/15
- https://github.com/amyeroberts/transformers/pull/17
- https://github.com/amyeroberts/transformers/pull/18
- https://github.com/amyeroberts/transformers/pull/2
- https://github.com/amyeroberts/transformers/pull/19
- https://github.com/amyeroberts/transformers/pull/20
- https://github.com/amyeroberts/transformers/pull/3
- https://github.com/amyeroberts/transformers/pull/4
- https://github.com/amyeroberts/transformers/pull/21
- https://github.com/amyeroberts/transformers/pull/5
- https://github.com/amyeroberts/transformers/pull/6
- https://github.com/amyeroberts/transformers/pull/7
- https://github.com/amyeroberts/transformers/pull/8
- https://github.com/amyeroberts/transformers/pull/10
- https://github.com/amyeroberts/transformers/pull/16
- https://github.com/amyeroberts/transformers/pull/11
## Details
At the moment, if do_normalize=False, do_resize=True and return_tensors=None, then the output will be a list of PIL.Image.Image objects even if the inputs are numpy arrays. If do_normalize=False and return_tensors is specified ("pt", "np", "tf", "jax") an exception is raised.
The main reasons for this are:
- BatchFeature can't convert PIL.Image.Image to the requested tensors.
- The necessary conversion of PIL.Image.Image -> np.ndarray happens within the normalize method, and the output of resize is PIL.Image.Image.
In order to have the type of the returned pixel_values reflect return_tensors we need to:
- Convert PIL.Image.Image objects to numpy arrays before passing to BatchFeature.
- Be able to optionally rescale the inputs in the normalize method. If the input to normalize is a PIL.Image.Image it is converted to a numpy array using to_numpy_array, which rescales to between [0, 1]. If do_resize=False then this rescaling won't happen if the inputs are numpy arrays.
The optional flags enable us to preserve the same default behaviour for the resize and normalize methods whilst modifying the internal logic of the feature extractor call.
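A hypothetical usage example of the intended behaviour after this change (the checkpoint name is only for illustration):
```python
import numpy as np
from PIL import Image
from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained(
    "google/vit-base-patch16-224-in21k", do_normalize=False
)
image = Image.new("RGB", (640, 480))

# Previously this returned PIL images (or raised when return_tensors was set);
# after the change the output type should follow return_tensors.
outputs = feature_extractor(image, return_tensors="np")
assert isinstance(outputs["pixel_values"], np.ndarray)
```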
## Checks
The model PRs are all cherry picked (file diffs) of type-cast-before-normalize
The following was run to check the outputs:
```
from dataclasses import dataclass
import requests
import numpy as np
from PIL import Image
import pygit2
from transformers import AutoFeatureExtractor
@dataclass
class FeatureExtractorConfig:
model_name: str
checkpoint: str
return_type: str = "np"
feat_name: str = "pixel_values"
IMAGE_FEATURE_EXTRACTOR_CONFIGS = [
FeatureExtractorConfig(model_name="clip", checkpoint="openai/clip-vit-base-patch32"),
FeatureExtractorConfig(model_name="convnext", checkpoint="facebook/convnext-tiny-224"),
FeatureExtractorConfig(model_name="deit", checkpoint="facebook/deit-base-distilled-patch16-224"),
FeatureExtractorConfig(model_name="detr", checkpoint="facebook/detr-resnet-50"),
FeatureExtractorConfig(model_name="dpt", checkpoint="Intel/dpt-large"),
FeatureExtractorConfig(model_name="flava", checkpoint="facebook/flava-full"),
FeatureExtractorConfig(model_name="glpn", checkpoint="vinvino02/glpn-kitti"),
FeatureExtractorConfig(model_name="imagegpt", checkpoint="openai/imagegpt-small", feat_name='input_ids'),
FeatureExtractorConfig(model_name="layoutlmv2", checkpoint="microsoft/layoutlmv2-base-uncased"),
FeatureExtractorConfig(model_name="layoutlmv3", checkpoint="microsoft/layoutlmv3-base"),
FeatureExtractorConfig(model_name="levit", checkpoint="facebook/levit-128S"),
FeatureExtractorConfig(model_name="maskformer", checkpoint="facebook/maskformer-swin-base-ade", return_type="pt"),
FeatureExtractorConfig(model_name="mobilevit", checkpoint="apple/mobilevit-small"),
FeatureExtractorConfig(model_name="owlvit", checkpoint="google/owlvit-base-patch32"),
FeatureExtractorConfig(model_name="perceiver", checkpoint="deepmind/vision-perceiver-fourier"),
FeatureExtractorConfig(model_name="poolformer", checkpoint="sail/poolformer_s12"),
FeatureExtractorConfig(model_name="segformer", checkpoint="nvidia/mit-b0"),
FeatureExtractorConfig(model_name="vilt", checkpoint="dandelin/vilt-b32-mlm"),
FeatureExtractorConfig(model_name="vit", checkpoint="google/vit-base-patch16-224-in21k"),
FeatureExtractorConfig(model_name="yolos", checkpoint="hustvl/yolos-small"),
]
VIDEO_FEATURE_EXTRACTOR_CONFIGS = [
FeatureExtractorConfig(model_name="videomae", checkpoint="MCG-NJU/videomae-base"),
]
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
def produce_pixel_value_outputs():
BRANCH = pygit2.Repository('.').head.shorthand
def get_processed_outputs(inputs, model_checkpoint, feat_name):
feature_extractor = AutoFeatureExtractor.from_pretrained(model_checkpoint)
outputs = feature_extractor(inputs, return_tensors=fe_config.return_type)[feat_name]
return outputs
for fe_config in IMAGE_FEATURE_EXTRACTOR_CONFIGS:
print(fe_config.model_name, fe_config.checkpoint)
outputs = get_processed_outputs(image, fe_config.checkpoint, fe_config.feat_name)
np.save(f"{fe_config.model_name}_{BRANCH.replace('-', '_')}_pixel_values.npy", outputs)
for fe_config in VIDEO_FEATURE_EXTRACTOR_CONFIGS:
print(fe_config.model_name, fe_config.checkpoint)
outputs = get_processed_outputs([[image, image]], fe_config.checkpoint, fe_config.feat_name)
np.save(f"{fe_config.model_name}_{BRANCH.replace('-', '_')}_pixel_values.npy", outputs)
branch_main = "main"
branch_feature = "type-cast-before-normalize"
repo = pygit2.Repository('.git')
print("\nChecking out main")
branch = repo.lookup_branch('main')
ref = repo.lookup_reference(branch.name)
repo.checkout(ref)
produce_pixel_value_outputs()
print("\nChecking out type-cast-before-normalize")
branch = repo.lookup_branch('type-cast-before-normalize')
ref = repo.lookup_reference(branch.name)
repo.checkout(ref)
produce_pixel_value_outputs()
for fe_config in IMAGE_FEATURE_EXTRACTOR_CONFIGS + VIDEO_FEATURE_EXTRACTOR_CONFIGS:
model_name = fe_config.model_name
try:
output_1 = np.load(f"{model_name}_{branch_main}_pixel_values.npy")
output_2 = np.load(f"{model_name}_{branch_feature.replace('-', '_')}_pixel_values.npy")
max_diff = np.amax(np.abs(output_1 - output_2))
print(f"{model_name}: {max_diff:.5f}")
except Exception as e:
print(f"{model_name} failed check with {e}")
```
Output:
```
clip: 0.00000
convnext: 0.00000
deit: 0.00000
detr: 0.00000
dpt: 0.00000
flava: 0.00000
glpn: 0.00000
imagegpt: 0.00000
layoutlmv2: 0.00000
layoutlmv3: 0.00000
levit: 0.00000
maskformer: 0.00000
mobilevit: 0.00000
owlvit: 0.00000
perceiver: 0.00000
poolformer: 0.00000
segformer: 0.00000
vilt: 0.00000
vit: 0.00000
yolos: 0.00000
videomae: 0.00000
```
Fixes
https://github.com/huggingface/transformers/issues/17714
https://github.com/huggingface/transformers/issues/15055
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| 08-19-2022 11:04:07 | 08-19-2022 11:04:07 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18694). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>PR wasn't merged as it was superseded by the image processors and no longer needed. |
transformers | 18,693 | closed | add warning to let the user know that the `__call__` method is faster than `encode` + `pad` for a fast tokenizer | # What does this PR do?
In this PR I propose to clarify the docstring of the `pad` method and to log a warning when the `pad` method is called with a fast tokenizer. The goal is to encourage the user to use the `__call__` method to encode and pad their input at the same time, since that will be faster than encoding and then padding it with the `pad` method (because `pad` does not use the Rust backend).
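An illustrative comparison (checkpoint name is just an example):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # a fast tokenizer
texts = ["Hello world", "A longer example sentence"]

# Slower: encode each text first, then pad in a second, pure-Python step
encodings = [tokenizer(text) for text in texts]
batch = tokenizer.pad(encodings, padding=True, return_tensors="pt")

# Faster: encode and pad in a single call, fully handled by the Rust backend
batch = tokenizer(texts, padding=True, return_tensors="pt")
```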
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-19-2022 09:58:11 | 08-19-2022 09:58:11 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18693). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks a lot for the advice @LysandreJik :hugs: ! Indeed, since this attribute `deprecation_warnings` exists, I have no reason to prefer `warnings.warn` to `logger.warning`. I made the change in https://github.com/huggingface/transformers/pull/18693/commits/775002fbd2c0c67001182d7ce07e5efc4a1b9c80. |
transformers | 18,692 | closed | Add an option to `HfArgumentParser.parse_{dict,json_file}` to raise an Exception when there are extra keys | I added an option to `HfArgumentParser.parse_{dict,json_file}` to raise an Exception when there are extra keys. The option is off by default, for backward compatibility.
For users of these functions, misspelled or incorrect keys in the parsed files/dicts could lead to hard-to-find errors, like putting `batch_size` in your config and wondering why the Trainer is using a different one, because the actual name is `per_device_train_batch_size`. The option will output a very similar error message to the normal argparse.
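A rough sketch of how this could be used (the exact keyword name, `allow_extra_keys`, is an assumption here, not taken from this PR):
```python
from transformers import HfArgumentParser, TrainingArguments

parser = HfArgumentParser(TrainingArguments)
config = {"output_dir": "out", "batch_size": 32}  # "batch_size" is not a TrainingArguments field

# With the strict behaviour enabled, this should raise instead of silently ignoring "batch_size"
(training_args,) = parser.parse_dict(config, allow_extra_keys=False)
```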
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@sgugger
| 08-19-2022 09:38:17 | 08-19-2022 09:38:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,691 | closed | FileNotFoundError: [Errno 2] No such file or directory: '/home/chaizhihua/.cache/huggingface/hub/models--.--resources--ltp/refs/main' | ### System Info
When I run this script (https://github.com/huggingface/transformers/blob/main/examples/research_projects/mlm_wwm/run_chinese_ref.py), it throws this error:
```
Traceback (most recent call last):
File "temp.py", line 147, in <module>
main(args)
File "temp.py", line 122, in main
ltp_tokenizer = LTP(args.ltp) # faster in GPU device
File "/home/chaizhihua/anaconda3/envs/graph/lib/python3.8/site-packages/ltp/interface.py", line 102, in LTP
config_file = hf_hub_download(
File "/home/chaizhihua/anaconda3/envs/graph/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1078, in hf_hub_download
with open(ref_path) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/chaizhihua/.cache/huggingface/hub/models--.--resources--ltp/refs/main'
```
my transformers ==4.19.0
### Who can help?
@[julien-c](https://github.com/huggingface/transformers/commits?author=julien-c)
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
### Expected behavior
modify this error | 08-19-2022 08:32:21 | 08-19-2022 08:32:21 | Hi limuyun99, would you mind providing your script for running `run_chinese_ref.py`? According to your traceback, I think you may provide the wrong params for the `--ltp`. Your cmd for running `run_chinese_ref.py` should look like the below:
```
export TRAIN_FILE=/path/to/train/file
export LTP_RESOURCE=LTP/small
export BERT_RESOURCE=bert-base-uncased
export SAVE_PATH=/path/to/data/ref.txt
python run_chinese_ref.py \
--file_name=$TRAIN_FILE \
--ltp=$LTP_RESOURCE \
--bert=$BERT_RESOURCE \
--save_path=$SAVE_PATH
```<|||||>have you seen other paths like those @LysandreJik @Wauplin?
`/home/chaizhihua/.cache/huggingface/hub/models--.--resources--ltp/refs/main` looks incorrect, like it's encoding a local path instead of a repo id or something. (Not sure where the issue is though, if any)<|||||>A few things here:
1. [`--ltp` arg](https://github.com/huggingface/transformers/blob/main/examples/research_projects/mlm_wwm/run_chinese_ref.py#L141) is set by default to `"./resources/ltp"`
2. This value is then passed when initializing LTP, as `pretrained_model_name_or_path` (see [LTP source code](https://github.com/HIT-SCIR/ltp/blob/5c07bf909710c75c246da8abe3f5c44b89c0eea0/python/interface/ltp/interface.py#L14)). Script either expect a model name (e.g. `LTP/tiny` - see [list](https://huggingface.co/LTP)) or a path to an existing folder.
a. Here it seems you are using the default value `"./resources/ltp"` which refers to an empty/missing folder. Source code from `LTP` falls back to considering it as a model_id and download it from HF Hub.
b. To download it from the hub, it uses [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) which doesn't throw an error because the model id is wrong but because it cannot access the cache at the given path.
From that:
1. As mentioned by @aRyBernAlTEglOTRO in [its comment](https://github.com/huggingface/transformers/issues/18691#issuecomment-1221293043), you need to provide a valid `--ltp` value.
2. In `huggingface_hub` it would still be good to throw an error that `"./resources/ltp" is not a valid repo_id` instead of the current `FileNotFoundError: [Errno 2] No such file or directory: '/home/chaizhihua/.cache/huggingface/hub/models--.--resources--ltp/refs/main'`. I'll create an issue to track that.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
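For reference, the fix suggested above boils down to passing a valid model id (or an existing local folder) when constructing LTP — a minimal sketch:
```python
from ltp import LTP

# A model id hosted on the Hub (e.g. "LTP/small") instead of a missing local path
ltp_tokenizer = LTP("LTP/small")
```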
transformers | 18,690 | open | Implement a new model: Point-BERT 🌟 | ### Model description
Hi, I'm Adonai Vera; I'm looking to contribute a new point cloud model. Point clouds have become very relevant in applications such as autonomous cars, inspections, architecture, reconstruction, and more, and segmentation algorithms allow us to extract valuable information from these types of files. That is why it is very important for me to contribute the architecture of a new Point-BERT model to the HF community. Point-BERT pre-trains 3D point cloud Transformers with Masked Point Modeling. According to the state of the art, it is one of the best models, surpassing PointNet, PointNet++, and Transformer-OcCo, among others.
**Short description of the model and link to the paper:**
They devise a Masked Point Modeling (MPM) task to pre-train point cloud Transformers. Specifically, they first divide a point cloud into several local point patches. A point cloud Tokenizer with a discrete Variational AutoEncoder (dVAE) is designed to generate discrete point tokens containing meaningful local information. Then, they randomly mask out some patches of input point clouds and feed them into the backbone Transformers. (Abstract)
[Link to the paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Yu_Point-BERT_Pre-Training_3D_Point_Cloud_Transformers_With_Masked_Point_Modeling_CVPR_2022_paper.pdf)
**Link to the implementation if it is open-source:**
[Pre-trained model in github](https://github.com/lulutang0608/Point-BERT)
[Talk about the new architecture in CVPR 2022](https://www.youtube.com/watch?v=KMOCw68Veoo)
**Link to the model weights if they are available:**
[Cfgs](https://github.com/lulutang0608/Point-BERT/tree/master/cfgs)
[Architecture](https://github.com/lulutang0608/Point-BERT/blob/master/models/Point_BERT.py)
I’m happy to contribute to this process, but it's important to know If this model can be valuable to the community.
P.S.: I'm trying to connect with the model's authors to get more insights into the implementation. 🤗
[EDIT] I talked with Xumin, one of the model's authors, and they support all the deployment in HF. 🚀🚀🚀
The authors are: :woman_teacher: :man_teacher:
@lulutang0608
@yuxumin
@raoyongming
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
[Architecture](https://github.com/lulutang0608/Point-BERT/blob/master/models/Point_BERT.py)
[Cfgs](https://github.com/lulutang0608/Point-BERT/tree/master/cfgs)
[Talk about the new architecture in CVPR 2022](https://www.youtube.com/watch?v=KMOCw68Veoo)
[Pre-trained model in github](https://github.com/lulutang0608/Point-BERT)
[Link to the paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Yu_Point-BERT_Pre-Training_3D_Point_Cloud_Transformers_With_Masked_Point_Modeling_CVPR_2022_paper.pdf)
| 08-19-2022 03:44:38 | 08-19-2022 03:44:38 | Hey @AdonaiVera, exciting! Pinging @NielsRogge, our expert in model contribution :smiley: exciting to see point cloud models!<|||||>Sure, let me know if you need any help :)<|||||>@AdonaiVera @NielsRogge Are there any updates on this model? |
transformers | 18,689 | closed | Is a bug in TFT5ForConditionalGeneration._shift_right func? | ### System Info
transformers==4.17.0
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```py
from transformers import T5Tokenizer, TFT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("pretrained_model/t5-base-Chinese")
model = TFT5ForConditionalGeneration.from_pretrained("pretrained_model/t5-base-Chinese", from_pt=True)
decoder_label = tokenizer.batch_encode_plus(['你好我的朋友', '不好呀'], return_tensors='tf', padding='max_length', max_length=10)
decoder_input_ids = model._shift_right(decoder_label['input_ids'])
```
### Expected behavior
```
decoder_label is [[ 259 1575 1409 1265 22557 1 0 0 0 0], [ 259 17693 21839 1 0 0 0 0 0 0]]
decoder_input_ids is [[ 0 259 1575 1409 1265 22557 1 0 0 0], [ 0 259 17693 21839 1 0 0 0 0 0]]
T5's decoder_start_id is 0, end_id is 1, Correct decoder_input_ids should be [[ 0 259 1575 1409 1265 22557 0 0 0 0], [ 0 259 17693 21839 0 0 0 0 0 0]],
``` | 08-19-2022 02:24:12 | 08-19-2022 02:24:12 | Hey @TheHonestBob,
Sorry I don't have access to `"pretrained_model/t5-base-Chinese"` so I'm not sure I can help you here<|||||>> Hey @TheHonestBob,
>
> Sorry I don't have access to `"pretrained_model/t5-base-Chinese"` so I'm not sure I can help you here
this issuse is not t5 model's problem,just ._shift_right func problem, There seems to be a logic problem with this function.<|||||>@TheHonestBob If I'm getting it right, your issue is that `decoder_input_ids`, after calling `_shift_right`, should not contain the `end_id`. Is this correct?<|||||>>
right,when decoder_input_ids doesn't be padded, _shift_right is ok, or contain the end_id is correct?<|||||>@TheHonestBob, `shift_right` is used to automatically create the `decoder_input_ids` from `labels` for training. It depends on you what you put in the labels but if your labels are correct then you can assume the `shift_right` is also correct. Note that in general T5 should be trained with EOS for fine-tuning. For pre-training it's not necessarily but won't hurt<|||||>> @TheHonestBob, `shift_right` is used to automatically create the `decoder_input_ids` from `labels` for training. It depends on you what you put in the labels but if your labels are correct then you can assume the `shift_right` is also correct. Note that in general T5 should be trained with EOS for fine-tuning. For pre-training it's not necessarily but won't hurt
thanks a lot, I get it, shift_right maybe not right, but it's return is not hurt for training.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
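For reference, a simplified sketch of what `_shift_right` does (not the library's exact code): the labels are shifted one position to the right and the `decoder_start_token_id` is prepended, so any EOS token present in the labels is kept as-is, which is the behaviour discussed above.
```python
import tensorflow as tf

def shift_right(labels, decoder_start_token_id=0, pad_token_id=0):
    batch_size = tf.shape(labels)[0]
    start_tokens = tf.fill([batch_size, 1], decoder_start_token_id)
    shifted = tf.concat([start_tokens, labels[:, :-1]], axis=-1)
    # ignored label positions (-100) are replaced by the pad token
    return tf.where(shifted == -100, tf.fill(tf.shape(shifted), pad_token_id), shifted)
```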
transformers | 18,688 | closed | Why does setting `--fp16 True` not save memory | https://github.com/huggingface/transformers/blob/e54a1b49aa6268c484625c6374f952f318914743/examples/pytorch/language-modeling/run_mlm.py#L247-L250
https://github.com/huggingface/transformers/blob/e54a1b49aa6268c484625c6374f952f318914743/src/transformers/training_args.py#L663-L666
I want to pre-train Roberta on my dataset. However, the Batch size can be set to 32 at most. Otherwise, OOM is reported. I plan to use `Mixed-precision` to save memory. So I set `--fp16 True`. However, the Batch size can only be set to 32 at most. Otherwise, OOM will be reported. It seems that setting up `FP16` is not doing much to save memory. | 08-19-2022 02:03:11 | 08-19-2022 02:03:11 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>This question is now on the forum here: https://discuss.huggingface.co/t/why-does-setting-fp16-true-not-save-memory-as-expected/22400 |
transformers | 18,687 | closed | improve `add_tokens` docstring | # What does this PR do?
This PR clarifies, by expanding the docstring of the `add_tokens` method, that added tokens and the tokens of the underlying tokenization algorithm's vocabulary are not treated the same way.
Fixes #18662
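A short usage example (checkpoint name is illustrative): added tokens are matched before the underlying tokenization algorithm runs, so they are never split, unlike regular vocabulary tokens.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
num_added = tokenizer.add_tokens(["new_tok1", "my_new-tok2"])
print(num_added)                                # number of tokens actually added
print(tokenizer.tokenize("new_tok1 is here"))   # "new_tok1" stays a single token

# When using these tokens with a model, remember to resize its embeddings:
# model.resize_token_embeddings(len(tokenizer))
```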
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-18-2022 18:49:24 | 08-18-2022 18:49:24 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18687). All of your documentation changes will be reflected on that endpoint. |
transformers | 18,686 | closed | add task_type_id to BERT to support ERNIE-2.0 and ERNIE-3.0 models | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
[ERNIE 2.0](https://arxiv.org/abs/1907.12412) and [ERNIE 3.0](https://arxiv.org/abs/2107.02137) are a series of powerful models based on BERT, which perform especially well on Chinese tasks. These models introduce `task_type_embeddings` in the embedding layer, so this PR adds support for this feature.
The config of ERNIE 2.0 / ERNIE 3.0 models has the following two params:
```json
...
"task_type_vocab_size": 3,
"use_task_id": true
...
```
The released ERNIE 2.0 / ERNIE 3.0 checkpoints have the weight `bert.embeddings.task_type_embeddings.weight`.
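For illustration, a minimal sketch of how such a task-type embedding is summed with the usual BERT embeddings (simplified, not the exact code of this PR; LayerNorm/dropout are omitted):
```python
import torch
import torch.nn as nn

class ErnieEmbeddingsSketch(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size)
        self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
        self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)
        self.use_task_id = config.use_task_id
        if config.use_task_id:
            self.task_type_embeddings = nn.Embedding(config.task_type_vocab_size, config.hidden_size)

    def forward(self, input_ids, token_type_ids, position_ids, task_type_ids=None):
        embeddings = (
            self.word_embeddings(input_ids)
            + self.position_embeddings(position_ids)
            + self.token_type_embeddings(token_type_ids)
        )
        if self.use_task_id:
            if task_type_ids is None:
                task_type_ids = torch.zeros_like(input_ids)  # default task id 0
            embeddings = embeddings + self.task_type_embeddings(task_type_ids)
        return embeddings
```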
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/huggingface/transformers/issues?q=is%3Aissue+ernie
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
I wrote a [script](https://github.com/nghuyong/ERNIE-Pytorch/blob/master/convert_ernie3.0.py) to convert the officially released [ERNIE3.0 model (paddlepaddle version)](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/model_zoo/ernie-3.0), and I have checked that the model results before and after the conversion are consistent (with `task_type_embedding` added):
```Python
import paddle
import torch
from paddlenlp.transformers import ErnieForMaskedLM, ErnieTokenizer
# this PR version
from transformers import BertTokenizer, BertForMaskedLM
tokenizer = BertTokenizer.from_pretrained('nghuyong/ernie-3.0-base-zh')
model = BertForMaskedLM.from_pretrained('nghuyong/ernie-3.0-base-zh')
input_ids = torch.tensor([tokenizer.encode(text="[MASK][MASK][MASK]是中国神魔小说的经典之作,与《三国演义》《水浒传》《红楼梦》并称为中国古典四大名著。",
add_special_tokens=True)])
model.eval()
with torch.no_grad():
predictions = model(input_ids)[0][0]
predicted_index = [torch.argmax(predictions[i]).item() for i in range(predictions.shape[0])]
predicted_token = [tokenizer._convert_id_to_token(predicted_index[i]) for i in
range(1, (predictions.shape[0] - 1))]
print('huggingface result')
print('predict result:\t', predicted_token)
print('[CLS] logit:\t', predictions[0].numpy())
tokenizer = ErnieTokenizer.from_pretrained("ernie-3.0-base-zh")
model = ErnieForMaskedLM.from_pretrained("ernie-3.0-base-zh")
inputs = tokenizer("[MASK][MASK][MASK]是中国神魔小说的经典之作,与《三国演义》《水浒传》《红楼梦》并称为中国古典四大名著。")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
model.eval()
with paddle.no_grad():
predictions = model(**inputs)[0]
predicted_index = [paddle.argmax(predictions[i]).item() for i in range(predictions.shape[0])]
predicted_token = [tokenizer._convert_id_to_token(predicted_index[i]) for i in
range(1, (predictions.shape[0] - 1))]
print('paddle result')
print('predict result:\t', predicted_token)
print('[CLS] logit:\t', predictions[0].numpy())
"""
huggingface result
predict result: ['西', '游', '记', '是', '中', '国', '神', '魔', '小', '说', '的', '经', '典', '之', '作', ',', '与', '《', '三', '国', '演', '义', '》', '《', '水', '浒', '传', '》', '《', '红', '楼', '梦', '》', '并', '称', '为', '中', '国', '古', '典', '四', '大', '名', '著', '。']
[CLS] logit: [-20.574057 -29.192085 -15.638802 ... -1.9127564 -1.4329851 -1.8172828]
paddle result
predict result: ['西', '游', '记', '是', '中', '国', '神', '魔', '小', '说', '的', '经', '典', '之', '作', ',', '与', '《', '三', '国', '演', '义', '》', '《', '水', '浒', '传', '》', '《', '红', '楼', '梦', '》', '并', '称', '为', '中', '国', '古', '典', '四', '大', '名', '著', '。']
[CLS] logit: [-20.573637 -29.193172 -15.639115 ... -1.9127647 -1.4330447 -1.816982 ]
"""
```
## Who can review?
@LysandreJik
| 08-18-2022 17:43:19 | 08-18-2022 17:43:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @nghuyong, thanks for your PR!
In this situation, we'd rather have a new model class "Ernie" rather than modifying the "Bert" model class. This will result in a larger PR, but it should be very little additional work for you.
I encourage you to follow the following guide: https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command
It seems like it would be as simple as using the script to add a new model like BERT, but with the ERNIE name; then applying the changes above.
@ydshieh, would you be down to help @nghuyong if they run into any problem?
Thanks a lot!<|||||>Thanks for your advice. I will try to do this work. <|||||>Agree with @LysandreJik to have a new model file for this. And it should be fairly straightforward.
Glad to know you already have the checkpoints available! Let me know if you need any help with the PR, @nghuyong.
Looking forward to review it! Thanks, @nghuyong !
<|||||>@ydshieh @LysandreJik I have updated this PR and add Ernie model.
Please help to review it, if you have any questions, you can AT me, thanks!<|||||>Wonderful, thank you @nghuyong!<|||||>Hi @nghuyong, Great job! I will review the PR. Currently, the failing tests are caused by
```bash
E AttributeError: module transformers.models.ernie has no attribute BasicTokenizer
```
which is from your change in
```
src/transformers/__init__.py
```
I will discuss my colleagues to see how should we do for the tokenizers.<|||||>@ydshieh So, is there anything that needs to be updated now?<|||||>Hi @nghuyong Thanks a lot :-). I will take a full review this week.
The current failing tests (most of them) could be fixed by updating your branch. You can do it as
```bash
git checkout main
git pull upstream main
git checkout [YOUR-WORKING-BRANCH]
git rebase main
git push --force-with-lease
```
Before doing so, it would be a good idea to keep a backup of current branch in case of the commit history being messed up.
```bash
git checkout add_task_type_id
git checkout -b add_task_type_id_backup
```
Once the PR branch is updated, you can also try to fix the style/quality issues by running
```bash
make style
make quality
```
You can check the CI results in
```bash
ci/circleci: check_code_quality
ci/circleci: check_repository_consistency
```
for the details and suggestions.
Let me know if you encounter any difficulty.<|||||>Thanks, @ydshieh my branch has been synced with master now<|||||>@nghuyong
Hello, has your ERNIE branch already been merged into master? In other words, can I just install the latest transformers library and directly use ERNIE 3.0 with the Chinese ERNIE 3.0 weights you provided, without having to reinstall transformers the old way via:
pip install git+https://github.com/nghuyong/transformers@add_task_type_id<|||||>@HUSTHY, still not, and `add_task_type_id` has been changed, so, you cannot load ernie3 now.
<|||||>Received, thank you! — Huang Yang<|||||>@nghuyong I see, I thought it was already usable. So the code on the branch you implemented should be fine, right... It should be OK for me to just take the code and use it locally.<|||||>@nghuyong When you add argument `task_type_ids` to the other model classes, you will also need to remove some `# copied from`, for example this part
```
# Copied from transformers.models.bert.modeling_bert.BertModel with BERT->ERNIE,Bert->Ernie
class ErnieModel(ErniePreTrainedModel):
```
You can instead do the following (and same for other methods), although it's more tedious
```
# Copied from transformers.models.bert.modeling_bert.BertModel.__init__ with BERT->ERNIE,Bert->Ernie
def __init__(self, config, add_pooling_layer=True):
```<|||||>@ydshieh Thanks for your review!! I have updated following your advice.<|||||>Thank you, @nghuyong. Here are a few remaining step from my side
- First, address the above 2 review comments.
- Then
```bash
make style
```
it will fix
```
transformers\src\transformers\__init__.py
transformers\src\transformers\models\ernie\__init__.py
```
- Finally
```bash
pip install -U huggingface_hub
make fix-copies
```
to fix some README files.
Then we can have the core maintainers to have a final look 🚀 <|||||>hi, @ydshieh , I run `make style`, It has changed a lot files
<img width="709" alt="image" src="https://user-images.githubusercontent.com/16462374/188933636-991d9eea-3000-4629-9905-592753a181db.png">
and actually the following two files don't change after I run `make style`
```
transformers\src\transformers\__init__.py
transformers\src\transformers\models\ernie\__init__.py
```<|||||>For the style, could you try `pip install -U hf-doc-builder` before `make style`<|||||>If there are still issues, I can make the change and push to this PR directly, if you are OK with this<|||||>Hi @ydshieh , I'd like to describe the problems I encountered in the development environment
First, I create a new conda env:
```
conda create -n hf python=3.9
```
and run `$ pip install -e ".[dev]"` get this error
```
ERROR: Could not find a version that satisfies the requirement tensorflow<2.10,>=2.3; extra == "dev" (from transformers[dev]) (from versions: none)
ERROR: No matching distribution found for tensorflow<2.10,>=2.3; extra == "dev"
```
Then I install `tensorflow` by conda first
```
conda install tensorflow
brew install mecab
pip install -e ".[dev]"
```
but still got an error
```
ERROR: Could not find a version that satisfies the requirement tensorflow-text; extra == "dev" (from transformers[dev]) (from versions: none)
ERROR: No matching distribution found for tensorflow-text; extra == "dev"
```
I found that `tensorflow-text` does not support Apple M1 Pro yet, so I commented out `tensorflow-text` in `setup.py` and then ran
```
pip install -e ".[dev]"
```
finally, it installed
---------
Then, in this new env, I run:
```
pip install -U hf-doc-builder
make style
```
it works !<|||||>Glad it works for you. It's probably not working well TF + Mac (not very sure, I rarely use this combination).
I think you haven't run this yet
```
pip install -U huggingface_hub
make fix-copies
```
right? It's necessary to pass the `check_repository_consistency` CI. Let me know if I can help on this.<|||||>I run and push a new commit now<|||||>Thanks a lot. I will take care of the remaining issue in the CI tomorrow. Somehow strange here.<|||||>@ydshieh Because I use `ERNIE` not `Ernie` in Readme. so the ci get failed<|||||>The solution should be to change this line
https://github.com/huggingface/transformers/blob/ae3132d17a787ffdb1bc6097b06330fb249ae779/src/transformers/models/auto/configuration_auto.py#L319
to
`("ernie", "ERNIE"),`
`ERNIE` is the good name to use instead of `Ernie`.
Also, `"tensorflow-text": "tensorflow-text",` should be added back (I know your env. have issue with it, but it shouldn't be removed when push to the remote).
Hopefully everything will be good now with these :-) <|||||>@ydshieh OK, thanks!!
of course, I do not submit the change to `setup.py`.<|||||>You can ignore the test failure in `ci/circleci: run_tests_hub`.<|||||>@ydshieh OK, `run_tests_hub` could be ignored, may be the ci has no problems now.<|||||>Hi @nghuyong I push [a commit](https://github.com/huggingface/transformers/pull/18686/commits/1514ca636152b2a5d7062349089838d7cac7f37b) which adds several `# copied from ...` <|||||>@ydshieh OK, thanks a lot<|||||>@sgugger This is model is just BERT, but with a new argument `task_type_ids` which is used to create a new embedding to be summed. Time for you to have a final review 🙏
<|||||>Thanks again for your contribution! @ydshieh can't merge until you change your "Request changes" to an approval.<|||||>Thank you @nghuyong! I pushed a [final commit](https://github.com/huggingface/transformers/pull/18686/commits/685728232335792f1746f2a5772e60545a545810) to remove the remaining `ErnieLMHeadModel` in a log message.
@sgugger I approved :-) <|||||>failed test is irrelevant to this PR. |
transformers | 18,685 | closed | TorchDynamo related cleanup | Related to https://github.com/huggingface/transformers/issues/18127
cc @ydshieh @stas00 | 08-18-2022 17:22:20 | 08-18-2022 17:22:20 | Thank you for fixing, @anijain2305
@ydshieh, perhaps let's use this PR to add the CI bits?<|||||>> Thank you for fixing, @anijain2305
>
> @ydshieh, perhaps let's use this PR to add the CI bits?
OK. I will push my work (on CI) to this PR.
Before merge, I would have to push to HF's transformers repo to test everything is fine.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18685). All of your documentation changes will be reflected on that endpoint.<|||||>@ydshieh Please feel free to take over this PR. Do not hesitate to reach out to me if you see more Dynamo related errors.
I will open a tracker in Dynamo repo to avoid such failures in future.<|||||>@anijain2305 I rebase my branch on this PR
my branch: https://github.com/huggingface/transformers/commits/to_run_torchdynamo_tests
and it still has an error (for `test_torchdynamo_full_eval`)
```
def on_enter():
global most_recent_backend
if (
most_recent_backend is not None
and most_recent_backend is not compiler_fn
):
> raise ResetRequired()
E torchdynamo.exc.ResetRequired:
E Must call `torchdynamo.reset()` before changing backends. Detected two calls to
E `torchdynamo.optimize(...)` with a different backend compiler arguments.
```
See this [run page](https://github.com/huggingface/transformers/runs/7914698826?check_suite_focus=true)
Add `torchdynamo.reset()` to some places in `test_torchdynamo_full_eval` should work I guess. But I prefer for you to push the change (to your PR), thank you.<|||||>Oh, that test was getting skipped for me. I am gonna send a patch.<|||||>@anijain2305 Ping me once it's ready on your side :-), thank you.<|||||>@anijain2305, @Chillee - we are excited to integrate your amazing work in our software and make it available to users. But please commit to support it. Otherwise we are spinning wheels waiting and waiting and things continue to remain broken meanwhile.
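For reference, a rough sketch of the suggested fix (assuming the standalone `torchdynamo` package, where `optimize` is used as a context manager):
```python
import torchdynamo

def run_with_backend(fn, backend):
    torchdynamo.reset()  # required before switching to a different dynamo backend
    with torchdynamo.optimize(backend):
        return fn()
```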
If you don't feel this is desirable any longer, or if the situation has changed and you don't have the resources, we can discuss removing what has been added altogether and only re-add it once you officially release these new features as part of pytorch.
Thank you!
<|||||>@ydshieh @stas00 Sorry about the long wait. I spent some time to setup trt earlier but was stuck, so this got delayed.
We are committed, and we still believe this is the right approach. Let me prioritize it this week. If I am still stuck with TRT install, I will ping.<|||||>Thank you for the follow up, @anijain2305! Please keep us posted on the progress.
<|||||>@ydshieh Can you restart the CI on your branch when you get a chance with my latest changes?<|||||>> @ydshieh Can you restart the CI on your branch when you get a chance with my latest changes?
Sure. Will let you know<|||||>@anijain2305 The 2 failed tests now pass. I will incorporate your changes into my PR :-) Thanks a lot for the fix!<|||||>Thank you for the fix, @anijain2305! this is super helpful! |
transformers | 18,684 | closed | Fix train_step, test_step and tests for CLIP | CLIP models were not being tested correctly with `fit()` because the test skipped models without a `hf_compute_loss` method. This skip was added to skip base models like `TFBERTModel` that do not have specific output heads and losses. However, it also skips models like CLIP that do not use `compute_loss` / `hf_compute_loss` methods.
The new test checks whether the model's return type dataclass has a `loss` key, which is a more reliable check. Enabling this reveals the bug in `fit()` for TFClip, so this PR also includes fixes to `train_step` and `test_step` for CLIP and models like it that require `return_loss=True` to be passed, but do not set it by default.
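For reference, a minimal sketch of the kind of check involved (illustrative only, not the exact test code):

```python
import dataclasses

def returns_loss(output_dataclass_cls) -> bool:
    # True if the model's return type (e.g. a ModelOutput dataclass) declares a
    # `loss` field, which is a more reliable signal than the presence of
    # `hf_compute_loss` on the model class.
    return any(field.name == "loss" for field in dataclasses.fields(output_dataclass_cls))
```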
Draft for now because this will likely flush out other bugs or cause other problems!
Fixes #18670. | 08-18-2022 15:16:42 | 08-18-2022 15:16:42 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The tests passed. I'm as surprised as everyone else. It's ready for review!<|||||>Thanks @Rocketknight1 for quick fix. I am wondering if it makes sense to wrap the loss computation for TFCLIP into a `hf_compute_loss`?
I also see for `TFSegformerForSemanticSegmentation`, it defines `hf_compute_loss` under its own model class.
If doable, maybe it's good to add `TFSemanticSegmentationLoss`, `TFcontrastiveLoss` class etc.
Just want to hear some opinions. cc @gante @amyeroberts <|||||>@ydshieh I don't think that's necessary - the new check we use means we don't need to rely on `hf_compute_loss` being present anymore!<|||||>Just realized that using `return` instead of `continue` in the tests was skipping a lot of tests, which might have been why it was green so easily. Rerunning everything!<|||||>Further updates: Now that we're no longer incorrectly skipping tests, this turned up quite a few bugs! The main source of issues is that some more recent models are returning scalar losses, but both our `test_loss_computation` test and Keras `fit()` expect that the loss has some shape, even if that shape is `(1,)`. |
transformers | 18,683 | closed | Add `accelerate` support for ViLT | # Motivation
Add `bnb` support for the ViLT model, as it has been asked for by a user in https://github.com/TimDettmers/bitsandbytes/issues/14.
This involved adding `accelerate` support for this model.
# What does this PR do?
Adds the `_no_split_modules` attribute to the `ViltModel` class to support loading the model with `device_map=auto`. This also implied adding a `.to` operation inside `ViltLayer`.
I also redefined `accelerate` tests since for this model the hidden states are not deterministic. However, it is possible to check the correctness of the operation by checking some output attributes such as `logits` or `pooler_output`.
# Questions
The test `ViltModelIntegrationTest::test_inference_natural_language_visual_reasoning` seems to never pass on my machine (aka even without `_no_split_modules`); is it related to something I am missing? Also it seems that those tests were failing too on the nightly run: https://github.com/huggingface/transformers/runs/7882898294?check_suite_focus=true
cc @NielsRogge @ArthurZucker @ydshieh
| 08-18-2022 13:59:47 | 08-18-2022 13:59:47 | > I also redefined accelerate tests since for this model the hidden states are not deterministic.
Could you elaborate this part in more depth, please?<|||||>Sure,
The accelerate tests defined in `test_modeling_common.py` such as [`test_cpu_offload`](https://github.com/huggingface/transformers/blob/d243112b651f64b87d5d2509ff75042794484c20/tests/test_modeling_common.py#L2321) or [`test_model_parallism`](https://github.com/huggingface/transformers/blob/d243112b651f64b87d5d2509ff75042794484c20/tests/test_modeling_common.py#L2352) compares the first element of the output dictionary returned by the model (aka [`base_output[0]` and `new_output[0]`](https://github.com/huggingface/transformers/blob/d243112b651f64b87d5d2509ff75042794484c20/tests/test_modeling_common.py#L2379) ). This is not correct in the case of `ViLT` since in most of the cases, the first element of the output list is a [`hidden_state`](https://github.com/huggingface/transformers/blob/d243112b651f64b87d5d2509ff75042794484c20/src/transformers/models/vilt/modeling_vilt.py#L868) which appears to be stochastic according to [this line](https://github.com/huggingface/transformers/blob/d243112b651f64b87d5d2509ff75042794484c20/tests/models/vilt/test_modeling_vilt.py#L317). This can be also confirmed also by running several inferences and see that the hidden states are not the same. This is because `VilT samples image tokens from a multinomial distribution` according to the linked line above.
However, you can still check logits correctness using `logits` and `pooler_output` instead of the hidden states, therefore I added those in this PR. <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you, @younesbelkada
Let me bother you a bit more: why `logits` and `pooler_output` are deterministic while `hidden_states` is not. Strange to me, but I must miss some details in this model.
<|||||>No worries!
After deeply looking at the code it appears that the `pooler` and the other heads that are used by the model take as an input the [first element](https://github.com/huggingface/transformers/blob/358fc18613a737f3fbcebc5b2abed43386ff9cbc/src/transformers/models/vilt/modeling_vilt.py#L885) of the variable `hidden_states` (with the latest having the shape `batch_size`x `nb_tokens` x `hidden_dim`). With the hidden states containing an embedding for each image patch ([stochastic](https://github.com/huggingface/transformers/blob/358fc18613a737f3fbcebc5b2abed43386ff9cbc/src/transformers/models/vilt/modeling_vilt.py#L174)) [concatenated together with the text embeddings (non stochastic)](https://github.com/huggingface/transformers/blob/358fc18613a737f3fbcebc5b2abed43386ff9cbc/src/transformers/models/vilt/modeling_vilt.py#L236). Therefore since the heads uses the first text embedding (or CLS token) to perform its downstream task, there is no stochasticity involved there. The stochasticity is only involved on the image patch embedding side.
This can be confirmed as well from the model architecture (taken from [https://arxiv.org/pdf/2102.03334.pdf](arxiv)):

Although one can always take the first hidden states to perform the `accelerate` tests, I do think the proposed fix is slightly better since it checks the output of modules that would not be checked if we consider only the first hidden states (e.g. `Pooler`).
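As a rough illustration of the comparison described above (the variable names here are assumed, not the exact test code):

```python
import torch

# Fix the seed so ViLT's multinomial image-token sampling is repeatable, then compare
# the deterministic heads (logits / pooler_output) rather than the stochastic hidden states.
torch.manual_seed(0)
base_output = base_model(**inputs)
torch.manual_seed(0)
new_output = dispatched_model(**inputs)

assert torch.allclose(base_output.logits, new_output.logits, atol=1e-5)
```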
<|||||>Should still be deterministic from my intuition, let me have a look<|||||>After looking into it with @ArthurZucker fixing a manual seed on each accelerate test before each forward pass seems to fix the non passing tests.
We can either keep the changes as they are, or re-define the tests with a seed that is set before each forward pass<|||||>I am open to the fix of using seed. It makes the change smaller.
But I am confused by the fact that the daily scheduled CI never report this issue (I should check on slack channels too).
https://github.com/huggingface/transformers/runs/7891383348?check_suite_focus=true
The nightly CI you mentioned above uses nightly torch version (i.e. daily built torch) for which more test failures are expected.<|||||>I think it's better for us to figure out why the relevant tests don't fail on scheduled CI for a long period, but here it fails. This seems contradictory.<|||||>Could you mention on which hardware you tested, @younesbelkada ?<|||||>For the `accelerate` tests it is normal that they were passing since the `_no_split_modules` attribute was never defined in the model class, therefore these tests were never run.
Regarding the second test maybe it's a hardware issue yes! Let me print you the details:
```
Python version: 3.10.4 (main, Mar 31 2022, 08:41:55) [GCC 7.5.0]
transformers version: 4.22.0.dev0
Torch version: 1.12.1
Cuda available: True
Cuda version: 11.3
CuDNN version: 8302
Number of GPUs available: 2
NCCL version: (2, 10, 3)
DeepSpeed version: None
TensorFlow version: 2.9.1
TF GPUs available: True
Number of TF GPUs available: 2
```<|||||>The non passing test pass on my VM with `atol=4e-2` by the way<|||||>Thanks a lot for your comments @ydshieh & @NielsRogge
@NielsRogge : I can confirm this decorator works fine for ViTMAE too, I replaced the function you pointed me by:
```
@set_reproducible(2)
def test_save_load(self):
super().test_save_load()
```
And the test was passing so I guess we can safely replace stochastic tests with this kind of trick
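For reference, a rough sketch of what such a decorator could look like (simplified; the version discussed here re-seeds before every forward pass rather than once per test):

```python
import functools
import torch

def set_reproducible(seed):
    # Re-seed torch's RNG right before the wrapped test runs so that stochastic
    # models (e.g. ViLT's image-token sampling) produce repeatable outputs.
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            torch.manual_seed(seed)
            return test_fn(*args, **kwargs)
        return wrapper
    return decorator
```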
I would love to have a quick review from @sgugger or @stas00 if possible 🙏 As I think that this decorator can be useful for future models
Thanks 💪 <|||||>Just a small comment, in terms of performances I think the decorator can be a little bit improve to only run on the model's forward and not on every single forward pass (if I understand correctly here all the functions named `forward` are affected, instead of just the model's main forward pass) <|||||>I think that @ArthurZucker wanted to enhance the util to make it more memory efficient, will let him decide whenever he feels it ready to merge 💪 <|||||>I can confirm that it now fixes the tests for `vit_mae` as suggested by @NielsRogge. Just added a quick fix to only set the seed on the model's forward. LGTM <|||||>Thank you all for your comments!
Reverted the changes related to the context manager and kept the tests to be as simple as possible!
Can confirm all the slow tests pass now (except for [this test](https://github.com/huggingface/transformers/pull/18683#discussion_r949422087), but this is expected as stated in the comment)
I propose to address the stochasticity issue potentially in a follow-up PR, and keep this PR only for its main goal: add `ViLT` support for `accelerate` <|||||>@sgugger thanks for your comment! And sorry for my late reply
I should have addressed the proposed suggestion in the commit 43b087a and can confirm the slow tests for this model are passing! I hesitated between this solution and having an attribute on `ModelTesterMixin` but I thought that this solution is simpler to understand for future users (the `ModelTesterMixin` has already a lot of attributes)
I can also take care of opening a follow-up PR to update the tests of ViTMAE to make sure these changes are consistent for stochastic models tests inside `transformers`<|||||>Perfect thanks! @sgugger
Should have addressed the changes now! Let me know if you still need some modifications |
transformers | 18,682 | closed | Fix repo consistency | # What does this PR do?
Fixes a repo consistency error introduced by the merge of https://github.com/huggingface/transformers/pull/17976
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-18-2022 13:36:36 | 08-18-2022 13:36:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,681 | closed | Padding offsets mapping via `tokenizer.pad` | ### Feature request
While preparing the dataset for the Named Entity Recognition task, I noticed that `tokenizer.pad` does not apply padding for `offset_mapping`, which is necessary not only for the Named Entity Recognition task.
<img src="https://i.ibb.co/hXRzwL1/Screenshot-11.png" alt="Screenshot-11" border="0">
### Motivation
In order to get a "padded" `offset_mapping` I need to write some additional lines myself, which feels frustrating next to the otherwise great HuggingFace library API.
<img src="https://i.ibb.co/9tjCVvN/Screenshot-12.png" alt="Screenshot-12" border="0">
<img src="https://i.ibb.co/m8qJ5nF/Screenshot-13.png" alt="Screenshot-13" border="0">
### Your contribution
You just need to add some lines of code for padding `offset_mapping` with `(0, 0)` tuples, which stand for special token offsets.
<img src="https://i.ibb.co/WcTj6XT/Screenshot-14.png" alt="Screenshot-14" border="0"> | 08-18-2022 11:30:54 | 08-18-2022 11:30:54 | Maybe cc @SaulLu?<|||||>Yes. Good idea.<|||||>Hi @vad13irt,
I wrote a small use case to try to see if we don't already have the feature you are describing. With my example the output seems to match the behavior expected by feature described:
```python
from transformers import AutoTokenizer
text = "This is a text"
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
encoding = tokenizer([text], pad_to_multiple_of=16, padding=True, return_offsets_mapping=True)
print(encoding)
# {
# 'input_ids': [[1212, 318, 257, 2420, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256]],
# 'attention_mask': [[1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
# 'offset_mapping': [[(0, 4), (4, 7), (7, 9), (9, 14), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0)]]
# }
```
Am I missing something on the feature you wish you had? :blush:
To understand the use case you have in mind, it would be really great if you could share with us:
- metrics describing your environment (You can run the command `transformers-cli env` and copy-paste its output)
- a complete example whose behavior does not match your needs (If you can copy and paste this snippet -and not screenshot it-, for us it's much easier to iterate on it afterwards :hugs: )
<|||||>I apologize, by rereading your feature request I actually understand better your request which concerns - as you say - the `pad` common to slow and fast tokenizers and not the `__call__` method .
This feature makes sense, we could absolutely add the offsets. Would you be interested in working on it?<|||||>Thank you, @SaulLu for your feedback. The `tokenizer.__call__` method works fine, i.e pads everything, however, `tokenizer.pad` (which takes the `BatchEncoding` as input) doesn't pad `offset_mapping`. Please refer to https://github.com/huggingface/transformers/blob/e54a1b49aa6268c484625c6374f952f318914743/src/transformers/tokenization_utils_base.py#L3258.
This issue is clearly seen when using Dynamic Padding techniques during training or inference.
Let's see an example:
```py
import transformers
from transformers import AutoTokenizer
print(transformers.__version__)
# 4.21.1
# Config
MAX_LENGTH = 25
MODEL_PATH = "bert-base-uncased"
# load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
# define batch of texts
texts = ["HuggingFace Transformers is great library", "We need to fix this issue :)."]
# encode texts and send to batch
encoded_inputs = []
for text in texts:
encoded_input = tokenizer(
text=text,
add_special_tokens=True,
return_attention_mask=True,
return_offsets_mapping=True,
padding="do_not_pad",
truncation=True,
return_token_type_ids=False,
)
encoded_inputs.append(encoded_input)
print(encoded_inputs, end="\n"*3)
# [
# {
# 'input_ids': [101, 17662, 12172, 19081, 2003, 2307, 3075, 102],
# 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1],
# 'offset_mapping': [(0, 0), (0, 7), (7, 11), (12, 24), (25, 27), (28, 33), (34, 41), (0, 0)]
# },
# {
# 'input_ids': [101, 2057, 2342, 2000, 8081, 2023, 3277, 1024, 1007, 1012, 102],
# 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
# 'offset_mapping': [(0, 0), (0, 2), (3, 7), (8, 10), (11, 14), (15, 19), (20, 25), (26, 27), (27, 28), (28, 29), (0, 0)]
# }
# ]
# Dynamic Padding
padded_encoded_inputs = tokenizer.pad(
encoded_inputs=encoded_inputs,
padding="max_length",
max_length=MAX_LENGTH,
)
print(padded_encoded_inputs)
# {
# 'input_ids': [[101, 17662, 12172, 19081, 2003, 2307, 3075, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 2057, 2342, 2000, 8081, 2023, 3277, 1024, 1007, 1012, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
# 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
# 'offset_mapping': [[(0, 0), (0, 7), (7, 11), (12, 24), (25, 27), (28, 33), (34, 41), (0, 0)], [(0, 0), (0, 2), (3, 7), (8, 10), (11, 14), (15, 19), (20, 25), (26, 27), (27, 28), (28, 29), (0, 0)]]
# }
```
We observe that `input_ids` and `attention_mask` were padded with `0`, but `offset_mapping` was ignored and still the same.
> Would you be interested in working on it?
Yeah, I am interested in it. There are no big changes in the code, just check the availability of `offset_mapping` and then pad them with `(0, 0)`.<|||||>> Yeah, I am interested in it. There are no big changes in the code, just check the availability of offset_mapping and then pad them with (0, 0).
Thank you! Yes exactly and also adding some tests to check this new behavior :blush:<|||||>Ok, I will write it as soon as possible.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,680 | closed | Pin `detectron2` for CircleCI tests | # What does this PR do?
The commit `36a65a0907d90ed591479b2ebaa8b61cfa0b4ef0` in `detectron2` breaks things, see [here](https://github.com/facebookresearch/detectron2/commit/36a65a0907d90ed591479b2ebaa8b61cfa0b4ef0#comments).
Pin to the parent commit until that issue is fixed.
Issue found by @amyeroberts | 08-18-2022 10:25:18 | 08-18-2022 10:25:18 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Merge now to avoid CI failure on PRs. |
transformers | 18,679 | closed | Add an examples folder for code downstream tasks | # What does this PR do?
This PR adds a folder in CodeParrot directory to store examples for downstream tasks on code models. | 08-18-2022 09:13:05 | 08-18-2022 09:13:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,678 | closed | [LayoutLMv3] Add TensorFlow implementation | Adds a TensorFlow implementation of LayoutLMv3.
We have TensorFlow weights available that we have tested are able to produce the same results as the PyTorch models at microsoft/layoutlmv3-base and microsoft/layoutlmv3-large. Can you help us upload the TF weights to those locations?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
@LysandreJik @NielsRogge | 08-18-2022 09:06:49 | 08-18-2022 09:06:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>A fix was added for the `run_tests_layoutlmv2_and_v3` tests: #18680. Merging in `main` should resolve. <|||||>Thank you for your comments :) I updated everything accordingly.
> * The reason for skipping `test_loss_computation` points to some funny behaviour somewhere. We should figure out what's happening and either update that comment or write a custom `test_loss_computation` for the model. This is the main thing I think is necessary to address before merging.
I know why test_loss_computation failed and have fixed it in an overridden method. I am not sure why it works on other models that accept both pixel_values and input_ids though.
> * Lots of great type hinting throughout. It would be great to add return types too for the full 💯 effect
No problem. Have added them 👍
> * We can help you with adding weights to the hub when the time comes. At the moment, the tests to compare the logits of the pytorch and TF models are failing, and so the difference is too large for the weights to be added yet. Tests for ref:
>
> * `tests/models/layoutlmv3/test_modeling_layoutlmv3.py::LayoutLMv3ModelTest::test_pt_tf_model_equivalence`
> * `tests/models/layoutlmv3/test_modeling_tf_layoutlmv3.py::TFLayoutLMv3ModelTest::test_pt_tf_model_equivalence`
>
> To run these locally, you can do: `RUN_PT_TF_CROSS_TESTS=True pytest tests/models/layoutlmv3/test_modeling_layoutlmv3.py::LayoutLMv3ModelTest::test_pt_tf_model_equivalence`
Ah thank you, didn't know about those tests. Both of those tests should pass now. The problem was that we had implemented the relative position embeddings through an Embedding layer, but the PyTorch implementation uses a one-hot encoding followed by a dense layer. It "confused" the weight conversion function. The TensorFlow implementation does the same as the PyTorch implementation now.
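For reference, a tiny numerical sketch of the equivalence mentioned here (made-up shapes, not the model code):

```python
import tensorflow as tf

num_buckets, dim = 32, 8
weights = tf.random.normal((num_buckets, dim))
rel_pos_idx = tf.constant([3, 17, 5])

via_embedding = tf.nn.embedding_lookup(weights, rel_pos_idx)
via_one_hot = tf.matmul(tf.one_hot(rel_pos_idx, num_buckets), weights)

# Identical results, but the two layouts store the weight under different layer
# names, which is what confused the PT->TF weight conversion.
print(float(tf.reduce_max(tf.abs(via_embedding - via_one_hot))))  # ~0.0
```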
<|||||>@ChrisFugl Thank you for adding the TF version of LayoutLMv3 🧡 I'll be the third (and last) reviewer, so hopefully this will be quick 🔥
While I do the actual review, there are two things we will need before we merge:
1. As you mentioned, we will need TF weights in the LayoutLMv3 repos. We have just finished the pieces to make the process open to all (as opposed to exclusive to HF members), so I'd like to ask you to be our first user 🎉 Assuming you are logged in to the Hub in your terminal (if not, run `huggingface-cli login`) and are on this branch, run `transformers-cli pt-to-tf --model-name foo/bar`, where `foo/bar` is the target model repo. Assuming it passes the CLI's checks, it will open a PR on the model hub. Please tag `@nielsr, @sgugger, @joaogante` in the opened Hub PRs :)
2. After the TF model weights are merged, re-run the test suite for the model locally and confirm that all tests pass (command: `NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 py.test -vv tests/models/layoutlmv3/modeling_tf_layoutlmv3.py`)<|||||>> @ChrisFugl Thank you for adding the TF version of LayoutLMv3 🧡 I'll be the third (and last) reviewer, so hopefully this will be quick 🔥
>
> While I do the actual review, there are two things we will need before we merge:
>
> 1. As you mentioned, we will need TF weights in the LayoutLMv3 repos. We have just finished the pieces to make the process open to all (as opposed to exclusive to HF members), so I'd like to ask you to be our first user 🎉 Assuming you are logged in to the Hub in your terminal (if not, run `huggingface-cli login`) and are on this branch, run `transformers-cli pt-to-tf --model-name foo/bar`, where `foo/bar` is the target model repo. Assuming it passes the CLI's checks, it will open a PR on the model hub. Please tag `@nielsr, @sgugger, @joaogante` in the opened Hub PRs :)
> 2. After the TF model weights are merged, re-run the test suite for the model locally and confirm that all tests pass (command: `NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 py.test -vv tests/models/layoutlmv3/modeling_tf_layoutlmv3.py`)
Thanks! Sure, I am happy to try it out :)
I get an error though when running `transformers-cli pt-to-tf --model-name microsoft/layoutlmv3-base`:
```
/tmp/microsoft/layoutlmv3-base is already a clone of https://huggingface.co/microsoft/layoutlmv3-base. Make sure you pull the latest changes with `repo.git_pull()`.
No detected architecture, using AutoModel/TFAutoModel
Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFLayoutLMv3Model: ['layoutlmv3.embeddings.position_ids']
- This IS expected if you are initializing TFLayoutLMv3Model from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFLayoutLMv3Model from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).
All the weights of TFLayoutLMv3Model were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFLayoutLMv3Model for predictions without further training.
Reusing dataset cifar10 (/Users/x/y/z/.cache/huggingface/datasets/cifar10/plain_text/1.0.0/447d6ec4733dddd1ce3bb577c7166b986eaa4c538dcd9e805ba61f35674a9de4)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
/Users/x/y/z/transformers/.env/lib/python3.8/site-packages/transformers/modeling_utils.py:721: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFLayoutLMv3Model: ['layoutlmv3.embeddings.position_ids']
- This IS expected if you are initializing TFLayoutLMv3Model from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFLayoutLMv3Model from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).
All the weights of TFLayoutLMv3Model were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFLayoutLMv3Model for predictions without further training.
Traceback (most recent call last):
File "/Users/x/y/z/transformers/.env/bin/transformers-cli", line 8, in <module>
sys.exit(main())
File "/Users/x/y/z/transformers/.env/lib/python3.8/site-packages/transformers/commands/transformers_cli.py", line 55, in main
service.run()
File "/Users/x/y/z/transformers/.env/lib/python3.8/site-packages/transformers/commands/pt_to_tf.py", line 289, in run
max_crossload_output_diff = max(output_differences.values())
ValueError: max() arg is an empty sequence
```
I debugged line 289 of pt_to_tf.py and found that `output_differences` is empty because it excludes model outputs with "hidden" in the output name, and both the PyTorch and TensorFlow model only outputs these keys:
* last_hidden_state
* hidden_states
That seems like correct behaviour on the model side - is the check buggy?
<|||||>@ChrisFugl probably the `pt-to-tf` CLI is missing the correct behavior for this type of models -- will take a look :)<|||||>> Thank you for this high-quality implementation -- I appreciated the little details like type hints and better variable names (wrt PT implementation) ❤️
>
> I've added a few minor comments. Some model classes are also missing a few `_keys_to_ignore` members (they don't impact correctness, but prevent some useless warnings from popping up)
That is great! And thank you :)
I was not entirely sure of which keys would be relevant to ignore, but I think it is only "position_ids"?
Let me know when the CLI is ready - then I will try and push the model weights again.<|||||>@ChrisFugl copying the keys to ignore from PT should be good enough, we can tweak after merging if we are still missing some.
As for the CLI, I'll be doing a fix now -- I will tag you in the PR once it is open 👍 <|||||>@ChrisFugl if you rebase this PR with `main` you should be able to run the CLI now.
Note: I've tried converting `microsoft/layoutlmv3-base` (w/o opening a PR) and it failed a check in one of its hidden layers. The maximum error is slightly above the constant we defined. This constant is a "guesstimate", it's okay to override it with `--max-hidden-error` if the problem consists of a minor difference in one layer.<|||||>@gante I just tried again after rebasing and increasing the max error, but I get a new error now. I think it is because it recognises the architecture from the config now, which it didn't before (see detected architecture). I don't know why that part changed. What do you think?
```
/tmp/microsoft/layoutlmv3-base is already a clone of https://huggingface.co/microsoft/layoutlmv3-base. Make sure you pull the latest changes with `repo.git_pull()`.
Detected architecture: LayoutLMv3Model
All PyTorch model weights were used when initializing TFLayoutLMv3Model.
All the weights of TFLayoutLMv3Model were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFLayoutLMv3Model for predictions without further training.
Reusing dataset cifar10 (/Users/x/y/z/.cache/huggingface/datasets/cifar10/plain_text/1.0.0/447d6ec4733dddd1ce3bb577c7166b986eaa4c538dcd9e805ba61f35674a9de4)
/Users/x/y/z/transformers/src/transformers/modeling_utils.py:721: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
All PyTorch model weights were used when initializing TFLayoutLMv3Model.
All the weights of TFLayoutLMv3Model were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFLayoutLMv3Model for predictions without further training.
Traceback (most recent call last):
File "/Users/x/y/z/transformers/.env/bin/transformers-cli", line 8, in <module>
sys.exit(main())
File "/Users/x/y/z/transformers/src/transformers/commands/transformers_cli.py", line 55, in main
service.run()
File "/Users/x/y/z/transformers/src/transformers/commands/pt_to_tf.py", line 290, in run
raise ValueError(
ValueError: Something went wrong -- the config file has architectures (['LayoutLMv3Model']), but no model head output was found. All outputs start with 'hidden'
```<|||||>@ChrisFugl Interesting -- that model has no `architectures` field in [its config](https://huggingface.co/microsoft/layoutlmv3-base/blob/main/config.json). From your error message, it means that it is being initialized somewhere, implying that the logic for the conversion script is wrong in these cases or that is something wrong with the model's configuration class.
Going to dig deeper!
[Apologies for all these hiccups, we are still tuning the conversion script to work in all cases]<|||||>@ChrisFugl I've rerun the CLI from this branch, and it works fine on my end -- i.e. `architectures` is set to `None`, as expected, and the error above doesn't get triggered 🤔
Is there any chance the files in `/tmp/microsoft/layoutlmv3-base` were touched? Can you try deleting the folder and running the CLI again? <|||||>@gante No worries about the hiccups :)
I don't know why it worked since I haven't touched the files in /tmp/microsoft/layoutlmv3-base myself, but it worked to delete the folder.
I now have permission issues though. I am logged in with `huggingface-cli login` and have saved the token in git credentials store.
```
Cloning https://huggingface.co/microsoft/layoutlmv3-base into local empty directory.
Download file pytorch_model.bin: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 478M/478M [10:09<00:00, 823kB/s]
Clean file pytorch_model.bin: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 478M/478M [05:04<00:00, 1.65MB/s]
No detected architecture, using AutoModel/TFAutoModel█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 478M/478M [05:04<00:00, 1.93MB/s]
All PyTorch model weights were used when initializing TFLayoutLMv3Model.
All the weights of TFLayoutLMv3Model were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFLayoutLMv3Model for predictions without further training.
Reusing dataset cifar10 (/Users/x/y/.cache/huggingface/datasets/cifar10/plain_text/1.0.0/447d6ec4733dddd1ce3bb577c7166b986eaa4c538dcd9e805ba61f35674a9de4)
/Users/x/y/z/transformers/src/transformers/modeling_utils.py:721: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
All PyTorch model weights were used when initializing TFLayoutLMv3Model.
All the weights of TFLayoutLMv3Model were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFLayoutLMv3Model for predictions without further training.
All model checkpoint layers were used when initializing TFLayoutLMv3Model.
All the layers of TFLayoutLMv3Model were initialized from the model checkpoint at /tmp/microsoft/layoutlmv3-base.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFLayoutLMv3Model for predictions without further training.
Uploading the weights into a new PR...
Traceback (most recent call last):
File "/Users/x/y/z/transformers/.env/bin/transformers-cli", line 8, in <module>
sys.exit(main())
File "/Users/x/y/z/transformers/src/transformers/commands/transformers_cli.py", line 55, in main
service.run()
File "/Users/x/y/z/transformers/src/transformers/commands/pt_to_tf.py", line 366, in run
hub_pr_url = create_commit(
File "/Users/x/y/z/transformers/.env/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1817, in create_commit
upload_lfs_files(
File "/Users/x/y/z/transformers/.env/lib/python3.8/site-packages/huggingface_hub/_commit_api.py", line 211, in upload_lfs_files
batch_actions, batch_errors = post_lfs_batch_info(
File "/Users/x/y/z/transformers/.env/lib/python3.8/site-packages/huggingface_hub/lfs.py", line 217, in post_lfs_batch_info
resp.raise_for_status()
File "/Users/x/y/z/transformers/.env/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/microsoft/layoutlmv3-base.git/info/lfs/objects/batch
```<|||||>@ChrisFugl permission to programmatically open a Hub PR as a non-admin was just released -- run `pip install huggingface_hub -U` and then you should be able to open the PRs :D <|||||>@gante Ah thank you - got it working now :D
https://huggingface.co/microsoft/layoutlmv3-base/discussions/3
https://huggingface.co/microsoft/layoutlmv3-large/discussions/1<|||||>Awesome 🚀
Next steps:
1. Wait for the `base` model weights to be merged, as tests depend on it;
2. [can be done now] Rebase the PR with `main`, as the diff contains changes that are already in `main` (pull main -> rebase branch with main -> force push);
3. Run the slow tests locally and confirm that they pass, after 1. and 2. (`NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 py.test -vv tests/models/layoutlmv3/test_modeling_tf_layoutlmv3.py`);
4. After the three points above are complete, I will merge the PR 🙌 <|||||>@ChrisFugl BTW, this will be the first TF PR where an external contributor was able to do the whole process on their own 🎉 (previously only HF admins could add weights)<|||||>@ChrisFugl if it is not asking too much, can you open a PR for [this](https://huggingface.co/microsoft/layoutlmv3-base-chinese) layoutlmv3 as well? :D<|||||>> @ChrisFugl BTW, this will be the first TF PR where an external contributor was able to do the whole process on their own 🎉 (previously only HF admins could add weights)
@gante Well I feel honoured :)
> @ChrisFugl if it is not asking too much, can you open a PR for [this](https://huggingface.co/microsoft/layoutlmv3-base-chinese) layoutlmv3 as well? :D
Not at all. There seems to be some problem though. I have tried to remove /tmp/microsoft and run it again, but that didn't work.
```
/tmp/microsoft/layoutlmv3-base-chinese is already a clone of https://huggingface.co/microsoft/layoutlmv3-base-chinese. Make sure you pull the latest changes with `repo.git_pull()`.
No detected architecture, using AutoModel/TFAutoModel
All PyTorch model weights were used when initializing TFLayoutLMv3Model.
All the weights of TFLayoutLMv3Model were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFLayoutLMv3Model for predictions without further training.
Reusing dataset cifar10 (/Users/x/y/z/.cache/huggingface/datasets/cifar10/plain_text/1.0.0/447d6ec4733dddd1ce3bb577c7166b986eaa4c538dcd9e805ba61f35674a9de4)
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'XLMRobertaTokenizer'.
The class this function is called from is 'LayoutLMv3Tokenizer'.
Traceback (most recent call last):
File "/Users/x/y/z/transformers/.env/bin/transformers-cli", line 8, in <module>
sys.exit(main())
File "/Users/x/y/z/transformers/src/transformers/commands/transformers_cli.py", line 55, in main
service.run()
File "/Users/x/y/z/transformers/src/transformers/commands/pt_to_tf.py", line 276, in run
pt_input, tf_input = self.get_inputs(pt_model, config)
File "/Users/x/y/z/transformers/src/transformers/commands/pt_to_tf.py", line 218, in get_inputs
processor = AutoProcessor.from_pretrained(self._local_dir)
File "/Users/x/y/z/transformers/src/transformers/models/auto/processing_auto.py", line 250, in from_pretrained
return PROCESSOR_MAPPING[type(config)].from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/Users/x/y/z/transformers/src/transformers/processing_utils.py", line 182, in from_pretrained
args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/Users/x/y/z/transformers/src/transformers/processing_utils.py", line 226, in _get_arguments_from_pretrained
args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
File "/Users/x/y/z/transformers/src/transformers/tokenization_utils_base.py", line 1768, in from_pretrained
return cls._from_pretrained(
File "/Users/x/y/z/transformers/src/transformers/tokenization_utils_base.py", line 1798, in _from_pretrained
slow_tokenizer = (cls.slow_tokenizer_class)._from_pretrained(
File "/Users/x/y/z/transformers/src/transformers/tokenization_utils_base.py", line 1923, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/Users/x/y/z/transformers/src/transformers/models/layoutlmv3/tokenization_layoutlmv3.py", line 324, in __init__
with open(vocab_file, encoding="utf-8") as vocab_handle:
TypeError: expected str, bytes or os.PathLike object, not NoneType
```<|||||>> 3. Run the slow tests locally and confirm that they pass, after 1. and 2. (`NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 py.test -vv tests/models/layoutlmv3/test_modeling_tf_layoutlmv3.py`);
@gante Oh and I forgot to mention that the tests pass now.<|||||>@ChrisFugl no worries about the last conversion, we'll see what's wrong afterward. Regarding failing tests, it seems like it is a problem on our end.
No further input from you should be needed to merge this PR 🎉 <|||||>@gante That is awesome! Of course still happy to help if it is needed :)<|||||>@ChrisFugl the underlying issue is fixed -- can you rebase the PR please? 🙏
We should be able to merge the PR after that 🎉 <|||||>> @ChrisFugl the underlying issue is fixed -- can you rebase the PR please? 🙏
>
> We should be able to merge the PR after that 🎉
@gante I followed your instructions on how to rebase, but I am afraid that it messed things up a bit. There are now 45 files changed in this PR - most of which this PR haven't even touched upon.
I resolved all conflicts while rebasing by consistently accepting the incoming changes. The tests still pass.
I think the problems with rebasing must be because I merged upstream/main into this PR prior to the first rebase.
Sorry about the rebasing issues. I am not sure what the right course of action is now?<|||||>@ChrisFugl can you try the solutions from [this StackOverflow page](https://stackoverflow.com/questions/16306012/github-pull-request-showing-commits-that-are-already-in-target-branch)?
The commits will be squashed at merge time, so everything will be fine as long as the diff we see here is correct :) <|||||>@gante Ah that was simple :) Thanks! Should be all good now.<|||||>@ChrisFugl Fantastic, I'll merge the PR now! 🧡
Thank you so much for participating through the whole process, even with all the hiccups. LayoutLMv3 is a very useful model, I'm confident TF users will deeply appreciate your contribution 🤗
There are two post-merge action points. As always, a helping hand is appreciated (but we will do them otherwise):
- Find and fix the tokenizer problem that is preventing the conversion of `microsoft/layoutlmv3-base-chinese` (and convert the model)
- [EDIT: we are going to write about it today, ~6pm CEST] Announce the TF model on social media (twitter and LinkedIn)! Unannounced changes are changes that most users will miss. [EDIT2: shared here -- https://twitter.com/joao_gante/status/1564650841688006656 ; https://www.linkedin.com/posts/gante_tensorflow-ai-microsoft-activity-6970415100406951936-dywm?utm_source=share&utm_medium=member_desktop]<|||||>Thank you @gante - and for your help with getting the PR merged in 😁 Super nice to get it merged in.
I can give a shot at the tokenizer problem 👌
Happy to help with sharing as well. Already shared the LinkedIn post. We will probably also do some more spreading of the news in my team :) |
transformers | 18,677 | closed | Rename method to avoid clash with property | # What does this PR do?
Fixes a bug when the `self.rescale` method was being called, but `self.rescale` was set as a property. This arose after a refactoring of the feature extractors in my PR: https://github.com/huggingface/transformers/pull/18499, causing an issue in any model that had `rescale` as a property in its config. Renaming considered safe as the new `rescale` method was introduced yesterday.
Issue raised by user here: https://github.com/huggingface/transformers/commit/49e44b216b2559e34e945d5dcdbbe2238859e29b#comments
and caught in doctests.
Renames the method to be less ambiguous.
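A minimal illustration of the kind of clash being fixed (names simplified, not the actual feature extractor code):

```python
class FeatureExtractor:
    def __init__(self, rescale=True):
        # The config value is stored under the same name as the method below,
        # so the instance attribute shadows the method.
        self.rescale = rescale

    def rescale(self, image, scale):
        return image * scale


fe = FeatureExtractor()
fe.rescale(128, 1 / 255)  # TypeError: 'bool' object is not callable
```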
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 08-18-2022 09:03:28 | 08-18-2022 09:03:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The layoulmv2 and layoutlmv3 tests appear to be failing due to an issue when importing from the [detectron2](https://github.com/facebookresearch/detectron2) package.
When `from detectron2.modeling import META_ARCH_REGISTRY` is called, it fails with the error `ImportError: cannot import name 'is_fx_tracing' from 'torch.fx._symbolic_trace'`. This has been reported by other users of the library - see [commit comments](https://github.com/facebookresearch/detectron2/commit/36a65a0907d90ed591479b2ebaa8b61cfa0b4ef0#comments).<|||||>@ydshieh I believe it comes from the `preprocessor_config.json` file: https://huggingface.co/google/owlvit-large-patch14/blob/main/preprocessor_config.json.
<|||||>> @ydshieh I believe it comes from the `preprocessor_config.json` file: https://huggingface.co/google/owlvit-large-patch14/blob/main/preprocessor_config.json.
I will update the `preprocessor_config.json` files, the rescale argument is not used anymore.<|||||>@ydshieh Yep - will do! |
transformers | 18,676 | closed | `model.tie_weights()` should be applied after `accelerator.prepare()` | # What does this PR do?
Weight tying should be done after the model has been moved to XLA device as mentioned on PyTorch/XLA Troubleshooting guide [here](https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#xla-tensor-quirks). This PR fixes this in `examples/pytorch/language-modeling/run_mlm_no_trainer.py` example script.
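A minimal sketch of the ordering change (not the exact diff from the script):

```python
model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader, eval_dataloader
)
# Re-tie the input/output embeddings only after `prepare()` has moved the model to
# the XLA device; tying before the move leaves the copies on device untied.
model.tie_weights()
```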
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review ?
@sgugger @muellerzr | 08-18-2022 08:24:51 | 08-18-2022 08:24:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @Gladiator07 great work! |
transformers | 18,675 | closed | Add hallucination filter | # What does this PR do?
As per the discussions in https://github.com/huggingface/transformers/issues/18354, this PR aims to add a `HallucinationPenaltyLogitsProcessor` that takes in a `hallucination_penalty` that is applied to any tokens that are not in the original input. This acts as a hallucination filter and the higher it is set the more likely the text is going to contain only the input tokens. For summarisation this means the higher the hallucination penalty the more extractive the summary is going to be and the less likely it is to have a hallucination.
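A rough sketch of the idea (not the final code in this PR; names and details are illustrative):

```python
import torch
from transformers import LogitsProcessor

class HallucinationPenaltyLogitsProcessor(LogitsProcessor):
    def __init__(self, penalty: float, encoder_input_ids: torch.LongTensor):
        self.penalty = penalty
        # Vocabulary ids that actually occur in the source text.
        self.input_token_ids = torch.unique(encoder_input_ids)

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        allowed = torch.zeros(scores.shape[-1], dtype=torch.bool, device=scores.device)
        allowed[self.input_token_ids.to(scores.device)] = True
        # Down-weight every token that does not appear in the original input.
        penalized = torch.where(scores > 0, scores / self.penalty, scores * self.penalty)
        return torch.where(allowed.unsqueeze(0), scores, penalized)
```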
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante, @patrickvonplaten
Library:
- text generation: @patrickvonplaten | 08-18-2022 07:06:53 | 08-18-2022 07:06:53 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @KMFODA are you still planning to work on this? We can reopen the PR :)<|||||>Hey @gante yes I still plan to work on this. My apologies I had fallen ill this past month and couldn't spend time on this. If you re-open the PR I will prioritise working on this over the next few weeks.<|||||>@KMFODA absolutely no rush, take your time -- personal health always takes precedence! I hope you're feeling better 🤗 <|||||>(@KMFODA let us know when you'd like a new review)<|||||>Thanks @gante. Just managed to get all the tests to pass so a review now would be much appreciated.<|||||>Hi @gante, I added a `test_encoder_repetition_penalty_dist_process` to cover the 1st type of test. The 2nd test you've linked seems to be more focused on beam searches and stopping criteria. What type of test did you have in mind here for the encoder_repetition_penalty? Would ensuring it's initialised by adding it to this [test](https://github.com/huggingface/transformers/blob/0e83c9664b2a440ade59066a77fb01d0143e4d18/tests/generation/test_generation_utils.py#L101) cover this?<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18675). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18675). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18675). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18675). All of your documentation changes will be reflected on that endpoint.<|||||>Done, thanks for all the helpful comments to get this merged and apologies it took so long.<|||||>@KMFODA did this processor help with your use case? :)<|||||>Thanks for asking @gante! It worked yes, although I had to use a very small penalty to eventually remove the hallucination. In a call with Karim and Joao (changing the names to protect the real data) the model I'm using generates the following action:
`Tom will give Joao his email address.`
Where Tom is a hallucination and an individual not in this call. After applying a penalty of 0.001 I get the following output:
`Karim will send Joao an email today.`<|||||>Hey @gante, let me know if anything else is needed to get this merged. Using this in the inference pipeline / API would be of really helpful.<|||||>Thanks @ArthurZucker. Added the doc strings in the 3 different files you mentioned. I've only got one test failing which I can't recreate locally:
`tests/pipelines/test_pipelines_zero_shot.py::ZeroShotClassificationPipelineTests::test_small_model_tf`
It seems like the outputs changed in the `zero-shot-classification` pipeline although I'm not sure why. Are you able to point me towards what might be causing this to fail?<|||||>Managed to fix the failing test by rebasing to main. Hopefully should be good to merge now but if not let me know!<|||||>Cool let's just ask for a final review from @sgugger ! 🤗 <|||||>Hey @KMFODA -- the big change we just merged clashed with your PR, as @sgugger mentioned above.
In a nutshell, new generation parameters should go in `GenerationConfig` ([here](https://github.com/huggingface/transformers/blob/1543cee7c8c95ef47f832b1f37625ba2923c4994/src/transformers/generation/configuration_utils.py#L38)), and generate always uses a generation config (related [docs](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)). We need to change this PR to make it consistent with the new changes :D
I understand this PR has been a long process, so it's okay if you don't want to make further changes. Just let me know if you are no longer interested in working on it :)<|||||>Hey @gante thanks for getting back to me. Not a problem, I'd still like to work on this as it'll help me learn about the new generation engine. I'll get working on this after the holidays.<|||||>Hey @gante, moved encoder_repetition_penalty to `GenerationConfig` and fixed all failing tests. Let me know if more is needed to merge this PR.<|||||>Thanks @gante helpful changes. Just implemented and fixed failing tests.<|||||>@KMFODA awesome, thanks 🙏
@sgugger this PR should be ready to go in, feel free to merge if you are also happy with it :)<|||||>Hopefully should be good to merge now. If not let me know.<|||||>Thanks for your contribution! |
transformers | 18,674 | closed | Deberta MaskedLM Corrections | # What does this PR do?
The current implementations of DebertaForMaskedLM and DebertaV2ForMaskedLM do not load all of the weights from the checkpoints. After consulting the [original repo](https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/deberta/bert.py), I modified the MaskedLM classes to load the weights correctly and to be usable for the fill-mask task out of the box (for v1 and v2; v3 wasn't trained for that).
I didn't know what to implement for `get_output_embeddings` and `set_output_embeddings`.
## TODO:
- [ ] Implement `get_output_embeddings`
- [ ] Implement `set_output_embeddings`
- [ ] Implement `resize_token_embeddings`
Fixes # (issue)
https://github.com/huggingface/transformers/issues/15216
https://github.com/huggingface/transformers/issues/15673
https://github.com/huggingface/transformers/issues/16456
https://github.com/huggingface/transformers/issues/18659
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik @sgugger
I'm sorry this took so long. | 08-17-2022 22:11:59 | 08-17-2022 22:11:59 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18674). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for working on this, @nbroad1881! This is a good improvement, but it will unfortunately break all existing models that have a head named `cls`. I'm trying to see if there is a non-backward breaking approach that would enable loading the existing model head; it'll likely mean updating the weights in the repo rather than updating the code here.
I wonder what would be the most breaking. It would be better to have a non breaking approach, but I'm not entirely sure we can get away with it.<|||||>> I wonder what would be the most breaking. It would be better to have a non breaking approach, but I'm not entirely sure we can get away with it.
Could there be two versions and anytime AutoModelForMaskedLM gets called in the future, it defaults to the new implementation but also checks the config.json file or state dict to see if it uses the old implementation?
Scenarios
1. AutoModelForMaskedLM.from_pretrained/from_config(canonical repo) --> use new implementation
2. AutoModelForMaskedLM.from_pretrained/from_config(custom repo/local path) --> check config.json/state dict to decide if using new/old implementation
One other question:
What should the `get_output_embeddings` function do? BERT's implementation makes it look like it just returns the linear layer (decoder) that maps output_embeddings to token logits. This layer is slightly different for deberta. Instead of `Linear(hidden_size, vocab_size)` it goes `Linear(hidden_size, hidden_size)` and then [there is another step where the output of that is multiplied by word embeddings.](https://github.com/huggingface/transformers/blob/0038a3caa5c7d0c5014704005dd67ab347451ddc/src/transformers/models/deberta/modeling_deberta.py#L1095-L1112)
<|||||>@sgugger, do you have an opinion on this?<|||||>I am not sure I fully understand the problem here. It looks like the canonical repos have weights that mismatch our code. If this is the case, those weights should be updated to match the code in Transformers, not the opposite, to avoid breaking all other checkpoints.<|||||>It's not just a naming issue. The current code uses a different mechanism to make masked LM predictions.
[Current way](https://github.com/huggingface/transformers/blob/main/src/transformers/models/deberta/modeling_deberta.py#L1121-L1156): hidden_states * linear layer -> logits for each token
[batch_size, sequence_length, hidden_size] * [hidden_size, vocab_size] -> [batch_size, sequence_length, vocab_size]
[The way it is done in the official deberta repo](https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/deberta/mlm.py#L17-L38)
hidden_states * linear layer * word embeddings.T -> logits for each token
[batch_size, sequence_length, hidden_size] * [hidden_size, hidden_size] * [hidden_size, vocab_size] -> [batch_size, sequence_length, vocab_size]
I skipped some operations that don't change the size of the tensors, but I think this proves my point.
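To make the shapes concrete, here is a rough sketch in plain PyTorch (toy dimensions, ignoring the activation/LayerNorm steps I skipped):
```python
import torch

batch, seq, hidden, vocab = 2, 8, 16, 100
hidden_states = torch.randn(batch, seq, hidden)
word_embeddings = torch.randn(vocab, hidden)

# Current HF head: a single projection straight to the vocabulary
to_vocab = torch.nn.Linear(hidden, vocab)
logits_current = to_vocab(hidden_states)  # [2, 8, 100]

# Official DeBERTa head: hidden -> hidden, then reuse the (transposed) word-embedding
# matrix plus a vocab-sized bias
transform = torch.nn.Linear(hidden, hidden)
bias = torch.zeros(vocab)
logits_official = transform(hidden_states) @ word_embeddings.T + bias  # [2, 8, 100]
```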
If it is done the second way, then the fillmask pipeline will work (for deberta v1 and v2) from the canonical weights<|||||>Thanks for explaining @nbroad1881 I now understand the problem a little bit better. I don't think we can avoid having two classes for masked LM (for instance `OldDebertaForMaskedLM` and `NewDebertaForMaskedLM`) along with `DebertaForMaskedLM` to dispatch to the proper one depending on the config, to be able to maintain backward compatibility.
If you want to amend the PR to write a new class for masked LM for now with the proper changes (leaving the current masked LM class as is), I can then follow-up with the rest and write this in a fully backward-compatible manner.<|||||>@sgugger, that sounds good to me. Do you know what I should put for the `get_output_embeddings` and `set_output_embeddings` functions? <|||||>It needs to be the weights/bias that have the vocab_size dim.<|||||>> It needs to be the weights/bias that have the vocab_size dim.
There are weights that are [hidden_size, hidden_size] and a bias that has [vocab_size] dimensions. Which one do I use?<|||||>Leave those two to None for now then. I'll add that in the followup PR.<|||||>@sgugger , I implemented both Old and New Deberta(V1/V2)ForMaskedLM and I'm wondering which should be used for AutoModelForMaskedLM. Since the other version doesn't have an associated Auto class, it will fail some tests<|||||>The classes `OldDebertaForMaskedLM` and `NewDebertaForMaskedLM` are not meant to be public. This is an internal artifact to maintain backward compatibility, the user will only use the `DebertaForMaskedLM` class and a config parameter will internally decide which of the classes should be used.
For this PR, you should just add the `NewDebertaForMaskedLM` without any change to the doc/auto classes and don't touch the current `DebertaForMaskedLM`.<|||||>Ah ok. Got it. Thanks<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@nbroad1881 Do you want me to fully take over on this?<|||||>@sgugger, I made the changes and then made the mistake of diving too deeply into checking whether the EMD is correctly implemented. I don't think it is, but I'll leave that for someone else or another time. Let me push the changes, and I'll ping you when I do.
Thanks for following up 🤗 <|||||>On second thought, you should just take it over @sgugger. Let me know if you have questions<|||||>Ok, will have a look early next week!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>unstale, still planning to address this!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@ArthurZucker has taken over this as part of his refactor of the DeBERTa model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,673 | closed | Allow empty reference summaries | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The `run_summarization.py` script has a check to skip examples where either the `text_column` or `summary_column` is `None`. However, the way the check was written would catch any falsy value, like an empty string.
This caused all examples to be skipped for datasets where either the `text_column` or `summary_column` was the empty string (e.g. the test set of the [MS^2 dataset](https://huggingface.co/datasets/allenai/mslr2022)).
This PR just updates the check so it looks for `None` values explicitly.
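Roughly, the behaviour change is from a truthiness test to an explicit `None` test (a simplified sketch, not the exact diff):
```python
def keep(text, summary):
    # before: `if text and summary` also drops empty strings, because "" is falsy
    # after: only genuinely missing values are dropped
    return text is not None and summary is not None

print(keep("some document", ""))  # True  -> empty reference summary is kept
print(keep(None, "a summary"))    # False -> missing input is still skipped
```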
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 08-17-2022 21:01:24 | 08-17-2022 21:01:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh, would you like to take a look at what is proposed here?<|||||>@JohnGiorgi I am not familiar with this dataset. But I am wondering if a model trained on this dataset is expected to learn to deal with empty document (this seems to strange) and/or learn to predict empty summary for some document?<|||||>@ydshieh They are just using the empty string as a placeholder. The reference summaries are held-out and not available publically. I imagine there could be other cases like this.
I guess another option is to log a warning if the inputs are empty but still proceed.<|||||>Log a warning sounds good to me. But before doing this, I am wondering what would be the benefits to process those held-out (empty string) test examples. Those examples won't be useful, right?<|||||>You may still want to generate predictions for examples even if they don’t have reference summaries (as is the case for the test set of MS^2). So another option still is to make the check just look for empty _inputs_, and ignore (but log a warning) for empty _reference summaries_
<|||||>@ydshieh Okay how's that? The only change now is that examples with empty reference summaries are still included, but a warning will be logged.<|||||>Thank you @JohnGiorgi, this LGTM overall!
The logging on each example with empty summary might be too spam. (Imagine the test dataset has 10K examples).
We can probably set a flag. If an empty summary is found -> and if flag is False -> warning -> set flag to True.
Let's wait one of the core maintainers to give a final review.<|||||>> Thank you @JohnGiorgi, this LGTM overall!
>
> The logging on each example with empty summary might be too spam. (Imagine the test dataset has 10K examples). We can probably set a flag. If an empty summary is found -> and if flag is False -> warning -> set flag to True.
>
> Let's wait one of the core maintainers to give a final review.
Ah very good point! Is there an example somewhere else in the codebase of something like this? I'm happy to update my change following that<|||||>Hi! Something like `deprecation_warnings` in
https://github.com/huggingface/transformers/blob/30992ef0d911bdeca425969d210771118a5cd1ac/src/transformers/tokenization_utils_base.py#L1520
could work (of course with another name), but in the training script, I think a single Boolean variable is enough.<|||||>Hi @ydshieh, is there anything blocking this that I can address?<|||||>Hi @JohnGiorgi . Sorry for being late in the review. As you can see [in this failed CI job](https://app.circleci.com/pipelines/github/huggingface/transformers/46308/workflows/5bf0bbad-5a99-47da-aa71-d6db2ed0ae37/jobs/544078), there is some issue of variable scope.
A quick solution could be
```python
...
# A placeholder to determine whether we have already warned about empty summaries.
empty_summary_warning = {"warned": False}
def preprocess_function(examples):
if not examples[summary_column][i] and not empty_summary_warning["warned"]:
...
empty_summary_warning["warned"] = True
```
Let me know your opinion. Once the CIs all pass, I will request a final review from my colleagues :-)
Thanks a lot, for the PR and for the patience 🤗 <|||||>Ah, didn't look carefully at why the build was failing. Thanks, @ydshieh, your solution causes all the tests to pass!<|||||>This is too complicated for one of the example, which are just examples, not production-ready apps that should work in every case. Readability is more important than fixing this edge case IMO.<|||||>> not production-ready apps that should work in every case. Readability is more important than fixing this edge case IMO.
Fair point.
@JohnGiorgi Still thank you a lot for the PR. I am sorry that I haven't thought in another angle as a repository maintainer, and let you spend quite some time working on several of my suggestions.
You can definitely tweak the example script to meet your own needs. I am closing the issue. |
transformers | 18,672 | closed | [WIP] Inputs embeds for flax gpt neo | # What does this PR do?
Adds the option to pass `inputs_embeds` to FlaxGPTNeoForCausalLM.
This is already an option for the PyTorch version of the model GPTNeoForCausalLM.
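A sketch of the intended usage once this lands (the argument name and behaviour are assumed to mirror the PyTorch `inputs_embeds` argument; it is not available before this change):
```python
import jax.numpy as jnp
from transformers import FlaxGPTNeoForCausalLM

model = FlaxGPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

batch, seq_len = 1, 5
# Any externally computed embeddings of shape (batch, seq_len, hidden_size)
inputs_embeds = jnp.zeros((batch, seq_len, model.config.hidden_size), dtype=jnp.float32)

outputs = model(inputs_embeds=inputs_embeds)
print(outputs.logits.shape)  # (1, 5, vocab_size)
```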
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #18036
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sanchit-gandhi
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 08-17-2022 20:43:28 | 08-17-2022 20:43:28 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18672). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @mattf1n - let me know if you're still interested in completing this PR and I'd be happy to help with any questions/queries!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,671 | closed | [bnb] Move documentation | # What does this PR do?
- move the bnb documentation to `perf_infer_gpu_many.mdx`, as it was previously placed in `perf_train_gpu_one.mdx`, which is not relevant for the `bitsandbytes` integration since it supports inference only.
cc @stas00
I do have a question though, what about `perf_infer_gpu_one.mdx`? I think that the `bnb` documentation could fit well in this file as well since it supports single GPU inference too. | 08-17-2022 20:14:44 | 08-17-2022 20:14:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>yeah, I wasn't sure, probably you're right and then like perf_train_gpu_many.mdx says on top to first read perf_train_gpu_one.mdx - add the same to perf_infer_gpu_many.mdx?<|||||>Yep makes sense! I propose a small refactoring at 2018285dd60c34d2654b0c51e523d7b1e815b989 !
Let me know if this works for you <|||||>The proposed change would be difficult to maintain and 2 copies will get out of sync. Only one copy please - if you prefer the one gpu doc that's where it should be. the other one linking to it.<|||||>Proposed a change in 5f8a3aeb035e8b086d0f9045550000b6be0fb630 ! Let me know if this works for you <|||||>Thanks a lot @stas00 for iterating on the changes 💪 |
transformers | 18,670 | closed | TFClipModel fails to train because of None loss | ### System Info
transformers version: 4.21.1
Platform: MacOS BigSur 11.6.7
Python version: 3.8.13
Huggingface_hub version: 0.8.1
Tensorflow version (GPU?): 2.7.3 (False)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?: no
Using distributed or parallel set-up in script?: no
### Who can help?
@patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This is the script run to attempt to fit the model to the example data. It is verbatim from the 4.21.1 docs with the addition of `model.fit`. The same error arose when working with my own project. The loss is always `None`, as are `y` and `y_pred`. The problem appears to be somewhere in the logic of https://github.com/huggingface/transformers/blob/132402d752044301b37e54405832738b16f49df6/src/transformers/modeling_tf_utils.py#L1116.
```python
from PIL import Image
import requests
from transformers import CLIPProcessor, TFCLIPModel
import tensorflow as tf
model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
text=["a photo of a cat", "a photo of a dog"], images=[image, image], return_tensors="tf", padding=True
)
outputs = model(**inputs)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.00001))
model.fit(dict(inputs))
```
This results in a `zero gradient` error because the gradients are all 0s, which I expect is caused by `y` and `y_pred` both being empty dicts.
### Expected behavior
Model.fit() on inputs from preprocessor completes a training step without error. | 08-17-2022 18:00:23 | 08-17-2022 18:00:23 | Also, I do not see a test case for the fit method anywhere in https://github.com/huggingface/transformers/blob/main/tests/models/clip/test_modeling_tf_clip.py, which should probably be added<|||||>> Also, I do not see a test case for the fit method anywhere in [https://github.com/huggingface/transformers/blob/main/tests/models/clip/test_modeling_tf_clip.py](https://github.com/huggingface/transformers/blob/main/tests/models/clip/test_modeling_tf_clip.py?rgh-link-date=2022-08-17T19%3A45%3A04Z), which should probably be added
Hi! There is
https://github.com/huggingface/transformers/blob/0ea53822f8bdc2c0c41b45bcd04fa6c031e5e700/tests/test_modeling_tf_common.py#L1406
defined in the parent class `TFModelTesterMixin`.<|||||>For the issue, could you check if there is `labels` key in `inputs`? cc @Rocketknight1 <|||||>Hi @taymills, the issue is caused by the `TFClipModel` needing `return_loss=True` to be set to return a loss - you can see this in [the model docs](https://huggingface.co/docs/transformers/model_doc/clip#transformers.TFCLIPModel).
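In the meantime, a minimal way to confirm this (a sketch re-using `model` and `inputs` from the reproduction above) is to call the model directly:
```python
# Re-using `model` and `inputs` from the reproduction above.
outputs = model(**inputs)                    # outputs.loss is None
outputs = model(**inputs, return_loss=True)  # outputs.loss is now a scalar tensor
print(outputs.loss)
```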
I agree it's unintuitive that `fit()` does not set this to True by default, and I'm not sure why the issue is not detected in the tests - I'll investigate now and hopefully push a fix soon.<|||||>Update: The `test_keras_fit` test skips models that do not have a `hf_compute_loss` method. This check was added to skip the test on 'base' models like `TFBertModel` that do not have a specific output head or loss, because these models cannot be fit directly. However, `TFClipModel` does have a specific task and loss, but does not have `hf_compute_loss`.
The solution is to rewrite the check to correctly identify model types that can be fit, while still not running the test on base models that cannot. I'm working on that now.<|||||>@taymills your code sample runs correctly for me with the latest patch. Can you check and confirm it works for you too? To use the PR branch, run `pip install --upgrade git+https://github.com/huggingface/transformers.git@return_loss_fix`<|||||>Fantastic thanks for the quick response. I will give it a whirl. Also @Rocketknight1 I agree that it is non-intuitive given the docs as it is a reasonable assumption that defaulting to the "loss you probably want" implies that it actually would return said loss.<|||||>@taymills yes, that's part of this PR! When using the built-in loss, we now force `return_loss=True` for models where it is an argument. That should avoid this for CLIP and for other similar models in future.<|||||>Everything looks good! Really appreciate the quick turn around on that @Rocketknight1 . Kudos 🎉 !!!<|||||>@taymills No problem! Fixing the tests has exposed a few other issues though, which that PR will need to fix as well. Unfortunately, you're stuck in the PR branch for now, but I'll ping you and close this issue when it's merged to main!<|||||>@Rocketknight1 I am curious, when you fixed the test for the model fit for TFClipModel, did you start hitting a bunch of issues with the func `shape_list` in `tf_utils`? I am running into lots of bugs with that when trying to infer the shape.<|||||>@taymills
My colleague is currently off. The PR is not completed yet, there are still some failing tests to fix, but I don't know if it is related to the issue you encounter.
Could you provide a short code snippet to show the issues?<|||||>Actually I have tried this with a couple of TFModels now and it appears that Transformers' `model.fit` in general does not work with tf.data.TFRecordDataset tensors. It seems it only works with EagerTensors unless I am missing something. The only way I have been able to get it to work is by coercing to a numpy iterator.
This is a bit off topic so feel free to ignore this as it turns out it is not germane to the current issue.
e.g.
`conftest.py`
```python
from typing import *
import io
import pickle
import random
import numpy as np
import pytest
import requests
import tensorflow as tf
import transformers
from PIL import Image
from product_ds_shared_utils.model_io.tensorflow.tfdata_helpers import (
create_example_proto,
)
# This is an example image pulled from a public dataset to use for model and data loader smoke tests
# TODO jmills: Might be better to store this on GCS or in repo
EXAMPLE_IMAGE_URL: str = "http://images.cocodataset.org/val2017/000000039769.jpg"
# This is the schema for inputs to clip
# This is the size of the test dataset. One example image and example text is repeated this number of times
DATASET_LENGTH: int = 5
# This is the clip pretrained model name for any clip fixtures
CLIP_PRETRAINED_NAME = "openai/clip-vit-base-patch32"
# This is the bert pretrained model name for any bert fixtures
BERT_PRETRAINED_NAME = "distilbert-base-uncased"
@pytest.fixture(scope="session")
def example_image_raw() -> bytes:
image = requests.get(EXAMPLE_IMAGE_URL, stream=True).content
return image
@pytest.fixture(scope="session")
def example_text() -> str:
return "This is an image of a cat"
@pytest.fixture(scope="session")
def example_dataset_clip_tfrecords(
example_image_raw, example_text, tmpdir_factory
) -> Tuple[str, str]:
ds_path = str(tmpdir_factory.mktemp("test_clip_model").join("dataset.tfrecords"))
desc_path = str(tmpdir_factory.mktemp("test_clip_model").join("feature_desc.pkl"))
example_image = Image.open(io.BytesIO(example_image_raw))
processor = transformers.AutoProcessor.from_pretrained(CLIP_PRETRAINED_NAME)
feature_dict_processed = dict(
processor(
images=example_image, text=example_text, return_tensors="tf", padding=True
)
)
with tf.io.TFRecordWriter(ds_path) as file_writer:
for i in range(0, DATASET_LENGTH, 1):
feature_dict_processed["labels"] = tf.constant([random.choice([0, 1])])
example_proto, feature_description = create_example_proto(
feature_dict_processed
)
file_writer.write(example_proto.SerializeToString())
with open(desc_path, "wb") as f:
pickle.dump(feature_description, f)
return ds_path, desc_path
```
test_train.py - note `trainer.model` is a TFClipModel.from_pretrained
```python
"""
Test trainer class using clip pretrained model
"""
from typing import *
import pathlib
import pickle
import pytest
import tensorflow as tf
import tensorflow_addons as tfa
from product_embeddings.trainer import EmbeddingTrainer
MODEL_NAME: str = "openai/clip-vit-base-patch32"
@pytest.fixture(scope="function")
def parsed_dataset_tfrecords(example_dataset_clip_tfrecords):
dataset_path, schema_path = example_dataset_clip_tfrecords
with open(schema_path, "rb") as f:
schema = pickle.load(f)
def decode_fn(record_bytes: bytes) -> Tuple[Dict[str, tf.Tensor], tf.Tensor]:
parsed_example = tf.io.parse_single_example(record_bytes, schema)
parsed_example["input_ids"] = tf.io.parse_tensor(
parsed_example["input_ids"], tf.int32
)
parsed_example["pixel_values"] = tf.io.parse_tensor(
parsed_example["pixel_values"], tf.float32
)
parsed_example["attention_mask"] = tf.io.parse_tensor(
parsed_example["attention_mask"], tf.int32
)
parsed_example["labels"] = tf.io.parse_tensor(
parsed_example["labels"], tf.int32
)
labels = parsed_example.pop("labels")
return parsed_example, labels
dataset = tf.data.TFRecordDataset([str(dataset_path)]).map(
decode_fn, num_parallel_calls=tf.data.AUTOTUNE
)
return dataset
def test_dataset_format(parsed_dataset_tfrecords):
for features, label in parsed_dataset_tfrecords:
assert list(features.keys()) == ["attention_mask", "input_ids", "pixel_values"]
for key, val in features.items():
assert isinstance(
val, tf.Tensor
), f"Feature {key} is not a tensor but got {val}"
assert isinstance(label, tf.Tensor)
assert features["input_ids"].shape == (1, 9)
assert features["pixel_values"].shape == (1, 3, 224, 224)
assert label.shape == (1)
def test_clip_model_trainer_tfrecords(tmpdir_factory, parsed_dataset_tfrecords):
checkpoint_path = pathlib.Path(
tmpdir_factory.mktemp("test_clip_model").join("checkpoints")
)
log_path = pathlib.Path(tmpdir_factory.mktemp("test_clip_model").join("logs"))
trainer = EmbeddingTrainer(
model_name=MODEL_NAME,
checkpoint_dir=str(checkpoint_path),
log_dir=str(log_path),
)
trainer.compile(loss=tfa.losses.contrastive_loss)
trainer.model.run_train(
training_dataset=parsed_dataset_tfrecords.as_numpy_iterator(),
validation_dataset=parsed_dataset_tfrecords.as_numpy_iterator(),
epochs=1,
)
```<|||||>@taymills That's quite an odd issue - it's definitely unrelated to this one, but if you haven't managed to figure it out, can you copy it into a new issue and tag me? It looks like your model is coming from some external library in the second example though (our models don't have a `run_train` method), so if that's the case we probably can't do much about the problem. |
transformers | 18,669 | closed | [LongT5] Correct docs long t5 | Corrects the docs of LongT5 according to discussion in https://github.com/huggingface/transformers/issues/18502 | 08-17-2022 16:30:06 | 08-17-2022 16:30:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,668 | closed | Warn on TPUs when the custom optimizer and model device are not the same | # What does this PR do?
This PR raises an error when a user creates a custom optimizer on a TPU and they did not move the model to the right TPU device beforehand. Not doing so will cause issues such as the ones described in https://github.com/pytorch/xla/issues/3675#issuecomment-1171702988 and https://github.com/huggingface/transformers/issues/18635. This check is performed similarly to the one in Accelerate.
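For context, the ordering this check enforces looks roughly like this (a sketch using `torch_xla`; the model and optimizer here are placeholders):
```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
model = torch.nn.Linear(8, 2)

model.to(device)  # move the model to the XLA device first...
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # ...then build the optimizer
```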
Fixes # (issue)
https://github.com/huggingface/transformers/issues/18635
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
| 08-17-2022 15:16:12 | 08-17-2022 15:16:12 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,667 | closed | remvoe `_create_and_check_torch_fx_tracing` in specific test files | # What does this PR do?
Remove `_create_and_check_torch_fx_tracing` in specific model test files, as the common one can handle them correctly.
The only exception is `Hubert` model, but we can also remove it, and set `fx_compatible` to `False` (just as for `Wav2Vec2`).
It might be better to add `torch_script_compatible` to handle `Hubert` and related models.
**Motivation**: Make the change in #18547 available to all tests. | 08-17-2022 14:33:21 | 08-17-2022 14:33:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The common case test for TorchScript, and if I recall correctly there was an issue for those models on that aspect?
You suggest to add a flag called `torch_script_compatible`? If so, [that is what I suggested back then](https://github.com/huggingface/transformers/pull/17206#discussion_r875989932), pinging @sgugger here.
Also, I think that some of those models can actually be torchscripted with torch 1.12, but the issue was that we are (were?) testing in torch 1.11.<|||||>> The common case test for TorchScript, and if I recall correctly there was an issue for those models on that aspect?
There might have been before. But as far as I can tell, the issue probably came from the input and label names preparation. As the tests pass after I remove their re-definitions from the specific model test files, I think it's fine and better to clean them up. (The only failure is from Hubert).
> You suggest to add a flag called torch_script_compatible?
This is to allow the torch trace test to still run while skipping the torch script test, as currently the Hubert test will fail on torchscript.
But I would prefer to add this flag (if the idea is approved) in a separate PR (and where we can enable the test for Wav2Vec2 too, for example)
> Also, I think that some of those models can actually be torchscripted with torch 1.12, but the issue was that we are (were?) testing in torch 1.11.
We can re-evaluate this, but again, let's not to do changes regarding this part in this PR.
This PR is merely to avoid overwriting `_create_and_check_torch_fx_tracing` unnecessary :-)<|||||>@michaelbenayoun If this PR is OK on your side, I am going to merge. Regarding the flag, let's see what we can do in a separate PR. |
transformers | 18,666 | closed | Add evaluate to examples requirements | # What does this PR do?
This PR adds `evaluate` to the requirements of all of the example scripts that need it
Fixes # (issue)
Closes https://github.com/huggingface/transformers/issues/18663
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
| 08-17-2022 13:37:35 | 08-17-2022 13:37:35 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,665 | closed | Unexpected keyword argument 'trust_remote_code' when using `table-question-answering` pipeline | ### System Info
- `transformers` version: 4.21.1
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. save the model and tokenizer locally
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("microsoft/tapex-base-finetuned-wtq").save_pretrained("test")
model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/tapex-base-finetuned-wtq").save_pretrained("test")
```
2. create pipeline object
```python
from transformers import pipeline
tq = pipeline("table-question-answering",model="test")
```
3. receive error
```python
pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
655 task=task,
656 **hub_kwargs,
--> 657 **model_kwargs,
658 )
659
[/usr/local/lib/python3.7/dist-packages/transformers/pipelines/base.py](https://localhost:8080/#) in infer_framework_load_model(model, config, model_classes, task, framework, **model_kwargs)
255
256 try:
--> 257 model = model_class.from_pretrained(model, **kwargs)
258 if hasattr(model, "eval"):
259 model = model.eval()
[/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
2104
2105 with ContextManagers(init_contexts):
-> 2106 model = cls(config, *model_args, **model_kwargs)
2107
2108 if device_map == "auto":
TypeError: __init__() got an unexpected keyword argument 'trust_remote_code'
```
### Expected behavior
The model should normally load | 08-17-2022 12:26:54 | 08-17-2022 12:26:54 | It is not only for local model broken
```python
from transformers import pipeline
tq = pipeline("table-question-answering",model="microsoft/tapex-base-finetuned-wtq")
```
creates the same error
<|||||>cc @NielsRogge @Narsil <|||||>Hi @philschmid, I tried your code and raised the same error, but after I did some debugging, I found some info that may be useful to you.Here we take the code `pipeline("table-question-answering",model="microsoft/tapex-base-finetuned-wtq")`for example.
Under the hood, the pipeline for table question answering will infer the config type base on your model name which is `microsoft/tapex-base-finetuned-wtq` here.
https://github.com/huggingface/transformers/blob/f0d496828d3da3bf1e3c8fbed394d7847e839fa6/src/transformers/pipelines/__init__.py#L574
But unfortunately, the config type of this model is `BartConfig`. To initialize the pipeline, we need to provide the model which can infer the type of config is `TapasConfig`, for example, model `google/tapas-base`. I think you can try to initialize the pipeline with the following code:
`tq = pipeline("table-question-answering", model="google/tapas-base")`<|||||>@aRyBernAlTEglOTRO it works in `4.20.1` with `tapex` and the issue comes from `trust_remote_code`, which might be missing somewhere in the files.<|||||>Hi @philschmid, After I downgrade the transformers to 4.20.1, although I can run the code with `microsoft/tapex-base-finetuned-wtq`. but I will raise the info below:
`The model 'BartForConditionalGeneration' is not supported for table-question-answering. Supported models are ['TapasForQuestionAnswering'].`
Therefore, I think even if you can initialize the pipeline, the pipeline may work in the wrong way. I still think we should initialize the pipeline with the model which can infer the `tapasConfig`.<|||||>Hi @philschmid, if you insist to initialize the pipeline with `microsoft/tapex-base-finetuned-wtq`. I found some info that may be useful. When `AutoModelForTableQuestionAnswering` try to init the model, it will remove the `trust_remote_code` from `kwargs`. you can check the following code:
https://github.com/huggingface/transformers/blob/30992ef0d911bdeca425969d210771118a5cd1ac/src/transformers/models/auto/auto_factory.py#L420
Therefore, If you want to initialize the pipeline with `microsoft/tapex-base-finetuned-wtq`, which will build a model `BartForConditionalGeneration` under the hood, thus you need to add some modifications to the `from_pretrained` method of `BartForConditionalGeneration`, which is code mentioned below:
https://github.com/huggingface/transformers/blob/30992ef0d911bdeca425969d210771118a5cd1ac/src/transformers/modeling_utils.py#L1606
and you may already found that they add this update in version `4.22.0.dev0`
https://github.com/huggingface/transformers/blob/30992ef0d911bdeca425969d210771118a5cd1ac/src/transformers/modeling_utils.py#L1829
After you upgrade the transformers to version `4.22.0.dev0`, everything should work fine, but you still get the info that:
`The model 'BartForConditionalGeneration' is not supported for table-question-answering. Supported models are ['TapasForQuestionAnswering'].`
<|||||>@philschmid I can't be able to reproduce on `@main` branch, so the issues seems to have been fixed.
I confirm the error exists in `4.21.1` though, I am not sure how to backport things or workaround this.
`trust_remote_code` was added recently @sgugger so he might know more.
As for the warning, the warnings is a bit outdated as the pipeline does support BartGeneration.
Added a PR to fix the warningS: https://github.com/huggingface/transformers/pull/18711<|||||>Hi @Narsil, thank you for your reply. I found the warning raised from the code below:
https://github.com/huggingface/transformers/blob/0f257a87749e0a72bda260c6f319a45dae1e7c4d/src/transformers/pipelines/table_question_answering.py#L102
which means we need to update the `OrderDict` in the following code to remove warning:
https://github.com/huggingface/transformers/blob/0f257a87749e0a72bda260c6f319a45dae1e7c4d/src/transformers/models/auto/modeling_auto.py#L590
I don't know except for the `BartForConditionalGeneration` model, what else model should be added to this `OrderDict`?<|||||>TAPEX wasn't added to the MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING_NAMES because that mapping defines which models are supported by the `AutoModelForTableQuestionAnswering` class.
However, the table QA pipeline does support both TAPAS and TAPEX, so we may want to suppress this warning.<|||||>@aRyBernAlTEglOTRO @NielsRogge The PR to fix the warnings is here. https://github.com/huggingface/transformers/pull/18711
Basically any Seq2seq model should work (Well only the trained models will actually provide good results, but the pipeline WILL work.)
In general pipelines tries not to look at individual models, but only type for model (`ForXXX`)<|||||>The error was fixed in https://github.com/huggingface/transformers/pull/18428 @philschmid.
I'll likely do a patch PR later today containing this fix (v4.21.2).<|||||>Closing as solved by https://github.com/huggingface/transformers/pull/18428 |
transformers | 18,664 | closed | Cannot import pipelines from transformers | Hi, I use a Windows 10 Proffessional laptop for development and am using python 3.7.6 conda virtual env. I get the following run time error when running my code (below, after the error details).
<<
bertqa interactive window [PTVS 17.0.22089.1-17.0]
Type $help for a list of commands.
The interactive window has not yet started.
Running D:\Projects2017\bertqa\bertqa\aaa_scratch.py
Traceback (most recent call last):
File "d:\python\Anaconda3\envs\transformers_qa\lib\site-packages\transformers\utils\import_utils.py", line 905, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "d:\python\Anaconda3\envs\transformers_qa\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "d:\python\Anaconda3\envs\transformers_qa\lib\site-packages\transformers\pipelines\__init__.py", line 50, in <module>
from .image_classification import ImageClassificationPipeline
File "d:\python\Anaconda3\envs\transformers_qa\lib\site-packages\transformers\pipelines\image_classification.py", line 15, in <module>
from PIL import Image
File "d:\python\Anaconda3\envs\transformers_qa\lib\site-packages\PIL\Image.py", line 69, in <module>
from . import _imaging as core
ImportError: DLL load failed: The specified module could not be found.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\Projects2017\bertqa\bertqa\aaa_scratch.py", line 5, in <module>
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipelines
File "<frozen importlib._bootstrap>", line 1032, in _handle_fromlist
File "d:\python\Anaconda3\envs\transformers_qa\lib\site-packages\transformers\utils\import_utils.py", line 893, in __getattr__
value = self._get_module(name)
File "d:\python\Anaconda3\envs\transformers_qa\lib\site-packages\transformers\utils\import_utils.py", line 910, in _get_module
) from e
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
DLL load failed: The specified module could not be found.
>>>
>>
The following is the code I am trying to execute:
<<
#imports
import os, sys
import pandas as PD
import numpy as NP
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipelines
_text = 'I bought a Toyota Corala last January. It works well..'
_questions = [
'What did the person buy?',
'What is working?',
'When did he buy?',
]
_model_name = "deepset/tinyroberta-squad2"
def main():
'''Main entry point'''
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
qa_input = {}
qa_input['question'] = _questions[0]
qa_input['context'] = _text
answer = nlp(QA_input)
print(answer)
'''
Required for all python programs.
'''
if __name__ == '__main__':
print('Starting the QA tool.')
main()
print('Done')
>>
I will appreciate if you let me know how to overcome this issue. Awaiting your reply. | 08-17-2022 11:37:30 | 08-17-2022 11:37:30 | Hi @balachander1964 -- if you check the error log you shared, the error does not come from the `transformers` library. From a quick google search, the error seems to be from the environment set up (try googling `DLL load failed: The specified module could not be found.`)
As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,663 | closed | No module named 'evaluate' | https://github.com/huggingface/transformers/blob/c99e984657b64dd8f19de74405bbf13763ab4f2b/examples/pytorch/language-modeling/run_mlm.py#L35
When I pre-train RoBERTa from scratch using the latest version of `run_mlm.py`, there is an error:
```
import evaluate
ModuleNotFoundError: No module named 'evaluate'
``` | 08-17-2022 10:40:17 | 08-17-2022 10:40:17 | Hi! You need to do `pip install evaluate`. #18666 will also add it to each of the `examples` internal `requirements.txt` file, so it will be installed when you do `pip install -r requirements.txt`<|||||>> Hi! You need to do `pip install evaluate`. #18666 will also add it to each of the `examples` internal `requirements.txt` file, so it will be installed when you do `pip install -r requirements.txt`
Thanks! It work. |
transformers | 18,662 | closed | BartTokenizer add_tokens feature. | Hi @LysandreJik ,
I am working on audio captioning, and the ground truth captions are tokenized using the BartTokenizer. I have observed that some of the words in the captions are not tokenized correctly. For instance, the word 'rumbling'. There is no such word in the tokenizer, and it is tokenized as ['Ġr', 'umbling']. I have tried to add the token (the word 'Ġrumbling') and change the model token embeddings. But instead of tokenizing the word correctly, it is still tokenized as ['Ġr', 'umbling']. Did I miss anything here? I have also faced the same issue with some other words too!
Here is my code!
```
from transformers import AutoTokenizer, BartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base", use_fast=True)
tokenizer.add_tokens(['Ġrumbling'])
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
model.resize_token_embeddings(len(tokenizer))
print(tokenizer.is_fast)
ou_e = 'the rain falls down while someone is pounding a car passes by and the thunder is rumbling'
tok_e = tokenizer(ou_e, max_length=64, return_tensors='pt', padding='max_length')
seq = tokenizer.tokenize(ou_e)
print(seq)
summary_ids = model.generate(tok_e['input_ids'], num_beams=4, min_length=5, max_length=100)
summary = tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(summary)
``` | 08-17-2022 10:17:05 | 08-17-2022 10:17:05 | Thanks for opening an issue @Charithavarma!
@ydshieh, could you take a look here?<|||||>Could confirm the issue. Also occur for slow bart tokenizer. I can see the word is added to the tokenizers, but the output don't change.<|||||>```python
from transformers import AutoTokenizer
tokenizer_slow = AutoTokenizer.from_pretrained("facebook/bart-base", use_fast=False)
tokenizer_fast = AutoTokenizer.from_pretrained("facebook/bart-base", use_fast=True)
print(len(tokenizer_slow.get_vocab()))
print(len(tokenizer_fast.vocab))
print('Ġrumbling' in tokenizer_slow.get_vocab())
print('Ġrumbling' in tokenizer_fast.vocab)
tokenizer_slow.add_tokens(['Ġrumbling'], special_tokens=True)
tokenizer_fast.add_tokens(['Ġrumbling'], special_tokens=True)
print(len(tokenizer_slow.get_vocab()))
print(len(tokenizer_fast.vocab))
print('Ġrumbling' in tokenizer_slow.get_vocab())
print('Ġrumbling' in tokenizer_fast.vocab)
text = 'the rain falls down while someone is pounding a car passes by and the thunder is rumbling'
seq = tokenizer_slow.tokenize(text)
print(seq)
```
gives
```bash
50265
50265
False
False
50266
50266
True
True
['the', 'Ġrain', 'Ġfalls', 'Ġdown', 'Ġwhile', 'Ġsomeone', 'Ġis', 'Ġpounding', 'Ġa', 'Ġcar', 'Ġpasses', 'Ġby', 'Ġand', 'Ġthe', 'Ġthunder', 'Ġis', 'Ġr', 'umbling']
```<|||||>Hi @Charithavarma,
In transformers, an added token will be a token that will be preserved before the tokenizer is applied. In this case, since in the initial sentence the spaces are `" "` and not `"Ġ"`, the `'Ġrumbling'` token is not identified anywhere.
To achieve what you want to do I advise you to try to add the token `" rumbling"`.
Let me know if it solves your issue! :hugs: <|||||>Thank you, @SaulLu ! Is this documented somewhere (I believe so) 🙏 .<|||||>By searching a little I realize that the current documentation is not very explicit on this point. I propose to detail it a little in the PR https://github.com/huggingface/transformers/pull/18687 :relaxed: <|||||>Hi @SaulLu,
Thank you. It solved my problem. But the performance of the BART in my model was reduced!
Is it possible to use the manual tokenizer instead of this BART Tokenizer? Is it compatible?
<|||||>@Charithavarma If you want to use the trained model `facebook/bart-base`, it's always good to use the corresponding tokenizer. If you change the tokenizer (for example, here you add a new token, where the tokenization of sentences may change too - for some examples), it is normal that the model performance is affected (as it never sees the word/token `rumbling` before).
If adding new tokens is really important for your task, you should probably consider fine-tuning the original model with this changed tokenizer. |
transformers | 18,661 | open | Refactor Pytorch `model.generate` method to work on TPU | ### Feature request
Refactor PT version of the method `model.generate` for text generating models to make it compatible with XLA and speed up inference on TPU.
### Motivation
Right now, `model.generate` on PT is extremely slow on TPU compared to CPU and GPU. This is probably due to the fact that some operations done in the PT version of `model.generate` are not XLA compatible, and thus the generation process falls back on CPU. This makes inference on TPU infeasible. A major refactoring work has already been done on its TF counterpart, so it would be nice to have the PT version working as well.
A more in-depth discussion with @gante took place in #12322 and on this [huggingface discussion](https://huggingface.co/spaces/joaogante/tf_xla_generate_benchmarks/discussions/1).
### Your contribution
If there is some interest from the HF team, I can definitely assist during the work. | 08-17-2022 09:25:55 | 08-17-2022 09:25:55 | cc @patrickvonplaten<|||||>Hey @mikcnt,
This sounds like a very cool project and I think we should sooner or later focus on it. Currently I won't have the time to take a closer look here, but my advice would be:
- I think you're totally right in that PyTorch/XLA often falls back on CPU which is why it is very slow. We're luckier here with Jax and TF because if things fall back on CPU the code fails
- It'll take some time to get this fully working so we should start with the easiest example -> see what code changes are necessary to make PyTorch/XLA work with `greedy(...)`
- To set expectations: PyTorch's generate method is one of Transformers most used functions - it's extremely important and we're trying very hard to keep the code readable, easy to understand. If making PyTorch XLA-compatible requires too many changes or makes the code too unreadable we might come to the conclusion that it's just not worth it and maybe just add it as a "experimental" additional function but not in "main" generate. Also @michaelbenayoun @mfuntowicz is that maybe something we want to have only in optimum maybe but not in Transformers? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi,
Any updates on this? When can we expect to generate a function to work on TPUs? Also, will it be part of transformers or optimum? as mentioned by @patrickvonplaten above?<|||||>I won't have time to look into this sadly anytime soon. @gante maybe? <|||||>Added to my `generate` task queue 👍
@divyanshuaggarwal it would be part of `transformers`!<|||||>Thanks @gante!<|||||>Hi, @gante just noticed it had been marked WIP, any ETAs on when can we expect this feature?<|||||>This is not a prioritized feature as you can already use TPUs for generation in Flax and TensorFlow. Since you can easily convert a model from one framework to the other, there is an easy workaround :-)<|||||>Is there any update on this PR?<|||||>@deveworld we are atm exploring PT-level optimizations, which include the static shapes needed for XLA (TPU). A significant upgrade in this direction is likely in the next releases (keep an eye there :) )<|||||>@gante folks from Meta were able to do llama inference on TPU using pytorch XLA. Might be helpful for this issue.
https://pytorch.org/blog/path-achieve-low-inference-latency/?utm_content=254892693&utm_medium=social&utm_source=linkedin&hss_channel=lcp-78618366 |
transformers | 18,660 | closed | _no_load_in_8bit module list have custom ignored layers | ### Feature request
It would be awesome to have a property `_no_load_in_8bit = []` for the 8-bit quantisation so that custom layers can be excluded from conversion (a small sketch of the idea follows below).
CC @younesbelkada
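To make the request concrete, here is a toy sketch of the selection logic this property would enable (everything here — `modules_to_quantize`, `TinyModel` — is made up for illustration and is not an existing transformers API; only the `_no_load_in_8bit` attribute name comes from this request):
```python
from typing import List, Optional

import torch.nn as nn


def modules_to_quantize(model: nn.Module, skip: Optional[List[str]] = None) -> List[str]:
    """Return the names of nn.Linear sub-modules that would be converted to 8-bit,
    honouring a per-model skip list such as the proposed `_no_load_in_8bit`."""
    skip = skip if skip is not None else getattr(model, "_no_load_in_8bit", [])
    return [
        name
        for name, module in model.named_modules()
        if isinstance(module, nn.Linear)
        and not any(name == s or name.startswith(f"{s}.") for s in skip)
    ]


class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(8, 8)
        self.lm_head = nn.Linear(8, 100)
        # Proposed attribute: layers listed here would stay in full precision.
        self._no_load_in_8bit = ["lm_head"]


print(modules_to_quantize(TinyModel()))  # -> ['encoder']
```
In a real implementation, the skipped modules would simply be left as regular `nn.Linear` layers while the rest are swapped for their 8-bit counterparts.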
### Motivation
Would increase flexibility
### Your contribution
Would be testing on `Jukebox`
| 08-17-2022 07:05:17 | 08-17-2022 07:05:17 | Hi @ArthurZucker
Thanks a lot for the feature request!
I have pushed a commit in https://github.com/huggingface/transformers/pull/18646 that should add support for a `no_load_in_8bit_modules` argument in the `from_pretrained` function (a usage sketch follows after this thread). Could you try it in your use case and let me know if it helps?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing as completed in #18646 |
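As a usage illustration of the change discussed in the thread above (a sketch only — the `no_load_in_8bit_modules` argument name is taken from the comment and is an assumption, not verified documentation; `load_in_8bit` and `device_map` are the usual 8-bit loading arguments):
```python
from transformers import AutoModelForCausalLM

# Hypothetical call: `no_load_in_8bit_modules` is the argument name mentioned in
# the comment above and is assumed here, not taken from verified documentation.
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-560m",
    device_map="auto",
    load_in_8bit=True,
    no_load_in_8bit_modules=["lm_head"],  # keep these modules in full precision
)
```
In more recent transformers versions the same idea is typically expressed through the quantization configuration rather than a dedicated `from_pretrained` argument.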
transformers | 18,659 | closed | DeBERTa can't load some parameters | ### System Info
- `transformers` version: 4.21.1
- Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
- Reproduction
```python
from transformers import pipeline
text = "The capital of France is [MASK]"
mlm_pipeline = pipeline('fill-mask', model='microsoft/deberta-base', tokenizer='microsoft/deberta-base')
print(mlm_pipeline(text))
```
- Warning Message
```
Some weights of the model checkpoint at microsoft/deberta-base were not used when initializing DebertaForMaskedLM: ['lm_predictions.lm_head.LayerNorm.bias', 'lm_predictions.lm_head.bias', 'lm_predictions.lm_head.dense.bias', 'lm_predictions.lm_head.dense.weight', 'deberta.embeddings.position_embeddings.weight', 'lm_predictions.lm_head.LayerNorm.weight']
- This IS expected if you are initializing DebertaForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DebertaForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of DebertaForMaskedLM were not initialized from the model checkpoint at microsoft/deberta-base and are newly initialized: ['cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.predictions.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
- Output
```
The capital of France isumption
The capital of France is�
The capital of France iszag
The capital of France isreply
The capital of France isnerg
```
### Expected behavior
When the DeBERTa model is loaded using transformers, it seems that the weights needed for the MLM head (plus the positional embedding weights) are not loaded.
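A quick way to inspect exactly which checkpoint weights are skipped is shown below (a sketch; `output_loading_info=True` makes `from_pretrained` also return the lists of missing and unexpected keys, though the exact key names depend on the checkpoint):
```python
from transformers import DebertaForMaskedLM

# Ask from_pretrained to report which checkpoint weights it could not map.
model, loading_info = DebertaForMaskedLM.from_pretrained(
    "microsoft/deberta-base", output_loading_info=True
)

print("unexpected keys (in checkpoint, but unused):", loading_info["unexpected_keys"])
print("missing keys (newly/randomly initialized):", loading_info["missing_keys"])
```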
There are some issues similar to mine.
- https://github.com/huggingface/transformers/issues/15216
- https://github.com/huggingface/transformers/issues/15673
- https://github.com/microsoft/DeBERTa/issues/74
But the problem does not seem to be resolved yet.
Can you check it? | 08-17-2022 02:54:06 | 08-17-2022 02:54:06 | #18674 should fix this. Thanks for reporting!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |