repo: string (1 distinct value)
number: int64 (1 to 25.3k)
state: string (2 distinct values)
title: string (1 to 487 characters)
body: string (0 to 234k characters)
created_at: string (19 characters)
closed_at: string (19 characters)
comments: string (0 to 293k characters)
transformers
16,753
closed
ValueError: Reference at 'refs/heads/master' does not exist
Hi, in the RAG example I got the error `ValueError: Reference at 'refs/heads/master' does not exist`, raised in `opt/anaconda3/envs/chatbotper/lib/python3.7/site-packages/git/refs/symbolic.py`, line 184, in `_get_ref_info_helper` (`raise ValueError("Reference at %r does not exist" % ref_path)`), after running:
```
python examples/research_projects/rag/finetune_rag.py \
    --data_dir data_dir \
    --output_dir output_dir \
    --model_name_or_path facebook/rag-sequence-nq \
    --model_type rag_sequence \
    --fp16 \
    --gpus 8 \
    --index_name custom \
    --passages_path path/to/data/my_knowledge_dataset \
    --index_path path/to/my_knowledge_dataset_hnsw_index.faiss
```
Any advice? Thanks.
04-13-2022 15:07:09
04-13-2022 15:07:09
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Same question from training yolov5, it seems that there are no effective solutions...
transformers
16,752
closed
`translation_XX_to_YY` pipeline warns about missing max_length even when both max_length and truncation are provided
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.18.0 - Platform: Linux-3.10.0-1160.59.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.12 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: None ### Who can help @patil-suraj @Narsil <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - T5, Pegasus, EncoderDecoder: @patrickvonplaten - Blenderbot, MBART, BART, Marian, Pegasus: @patil-suraj - Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten - Longformer, BigBird: @ydshieh - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patil-suraj, @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten @narsil - Tokenizers: @SaulLu - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): mBART The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Create `translation_XX_to_YY` pipeline with `max_length` populated and `truncation=True` 2. Run an example through the pipeline 3. 
Observe warning ``` from transformers import pipeline, MBart50TokenizerFast, MBartForConditionalGeneration model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt") tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt") tokenizer.src_lang = "es_XX" tokenizer.tgt_lang = "en_XX" pipe = pipeline("translation_es_to_en", model=model, tokenizer=tokenizer, src_lang="es_XX", tgt_lang="en_XX", device=0, batch_size=16) translations = pipe("X"*1000, num_beams=5, max_length=512, truncation=True) # Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation. print(translations) # [{'translation_text': 'enXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'}] ``` ## Expected behavior This warning should not appear. It's unclear whether the max_length is actually respected here, but since the model doesn't die, it seems it might.
04-13-2022 14:31:56
04-13-2022 14:31:56
Hi @erip , It seems the tokenizer does not define itself `tokenizer.model_max_length` which is usually used to set the max length (so `truncation=True` can have a meaning). The problem with passing `max_length` as you do, is that this is actually passed to the `generate(..)` function, which **also** has a `max_length` (it means the maximum length of the generated content). You can tentatively fix by doing this: ```python model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt") tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt") tokenizer.src_lang = "es_XX" tokenizer.tgt_lang = "en_XX" tokenizer.model_max_length = 1024 # <-------------------------------------- pipe = pipeline("translation_es_to_en", model=model, tokenizer=tokenizer, src_lang="es_XX", tgt_lang="en_XX", device=0, batch_size=16) translations = pipe("X"*1000, num_beams=5, max_length=512, truncation=True) # Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation. print(translations) ```` However I feel like this field should be inferred automatically, can you confirm/infirm @patil-suraj ?<|||||>Hmm, I thought I had also tried populating `max_model_length` when using `...Tokenizer.from_pretrained`, but I will need to double-check.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
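For reference, a minimal sketch (not from the thread above) of setting the limit at tokenizer load time instead of after construction; the kwarg is `model_max_length`, and the value 1024 is illustrative, so whether this silences the warning for this particular checkpoint is not verified here:
```python
from transformers import MBart50TokenizerFast

# Sketch: model_max_length can be passed as a tokenizer init kwarg, which gives
# truncation=True a concrete limit to truncate to; 1024 is an illustrative value.
tokenizer = MBart50TokenizerFast.from_pretrained(
    "facebook/mbart-large-50-many-to-one-mmt", model_max_length=1024
)
print(tokenizer.model_max_length)  # 1024
```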
transformers
16,751
closed
CI: setup-dependent pip cache
# What does this PR do? This PR makes two changes to the way we cache our pip dependencies in the `add-model-like.yml` GH actions workflow: 1. The name of the cache depends on the hash of `setup.py`; 2. We do not restore the cache from partial name matches. (this pattern exists in one of our CI files, `github-torch-hub.yml` , [here](https://github.com/huggingface/transformers/blob/main/.github/workflows/github-torch-hub.yml#L30)) Together, these changes will make us start from a fresh environment whenever we change `setup.py`. Having a stale cache was causing us to have dependency problems (e.g. [old, incompatible protobuf version](https://github.com/huggingface/transformers/runs/6007067240?check_suite_focus=true)), and potentially miss dependency issues from fresh installs. If you agree, I will also port these changes to `model-templates.yml` and `update_metdata.yml`, which have the same pattern/issue. EDIT: ported.
04-13-2022 14:22:18
04-13-2022 14:22:18
_The documentation is not available anymore as the PR was closed or merged._<|||||>Cool, porting the changes to the other two files as well then 👍 <|||||>@gante: to learn more from you: how did you figure out the cause of the error you mentioned (https://github.com/huggingface/transformers/runs/6007067240?check_suite_focus=true)? If it were me, I don't even know if I could figure this out!<|||||>@ydshieh It definitely helps that I had this exact issue (stale CI caches) in my previous role :) To pin the error to this issue, I reran the failing CI workflow locally, from a fresh virtual env. Since it ran without issues, I had a look at the `.yml` file and saw that it had a cache for `pip`. Then I went on to see what `pip install -e .[dev]` was doing in the failing CI file, and I noticed that it had error messages due to incompatible package versions, which I did not have locally -- because an old version was cached.
transformers
16,750
closed
Batch size < GPU number when training with Trainer and deepspeed.
# 🚀 Feature request Hi, I am fine-tuning T5-11b using the Trainer with the DeepSpeed integration. I use DeepSpeed ZeRO stage 3 to split T5-11b and its gradients across different GPUs. However, when I try to use multiple GPUs, I found that the argument `per_device_train_batch_size` must be an integer, which means it is at least 1. So when I use more GPUs, the global batch size must increase at the same time, which costs much more GPU memory. Thus, it turns out that I can't fine-tune T5-11b with 2, 4 or 8 A100 (40G) GPUs. So, in general, the DeepSpeed feature doesn't solve the memory issue if the model's size is similar to or larger than the memory of a single GPU. I am therefore requesting support for a global batch size smaller than the number of GPUs, e.g. a train batch size of 2 on 4 GPUs. Here is the link to the argument `per_device_train_batch_size`: https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L122 ## Motivation Fine-tune an extremely large model with a few small-memory GPUs. ## Your contribution I think the deepspeed package supports this feature already, so adding it to the Trainer should not be hard.
04-13-2022 14:12:17
04-13-2022 14:12:17
cc @sgugger <|||||>cc @stas00 for DeepSpeed ;-)<|||||>@zhaowei-wang98, could you please try again explaining what is the problem that you're running into? Please show the actual command line / config you're running as I have a hard time understanding your Issue. > So when I use more GPUs, the batch size must increase at the same time, which will cost must more GPU memory There is no such requirement. In general there is no problem running T5-11B on A100 (40GB) w/ Deepspeed ZeRO-3 - or at least it worked last time I run it - It was done already more than a year ago, perhaps have a look at this old thread https://github.com/huggingface/transformers/issues/9996 and then if you're still stuck tell us more details about your particular setup?<|||||>> Hi @stas00, I am trying to fine-tune t11-3b without CPU offload. So, all the parameters in the model and momentum in the optimizer are loaded on the GPUs. I do this because I found it is very slow to use CPU offload (I have 500k data with an average length of 32 for both input and output). In other words, I deleted: "offload_param": { "device": "cpu", "pin_memory": true }, in the deepspeed configuration file: https://github.com/huggingface/transformers/blob/main/tests/deepspeed/ds_config_zero3.json In contrast, your old thread #9996 used CPU RAM to store the model.<|||||>OK, but I still can't help you since I don't know how to reproduce your issue as you gave no specific instructions to do so, nor you shared anything about your setup other than the type of GPUs. But I can probably do some guessing: ### understanding the memory requirements To train t5-11b you need at least `11*18=200`GB of memory just for the optim/grads/weights (I assume mixed precision) plus memory for activations and temps, so let's say roughly 240GB. With 40GB GPUs, that means at least 6 gpus. So it should be possible to load it on 8x 40GB gpus using deepspeed w/o any offload. ### use the sharded checkpoint Also I recommend for you to switch to the sharded version of t5-11b which I have just [made](https://github.com/huggingface/transformers/issues/16884), by passing to the trainer: `--model_name_or_path t5-11b --model_revision sharded` and use `huggingface@main` as this feature hasn't yet been released. Because if you don't shard you would need 44GB of CPU memory per process, just to load the checkpoint (deepspeed shards it directly to gpus). And with 8 gpus you'd need 352GB of CPU memory just to load 8 checkpoints concurrently. I think with 10GB shards some 100GB of CPU memory should be enough to load the checkpoints concurrently in 8 processes, but then there are extras to copy things and temps. ---------------- I'd be very happy to help you sort it out, but you need to help me first. To continue please be very specific: 1. here is my hardware setup 2. here is my software setup 3. here is my command line (using public data) and ds config file to reproduce the problem with 4. here is the traceback Thank you! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
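As a rough sanity check, the memory arithmetic quoted above can be reproduced as follows; the 18 bytes per parameter comes from the comment, while the 20% margin for activations and temporaries is an assumption chosen to match the rough 240GB estimate:
```python
import math

# Mixed-precision ZeRO training state: fp16 weights + fp16 grads + fp32 master
# weights + Adam moments, roughly 18 bytes per parameter (figure from the comment above).
params_in_billions = 11   # T5-11B
bytes_per_param = 18
gpu_memory_gb = 40        # A100 40GB

state_gb = params_in_billions * bytes_per_param   # ~198 GB
total_gb = state_gb * 1.2                         # assumed margin for activations/temps
min_gpus = math.ceil(total_gb / gpu_memory_gb)
print(f"optimizer/grad/weight state: ~{state_gb} GB, with margin: ~{total_gb:.0f} GB, "
      f"minimum GPUs: {min_gpus}")                # ~198 GB, ~238 GB, 6 GPUs
```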
transformers
16,749
closed
Large differences between T5 weight initialization in TF and torch
- `transformers` version: 4.18.0, master branch ### Who can help @patrickvonplaten I found some significant differences in weight init between the PT and TF implementations of T5. The **embeddings** (model.shared): - In PT, according to `T5PreTrainedModel._init_weights`, they are initialized with random normal with std=1.0: `module.shared.weight.data.normal_(mean=0.0, std=factor * 1.0)` - In TF (TFT5Model), the embeddings are initialized as such: `self.shared = TFSharedEmbeddings(config.vocab_size, config.d_model, name="shared")` Since initializer_range is not being provided, it is using the default, which is `hidden_size**-0.5` (see TFSharedEmbeddings). This means that in the base model (d=768), the weights in PT are being initialized with **stdev=1.0**, and in TF they are being initialized with **stdev=0.036**. The **LM head** (model.lm_head): - In PT, the initializer is not specified, meaning it is being initialized with a uniform distribution in [-sqrt(1/d_model), sqrt(1/d_model)] (https://pytorch.org/docs/stable/generated/torch.nn.Linear.html). The weights don't seem to be initialized in _init_weights either. `lm_head = nn.Linear(config.d_model, config.vocab_size, bias=False)` - In TF, the initializer is explicitly provided (TFT5ForConditionalGeneration): `lm_head_initializer = tf.keras.initializers.RandomNormal(mean=0, stddev=config.initializer_factor)` So, in the base model, the weights in PT are initialized with a uniform distribution of **[-0.036, 0.036]**, and in TF they are initialized with a random normal with **stdev=1.0**. I'm not entirely sure about the actual implications of this in model training. But at least the lm_head weights will have a huge impact in loss values initially. Based on other transformer models I've seen, the "correct" answer seems to be that both weights should be initialised with stdev=1.0. But none of the implementations actually does this.
04-13-2022 14:11:18
04-13-2022 14:11:18
cc @gante @Rocketknight1 <|||||>Thanks a lot for the issue @jorgemcgomes! I think for PyTorch and Tensorflow we actually never really made sure that the init is correct because we mostly focused on fine-tuning. But we should correct this! I think I made sure that the init is correct in Flax's T5 implementation so we could/should use this as a gold-standard. So let's look there at the Embedding and lm_head: - https://github.com/huggingface/transformers/blob/048443db863214aef9c8341517b427edced63c81/src/transformers/models/t5/modeling_flax_t5.py#L1366 - https://github.com/huggingface/transformers/blob/048443db863214aef9c8341517b427edced63c81/src/transformers/models/t5/modeling_flax_t5.py#L1384 So it looks like for both the word embeddings and the lm_head the init should be: `random_normal(mean=0, stddev=config.initializer_factor)` Guess PyTorch got one right and TF the other one. @craffel can you confirm this, that a gaussian normal distribution is used as an init for T5's word embeddings and language model head (in case it's not tied to the word embeddings)<|||||>Yep, MTF initializes embeddings as a standard Gaussian. https://github.com/tensorflow/mesh/blob/a32810e32709e0eaad3b241475d3be0957409adc/mesh_tensorflow/layers.py#L2096<|||||>@jorgemcgomes Thanks for spotting this! Would you be willing to make a PR to bring the TF/PT implementations in line with the JAX one?<|||||>Sure. I can do that in the coming days. But there might be more to this. Based on the experiences I was doing (my problem/data is very specific, and I'm running some modifications in T5, so take this with a grain of salt), "fixing" the lm_head init (from σ=0.03 to σ=1.0) caused huge initial train/valid losses, even causing instability with the same LR. There's this interesting bit: https://github.com/huggingface/transformers/blob/de8b06f9bf908ef1e6317ecb1f74a02313eee72e/src/transformers/models/t5/modeling_t5.py#L1662-L1667 * with tie_word_embeddings=True, the input to the final layer is scaled down by d^-0.5 and multiplied with standard gaussian weights (the embeddings weights). * with tie_word_embeddings=False, the input to the final layer is **not** scaled down, and **if the proposed fix is introduced** it is also multiplied with standard gaussian weights (the lm_head weights). This doesn't sound right, and can explain the large loss values and instability I mentioned. And it might also explain why the current PT implementation of T5v1.1 appears to be working fine: the sequence input is not scaled down, but it is being multiplied with small weights instead (initialised with σ=d^-0.5). Two wrongs that cancel each other? This would mean that the current PT implementation is "fine", but TF and Flax are broken.<|||||>That explanation makes sense to me. Just to confirm, is training stable in the current version with the small TF/Flax init?<|||||>To summarise, based on my experiments with a non-tied LM head (T5v1.1): - small embeddings init, small lm_head init --> stable - small embeddings init, large lm_head init [as in TF] --> unstable - large embeddings init, small lm_head init [as in PT] --> stable - large embeddings init, large lm_head init [as in Flax] --> unstable The init of the embeddings doesn't seem to matter that much at all. Maybe layer norm takes care of that? And large lm_head inits (as found in the current TF and Flax implementations) are always unstable.<|||||>Training > Yep, MTF initializes embeddings as a standard Gaussian. 
https://github.com/tensorflow/mesh/blob/a32810e32709e0eaad3b241475d3be0957409adc/mesh_tensorflow/layers.py#L2096 Thanks for looking this up. So I think both embeddings should then be initialized as: tf.random_normal_initializer( mean=0.0, stddev=0.05, seed=None ) meaning `self.config.initializer_factor` should be set to 0.05. The most important thing is to match the original code-base here. Don't think we need to run different pretrainings to find the best init scheme since stability is always data-dependent. => Seems like the Flax init methods were good to me so I'd suggest to just apply this to PT and TF as well <|||||>@jorgemcgomes, Would you like to open a PR to fix the initialization for T5 here as described in the comment above? Otherwise happy to take over the issue!<|||||>Please take over the issue @patrickvonplaten . This got pretty muddy and I'm not sure what is the right approach here.
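The standard deviations discussed in this issue follow directly from the hidden size; a small sketch reproducing the quoted numbers for the base model (pure arithmetic, no model loading needed):
```python
import math

d_model = 768  # hidden size of the base model

# TF embeddings: TFSharedEmbeddings defaults to initializer_range = hidden_size ** -0.5
tf_embedding_std = d_model ** -0.5
print(f"TF embedding init std: {tf_embedding_std:.3f}")     # ~0.036

# PT lm_head: nn.Linear defaults to uniform in [-1/sqrt(in_features), 1/sqrt(in_features)]
pt_lm_head_bound = 1 / math.sqrt(d_model)
print(f"PT lm_head init bound: +/-{pt_lm_head_bound:.3f}")  # ~0.036

# PT embeddings and the TF/Flax lm_head: normal with std = initializer_factor = 1.0
print("PT embedding / TF lm_head init std: 1.0")
```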
transformers
16,748
closed
[SpeechEncoderDecoderModel] Fix bug in reshaping labels
Currently, the target `labels` are reshaped using the `view` method before being passed into the loss function: https://github.com/huggingface/transformers/blob/06b4aac9ebab77a0065ec2cab40a8085ad71946f/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py#L560 The `view` method requires the Torch Tensor to be _contiguous_ (_cf_ https://pytorch.org/docs/stable/generated/torch.Tensor.view.html). There are certain operations that are commonly performed on the `labels` that might cause them to not be contiguous, for example _slicing_. For speech seq2seq models, if the bos token is appended in the tokenisation step, we cut the bos token by slicing the `labels` as follows: https://github.com/huggingface/transformers/blob/06b4aac9ebab77a0065ec2cab40a8085ad71946f/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py#L207-L210 This slicing operation causes the `labels` to be not contiguous. If `labels` are not contiguous, calling `labels.view(-1)` will throw a RuntimeError. This is demonstrated by the following code snippet: ```python import torch labels = torch.ones((2, 10), dtype=torch.int64) print(f"Contiguous without slicing: {labels.is_contiguous()}") labels.view(-1) labels = torch.ones((2, 10), dtype=torch.int64) labels = labels[:, 1:] print(f"Contiguous with slicing: {labels.is_contiguous()}") labels.view(-1) ``` Output: ``` Contiguous without slicing: True Contiguous with slicing: False --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Input In [137], in <cell line: 10>() 8 labels = labels[:, 1:] 9 print(f"Contiguous with slicing: {labels.is_contiguous()}") ---> 10 labels.view(-1) RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead. ``` And similarly for the speech encoder-decoder model: ```python import torch from transformers import SpeechEncoderDecoderModel model = SpeechEncoderDecoderModel.from_pretrained('hf-internal-testing/tiny-random-speech-encoder-decoder') input_values = torch.ones((2, 1000), dtype=torch.float32) labels = torch.ones((2, 10), dtype=torch.int64) labels = labels[:, 1:] outputs = model(input_values, labels=labels) ``` Output: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Input In [138], in <cell line: 11>() 8 labels = torch.ones((2, 10), dtype=torch.int64) 9 labels = labels[:, 1:] ---> 11 outputs = model(input_values, labels=labels) File ~/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs) 1106 # If we don't have any hooks, we want to skip the rest of the logic in 1107 # this function, and just call forward. 
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] File ~/transformers/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py:560, in SpeechEncoderDecoderModel.forward(self, inputs, attention_mask, decoder_input_ids, decoder_attention_mask, encoder_outputs, past_key_values, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, input_values, input_features, return_dict, **kwargs) 558 logits = decoder_outputs.logits if return_dict else decoder_outputs[0] 559 loss_fct = CrossEntropyLoss() --> 560 loss = loss_fct(logits.reshape(-1, self.decoder.config.vocab_size), labels.view(-1)) 562 if not return_dict: 563 if loss is not None: RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead. ``` This PR follows the advice provided in the PyTorch docs by calling the `.reshape(...)` method instead of `.view(...)`. Calling `reshape` returns `view` if the shapes are compatible, and copies (equivalent to calling [`contiguous()`](https://pytorch.org/docs/stable/generated/torch.Tensor.contiguous.html#torch.Tensor.contiguous)) otherwise. ```python import torch labels = torch.ones((2, 10), dtype=torch.int64) labels = labels[:, 1:] print(f"Contiguous with slicing: {labels.is_contiguous()}") labels.reshape(-1) # no error despite labels being non-contiguous ```
04-13-2022 13:30:24
04-13-2022 13:30:24
_The documentation is not available anymore as the PR was closed or merged._<|||||>Sounds good! <|||||>If I remember correctly `reshape()` == `view()` if the tensor does not need to call `contiguous()`, so good for me!<|||||>> If I remember correctly `reshape()` == `view()` if the tensor does not need to call `contiguous()`, so good for me! Yes, exactly that! Calling `reshape()` returns `view()` if the shapes are compatible, and copies (equivalent to calling [`contiguous()`](https://pytorch.org/docs/stable/generated/torch.Tensor.contiguous.html#torch.Tensor.contiguous)) otherwise.
transformers
16,747
open
pointer to transformer (big) model
# 🌟 New model addition ## Model description <!-- Important information --> Hi, I need a pointer on how to instantiate a Transformer-big from the original Vaswani et al. paper (Attention Is All You Need). I could only find versions of Transformer-like architectures, so it would be useful if this could also be added. ## Open source status * [x] the model implementation is available: (give details): https://research.google/pubs/pub46201/ * [ ] the model weights are available: (give details) * [ ] who are the authors: (mention them, if possible by @gh-username)
04-13-2022 13:18:38
04-13-2022 13:18:38
@anirudt Can I work on this issue?<|||||>Any leads on this?
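Until such a model lands in the library, one way to get the Transformer-big architecture (d_model=1024, 16 heads, 6 encoder and 6 decoder layers, d_ff=4096, dropout 0.3, per the paper) is plain PyTorch; this is only an untrained skeleton, not a transformers-library model or released weights:
```python
import torch

# Transformer-big hyperparameters from "Attention Is All You Need".
# Embeddings, positional encodings and the output projection still need to be
# added around this module; all weights here are randomly initialized.
transformer_big = torch.nn.Transformer(
    d_model=1024,
    nhead=16,
    num_encoder_layers=6,
    num_decoder_layers=6,
    dim_feedforward=4096,
    dropout=0.3,
    batch_first=True,
)

src = torch.randn(2, 10, 1024)  # (batch, source length, d_model)
tgt = torch.randn(2, 7, 1024)   # (batch, target length, d_model)
print(transformer_big(src, tgt).shape)  # torch.Size([2, 7, 1024])
```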
transformers
16,746
closed
Tensor size mismatch in RoBERTa
The following error pops up while running a `TranslationPipeline` using a PyTorch `EncoderDecoderModel` (@patrickvonplaten) consisting of two `RoBERTas`. (@LysandreJik) Curiously it happens on a very specific datapoint in a large-ish dataset, but I'm having trouble digging it out. (It does well on tens of thousands of examples prior to that though.) I think it's the same issue as https://github.com/microsoft/CodeBERT/issues/73, but I don't know how to go about fixing it. Many thanks for any pointers! I'm using `transformers==4.18.0` and the same issue was present on `4.17`, too. ```python File ~/my-repo/.venv/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py:159, in Text2TextGenerationPipeline._forward(self, model_inputs, **generate_kwargs) 157 generate_kwargs["max_length"] = generate_kwargs.get("max_length", self.model.config.max_length) 158 self.check_inputs(input_length, generate_kwargs["min_length"], generate_kwargs["max_length"]) --> 159 output_ids = self.model.generate(**model_inputs, **generate_kwargs) 160 out_b = output_ids.shape[0] 161 if self.framework == "pt": File ~/my-repo/.venv/lib/python3.9/site-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs) 24 @functools.wraps(func) 25 def decorate_context(*args, **kwargs): 26 with self.clone(): ---> 27 return func(*args, **kwargs) File ~/my-repo/.venv/lib/python3.9/site-packages/transformers/generation_utils.py:1156, in GenerationMixin.generate(self, inputs, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, typical_p, repetition_penalty, bad_words_ids, force_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, logits_processor, stopping_criteria, constraints, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, exponential_decay_length_penalty, **model_kwargs) 1149 model_kwargs["attention_mask"] = self._prepare_attention_mask_for_generation( 1150 inputs_tensor, pad_token_id, eos_token_id 1151 ) 1153 if self.config.is_encoder_decoder and "encoder_outputs" not in model_kwargs: 1154 # if model is encoder decoder encoder_outputs are created 1155 # and added to `model_kwargs` -> 1156 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation( 1157 inputs_tensor, model_kwargs, model_input_name 1158 ) 1160 # 4. Prepare `input_ids` which will be used for auto-regressive generation 1161 if self.config.is_encoder_decoder: File ~/my-repo/.venv/lib/python3.9/site-packages/transformers/generation_utils.py:524, in GenerationMixin._prepare_encoder_decoder_kwargs_for_generation(self, inputs_tensor, model_kwargs, model_input_name) 522 encoder_kwargs["return_dict"] = True 523 encoder_kwargs[model_input_name] = inputs_tensor --> 524 model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs) 526 return model_kwargs File ~/my-repo/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs) 1106 # If we don't have any hooks, we want to skip the rest of the logic in 1107 # this function, and just call forward. 
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] File ~/my-repo/.venv/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py:817, in RobertaModel.forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 815 if hasattr(self.embeddings, "token_type_ids"): 816 buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length] --> 817 buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length) 818 token_type_ids = buffered_token_type_ids_expanded 819 else: RuntimeError: The expanded size of the tensor (746) must match the existing size (514) at non-singleton dimension 1. Target sizes: [8, 746]. Tensor sizes: [1, 514] ```
04-13-2022 12:23:10
04-13-2022 12:23:10
I found out what it was: ``` Token indices sequence length is longer than the specified maximum sequence length for this model (746 > 512). Running this sequence through the model will result in indexing errors ``` It would've been easier to diagnose if whatever triggered this message had also emitted an explicit error, I think. Wrapping my loop in `try:` and `except RuntimeError:` allowed me to skip this problematic datapoint even without filtering the dataset based on input sequence length.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Getting the same error here. I think that the tokenizer is not truncating correctly, that would be a bug, no?
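For reference, a sketch of truncating inputs to the tokenizer's maximum length up front so that over-long examples never reach the model; `roberta-base` is used here purely as a stand-in for whichever checkpoint the encoder tokenizer was built from:
```python
from transformers import AutoTokenizer

# Stand-in checkpoint; the real encoder tokenizer would be loaded the same way.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

long_text = "word " * 2000
encoded = tokenizer(long_text, truncation=True, max_length=tokenizer.model_max_length)
print(len(encoded["input_ids"]))  # capped at tokenizer.model_max_length (512 here)
```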
transformers
16,745
closed
KeyError when using AutoTokenizer for facebook/detr-resnet-*
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.16.0 - Platform: Ubuntu 20.04 - Python version: 3.8.12 - PyTorch version (GPU?): 1.11.0 - Tensorflow version (GPU?): NA - Using GPU in script?: NA - Using distributed or parallel set-up in script?: No ### Who can help - DETR: @NielsRogge - Tokenizers: @SaulLu ## Information I'm following this tutorial https://huggingface.co/docs/transformers/serialization on how to export models to ONNX. Trying to export one for DETR but I can't proceed as I'm stuck with this error on AutoTokenizer: ``` Traceback (most recent call last): File "detr_config.py", line 2, in <module> tokenizer = AutoTokenizer.from_pretrained("facebook/detr-resnet-101") File "/home/juan/miniconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 530, in from_pretrained tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)] File "/home/juan/miniconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 565, in __getitem__ raise KeyError(key) KeyError: <class 'transformers.models.detr.configuration_detr.DetrConfig'> ``` Here's the snippet of code to reproduce the error: ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("facebook/detr-resnet-101") ```
04-13-2022 11:42:07
04-13-2022 11:42:07
Hi, DETR is a vision model, not a text model, hence it doesn't have a tokenizer, but a so-called feature extractor (useful for preparing images for the model). You can load it using the AutoFeatureExtractor API: ``` from transformers import AutoFeatureExtractor feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/detr-resnet-101") ```<|||||>thanks @NielsRogge . That worked for me.
transformers
16,744
closed
Reduce Funnel PT/TF diff
# What does this PR do? Same as #15684, but on PT test side. As mentioned in #15684, this is not a real bug in model. Just a setting in the test configuration. ## Comment The change in `modeling_tf_funnel.py` is to address **a real issue** regarding **weight initialization**, see https://github.com/huggingface/transformers/blob/15de7a010ddcdec0532b30d1eb6c28e7b314a6a9/src/transformers/models/funnel/configuration_funnel.py#L79-L84 and https://github.com/huggingface/transformers/blob/15de7a010ddcdec0532b30d1eb6c28e7b314a6a9/src/transformers/models/funnel/modeling_funnel.py#L812-L814 **(But, this issue is not the cause of the large diff between PT/TF)**
04-13-2022 09:27:20
04-13-2022 09:27:20
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,743
closed
[TAPEX] Update drop_rows_to_fit
# What does this PR do? This PR makes `drop_rows_to_fit` an attribute of `TapexTokenizer`, rather than a standalone `TruncationStrategy`. The truncation strategies that can be used are the same as those of BART (as TAPEX is a BART model), meaning `truncation=True` will truncate to the maximum length. However, one can still randomly drop rows based on answers using this attribute.
04-13-2022 09:04:18
04-13-2022 09:04:18
_The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing this as "drop_rows_to_fit" probably deserves to be a truncation strategy on its own.
transformers
16,742
closed
Some weights of the model checkpoint at microsoft/layoutlmv2-base-uncased were not used when initializing LayoutLMv2Model
## Environment info - `transformers` version: 4.18.0 - Platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.8.2+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: No ### Who can help @NielsRogge ## Information Model I am using (Bert, XLNet ...): LayoutLMv2 and LayoutXLM The problem arises when using: * [√] the official example scripts: (give details below) ## To reproduce Steps to reproduce the behavior: I just try to load pretrained LayoutLMv2Model and it seems like weights mismatch in visual backbone. It also happens when I try to load LayoutXLM model. It said: `This IS NOT expected if you are initializing LayoutLMv2Model from the checkpoint of a model that you expect to be exactly identical.` Detectron2 installed with: ``` python -m pip install detectron2 -f \ https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.8/index.html ``` CODE: ``` from transformers import LayoutLMv2Model model = LayoutLMv2Model.from_pretrained("microsoft/layoutlmv2-base-uncased") ``` and ``` from transformers import LayoutLMv2Model model = LayoutLMv2Model.from_pretrained("microsoft/layoutxlm-base") ``` OUTPUT: ``` Some weights of the model checkpoint at microsoft/layoutlmv2-base-uncased were not used when initializing LayoutLMv2Model: ['layoutlmv2.visual.backbone.bottom_up.res4.20.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv3.norm.num_batches_tracked', 
'layoutlmv2.visual.backbone.bottom_up.res4.6.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv2.norm.num_batches_tracked', 
'layoutlmv2.visual.backbone.bottom_up.res3.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.stem.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv1.norm.num_batches_tracked'] - This IS expected if you are initializing LayoutLMv2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing LayoutLMv2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). ``` ## Expected behavior Is it normal for this mismatch or I have something wrong?
04-13-2022 06:05:54
04-13-2022 06:05:54
Hi, Yes this is expected: as you can see, the warning only lists "num_batches_tracked" entries. These are running statistics for batch norm layers, not trainable parameters.<|||||>@NielsRogge I understand now, thank you for your reply🤗
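For context, `num_batches_tracked` is a buffer of PyTorch batch-norm layers rather than a learnable parameter, which is why the skipped entries are harmless; a minimal illustration:
```python
import torch

bn = torch.nn.BatchNorm2d(num_features=3)
print([name for name, _ in bn.named_parameters()])  # ['weight', 'bias'] (learnable)
print([name for name, _ in bn.named_buffers()])     # ['running_mean', 'running_var', 'num_batches_tracked']
```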
transformers
16,741
closed
[modeling_utils] better explanation of ignore keys
Integrating the improved explanation of ignore keys by @sgugger at https://github.com/huggingface/transformers/issues/16719#issuecomment-1096878395 with some tweaks from myself. It's still unclear whether they should include the base model prefix or not, but we can sort it out when https://github.com/huggingface/transformers/issues/16719 gets more clarity @sgugger
04-13-2022 02:54:05
04-13-2022 02:54:05
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,740
closed
[trainer / deepspeed] fix hyperparameter_search
This PR: - fixes `hyperparameter_search` deepspeed config reset fix up that got out of sync with the normal code path - adds a test so that will not happen in the future. - adds a new group of pip deps: `deepspeed-testing` @sgugger Fixes: https://github.com/huggingface/transformers/pull/11966#issuecomment-1058493821
04-13-2022 02:26:11
04-13-2022 02:26:11
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,739
closed
Replace assertion with exception
# What does this PR do? Replaces assert with Exceptions as per https://github.com/huggingface/transformers/issues/12789. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
04-12-2022 22:00:36
04-12-2022 22:00:36
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16739). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
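A generic illustration of the assert-to-exception pattern this kind of PR applies; the function and argument names below are made up for the example, not taken from the actual diff:
```python
# Before: assertions are stripped when Python runs with -O, so they are
# unreliable for validating user input.
def check_heads_with_assert(num_attention_heads: int) -> None:
    assert num_attention_heads > 0, "num_attention_heads must be positive"

# After: raise an explicit exception that survives optimized mode and
# carries a clear error message.
def check_heads(num_attention_heads: int) -> None:
    if num_attention_heads <= 0:
        raise ValueError(f"num_attention_heads must be positive, got {num_attention_heads}")
```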
transformers
16,738
closed
Add self training code for text classification
This is an implementation of the self-training algorithm (without task augmentation) for classification tasks proposed in the [EMNLP 2021](https://2021.emnlp.org/) paper: [STraTA: Self-Training with Task Augmentation for Better Few-shot Learning](https://arxiv.org/abs/2109.06270). For the original codebase, please check out https://github.com/google-research/google-research/tree/master/STraTA. Note that this code can be used as a tool for automatic data labeling. The pull request includes a README.md file with detailed instructions on how to set up a virtual environment and install necessary packages. It also includes a demo `run.sh` on how to perform self-training with a BERT Base model on the SciTail science entailment dataset using 8 labeled examples per class.
04-12-2022 21:34:40
04-12-2022 21:34:40
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Very nice, thanks a lot for adding this new example! Just to be sure, the empty strata file is intended? I didn't get why it's there. Good catch, @sgugger. Just removed the empty strata file. Thanks!
transformers
16,737
closed
Fix the Conda package build
I saw that [the Conda builds are failing since v4.12](https://github.com/huggingface/transformers/actions/workflows/release-conda.yml). The main problem is that, for some reason, [the build tries to install `setuptools` but Conda build forbids it](https://github.com/huggingface/transformers/runs/5853294684?check_suite_focus=true#step:6:2172). I found [an answer in StackOverflow](https://stackoverflow.com/a/64825075/1165181) that shows it can be fixed by adding the flags ` --single-version-externally-managed --record=record.txt` to the `python setup.py install` command in the `build.sh` file (note the `--record` flag is also needed, otherwise the command fails, stating so). I also updated the tokenizers version specification, which seemed to have been forgotten to be updated in this file as well. I added `conda-verify`, which `conda build` uses for some sanity checks. Finally, I changed `conda-build` to `conda build`, which seems to be the way to use this command. It'd be good if somebody can check this on their end, to double-check it's working fine: ```bash conda create -n build-transformers -c huggingface python=3.8 anaconda-client conda-build conda-verify conda activate build-transformers TRANSFORMERS_VERSION=$(python setup.py --version) conda build .github/conda ```
04-12-2022 19:09:09
04-12-2022 19:09:09
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger I saw you have worked on the Conda packaging of this repo before. Can you look into it? IMHO this PR doesn't take a lot of time to review.<|||||>@LysandreJik friendly ping!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@LysandreJik friendly ping :)<|||||>Sorry, just got to it! I managed to get the build to run correctly by just adding the `tokenizers` line :tada: Let me know if you'd like for me to send logs. Are you sure we need the rest? I'd be happy to merge your PR with just the tokenizers changes :)<|||||>Yeah. I explained why in the PR description<|||||>Would you like me to share the logs I got locally showing only the `tokenizers` change was necessary?<|||||>> Would you like me to share the logs I got locally showing only the `tokenizers` change was necessary? I left only the `tokenizers` change now.<|||||>The failing tests are flaky, right?
transformers
16,736
closed
[Flax] Torch fp16 model weights not upcast when loaded in Flax
In some scenarios, one may want to load a Flax model directly from pre-trained PyTorch model weights. In this process, the original dtype of the PyTorch model weights is maintained when loaded into Flax. For models such as [bart-large](https://huggingface.co/facebook/bart-large), which has it's PyTorch weights stored in fp16 on the Hub, this can result in a Flax model with weights in an undesirable dtype. This is highlighted by the following code snippet, which first loads a FlaxSpeechEncoderDecoderModel from entirely fp32 PyTorch weights, and then again from fp32 encoder weights and fp16 decoder weights: ```python from transformers import FlaxSpeechEncoderDecoderModel # fp32 PyTorch weights encoder_id = 'hf-internal-testing/tiny-random-wav2vec2' decoder_id = 'hf-internal-testing/tiny-random-bart' model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id, encoder_from_pt=True, decoder_from_pt=True) print("-----------From fp32 PyTorch weights-----------") print(f"Encoder dtype: {model.params['encoder']['masked_spec_embed'].dtype}") print(f"Decoder dtype: {model.params['decoder']['model']['decoder']['embed_tokens']['embedding'].dtype}") # same decoder as previously, but with weights downcasted to fp16 decoder_id = 'sanchit-gandhi/tiny-random-bart-fp16' model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id, encoder_from_pt=True, decoder_from_pt=True) print("---------From fp32/fp16 PyTorch weights---------") print(f"Encoder dtype: {model.params['encoder']['masked_spec_embed'].dtype}") print(f"Decoder dtype: {model.params['decoder']['model']['decoder']['embed_tokens']['embedding'].dtype}") ``` Output: ``` -----------From fp32 PyTorch weights----------- Encoder dtype: float32 Decoder dtype: float32 ---------From fp32/fp16 PyTorch weights--------- Encoder dtype: float32 Decoder dtype: float16 ``` Having a model stored in two different dtype raises issues with training - Optax optimisers expect the model to maintain one uniform dtype. Furthermore, the default assumption is that all Flax model weights are in fp32. This weight conversion is handled by the general conversion script: https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_flax_pytorch_utils.py. Would it be wise to inform the user of the potentially erroneous model dtype in this scenario? If informed, they could then call the `to_fp32` method from `modeling_flax_utils` to upcast the weights to fp32: https://github.com/huggingface/transformers/blob/a9604067225219e132abdff2793f78ead798453b/src/transformers/modeling_flax_utils.py#L231
04-12-2022 18:12:27
04-12-2022 18:12:27
cc @patrickvonplaten @patil-suraj <|||||>Great catch! Gosh, we should have never uploaded `bart-large` with fp16 weights - I think this happened accidentally and a long time ago :-/ Usually we want all weights to be stored as full fp32 weights. To be honest, for now I think this is really an edge-case - I don't know any model besides bart that has its weights uploaded in fp16, so I think we could do three things here: - 1. Don't do anything - it's an edge case - 2. Make `from_pretrained(...)` error out - 3. Automatically convert to fp32 I strongly advocate for 1. or 2. here. I'll upload the original weights of `bart-large` in full fp32 probably in a separate repo now. What do you think? @patil-suraj @sanchit-gandhi <|||||>Might be related to https://github.com/huggingface/transformers/issues/15559<|||||>If this is solely an issue concerning `bart-large` and this truly is an edge-case, then 1 or 2 seem reasonable. 3 could cause some serious ramifications for instances where the fp16 model is currently used (e.g. new OOMs with training). In 2, would erroring out completely prohibit the user from loading weights in fp16, or just provide them with a warning and the advice to upcast the weights/load from fp32?<|||||>In 2. I'd completely error out and state that the two checkpoints have different precision and can't be combined<|||||>My worry with completely erroring out is that it prevents the user from ever being able to load the model, even if they have the intent of upcasting/correcting for the dtype. My suggestion would be to add a warning to the [`from_pretrained`](https://github.com/huggingface/transformers/blob/b24201fa44e1a14e83be890dcbc231e926c37ec1/src/transformers/modeling_flax_utils.py#L298) method in [`modeling_flax_utils.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_flax_utils.py) if the Flax model weights are loaded in a dtype other than fp32. This could then be followed by the advice that the user should upcast to fp32 using the provided method `to_fp32`. By displaying a warning instead of erroring out, the user is still able to load the model and then subsequently rectify any dtype mismatch.<|||||>Guess adding a warning and no automatic upcasting is fine as well! Just not in favor of automatic upcasting :-)<|||||>Agree with @sanchit-gandhi here. I'm in favour of adding a warning and letting the user know that weights are not `fp32`.<|||||>The user warning for the Flax `.from_pretrained` method was implemented in #16762. As an extreme edge case and following an extensive offline discussion, it was decided that the fp16 PyTorch weights for [bart-large](https://huggingface.co/facebook/bart-large) will remain as is. The original checkpoint has been reconverted and uploaded in fp32 to another repo for those wishing to explicitly use full-precision weights: https://huggingface.co/patrickvonplaten/bart-large-fp32 Note that the fp16 weights should not be an issue for any PyTorch models: the PyTorch `.from_pretrained` method automatically upcasts model weights to fp32.
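For reference, the kind of check such a warning relies on can be sketched as follows (an illustrative helper, not the actual implementation merged in #16762):

```python
import jax.numpy as jnp
from flax.traverse_util import flatten_dict


def warn_if_not_fp32(params):
    # report any parameter leaves that are not stored in float32
    non_fp32 = [k for k, v in flatten_dict(params).items() if v.dtype != jnp.float32]
    if non_fp32:
        print(
            f"Some weights are not in float32: {non_fp32}. "
            "Consider upcasting them with `model.params = model.to_fp32(model.params)`."
        )
```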
transformers
16,735
closed
[PegasusConfig] wrong default vocab_size
In PegasusConfig (https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/models/pegasus/configuration_pegasus.py#L108), the default vocab size should be 96000 instead of 50265. @patrickvonplaten
04-12-2022 18:00:39
04-12-2022 18:00:39
Hey @yaozhaogoogle, Thanks for the issue, could you maybe link to the original configuration that shows a default vocab size of 96000?<|||||>From the GitHub repo https://github.com/google-research/pegasus , there is a link to the checkpoints and vocabs, https://pantheon.corp.google.com/storage/browser/pegasus_ckpt . They are all using a single vocab size of 96k<|||||>Thanks for the link @yaozhaogoogle. Note that in the configuration we just provide a default value that could be used when initializing Pegasus from scratch. If one loads a pretrained checkpoint the vocab size is overwritten by the value defined in the config on the HF Hub. E.g. this Pegasus checkpoint: https://huggingface.co/google/pegasus-large/blob/main/config.json#L122 has a vocab size of 96000 which would be used when doing: ```py from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large") model = AutoModelForSeq2SeqLM.from_pretrained("google/pegasus-large") ```<|||||>Thanks for the explanation!
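For anyone initializing Pegasus from scratch rather than from a Hub checkpoint, the default can simply be overridden (a minimal sketch, using the vocab size discussed above):

```python
from transformers import PegasusConfig, PegasusForConditionalGeneration

config = PegasusConfig(vocab_size=96000)  # override the library default
model = PegasusForConditionalGeneration(config)
print(model.config.vocab_size)  # 96000
```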
transformers
16,734
closed
Partial checkpoint support for SMP
# What does this PR do? - Adds 3 new training args(`smp_save_partial` and `smp_load_partial`) to support partial checkpointing with SMP. `smp_tensor_parallel_full_model` to apply tensor parallelism to whole model. - Uses the right ranks for partial checkpoint saving in should_save. - Uses `local_state_dict()` with partial checkpoint saving. - Uses `smp.save` instead of `torch.save` when partial checkpoint saving is enabled. - Uses `smp.load` instead of `torch.load` when partial checkpoint loading is enabled. Reorders partial checkpoint loading to happen after wrapping of model, since `smp.load` can only load to a smp model. - Updated checks for the existence of checkpoint files since smp partial checkpoints contain postfixes in addition to filename(example: filename_0_0 or filename_0_0_0). - Skip checkpoint sharding when smp is enabled. - `smp_gather` is causing increased memory usage on GPU0 when tensor parallelism is enabled. Switches to `distributed_concat` for ddp. - adds `load_best_model_at_end` support for SMP. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
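A rough sketch of the save-path branching described above. The `smdistributed.modelparallel` ("smp") import path and call signatures are assumptions based on its documentation and may differ between library versions; `model` is assumed to already be SMP-wrapped when partial saving is enabled:

```python
import torch


def save_checkpoint(model, path, smp_save_partial=False):
    if smp_save_partial:
        # assumed import path and signature for the SageMaker model parallel library
        import smdistributed.modelparallel.torch as smp

        # each rank saves only its own partition; smp appends rank postfixes such as "_0_0"
        smp.save(model.local_state_dict(), path, partial=True)
    else:
        torch.save(model.state_dict(), path)
```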
04-12-2022 17:17:05
04-12-2022 17:17:05
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16734). All of your documentation changes will be reflected on that endpoint.<|||||>@cavdard could you please run `make style` to apply the correct coding formatting? <|||||>> @cavdard could you please run `make style` to apply the correct coding formatting? Update: Resolved by running `pip install -e .[quality]` @philschmid Having this error. Am I missing a step? ``` make style black examples tests src utils make: black: No such file or directory ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,733
closed
[FlaxBartForCausalLM] Embed tokens not loaded in Flax decoder model from encoder-decoder weights
The embeddings module `embed_tokens` is not loaded from pre-trained Flax model weights when a FlaxBartForCausalLM model is instantiated in isolation. As a consequence, these embedding weights are randomly initialised. The following code snippet demonstrates this fact by comparing the FlaxBartForCausalLM model to its PyTorch equivalent, BartForCausalLM. For the PyTorch (resp. Flax) model, the weights are loaded from pre-trained PyTorch (resp. Flax) weights at https://huggingface.co/sanchit-gandhi/tiny-random-bart. These model weights are identical to those in the repository at https://huggingface.co/hf-internal-testing/tiny-random-bart, but with the exception that this repository contains both Flax and PyTorch weights, unlike those at hf-internal-testing which contain only PyTorch weights. ```python from transformers import BartForCausalLM, FlaxBartForCausalLM import tempfile from flax.traverse_util import flatten_dict pt_dec_model = BartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart') fx_dec_model = FlaxBartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart') # Convert the PT model to FX with tempfile.TemporaryDirectory() as tmpdirname: pt_dec_model.save_pretrained(tmpdirname) pt_dec_model_to_fx = FlaxBartForCausalLM.from_pretrained(tmpdirname, from_pt=True) # easier to work in terms of flattened dicts pt_dec_params_to_fx = flatten_dict(pt_dec_model_to_fx.params) fx_dec_params = flatten_dict(fx_dec_model.params) # Check that all keys match assert fx_dec_params.keys() == pt_dec_params_to_fx.keys() # Check that all the weights are **precisely** equal for param in pt_dec_params_to_fx: assert (fx_dec_params[param] == pt_dec_params_to_fx[param]).all(), param ``` Output: ``` --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) Input In [211], in <cell line: 21>() 20 # Check that all the weights are **precisely** equal 21 for param in pt_dec_params_to_fx: ---> 22 assert (fx_dec_params[param] == pt_dec_params_to_fx[param]).all(), param AssertionError: ('model', 'decoder', 'embed_tokens', 'embedding') ``` We see here that the embedding weights do not match for the standalone decoder models: the `embed_tokens` are not loaded from pre-trained in Flax, and are instead randomly initialised. Loading full encoder-decoder models, we see that the weights match for the embeddings: ```python from transformers import BartModel, FlaxBartModel import tempfile from flax.traverse_util import flatten_dict pt_model = BartModel.from_pretrained('sanchit-gandhi/tiny-random-bart') fx_model = FlaxBartModel.from_pretrained('sanchit-gandhi/tiny-random-bart') # Convert the PT model to FX with tempfile.TemporaryDirectory() as tmpdirname: pt_model.save_pretrained(tmpdirname) pt_model_to_fx = FlaxBartModel.from_pretrained(tmpdirname, from_pt=True) # easier to work in terms of flattened dicts pt_params_to_fx = flatten_dict(pt_model_to_fx.params) fx_params = flatten_dict(fx_model.params) # Check that all keys match assert fx_params.keys() == pt_params_to_fx.keys() # Check that all the weights are **precisely** equal for param in pt_params_to_fx: assert (fx_params[param] == pt_params_to_fx[param]).all(), param ``` A fix is needed to be able to load the Flax encoder-decoder embedding weights into a standalone decoder module.
04-12-2022 16:23:26
04-12-2022 16:23:26
In the PyTorch Bart modelling script, we first define a 'shared' nn.Embedding module, which we then directly pass to the encoder and decoder modules to explicitly tie their word embeddings: https://github.com/huggingface/transformers/blob/14daa6102a0e8a35ef734dd21bfcf31d9b0207d1/src/transformers/models/bart/modeling_bart.py#L1146-L1149 Due to the stateful nature of PyTorch modules, we can then overwrite this embedding in the `init` method of the encoder or decoder, depending on whether or not the optional keyword argument `embed_tokens` is specified: https://github.com/huggingface/transformers/blob/cc034f72eb6137f4c550e911fba67f8a0e1e98fa/src/transformers/models/bart/modeling_bart.py#L710-L713 (Note that for decoder-only models, we do not specify the argument `embed_tokens` for the decoder module. Thus, it defaults to being initialised in the decoder module's `init`). For the encoder-decoder model, there are three instances in which the embeddings are defined: as `shared` under the BartModel, and again as `embed_tokens` in the encoder and decoder models. This yields the following parameter tree: ``` PT enc-dec model shared encoder embed_tokens ... decoder embed_tokens ... ``` Likewise, in the Flax Bart modelling script, we first define a 'shared' nn.Embed module, which we then directly pass to the encoder and decoder modules to explicitly tie their word embeddings: https://github.com/huggingface/transformers/blob/cc034f72eb6137f4c550e911fba67f8a0e1e98fa/src/transformers/models/bart/modeling_flax_bart.py#L839-L847 However, due to the stateless nature of JAX/Flax models, we cannot then overwrite this embedding in the `setup` method of the encoder or decoder. To address this, it was decided in #15920 that the keyword argument `embed_tokens` must always be specified to the encoder/decoder modules. Thus, there is only one instance in which the embeddings are defined: as `shared` under the FlaxBartModel. This results in different parameter tree to that in PyTorch: ``` FX enc-dec model shared encoder ... decoder ... ``` For encoder-decoder models, PyTorch to Flax conversion is possible: the Flax encoder-decoder model is able to leverage the PyTorch `shared` embedding weights, and then pass these into the encoder and decoder separately (effectively tying the weights, but only having one variable)
. However, an issue arises for decoder only models. Here, the Flax decoder cannot leverage all of the Flax encoder-decoder model weights. This is due to the format of its parameter tree, which is constructed jointly through the [FlaxBartDecoderWrapper](https://github.com/huggingface/transformers/blob/a192f61e0825150e54e15fdc451cf37e23532b3f/src/transformers/models/bart/modeling_flax_bart.py#L1863) and [FlaxBartForCausalLM](https://github.com/huggingface/transformers/blob/a192f61e0825150e54e15fdc451cf37e23532b3f/src/transformers/models/bart/modeling_flax_bart.py#L1885) module: ``` FX dec-only model decoder embed_tokens ... ``` Since there is no module `shared` in Flax decoder only models, the system is not able to leverage the embedding weights registered under `shared` from the Flax encoder-decoder model weights. However, `embed_tokens` is now defined under the decoder module, meaning that we are able to leverage PyTorch encoder-decoder or decoder-only model weights and load them into Flax: ``` PT dec-only model decoder embed_tokens ... ``` Potential solutions: - In the [FlaxBartDecoderWrapper](https://github.com/huggingface/transformers/blob/a192f61e0825150e54e15fdc451cf37e23532b3f/src/transformers/models/bart/modeling_flax_bart.py#L1863), we can rename `embed_tokens` to `self.shared`, thus bringing the param trees of the Flax encoder-decoder and decoder-only models into alignment. Doing so enables the decoder only embeddings to be loaded from Flax encoder-decoder model weights. However, this is not an ideal solution: by renaming the module, we will no longer be able to load Flax decoder only model weights from PyTorch (encoder-)decoder weights, as these parameter trees will not match. - We could define a `self.shared` nn.Embedding module in the PyTorch [DecoderWrapper](https://github.com/huggingface/transformers/blob/a192f61e0825150e54e15fdc451cf37e23532b3f/src/transformers/models/bart/modeling_bart.py#L1680) and then pass this into the decoder model. This maintains consistency between the encoder-decoder style models and the decoder-only ones. (Define `shared` outside the modules, then pass it in, thus registering a state-dict of `(shared), (decoder, embed_tokens)` instead of just `(decoder, embed_tokens)`). However, this is a breaking change for PyTorch Bart models, and should be avoided. - What is probably easier and more effective than both of the above is explicit naming of the `embed_tokens` module in the Flax Bart encoder and decoder modules, giving a parameter tree that exactly matches the PyTorch one, both for encoder-decoder and decoder only models. ``` Enc-dec model shared encoder embed_tokens ... decoder embed_tokens ... ``` ``` Dec-only model decoder embed_tokens ... 
``` It is not apparent from the Flax docs how this explicitly naming can be achieved, but I have asked the Flax community on the Flax discussions page how to go about doing this: https://github.com/google/flax/discussions/2046#discussion-4004536<|||||>cc @patrickvonplaten @patil-suraj <|||||>As a temporary fix, a standalone Flax decoder model can be loaded entirely from it's equivalent PyTorch weights and the `embed_tokens` made to match: ```python from transformers import BartForCausalLM, FlaxBartForCausalLM import tempfile from flax.traverse_util import flatten_dict pt_dec_model = BartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart') # force Flax weights to be loaded from PyTorch - enables `embed_tokens` to be loaded correctly fx_dec_model = FlaxBartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart', from_pt=True) # Convert the PT model to FX with tempfile.TemporaryDirectory() as tmpdirname: pt_dec_model.save_pretrained(tmpdirname) pt_dec_model_to_fx = FlaxBartForCausalLM.from_pretrained(tmpdirname, from_pt=True) # easier to work in terms of flattened dicts pt_dec_params_to_fx = flatten_dict(pt_dec_model_to_fx.params) fx_dec_params = flatten_dict(fx_dec_model.params) # Check that all keys match assert fx_dec_params.keys() == pt_dec_params_to_fx.keys() # Check that all the weights are **precisely** equal for param in pt_dec_params_to_fx: assert (fx_dec_params[param] == pt_dec_params_to_fx[param]).all(), param ```<|||||>Hmm isn't the problem here that the weights are not correctly mapped? E.g. when I run the first part of your codesnippet: ```py from transformers import BartForCausalLM, FlaxBartForCausalLM import tempfile from flax.traverse_util import flatten_dict pt_dec_model = BartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart') fx_dec_model = FlaxBartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart') # Convert the PT model to FX with tempfile.TemporaryDirectory() as tmpdirname: pt_dec_model.save_pretrained(tmpdirname) pt_dec_model_to_fx = FlaxBartForCausalLM.from_pretrained(tmpdirname, from_pt=True) ``` It says that: ``` ... Some weights of FlaxBartForCausalLM were not initialized from the model checkpoint at sanchit-gandhi/tiny-random-bart and are newly initialized: {('model', 'decoder', 'embed_tokens', 'embedding')} You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Some weights of the model checkpoint at /tmp/tmpqaqqrtft were not used when initializing FlaxBartForCausalLM: {('lm_head', 'kernel')} - This IS expected if you are initializing FlaxBartForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing FlaxBartForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). ``` which shows that those weights are not correctly loaded into the Flax model, so IMO the bug is that the Flax model doesn't correctly convert the naming here. Also note that 99% of all Bart models have their output embeddings tied to their input embeddings, see: https://github.com/huggingface/transformers/blob/eb5bdcdfa51f743887ee1d9c7f230444d7a8b23c/src/transformers/models/bart/modeling_flax_bart.py#L1929 in which case the `lm_head` weights are irrelevant. But agree that there is an error nevertheless. 
The solution should be to correct the naming / weight conversion here though IMO<|||||>The initialisation of parameters works slightly differently between PyTorch and Flax. In PyTorch, any module defined under a model's `init` will be added to the model's state-dict. In Flax, modules are first defined in the `setup` method, but are only added to the param dict if traced in the `call` method when the dummy forward pass is performed during model initialisation. With that being said, the `lm_head` is always added to the state-dict in PyTorch, whether or not the word embeddings are tied. However, this is not the case in Flax - the `lm_head` is only added to the param dict _if_ used in the `call` method. Inspecting the model code, we see this is only the case if the word embeddings are not tied: https://github.com/huggingface/transformers/blob/eb5bdcdfa51f743887ee1d9c7f230444d7a8b23c/src/transformers/models/bart/modeling_flax_bart.py#L1927-L1931 If we look back to the code snippet for the loading of the decoder-only PyTorch and Flax models, we can confirm that the word embeddings are tied, and that the `lm_head` is instantiated for the PyTorch model and not the Flax one (as expected): ```python from transformers import BartForCausalLM, FlaxBartForCausalLM pt_dec_model = BartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart') fx_dec_model = FlaxBartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart') print(f"Tie word embeddings? PyTorch: {pt_dec_model.config.tie_word_embeddings}, Flax: {fx_dec_model.config.tie_word_embeddings}") print(f"PyTorch Decoder modules: {[n for n, _ in pt_dec_model.named_children()]}") print(f"Flax Decoder modules: {fx_dec_model.params.keys()}") ``` Output: ``` ... Some weights of FlaxBartForCausalLM were not initialized from the model checkpoint at sanchit-gandhi/tiny-random-bart and are newly initialized: {('model', 'decoder', 'embed_tokens', 'embedding')} ... Tie word embeddings? PyTorch: True, Flax: True PyTorch Decoder modules: ['model', 'lm_head'] Flax Decoder modules: dict_keys(['model']) ``` When loading from pre-trained Flax weights, we see that the only parameters randomly initialised are the `embed_tokens`. We perform a slightly different operation when comparing the PyTorch weights to those in Flax - we first save the PyTorch model to a temporary directory (`.save_pretrained(tmpdirname)`) and then load this model from it's PyTorch weights into Flax: ```python # Convert the PT model to FX with tempfile.TemporaryDirectory() as tmpdirname: pt_dec_model.save_pretrained(tmpdirname) pt_dec_model_to_fx = FlaxBartForCausalLM.from_pretrained(tmpdirname, from_pt=True) ``` Here, the PyTorch weights for the `lm_head` are saved. However, since the `lm_head` is not used in the `call` method of the Flax model, they are subsequently not used when loading the PyTorch model into Flax. Thus, we expect to see the aforementioned message: ``` Some weights of the model checkpoint at /tmp/tmpqaqqrtft were not used when initializing FlaxBartForCausalLM: {('lm_head', 'kernel')} ``` When we run the full code snippet, we see that the only weights that do not match between PyTorch and Flax decoder-only models are the `embed_tokens` - the `lm_head` is not used in Flax and so is ignored from this comparison. This is the core issue, which arises due to a different parameter structure between the Flax encoder-decoder models and the Flax decoder-only models. 
For Flax encoder-decoder, the tied word embeddings are held under the module `shared`, which explicitly ties the word embedding tokens for the encoder and decoder: ``` FX enc-dec model shared encoder ... decoder ... ``` For Flax decoder-only models, we do not have the module `shared`, giving the modified parameter tree: ``` FX dec-only model decoder embed_tokens ... ``` The reason we omit the `shared` module is to give one-to-one equivalence to the corresponding PyTorch decoder-only state-dict: ``` PT dec-only model decoder embed_tokens ... ``` To remedy this issue, we have three choices: 1. We can either insert a module named `shared` for the Flax decoder-only, and enable it to be compatible with Flax encoder-decoder models: ``` FX dec-only model shared decoder ... ``` However, this would then break equivalence between PT and FX decoder-only models, the parameter trees now differing. 2. We keep the current structure and allow for PT and FX decoder-only model equivalence. 3. We explicitly add `embed_tokens` as a named module under the FX encoder-decoder model: ``` FX enc-dec model shared encoder embed_tokens ... decoder embed_tokens ... ``` Which enables PT - FX equivalence for both encoder-decoder and decoder-only models. Of the three, the latter is my preference, as it allows for full compatibility between the different frameworks.<|||||>The full code snippet that examines the `tie_word_embeddings` variable as well as the parameter weights: ```python from transformers import BartForCausalLM, FlaxBartForCausalLM pt_dec_model = BartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart') fx_dec_model = FlaxBartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart') print(f"Tie word embeddings? PyTorch: {pt_dec_model.config.tie_word_embeddings}, Flax: {fx_dec_model.config.tie_word_embeddings}") print(f"PyTorch Decoder modules: {[n for n, _ in pt_dec_model.named_children()]}") print(f"Flax Decoder modules: {fx_dec_model.params.keys()}") # Convert the PT model to FX with tempfile.TemporaryDirectory() as tmpdirname: pt_dec_model.save_pretrained(tmpdirname) pt_dec_model_to_fx = FlaxBartForCausalLM.from_pretrained(tmpdirname, from_pt=True) # easier to work in terms of flattened dicts pt_dec_params_to_fx = flatten_dict(pt_dec_model_to_fx.params) fx_dec_params = flatten_dict(fx_dec_model.params) # Check that all keys match assert fx_dec_params.keys() == pt_dec_params_to_fx.keys() # Check that all the weights are **precisely** equal mismatch_params = [] print("Checking weights match...") for param in pt_dec_params_to_fx: if (fx_dec_params[param] != pt_dec_params_to_fx[param]).all(): mismatch_params.append(param) if len(mismatch_params) == 0: print("✅ All PyTorch and Flax parameters match") else: print("❌ The following weights do not match:") for param in mismatch_params: print(param) ``` Output: ``` Tie word embeddings? PyTorch: True, Flax: True PyTorch Decoder modules: ['model', 'lm_head'] Flax Decoder modules: dict_keys(['model']) Checking weights match... ❌ The following weights do not match: ('model', 'decoder', 'embed_tokens', 'embedding') ```<|||||>cc @patil-suraj here since he added `bart-large`<|||||>I'm fine with whatever solution as this is really an edge case - however we should not break backward compatibility here especially with respect to the weights structure. Also we should **not** touch the PyTorch Bart code<|||||>Agree that we should not change the PyTorch modelling code! 
My preference is modifying the Flax encoder-decoder param dict to explicitly include `embed_tokens` under the encoder and decoder modules (as with the PyTorch models and the Flax decoder-only models) which will bring compatibility between all four models (PyTorch encoder-decoder, Flax encoder-decoder, PyTorch decoder-only and Flax decoder-only)<|||||>Ok for me! @patil-suraj what do you think?<|||||>I'm fine with modifying the param dict of `bart` here, since flax doesn't add those `embed_tokens` weights under `encoder` and `decoder` if they are initialised outside and shared. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>From the discussion with the Flax authors at https://github.com/google/flax/discussions/2046#discussion-4004536, the best option appears to be handling this in Flax weights loading script.<|||||>Generally, not very keen on changing the general Flax weight conversion script because of only a single model. But happy to iterate over the design in a PR. @sanchit-gandhi, could you maybe open a PR to show how you would like to solve the problem and then we take it from there? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
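In the meantime, the missing embedding can also be patched in by hand from the Flax encoder-decoder weights. A sketch reusing the param keys reported in the discussion above (key names can be double-checked via `flatten_dict(...).keys()`):

```python
from flax.traverse_util import flatten_dict, unflatten_dict
from transformers import FlaxBartForCausalLM, FlaxBartModel

enc_dec = FlaxBartModel.from_pretrained("sanchit-gandhi/tiny-random-bart")
dec_only = FlaxBartForCausalLM.from_pretrained("sanchit-gandhi/tiny-random-bart")

# copy the shared embedding of the encoder-decoder model into the decoder-only param dict
dec_params = flatten_dict(dec_only.params)
dec_params[("model", "decoder", "embed_tokens", "embedding")] = flatten_dict(enc_dec.params)[
    ("shared", "embedding")
]
dec_only.params = unflatten_dict(dec_params)
```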
transformers
16,732
closed
Remove duplicate header in doc
# What does this PR do? <s>`doc-builder` doesn't accept duplicated headers anymore, this PR should fix the doc build. Will merge as soon as the Build Doc PR job is green to fix the main branch :-)</s> The change has been reverted on the `doc-builder` side, but this was a mistake worth fixing anyway.
04-12-2022 15:38:23
04-12-2022 15:38:23
transformers
16,731
closed
Improve test_pt_tf_model_equivalence on PT side
# What does this PR do? Same as in #16557, but on PT test side. Now we only have 2 `def test_pt_tf_model_equivalence` in the common test files 💯
04-12-2022 15:36:21
04-12-2022 15:36:21
_The documentation is not available anymore as the PR was closed or merged._<|||||>Merged (after a rebase)
transformers
16,730
closed
Change the chunk_iter function to handle
the subtle cases where the last chunk gets ignored since all the data is in the `left_strided` data. We need to remove the right striding on the previous item. # What does this PR do? Change the chunk_iter function to handle the subtle cases where the last chunk gets ignored since all the data is in the `left_strided` data. We need to remove the right striding on the previous item. Fixes https://github.com/huggingface/transformers/issues/16671 @LysandreJik @patrickvonplaten <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-12-2022 14:38:00
04-12-2022 14:38:00
Very nice - thanks for fixing it!<|||||>@sgugger - I think the build doc failing test is unrelated here no?<|||||>Yes, will look into that.
transformers
16,729
closed
TF: remove set_tensor_by_indices_to_value
# What does this PR do? Removes our TF `set_tensor_by_indices_to_value` function and replaces all its uses by `tf.where`. They are the same, but with a different input order -- removing it means one fewer function to test while making the code easier for TF users.
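For readers not familiar with the removed helper, the replacement is direct. Roughly, with toy values:

```python
import tensorflow as tf

tensor = tf.constant([1.0, 2.0, 3.0])
mask = tf.constant([True, False, True])
value = tf.fill(tf.shape(tensor), -1.0)

# previously: set_tensor_by_indices_to_value(tensor, mask, -1.0)
replaced = tf.where(mask, value, tensor)  # -> [-1.0, 2.0, -1.0]
```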
04-12-2022 14:27:26
04-12-2022 14:27:26
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger The last diff belongs to @patrickvonplaten, but it shows under my name because the function was moved (from [here](https://github.com/huggingface/transformers/blame/a3dbbc346763c8eaa49577a448e5b5a2da1428ed/src/transformers/generation_tf_utils.py#L1631), a link from the commit immediately before it was moved). Last time it was touched was 2 years ago 😅
transformers
16,728
closed
[FlaxSpeechEncoderDecoder] Fix input shape bug in weights init
The tuple `input_shape` is required in the `init` method of the FlaxSpeechEncoderDecoderModel in order to initialise the model weights - one must specify these input shapes to enable JAX to trace through the model dimensions. This tuple consists of two entries: the encoder and decoder input lengths. Speech encoders almost always downsample the sequence length dimension. Given an encoder input length, the decoder input length is computed through a convolutional formula. This convolutional formula should take into consideration two convolutional based modules: 1. Feature extractor 2. Adapter module (optional) Currently, only the first of these two convolutional based modules is accounted for. This PR amends the model script to account for the second of the two, i.e. the adapter module.
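For context, each convolutional layer shrinks the sequence length according to the usual 1D-convolution output-length formula. A sketch (the kernel sizes and strides below are placeholders, not the actual feature-extractor or adapter config):

```python
def conv_out_length(input_length: int, kernel_size: int, stride: int) -> int:
    # standard 1D conv output length (no padding)
    return (input_length - kernel_size) // stride + 1


length = 16000
for kernel, stride in [(10, 5), (3, 2), (3, 2)]:  # feature-extractor layers (placeholder values)
    length = conv_out_length(length, kernel, stride)
for kernel, stride in [(3, 2)]:  # optional adapter layer (placeholder values)
    length = conv_out_length(length, kernel, stride)
print(length)
```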
04-12-2022 13:01:57
04-12-2022 13:01:57
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,727
closed
Add image classification script, no trainer
# What does this PR do? This PR adds an example script for image classification that leverages Accelerate instead of the HuggingFace Trainer. To do: - [x] verify local `train_dir` and `validation_dir` - [x] update README - [x] add log fixes (Tensorboard) Both can be updated after #16585 is merged.
04-12-2022 12:13:16
04-12-2022 12:13:16
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger not sure why, but the test for the script fails: ``` WARNING datasets.builder:builder.py:388 Using custom data configuration huggingface--image-classification-test-sample-b7448dc7ae37f2cf INFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:388 ***** Running training ***** INFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:389 Num examples = 8 INFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:390 Num Epochs = 3 INFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:391 Instantaneous batch size per device = 2 INFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:392 Total train batch size (w. parallel, distributed & accumulation) = 2 INFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:393 Gradient Accumulation steps = 1 INFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:394 Total optimization steps = 12 INFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:471 epoch 0: {'accuracy': 0.0} INFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:471 epoch 1: {'accuracy': 0.0} INFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:471 epoch 2: {'accuracy': 0.0} ``` Weirdly, it passes locally for me.<|||||>I'm getting issues when only passing `id2label` and `label2id` to the config, but not the `num_labels`: ``` if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) > return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing) E IndexError: Target 6 is out of bounds. ```<|||||>Oh ok, shouldn't be the case. Let's put back the `num_labels` for now and I'll have a look later at why it failed to update properly.
transformers
16,726
closed
[ASR pipeline] fix chunking
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ASR chunking currently cuts final pieces of the transcription. The error lies in the postprocessing of the ASR pipeline. Fixes #https://github.com/huggingface/transformers/issues/16671 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-12-2022 11:51:29
04-12-2022 11:51:29
_The documentation is not available anymore as the PR was closed or merged._<|||||>Superseded by https://github.com/huggingface/transformers/pull/16730
transformers
16,725
closed
[FlaxWav2Vec2Model] Fix bug in attention mask
Currently, the FlaxWav2Vec2 reduced attention mask is computed by calling the function `_get_feat_extract_output_lengths`, without explicit specification of whether an (optional) adapter module is used: https://github.com/huggingface/transformers/blob/924484ee4a6ebc79426d27eef31a1ee7d13cbb9a/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py#L959-L960 By default, if `add_adapter` is `None`, the boolean `add_adapter` will be set based on the `config`: https://github.com/huggingface/transformers/blob/924484ee4a6ebc79426d27eef31a1ee7d13cbb9a/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py#L1001-L1008 For this default setting, if the model contains an adapter module, then `add_adapter` will be set to `True`. This results in the convolutional formula including the downsampling performed by the convolutional layers in the feature extractor **and** the adapter module. However, since the reduced attention mask is required for the encoder module, it should be computed based on the convolutional layers of the feature extractor **only**, and not those of the subsequent adapter module. This is highlighted by the PyTorch Wav2Vec2 modelling code: https://github.com/huggingface/transformers/blob/924484ee4a6ebc79426d27eef31a1ee7d13cbb9a/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1350-L1354 The following code snippet demonstrates the effect of this bug by means of a PyTorch-Flax cross-test: ```python import torch import numpy as np from transformers import Wav2Vec2Model, FlaxWav2Vec2Model import tempfile import random encoder_id = "hf-internal-testing/tiny-random-wav2vec2" fx_model = FlaxWav2Vec2Model.from_pretrained(encoder_id, add_adapter=True, from_pt=True) with tempfile.TemporaryDirectory() as tmpdirname: fx_model.save_pretrained(tmpdirname) pt_model = Wav2Vec2Model.from_pretrained(tmpdirname, config=fx_model.config, from_flax=True) # create synthetic input data def ids_tensor(shape, vocab_size, rng=None): """Creates a random int32 tensor of the shape within the vocab size.""" if rng is None: rng = random.Random() total_dims = 1 for dim in shape: total_dims *= dim values = [] for _ in range(total_dims): values.append(rng.randint(0, vocab_size - 1)) output = np.array(values).reshape(shape) return output def random_attention_mask(shape, rng=None): attn_mask = ids_tensor(shape, vocab_size=2, rng=rng) # make sure that at least one token is attended to for each batch attn_mask[:, -1] = 1 return attn_mask def floats_tensor(shape, scale=1.0): """Creates a random float32 tensor""" total_dims = 1 for dim in shape: total_dims *= dim values = [] for _ in range(total_dims): values.append(np.random.randn() * scale) return np.array(values, dtype=np.float32).reshape(shape) def fx_batch(batch_size=2, input_length=96000): input_ids = floats_tensor([batch_size, input_length]) attention_mask = random_attention_mask([batch_size, input_length]) fx_inputs = { "input_values": input_ids, "attention_mask": attention_mask, } return fx_inputs fx_inputs = fx_batch() pt_inputs = {k: torch.tensor(v.tolist()) for k, v in fx_inputs.items()} fx_outputs = fx_model( **fx_inputs, output_hidden_states=True) pt_outputs = pt_model(**pt_inputs, output_hidden_states=True) # helper function for our analysis def assert_almost_equals(a: np.ndarray, b: np.ndarray, tol: float = 1e-2): diff = np.abs((a - b)).max() if diff < tol: print(f"✅ Difference between Flax and PyTorch is {diff} (< {tol})") else: print(f"❌ Difference between Flax and PyTorch is {diff} (>= {tol})") print("--------------------------Checking hidden 
states match--------------------------") for fx_state, pt_state in zip(fx_outputs.hidden_states, pt_outputs.hidden_states): assert fx_state.shape == pt_state.shape assert_almost_equals(fx_state, pt_state.detach().numpy()) print("--------------------------Checking last hidden states match--------------------------") print(f"Encoder-decoder output shape: {fx_outputs.last_hidden_state.shape}, encoder-only output shape: {pt_outputs.last_hidden_state.shape}") assert_almost_equals(fx_outputs.last_hidden_state, pt_outputs.last_hidden_state.detach().numpy()) ``` Output prior to fix: ``` --------------------------Checking encoder hidden states match-------------------------- ❌ Difference between Flax and PyTorch is 0.43152332305908203 (>= 0.01) ❌ Difference between Flax and PyTorch is 0.43074753880500793 (>= 0.01) ❌ Difference between Flax and PyTorch is 0.42613524198532104 (>= 0.01) ❌ Difference between Flax and PyTorch is 0.4301084578037262 (>= 0.01) ❌ Difference between Flax and PyTorch is 4.519614219665527 (>= 0.01) --------------------------Checking encoder last hidden states match-------------------------- Encoder-decoder output shape: (2, 188, 16), encoder-only output shape: torch.Size([2, 188, 16]) ✅ Difference between Flax and PyTorch is 0.0015139428433030844 (< 0.01) ``` Output following fix: ``` --------------------------Checking encoder hidden states match-------------------------- ✅ Difference between Flax and PyTorch is 3.9674341678619385e-07 (< 0.01) ✅ Difference between Flax and PyTorch is 4.041939973831177e-07 (< 0.01) ✅ Difference between Flax and PyTorch is 4.041939973831177e-07 (< 0.01) ✅ Difference between Flax and PyTorch is 3.948807716369629e-07 (< 0.01) ✅ Difference between Flax and PyTorch is 4.947185516357422e-06 (< 0.01) --------------------------Checking encoder last hidden states match-------------------------- Encoder-decoder output shape: (2, 188, 16), encoder-only output shape: torch.Size([2, 188, 16]) ✅ Difference between Flax and PyTorch is 1.0913936421275139e-09 (< 0.01) ```
04-12-2022 11:33:27
04-12-2022 11:33:27
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,724
closed
Add type hints GPT-J pytorch
# What does this PR do? Added type hints for GPT-J pytorch following #16059 @Rocketknight1
04-12-2022 11:32:28
04-12-2022 11:32:28
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,723
closed
[Quicktour Audio] Improve && remove ffmpeg dependency
# What does this PR do? Fixes #16563 As discussed in #16563, it's not good if the official quicktour example depends on ffmpeg. Let's rather let `datasets` handle the audio loading and resampling here. IMO, it's also important to directly showcase here how to resample the audio. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
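A minimal sketch of the ffmpeg-free flow (dataset and model ids are only examples):

```python
from datasets import load_dataset, Audio
from transformers import pipeline

ds = load_dataset("PolyAI/minds14", name="en-US", split="train")
# let `datasets` decode the audio and resample it to the model's sampling rate
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
print(asr(ds[0]["audio"]["array"]))
```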
04-12-2022 11:14:08
04-12-2022 11:14:08
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,722
closed
[Bart] correct doc test
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes doc test `transformers.models.bart.modeling_bart.BartForConditionalGeneration.forward` after @gante 's https://github.com/huggingface/transformers/pull/16668 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-12-2022 07:34:28
04-12-2022 07:34:28
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,721
closed
ResumableUploadAbortException: 409 The object has already been created in an earlier attempt and was overwritten, possibly due to a race condition.
I am fine tuning masked language model from XLM Roberta large on google machine specs. When I copy the model using `gsutil and subprocess` from container to GCP bucket it gives me error. ### Versions Versions torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0+cu113 transformers==4.17.0 I am using pre-trained Hugging face model. `I launch it as train.py file which I copy inside docker image and use vertex-ai ( GCP) to launch it using Containerspec` `machineSpec = MachineSpec(machine_type="a2-highgpu-4g",accelerator_count=4,accelerator_type="NVIDIA_TESLA_A100")` ``` python -m torch.distributed.launch --nproc_per_node 4 train.py --bf16 ``` I am using https://huggingface.co/xlm-roberta-large ``` tokenizer = tr.XLMRobertaTokenizer.from_pretrained("xlm-roberta-large",local_files_only=True) model = tr.XLMRobertaForMaskedLM.from_pretrained("xlm-roberta-large", return_dict=True,local_files_only=True) ``` **Training Code** ``` training_args = tr.TrainingArguments( output_dir='****' ,logging_dir='****' # directory for storing logs ,save_strategy="epoch" ,run_name="****" ,learning_rate=2e-5 ,logging_steps=1000 ,overwrite_output_dir=True ,num_train_epochs=10 ,per_device_train_batch_size=4 ,prediction_loss_only=True ,gradient_accumulation_steps=2 # ,gradient_checkpointing=True ,bf16=True #57100 ,optim="adafactor" ) trainer = tr.Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_data ) ``` **Train.py** ``` import torch import numpy as np import pandas as pd from transformers import BertTokenizer, BertForSequenceClassification import transformers as tr from sentence_transformers import SentenceTransformer from transformers import XLMRobertaTokenizer, XLMRobertaForMaskedLM from transformers import AdamW from transformers import AutoTokenizer from transformers import BertTokenizerFast as BertTokenizer, BertModel, AdamW, get_linear_schedule_with_warmup,BertForMaskedLM from transformers import DataCollatorForLanguageModeling from scipy.special import softmax import scipy import random import pickle import os import time import subprocess as sp # torch.cuda.empty_cache() start=time.time() device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') print("Using", device) torch.backends.cudnn.deterministic = True tr.trainer_utils.set_seed(0) print("here") tokenizer = tr.XLMRobertaTokenizer.from_pretrained("xlm-roberta-large",local_files_only=True) model = tr.XLMRobertaForMaskedLM.from_pretrained("xlm-roberta-large", return_dict=True,local_files_only=True) model.gradient_checkpointing_enable() #included as new line print("included gradient checkpoint") model.to(device) print("Model loaded successfully") df=pd.read_csv("data.csv") train_df=df.text.tolist() print(len(train_df)) train_df=list(set(train_df)) train_df = [x for x in train_df if str(x) != 'nan'] print("Length of training data is \n ",len(train_df)) print("DATA LOADED successfully") train_encodings = tokenizer(train_df, truncation=True, padding=True, max_length=512, return_tensors="pt") print("encoding done") data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15) print("data collector done") class SEDataset(torch.utils.data.Dataset): def __init__(self, encodings): self.encodings = encodings def __getitem__(self, idx): item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} return item def __len__(self): return len(self.encodings["attention_mask"]) train_data = SEDataset(train_encodings) print("train 
data created") training_args = tr.TrainingArguments( output_dir='results_mlm_exp1' ,logging_dir='logs_mlm_exp1' # directory for storing logs ,save_strategy="epoch" ,learning_rate=2e-5 ,logging_steps=500 ,overwrite_output_dir=True ,num_train_epochs=20 ,per_device_train_batch_size=4 ,prediction_loss_only=True ,gradient_accumulation_steps=2 ,bf16=True #Ampere GPU ) trainer = tr.Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_data ) trainer.train() print("model training finished") trainer.save_model("model_mlm_exp1") print("training finished") end=time.time() print("total time taken in hours is", (end-start)/3600) ``` **Error** trainer.save_model("model_mlm_exp1") subprocess.call('gsutil cp -r /pythonPackage/trainer/model_mlm_exp1 gs://******/model_mlm_exp1', shell=True, stdout=subprocess.PIPE) ERROR ResumableUploadAbortException: 409 The object has already been created in an earlier attempt and was overwritten, possibly due to a race condition.
04-12-2022 04:37:43
04-12-2022 04:37:43
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,720
closed
Replace assertion with exception
# What does this PR do? Replaces assert with Exceptions as per https://github.com/huggingface/transformers/issues/12789. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
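For context, a generic before/after sketch of the pattern this kind of PR applies (the function and argument names are illustrative, not taken from this diff): an explicit exception is not stripped by `python -O` and is easier for callers to catch than an `AssertionError`.

```python
def check_attention_mask(attention_mask):
    # old style (silently dropped when Python runs with -O):
    # assert attention_mask is not None, "attention_mask must be provided"
    # new style: explicit, always-on, catchable exception
    if attention_mask is None:
        raise ValueError("attention_mask must be provided")
    return attention_mask
```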
04-12-2022 04:31:23
04-12-2022 04:31:23
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger @LysandreJik thanks for the review! I have incorporated all the requested changes. <|||||>@sgugger unsure why the PR Documentation check is failing<|||||>Failure is unrelated to this PR and is fixed independently. Thanks a lot for addressing all comments!
transformers
16,719
closed
[modeling] keys to ignore revisited
If possible let's please revisit: 1. `_keys_to_ignore_on_save ` 2. `_keys_to_ignore_on_load_unexpected` 3. `_keys_to_ignore_on_load_missing` I'm trying to debug a key that refuses to be ignored on load and I'm not sure if I'm not setting it correctly in all those `keys_to_ignore_*` patterns. ----------- 1. should the keys include the model prefix or not? e.g. here it's a mixed bunch: https://github.com/huggingface/transformers/blob/69233cf03be5fbce0492f3997e139c4d05499e27/src/transformers/models/m2m_100/modeling_m2m_100.py#L1215-L1227 should they all have the `model.` prefix, or all not have it? 2. should we consistently escape the \. or not? Again see the example above for a mixed bunch I know I was adding non-escaped keys, because there was no ambiguity in things like: `encoder.embed_positions.weights` - do we ever need to escape it? Whatever the decision I ask that we use a consistent way so that when things don't work it's easy to know how it should be written correctly. 3. I'm not very clear about the naming of the last 2 keys, At the point of the model itself it's hard to remember what they mean, and their explanation is really hard to understand. Could the following explanation be revised. I have a hard time parsing this text: https://github.com/huggingface/transformers/blob/69233cf03be5fbce0492f3997e139c4d05499e27/src/transformers/modeling_utils.py#L726-L731 4. I think the logic of defining which keys not to load is either completely missing or incomplete. I'm trying to tell m2m_100 not to load `encoder.embed_positions.weights` (and same for decoder), I added it to all 3 keys to ignore and it still loads it, which is invalid since the [model](https://huggingface.co/hf-internal-testing/tiny-random-m2m_100/blob/main/config.json) has these saved and I want to load a model with a different `max_position_embeddings` value and I can't. ``` stderr: RuntimeError: Error(s) in loading state_dict for M2M100ForConditionalGeneration: stderr: size mismatch for model.encoder.embed_positions.weights: copying a param with shape torch.Size([22, 16]) from checkpoint, the shape in current model is torch.Size([514, 16]). stderr: size mismatch for model.decoder.embed_positions.weights: copying a param with shape torch.Size([22, 16]) from checkpoint, the shape in current model is torch.Size([514, 16]). ``` Either the current logic needs to be further refined or we need a new key `_keys_to_ignore_on_load_always`? The current logic is here: https://github.com/huggingface/transformers/blob/69233cf03be5fbce0492f3997e139c4d05499e27/src/transformers/modeling_utils.py#L1921-L1964 It's easy to see how it fails if `set(expected_keys) == set(loaded_keys))` which is the case in this situation: https://github.com/huggingface/transformers/blob/69233cf03be5fbce0492f3997e139c4d05499e27/src/transformers/modeling_utils.py#L1953-L1954 I think the "bug" is here: ``` expected_keys = list(model_state_dict.keys()) ``` This further needs to be processed to remove `_keys_to_ignore_on_save`, since they are not expected even if they are in the model. I think the logic is missing and I propose to fix it with this additional chunk (first if): 1. ``` if cls._keys_to_ignore_on_save is not None: for pat in cls._keys_to_ignore_on_save: expected_keys = [k for k in expected_keys if re.search(pat, k) is None] missing_keys = list(set(expected_keys) - set(loaded_keys)) unexpected_keys = list(set(loaded_keys) - set(expected_keys)) ``` 2. 
and it never removes the `unexpected_keys` from `state_dict` - so all these still get loaded in `_load_state_dict_into_model` which doesn't get the list of keys to load and loads everything from the `state_dict` ------------ If I piled up too many issues together please let me know and I will split it up, they are just all seem to be interconnected. Thank you! @LysandreJik, @sgugger, @patrickvonplaten, @patil-suraj
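A small, self-contained illustration of point 2 (the key names and patterns below are made up for the example, not taken from any model file): the ignore lists are applied with `re.search`, so an unescaped dot matches any character, but because state-dict key names are so specific the escaped and unescaped patterns usually filter out the same keys.

```python
import re

state_dict_keys = [
    "model.encoder.embed_positions.weights",
    "model.decoder.embed_positions.weights",
    "model.encoder.layers.0.self_attn.k_proj.weight",
]

def drop_ignored(keys, patterns):
    # keep only the keys that match none of the ignore patterns
    return [k for k in keys if not any(re.search(p, k) for p in patterns)]

print(drop_ignored(state_dict_keys, [r"encoder.embed_positions.weights"]))    # unescaped dots
print(drop_ignored(state_dict_keys, [r"encoder\.embed_positions\.weights"]))  # escaped dots
# Both calls keep only the self-attention weight.
```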
04-12-2022 04:28:43
04-12-2022 04:28:43
I'll address point 4 for now. It looks like you are just missing an `ignore_mismatched_sizes=True`. `_keys_to_ignore_on_save` does not impact the loading, as its name indicates, and there is no `_keys_to_ignore_on_load` as if a weight should be ignored on load, it shouldn't be in the checkpoint in the first place. I'll comment more on points 1 to 3 when I have more time.<|||||>Thank you, Sylvain. 1. I can't pass `ignore_mismatched_sizes` since I'm using examples in the tests 2. Unless I'm missing something `ignore_mismatched_sizes` is an incorrect solution for positional encodings since if the size does match it'd still load and use the key and it shouldn't load any of the keys inside `_keys_to_ignore_on_save` - these need to be generated by the model and not overwritten. I can of course just make a new tiny checkpoint that doesn't have this problem in the first place, but I think it's a good exercise at validating paths that are quite undefined behavior-wise.<|||||>I'll let @patil-suraj and @patrickvonplaten give their advice on how to solve this problem with M2M100, but personally very much against any mechanism that will ignore keys in a checkpoint as it's a multitude of bugs waiting to happen. If keys are not supposed to be in a checkpoint, they should just not be inside it.<|||||>wrt to point 4 there is no problem with M2M100 per se, it just happened to be one of the tests that is failing since the tiny checkpoint was created in a way that made it somewhat inflexible and a new checkpoint can be made instead.<|||||>Regarding your other questions: 1. It depends and that's some tricky logic of the `from_pretrained` method. The prefix will be removed/added in the model state_dict keys when the model you are using expects it or not, depending on whether the checkpoint has it or not. This is to deal with model with heads vs base models and make sure that you can load a checkpoint of a model with head in a base model and vice versa. 2. `_keys_to_ignore_on_load_missing` and `_keys_to_ignore_on_load_unexpected` use re, so the dot should be escaped. Absolutely no problem on my side to have the same for `_keys_to_ignore_on_save` which currently does not use `re`, so should not escape the . 3. `_keys_to_ignore_on_load_missing` -> those are keys that should be removed from the list of missing keys we find (keys inside the model but not in the checkpoint) `_keys_to_ignore_on_load_unexpected` -> those are keys that should be removed from the list of unexpected keys we find (keys inside the checkpoint but not the model) Comments should be clearer, I completely agree! <|||||>> Regarding your other questions: > > 1. It depends and that's some tricky logic of the `from_pretrained` method. The prefix will be removed/added in the model state_dict keys when the model you are using expects it or not, depending on whether the checkpoint has it or not. This is to deal with model with heads vs base models and make sure that you can load a checkpoint of a model with head in a base model and vice versa. Yes, and so how does one define the keys to ignore wrt prefix? It sounds like `base_model_prefix` should be excluded. Which means that this is incorrect then: https://github.com/huggingface/transformers/blob/69233cf03be5fbce0492f3997e139c4d05499e27/src/transformers/models/m2m_100/modeling_m2m_100.py#L1215-L1227 (and the same issue afflicts many other model files with deterministic positional encodings) > 2. 
`_keys_to_ignore_on_load_missing` and `_keys_to_ignore_on_load_unexpected` use re, so the dot should be escaped. Absolutely no problem on my side to have the same for `_keys_to_ignore_on_save` which currently does not use `re`, so should not escape the . Yes, please! Let's make it consistent! > 3. `_keys_to_ignore_on_load_missing` -> those are keys that should be removed from the list of missing keys we find (keys inside the model but not in the checkpoint) > `_keys_to_ignore_on_load_unexpected` -> those are keys that should be removed from the list of unexpected keys we find (keys inside the checkpoint but not the model) > Comments should be clearer, I completely agree! Super. That's a way easier to understand. Thank you! Made a PR here: https://github.com/huggingface/transformers/pull/16741<|||||>do we want to resolve this or let it lapse?<|||||>On my side, it's just missing the point 2, as you solved point 3. Do you want to make a PR or should I do it? For point 4, pinging again @patrickvonplaten and @patil-suraj <|||||>> On my side, it's just missing the point 2, as you solved point 3. Do you want to make a PR or should I do it? so we do we want to escape it or just have it unescaped everywhere? `r'.'` will just match any char, including the actual `.` , and the keys are usually quite unique to fail to match w/o escaping. IMHO just having the unescaped `.` everywhere is more readable and easier to copy-n-paste/extend/etc.<|||||>Agreed!<|||||>Thank you! OK, I will make a PR then. <|||||>hmm, as I started working on it I see that it'd be tricky to make it consistent w/o backslashes, as some keys have regex bits in them as in: ``` src/transformers/models/gptj/modeling_gptj.py: _keys_to_ignore_on_load_missing = [r"h\.\d+\.attn\.masked_bias", r"h\.\d+\.attn\.bias", r"lm_head\.weight"] ``` Not sure then. Thoughts? <|||||>Those who have clear regex patterns should be escaped and use \., for the ones that only use strings, I think it's okay to just leave the dot as is.<|||||>> Those who have clear regex patterns should be escaped and use ., Did you mean to say: > Those who have clear regex patterns should be escaped and use `\.`,... ? or as an example: ``` _keys_to_ignore_on_load_missing = [r"h\.\d+\.attn\.masked_bias", r"h\.\d+\.attn\.bias", r"lm_head\.weight"] ``` should remain as is, right?<|||||>Yes, sorry about the confusion.<|||||>So https://github.com/huggingface/transformers/pull/17722 will resolve item (2). So point (4) is remaining to be resolved.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,718
closed
[WIP]-Add Fast Pitch 1.1
# What does this PR do? Adds FastPitch 1.1 ([issue](https://github.com/huggingface/transformers/issues/16349)). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
04-12-2022 03:56:48
04-12-2022 03:56:48
Thanks for the PR @ArEnSc! Let us know if you need assistance at any point!<|||||>> Thanks for the PR @ArEnSc! > > Let us know if you need assistance at any point! Will do! have been lagging on this due to family and my day job =)<|||||>update: going to continue to work on this soon<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Still working on this, just doing some reading working on burn out.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>closing for now while I deal with covid =(
transformers
16,717
closed
[deepspeed / m2m_100] make deepspeed zero-3 work with layerdrop
Same as I had to fix in `wav2vec2` it looks that this fix should eventually go to all models that use `LayerDrop`. At least at the moment Deepspeed is not capable of randomly skipping layers, so this PR uses the same now well tested workaround I used in `wav2vec2`, where all layers always run when deepspeed zero-3 is detected, but the results are ignored if it was meant to be skipped. https://github.com/huggingface/transformers/blob/69233cf03be5fbce0492f3997e139c4d05499e27/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L817-L849 Perhaps one day Deepspeed will be able to randomly skip layers, at the moment the solution is not the most efficient one. I made a [request](https://github.com/microsoft/DeepSpeed/issues/1888). When ZeRO-3 is not used the original code path is taken. The test exercising this code path will be merged as part of this huge additional tests set PR https://github.com/huggingface/transformers/pull/12695 (it's been long overdue). For posterity, the error for this issue will look something like: ``` RuntimeError: tracing error at step 42: expected the next 2 parameters in the parameter fetch queue to be ({'id': 26, 'status': 'AVAILABLE', 'numel': 1024, 'ds_numel': 1024, 'shape': (1024,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {24}}, {'id': 27, 'status': 'AVAILABLE', 'numel': 1024, 'ds_numel': 1024, 'shape': (1024,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {24}}) but got ({'id': 115, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 1024, 'shape': (0,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': set()}, {'id': 116, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 1048576, 'shape': (0,), 'ds_shape': (1024, 1024), 'requires_grad': True, 'grad_shape': None, 'persist': False, 'active_sub_modules': set()}). ``` Fixes: https://github.com/huggingface/transformers/issues/16688 @patil-suraj, @sgugger
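For readers unfamiliar with the workaround, here is a schematic sketch of the idea (not the actual modeling code; `layers` is assumed to be a list of modules mapping a hidden-states tensor to a tensor of the same shape): under ZeRO-3 every layer is always executed so all of its parameters get gathered, and the output is simply discarded when the layer would have been dropped.

```python
import torch

def forward_with_layerdrop(layers, hidden_states, layerdrop=0.1, is_deepspeed_zero3=False):
    for layer in layers:
        skip_the_layer = torch.rand(1).item() < layerdrop
        if not skip_the_layer or is_deepspeed_zero3:
            layer_output = layer(hidden_states)
            if not skip_the_layer:
                hidden_states = layer_output
            # if the layer was "dropped" under ZeRO-3, layer_output is ignored
    return hidden_states
```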
04-12-2022 00:55:48
04-12-2022 00:55:48
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,716
closed
Predicting incorrect loss when eval data size is not a multiple of batch size
## Environment info - `transformers` version: 4.18.0.dev0 - Platform: Linux-5.4.0-96-generic-x86_64-with-debian-buster-sid - Python version: 3.7.13 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.12.0.dev20220411+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.4.1 (cpu) - Jax version: 0.3.5 - JaxLib version: 0.3.5 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help @sgugger ## Issue: When the input data size is not a multiple of batch_size, the loss calculated seems wrong to me. As mentioned in this line https://github.com/huggingface/transformers/blob/69233cf03be5fbce0492f3997e139c4d05499e27/src/transformers/trainer.py#L2469 The loss is repeated batch_size times, which does not make sense for the last batch when the data size is not divisible by the batch_size. This also leads to the failure of the HF test case (tests/trainer/test_trainer.py::TrainerIntegrationTest::test_evaluate) when I am running this on my device. ## To reproduce Steps to reproduce the behavior: 1. Install pytest 2. RUN: pytest tests/trainer/test_trainer.py::TrainerIntegrationTest::test_evaluate Error: FAILED tests/trainer/test_trainer.py::TrainerIntegrationTest::test_evaluate - AssertionError: 0.517515242099762 != 0.41851458 within 7 places (0.09900066256523132 difference) ## Expected behavior The test should pass.
04-12-2022 00:40:04
04-12-2022 00:40:04
No, the evaluation loss is properly computed thanks to this line actually. Repeating it the number of times then truncating to the length of the dataset [here](https://github.com/huggingface/transformers/blob/924484ee4a6ebc79426d27eef31a1ee7d13cbb9a/src/transformers/trainer.py#L2551) makes the final evaluation loss the proper average of all losses. As for the test not passing, I think you are running it on 2 GPUs? It's only intended to work on one.<|||||>Thank you for the quick reply. Yes, I was running the code on 2 GPUs and it works fine on 1 GPU. May I ask why is it intended to work on 1 GPU?<|||||>The batch size is actually wrong in that case. Pushing a fix!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
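A tiny numeric check of the repeat-then-truncate mechanism described in the first reply above (the numbers are arbitrary): with 5 samples and `batch_size=2`, the partial last batch ends up weighted by its true size after truncation, so the mean comes out right.

```python
import numpy as np

per_example_losses = np.array([1.0, 3.0, 2.0, 4.0, 10.0])
batch_size = 2

batch_means = [per_example_losses[i:i + batch_size].mean()
               for i in range(0, len(per_example_losses), batch_size)]  # [2.0, 3.0, 10.0]

# repeat each batch mean batch_size times, concatenate, truncate to the dataset length
repeated = np.concatenate([np.full(batch_size, m) for m in batch_means])[:len(per_example_losses)]

assert np.isclose(repeated.mean(), per_example_losses.mean())  # both equal 4.0
```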
transformers
16,715
closed
fix image type DETR feature extraction for panoptic segmentation
# What does this PR do? The __call__ function of the DETR feature extractor expects an object with a `shape` when `pad_and_return_pixel_mask=True`. If you try to use the feature extractor with `do_resize=False` and `do_normalize=False`, it will crash on https://github.com/huggingface/transformers/blob/69233cf03be5fbce0492f3997e139c4d05499e27/src/transformers/models/detr/feature_extraction_detr.py#L584 because an image passed in PIL format needs to be converted to an object with a shape. This PR adds the conversion to a NumPy array in `prepare_coco_panoptic` to fix that. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @NielsRogge Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
04-11-2022 23:05:50
04-11-2022 23:05:50
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16715). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,714
closed
ValueError if answer in truncated table rows and columns in Tapas tokenization
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.17.0 - Platform: Ubuntu 20 LTS - Python version: 3.8.10 - PyTorch version (GPU?): 1.9.0 - Tensorflow version (GPU?): 2.8.0 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help TAPAS: @NielsRogge ## Information Model I am using (TAPAS): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## Description Tapas tokenization truncates long tables to `num_rows` when truncation strategy is `drop_rows_to_fit`. It does not truncate the dropped rows and columns from `answer_coordinates`. If `answer_coordinates` contains the dropped rows, it results in `ValueError: Couldn't find all answers` in `_get_answer_ids` due to mismatch in `row_ids`, `col_ids` and `answer_coordinates` ## To reproduce Steps to reproduce the behavior: 1. Initialize a large table, and required fields. Answer coordinates exceeds the `model_max_length` ```python import numpy as np import pandas as pd from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("google/tapas-base-finetuned-wikisql-supervised") tab = np.random.choice(5, 513) table = pd.DataFrame(data=tab, columns=["Value"]).astype('str') answer_texts = [table.iloc[512]["Value"]] answer_coordinates=[(512,0)] question="dummy question" ``` 2. Tokenize the large table ```python encoding = tokenizer( table=table, queries=question, answer_coordinates=answer_coordinates, answer_text=answer_texts, truncation=True, padding="max_length", return_tensors="pt", ) ``` Output: ```python Traceback (most recent call last): File "/py3/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3441, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-13-2827f7bd32ad>", line 1, in <module> encoding = tokenizer( File "/py3lib/python3.8/site-packages/transformers/models/tapas/tokenization_tapas.py", line 624, in __call__ return self.encode_plus( File "/py3/lib/python3.8/site-packages/transformers/models/tapas/tokenization_tapas.py", line 990, in encode_plus return self._encode_plus( File "/py3/lib/python3.8/site-packages/transformers/models/tapas/tokenization_tapas.py", line 1044, in _encode_plus return self.prepare_for_model( File "//py3/lib/python3.8/site-packages/transformers/models/tapas/tokenization_tapas.py", line 1203, in prepare_for_model labels = self.get_answer_ids(column_ids, row_ids, table_data, answer_text, answer_coordinates) File "/py3/lib/python3.8/site-packages/transformers/models/tapas/tokenization_tapas.py", line 1789, in get_answer_ids return self._get_answer_ids(column_ids, row_ids, answer_coordinates_question) File /py3/lib/python3.8/site-packages/transformers/models/tapas/tokenization_tapas.py", line 1778, in _get_answer_ids raise ValueError("Couldn't find all answers") ValueError: Couldn't find all answers ``` ## Expected behavior Remove truncated rows and columns from `answer_coordinates` and return the truncated `labels`. Throw an exception if `answer_coordinates` is empty.
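Until the tokenizer handles this itself, a possible user-side workaround (this helper and the `num_kept_rows` estimate are assumptions, not part of the library) is to drop answer coordinates that point at truncated rows before calling the tokenizer:

```python
def clip_answer_coordinates(answer_coordinates, answer_texts, num_kept_rows):
    """Keep only answers whose row survives truncation to `num_kept_rows` rows."""
    kept_coords, kept_texts = [], []
    for (row, col), text in zip(answer_coordinates, answer_texts):
        if row < num_kept_rows:
            kept_coords.append((row, col))
            kept_texts.append(text)
    if not kept_coords:
        raise ValueError("All answer coordinates were truncated away")
    return kept_coords, kept_texts

# e.g. clip_answer_coordinates([(512, 0)], ["3"], num_kept_rows=200) raises,
#      clip_answer_coordinates([(10, 0)], ["3"], num_kept_rows=200) keeps the answer
```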
04-11-2022 22:59:23
04-11-2022 22:59:23
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,713
closed
TF generate refactor - XLA sample
# What does this PR do? This PR brings XLA to `sample`, in `generate`. Four important details before reviewing: 1. The diff has the changes of https://github.com/huggingface/transformers/pull/16704, review that PR first plz :) It fixes a test from `beam_search`. I will rebase as soon as the other PR gets merged (the changes were bundled to confirm that it passes all generate tests). 2. The body is mostly copy/paste from `greedy_search`; 3. The sample step was changed from the previous implementation -- if we want to seed sampling with XLA, we need to use the `stateless` functions; 4. The XLA sample tests do not compare all generated tokens to their non-XLA sample counterparts, due to the numerical instabilities discussed on Slack. We do compare the first tokens, which are the same. Finally, tests have been run for the usual models (`gpt2`, `t5`, `rag`, `speech2text`, `encoder_decoder`, `vision_encoder_decoder`, `bart`). ____________________________ I've also run a quick sanity check on GPU. Using GPT2+sample, on an Nvidia T4: - eager TF: ~1.7s - XLA TF: ~54ms (~22s compile time) :point_right: 31x speedup
04-11-2022 21:58:32
04-11-2022 21:58:32
_The documentation is not available anymore as the PR was closed or merged._<|||||>@patrickvonplaten the `stateless` TF functions accept a `seed` argument that is a tuple of two integers 😅 Not very intuitive, I agree. They correspond to the `key` and `counter` used in the internal RNG algorithms ([source](https://www.tensorflow.org/api_docs/python/tf/random/Generator#from_key_counter)). If you think it will be unintuitive for users, I can change it so that our `seed` argument corresponds to the `key` of the tuple (i.e. a single integer), and fix the `counter` to `0`. For practical purposes, it should be the same thing.<|||||>> @patrickvonplaten the `stateless` TF functions accept a `seed` argument that is a tuple of two integers sweat_smile Not very intuitive, I agree. They correspond to the `key` and `counter` used in the internal RNG algorithms ([source](https://www.tensorflow.org/api_docs/python/tf/random/Generator#from_key_counter)). > > If you think it will be unintuitive for users, I can change it so that our `seed` argument corresponds to the `key` of the tuple (i.e. a single integer), and fix the `counter` to `0`. For practical purposes, it should be the same thing. I see - ok maybe better to leave as is then to be aligned with TF<|||||>While running tests for T5 (as suggested by @Rocketknight1), I found out that our XLA code is not behaving properly for T5, for both `sample` and `greedy_search`. Because the problem is not exclusive to `sample`, I'm merging this PR and fixing the issue in a future one. (example) ![image](https://user-images.githubusercontent.com/12240844/163792042-d23d01be-c2c9-40d4-bee2-30ea32c47a45.png)
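To make the `(key, counter)` seed from the discussion above concrete, here is a minimal, standalone sketch of seeded sampling with a stateless TF op (the logits and seed values are arbitrary; this is not the generate code itself):

```python
import tensorflow as tf

logits = tf.constant([[0.1, 2.0, -1.0, 0.5]])  # (batch=1, vocab=4)
seed = (42, 0)                                 # (key, counter)

@tf.function(jit_compile=True)
def sample_once(logits):
    # stateless => the same (key, counter) pair always yields the same draw, even under XLA
    return tf.random.stateless_categorical(logits, num_samples=1, seed=seed)

print(sample_once(logits))
```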
transformers
16,712
open
Run the scheduled tests
:warning: Do not merge this PR! PR 2/2: in order to finish running suite, rebase this PR on [`test-tokenizers-main`](https://github.com/huggingface/transformers/tree/test-tokenizers-main), branch of PR https://github.com/huggingface/transformers/pull/16708. --- This PR builds on top of https://github.com/huggingface/transformers/pull/16708. It leverages the docker images created in the PR above, and updates the channel in which to report the tests to be a dummy one.
04-11-2022 21:12:42
04-11-2022 21:12:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16712). All of your documentation changes will be reflected on that endpoint.
transformers
16,711
closed
AutoModelForMaskedLM produces NaN if no token is masked
This is a more for discussion if a bugfix is wanted (and if yes, how it should look like). I already fixed it locally for my training script. **Background** I currently run MLM pre-training for large models where each batch can only consist of a single example. In some cases, the text can be rather short, for example, just a sentence. Here it can happen that the `DataCollatorForLanguageModeling` does not mask any token and the computed loss is NaN, which produces problems down the multi-processing script as a NaN loss cannot be correctly back-propagated & shared across the workers. Here a short simplified script that shows the problem: ```python from transformers import DataCollatorForLanguageModeling, AutoTokenizer from transformers import AutoModelForMaskedLM model_name = "nreimers/BERT-Tiny_L-2_H-128_A-2" tokenizer = AutoTokenizer.from_pretrained(model_name) coll = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15) print("MASK token:", tokenizer.convert_tokens_to_ids(tokenizer.mask_token)) text = "This is an example" data = tokenizer(text, padding=True, max_length=512, return_special_tokens_mask=True) model = AutoModelForMaskedLM.from_pretrained(model_name) for _ in range(10): mini_batch = coll([data]) #Is equivalent to getting a mini batch from a DataSet / DataLoader print("Input", mini_batch['input_ids']) print("Labels", mini_batch['labels']) output = model(**mini_batch) print(output.loss) print("------") ``` Output: ``` // Here tokens 1-3 were masked Input tensor([[ 101, 103, 103, 2019, 2742, 102]]) Labels tensor([[-100, 2023, 2003, 2019, -100, -100]]) Loss tensor(0.9181, grad_fn=<NllLossBackward0>) ------ // Here no tokens were masked Input tensor([[ 101, 2023, 2003, 2019, 2742, 102]]) Labels tensor([[-100, -100, -100, -100, -100, -100]]) Loss tensor(nan, grad_fn=<NllLossBackward0>) ``` When there are masked tokens, then the loss is computed correctly. But as we mask just 15% of the tokens, it can happen for short sequences that no tokens are masked (i.e. labels are all -100), hence the loss is `nan`. For long sequences and / or large batches the issues does not really happen, as the probability that no token is masked is rather low. But for short sequences with small batch sizes this happens fairly often. If you train large models, often you cannot increase you batch size and short text sequences in your dataset can kill your process. **Discussion** If we want to fix this, we could fix in two possible ways: 1) Update the `DataCollatorForLanguageModeling` to make sure that at least 1 token per text is masked 2) Update the loss in `AutoModelForMaskedLM` that the loss is 0 if no token is selected for masking. **Work around** My current solution (for Pytorch) looks like this: If there is no token selected for masking, I select the first token (the first token after the CLS token). Not a perfect solution, but it solves the issue for me. ```python class MyDataCollatorForLanguageModeling(DataCollatorForLanguageModeling): def torch_mask_tokens(self, inputs, special_tokens_mask = None): """ Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. 
""" import torch labels = inputs.clone() # We sample a few tokens in each sequence for MLM training (with probability `self.mlm_probability`) probability_matrix = torch.full(labels.shape, self.mlm_probability) if special_tokens_mask is None: special_tokens_mask = [ self.tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist() ] special_tokens_mask = torch.tensor(special_tokens_mask, dtype=torch.bool) else: special_tokens_mask = special_tokens_mask.bool() probability_matrix.masked_fill_(special_tokens_mask, value=0.0) masked_indices = torch.bernoulli(probability_matrix).bool() # Nils added code: Make sure at least 1 token is masked for idx in range(len(masked_indices)): if not torch.any(masked_indices[idx]): masked_indices[idx][1] = True # /Nils added code labels[~masked_indices] = -100 # We only compute loss on masked tokens # 80% of the time, we replace masked input tokens with tokenizer.mask_token ([MASK]) indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices inputs[indices_replaced] = self.tokenizer.convert_tokens_to_ids(self.tokenizer.mask_token) # 10% of the time, we replace masked input tokens with random word indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced random_words = torch.randint(len(self.tokenizer), labels.shape, dtype=torch.long) inputs[indices_random] = random_words[indices_random] # The rest of the time (10% of the time) we keep the masked input tokens unchanged return inputs, labels ```
04-11-2022 20:46:36
04-11-2022 20:46:36
Hey @nreimers, thanks for the issue! I think the propositions are sensible, what do you think @sgugger ?<|||||>I think this should be solved in the `MaskedLM` directly to return a loss of 0.0 and no NaNs.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Any fix on this? I'm still facing this issue.
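A sketch of what such a fix could look like (an assumption about the shape of the solution, not the code that was merged): only compute the cross-entropy when at least one label is not `-100`, and otherwise return a zero loss that is still connected to the graph so `backward()` keeps working.

```python
import torch
import torch.nn.functional as F

def masked_lm_loss(prediction_scores, labels, vocab_size):
    flat_logits = prediction_scores.view(-1, vocab_size)
    flat_labels = labels.view(-1)
    if (flat_labels != -100).any():
        return F.cross_entropy(flat_logits, flat_labels, ignore_index=-100)
    # no token was masked in this batch: return 0.0 instead of NaN,
    # keeping the loss differentiable
    return prediction_scores.sum() * 0.0
```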
transformers
16,710
closed
AutoConfig.from_pretrained can fail with Tokenizers
`tokenizer_config = AutoConfig.from_pretrained("/path/to/tokenizer_config.json")` resolves the Auto Class by checking the `model_type` key, which is not always included for tokenizers, see e.g. https://huggingface.co/seyonec/ChemBERTa-zinc-base-v1/blob/main/tokenizer_config.json. Perhaps `model_type` should be exported in `tokenizer_config.json` when running `trainer.save(/path/to)`?
04-11-2022 18:28:34
04-11-2022 18:28:34
The `AutoConfig` utility is a utility for models only, it's the tool one can use to instantiate a model. What would you like to do by instantiating a configuration for a tokenizer? The `AutoTokenizer` class should handle everything on its own.<|||||>Ahh I was under the impression that you needed to provide a separate `tokenizer-config.json`. Must've been an old example or someone overcomplicating their own usage. Thanks for clarifying!
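As a concrete illustration of the reply above (the checkpoint is the one mentioned earlier; the input string is just an example SMILES), loading through `AutoTokenizer` needs no separate config object:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("seyonec/ChemBERTa-zinc-base-v1")
print(tokenizer("CCO"))  # input_ids / attention_mask for an example SMILES string
```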
transformers
16,709
closed
Add defensive check for config num_labels and id2label
# What does this PR do? As seen in #16600, there can be some unclear errors when the user tries to pass together an inconsistent `num_labels` and `id2label`. This PR addresses that with a clear error message.
04-11-2022 17:55:22
04-11-2022 17:55:22
_The documentation is not available anymore as the PR was closed or merged._<|||||>Might I suggest to add an example in the error message? Especially for regression where num_labels=1, it might not be obvious to users what the id2label map has to look like. Suggestion: ``` f"You passed along `num_labels={kwargs['num_labels']}` with an incompatible ID to label map: {kwargs['id2label']}. The id2label map should be a dictionary of ID (int) to label (str). E.g., for num_labels=1 (regression), it should look like this: {0: "LABEL_0"}, even if you do not have any explicitly labelled data." ```<|||||>I really don't understand why you want to pass both, as setting `num_labels=1` will do exactly that.<|||||>> I really don't understand why you want to pass both, as setting `num_labels=1` will do exactly that. Exactly, but this comes back to the original issue that I posted. Passing num_labels=1, id2label=None causes issues, because id2label=None overwrites the generated id2label map. So the only thing that works is: ```python config = BertConfig.from_pretrained(model_name_or_path, num_labels=1, id2label=None) config.num_labels = num_labels ``` or ```python config = BertConfig.from_pretrained(model_name_or_path, num_labels=1, id2label={0: "LABEL_0"}) ``` Yet it is not obvious why ```python config = BertConfig.from_pretrained(model_name_or_path, num_labels=1, id2label=None) ``` doesn't work even though you don't strictly need labels for a regression problem, and even though id2label=None is the default argument. And yes, I am aware that _most users_ will not encounter this issue because they will not explicitly pass id2label=None, but that does not mean that it cannot happen. And if it does it should be made obvious to the user why something goes wrong. I often write code for different use-cases, and as my issue showed, you will encounter this issue if you need to write code for different num_labels/tasks dynamically. If users are not expected to write code like that, it doesn't hurt to tell them in the error message how they should write their code instead. You are right though that my message seemed to imply that they _have_ to provide an id2label map. Suggestion: f"You passed along `num_labels={kwargs['num_labels']}` with an incompatible ID to label map: {kwargs['id2label']}. If given (not required), the id2label map should be a dictionary of ID (int) to label (str). Note that explicitly setting id2label to None may lead to unexpected errors. Instead, do not pass the id2label argument at all or pass a dummy id2label with the same len as num_labels."<|||||>I adapted the error message slightly to insist on removing one of the incompatible kwarg.
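For illustration, the kind of call the new check is meant to catch (the label names and counts are arbitrary, and exactly where the error is raised depends on the merged implementation):

```python
from transformers import BertConfig

# num_labels says 3 but the map only defines 2 ids; with the defensive check,
# this is expected to raise a ValueError describing the mismatch instead of
# failing later in a less obvious place.
config = BertConfig.from_pretrained(
    "bert-base-uncased", num_labels=3, id2label={0: "NEGATIVE", 1: "POSITIVE"}
)
```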
transformers
16,708
open
Build docker images for tokenizers main branch
:warning: Do not merge this PR! PR 1/2: in order to run the full test suite (slow tests included), with the `main` branch of the `tokenizers` library, rebase this PR on `main`. Once the workflows have finished, head to PR 2/2 here: https://github.com/huggingface/transformers/pull/16712 --- This PR is one of two items to run the full test suite for the tokenizers current `main` branch. In order to re-run, rebuild the docker images, publish them to the docker hub, and rebase this PR on the `main` branch of this repository. Steps done in order to create this PR: - Edit the dockerfiles so that they successfully install `tokenizers` from source in the container - Edit the `build-docker-images.yml` action to: - Have it push these images to the Docker Hub. - Remove all non important images - Edit the identifier to contain `internal` as a prefix, and `tokenizers-main` as a suffix. - Convert these images to private visibility so that it does not surprise users These images will be built on each commit, so rebasing this branch on `main` will retrigger the workflow
04-11-2022 17:20:44
04-11-2022 17:20:44
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16708). All of your documentation changes will be reflected on that endpoint.
transformers
16,707
closed
Private repo TrainingArgument
# What does this PR do? Creates a new argument for `TrainingArguments` called `hub_private_repo`. If True, the hub repo created by `Trainer` will be set to private. Defaults to False (public). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger
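A minimal usage sketch, assuming the argument lands as described (the output directory name is illustrative):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="my-finetuned-model",
    push_to_hub=True,
    hub_private_repo=True,  # the repo created by Trainer.push_to_hub is private
)
```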
04-11-2022 16:36:32
04-11-2022 16:36:32
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,706
closed
[from_pretrained] refactor find_mismatched_keys
This PR refactors 2 large identical code copies introduced by the recent sharded checkpoint PR into a helper function which is then called from 2 places. There is no change in functionality. It's an intermediary step for this PR: https://github.com/huggingface/transformers/pull/16657 which revamps `low_cpu_mem_usage` and integrates it better with the sharded checkpoint code branch. I explained here why the helper function is not a closure but needs the input args explicitly: https://github.com/huggingface/transformers/pull/16657#discussion_r846812714 @sgugger
04-11-2022 16:14:36
04-11-2022 16:14:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,705
closed
CUDA memory leak (OOM) when using HF Trainer in DDP mode
## Environment info - `transformers` version: 4.12.0 - Platform: Linux-5.4.0-1073-azure-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.10.0+cu102 (True) - Tensorflow version (GPU?): 2.6.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ## Description: @patrickvonplaten @patil-suraj @sgugger Hi! In a nut shell, Im trying to train mBart for a seq2seq2 generation task using huggingface transformers trainer with Distributed Data Parallel (DDP) mode, but encountered CUDA OOM error. Specifically, the problem is, with the same setting (batch_size, and same length of data, etc), I can train it with single GPU successfully! But always encounter CUDA OOM error when using DDP mode. I also tried to decrease batch size to 1, and length of input data to 50 (it was 256 for encoder and 100 for decoder), but still had the issue. I run the code below by: `%sh OMP_NUM_THREADS=10 python -m torch.distributed.launch --nproc_per_node=4 Train_MBart_DDP.py` # Code: #### Libraries from transformers import MBartForConditionalGeneration, MBartTokenizer from transformers import Trainer, TrainingArguments from transformers.models.bart.modeling_bart import shift_tokens_right from transformers import AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer from datasets import Dataset from datasets import load_dataset, load_metric import torch import numpy as np import nltk import os import logging logging.basicConfig(level=logging.DEBUG,format='%(asctime)s %(message)s') os.environ["WANDB_DISABLED"] = "true" metric =load_metric('/metrics/rouge/rouge.py') #### set up MLFOW and get local rank os.environ["DATABRICKS_HOST"] = "[MASKED]" os.environ["DATABRICKS_TOKEN"] = "[MASKED]" os.environ["WANDB_WATCH"] = "false" os.environ["NCCL_DEBUG"] = "INFO" local_rank = int(os.environ["LOCAL_RANK"]) client = MlflowClient() experiment = client.get_experiment([MASKED]) remote_server_uri = mlflow.tracking.get_tracking_uri()[] mlflow.set_tracking_uri(remote_server_uri) mlflow.set_experiment('[MASKED]/mBart_DDP') #### get data def apply_process(row): question, answers, ctxs = row answer = answers[0] candidates = np.array([d['text'] for d in row['ctxs']]) candidates = pd.unique(candidates) candidates = ' '.join(candidates[:5].tolist()) question_passage = question + ' ' + candidates return question_passage, answer df_path = '/tmp/top200_output.json' f = open(df_path) df = json.load(f) f.close() dff = pd.DataFrame(df) dff[['question_passage', 'answer']] = dff.apply(apply_process, axis = 1, result_type="expand") dff = dff[['question_passage', 'answer']] #### get dataset and model def convert_to_features(dataset): input_encodings = tokenizer.batch_encode_plus(dataset['question_passage'], pad_to_max_length=True, padding='max_length', max_length = 256, truncation=True) target_encodings = tokenizer.batch_encode_plus(dataset['answer'], pad_to_max_length=True, padding='max_length', max_length = 100, truncation=True) labels = target_encodings['input_ids'] labels = torch.tensor(labels) decoder_input_ids = shift_tokens_right(labels, model.config.pad_token_id, 0) decoder_input_ids = np.array(decoder_input_ids) labels[labels[:, :] == model.config.pad_token_id] = -100 labels = np.array(labels) encodings = { 'input_ids': input_encodings['input_ids'], 'attention_mask': input_encodings['attention_mask'], 'decoder_input_ids': decoder_input_ids, 
'labels': labels, } return encodings tokenizer = MBartTokenizer.from_pretrained('/tmp/mbart-large-cc25', src_lang="en_XX", local_files_only=True) model = MBartForConditionalGeneration.from_pretrained('/tmp/mbart-large-cc25', local_files_only=True) model.config.decoder_start_token_id = tokenizer.lang_code_to_id["en_XX"] dataset = Dataset.from_pandas(dff) test = Dataset.from_dict(dataset[:10]) train = Dataset.from_dict(dataset[500:]) test = test.map(convert_to_features, batched=True) columns = ['input_ids', 'labels', 'decoder_input_ids','attention_mask',] test.set_format(type='torch', columns=columns) test = test.remove_columns(['question_passage', 'answer']) train = train.map(convert_to_features, batched=True) columns = ['input_ids', 'labels', 'decoder_input_ids','attention_mask',] train.set_format(type='torch', columns=columns) train = train.remove_columns(['question_passage', 'answer']) #### set trainer args = Seq2SeqTrainingArguments( "/tmp/bart_training", evaluation_strategy = "epoch", learning_rate=2e-5, per_device_train_batch_size=4, per_device_eval_batch_size=4, weight_decay=0.01, save_total_limit=1, num_train_epochs=3, predict_with_generate=True, gradient_accumulation_steps = 4, disable_tqdm=False, dataloader_num_workers = 10, fp16=True, local_rank= os.environ["LOCAL_RANK"], do_train=True, do_eval=True, overwrite_output_dir = True, sharded_ddp = 'simple', dataloader_pin_memory = True, adafactor =True, skip_memory_metrics = True, ddp_find_unused_parameters =True, sortish_sampler=True, generation_max_length =50, gradient_checkpointing =False ) data_collator = DataCollatorForSeq2Seq(tokenizer, model=model) def compute_metrics(eval_pred): predictions, labels = eval_pred decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True) # Replace -100 in the labels as we can't decode them. labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) # Rouge expects a newline after each sentence decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds] decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels] logging.info(decoded_preds) logging.info('\n\n') logging.info(decoded_labels) result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True) # Extract a few results result = {key: value.mid.fmeasure * 100 for key, value in result.items()} # Add mean generated length prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions] result["gen_len"] = np.mean(prediction_lens) return {k: round(v, 4) for k, v in result.items()} trainer = Seq2SeqTrainer( model, args, train_dataset=train, eval_dataset=test, data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics ) trainer.train() ## The error: #### Error: 0220-221927-imgswodk-10-232-244-83:9106:9106 [2] NCCL INFO Bootstrap : Using eth0:10.232.244.83<0> 0220-221927-imgswodk-10-232-244-83:9106:9106 [2] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation 0220-221927-imgswodk-10-232-244-83:9106:9106 [2] NCCL INFO NET/IB : No device found. 
0220-221927-imgswodk-10-232-244-83:9106:9106 [2] NCCL INFO NET/Socket : Using [0]eth0:10.232.244.83<0> 0220-221927-imgswodk-10-232-244-83:9106:9106 [2] NCCL INFO Using network Socket 0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO Channel 00/02 : 0 1 2 3 0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO Channel 01/02 : 0 1 2 3 0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO Setting affinity for GPU 0 to 0fff 0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO Trees [0] 2/-1/-1->1->0 [1] 2/-1/-1->1->0 0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO Trees [0] 3/-1/-1->2->1 [1] 3/-1/-1->2->1 0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO Setting affinity for GPU 1 to 0fff 0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO Setting affinity for GPU 2 to 0fff 0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO Trees [0] -1/-1/-1->3->2 [1] -1/-1/-1->3->2 0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO Setting affinity for GPU 3 to 0fff 0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO Channel 00 : 0[100000] -> 1[200000] via direct shared memory 0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO Channel 00 : 3[400000] -> 0[100000] via direct shared memory 0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO Channel 01 : 0[100000] -> 1[200000] via direct shared memory 0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO Channel 01 : 3[400000] -> 0[100000] via direct shared memory 0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO Channel 00 : 2[300000] -> 3[400000] via direct shared memory 0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO Channel 00 : 1[200000] -> 2[300000] via direct shared memory 0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO Channel 01 : 2[300000] -> 3[400000] via direct shared memory 0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO Channel 01 : 1[200000] -> 2[300000] via direct shared memory 0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO Connected all rings 0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO Connected all rings 0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO Channel 00 : 3[400000] -> 2[300000] via direct shared memory 0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO Channel 01 : 3[400000] -> 2[300000] via direct shared memory 0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO Channel 00 : 2[300000] -> 1[200000] via direct shared memory 0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO Channel 01 : 2[300000] -> 1[200000] via direct shared memory 0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO Connected all trees 0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer 0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO Connected all rings 0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO Connected all rings 0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO Channel 00 : 1[200000] -> 0[100000] via direct shared memory 0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO Channel 01 : 1[200000] -> 0[100000] via direct shared memory 0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO Connected all trees 0220-221927-imgswodk-10-232-244-83:9104:9263 [0] 
NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer 0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO Connected all trees 0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer 0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO Connected all trees 0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer 0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO comm 0x7fba7c001240 rank 1 nranks 4 cudaDev 1 busId 200000 - Init COMPLETE 0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO comm 0x7f6a70001240 rank 2 nranks 4 cudaDev 2 busId 300000 - Init COMPLETE 0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO comm 0x7f4f40001240 rank 0 nranks 4 cudaDev 0 busId 100000 - Init COMPLETE 0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO comm 0x7f0918001240 rank 3 nranks 4 cudaDev 3 busId 400000 - Init COMPLETE 0220-221927-imgswodk-10-232-244-83:9104:9104 [0] NCCL INFO Launch mode Parallel ***** Running training ***** Num examples = 500 Num Epochs = 3 Instantaneous batch size per device = 4 Total train batch size (w. parallel, distributed & accumulation) = 16 Gradient Accumulation steps = 1 Total optimization steps = 96 0%| | 0/96 [00:00<?, ?it/s][W reducer.cpp:1303] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1303] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1303] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1303] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. 
If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) 1%| | 1/96 [00:01<01:53, 1.19s/it]Traceback (most recent call last): File "Train_MBart_reader_DDP.py", line 197, in <module> Traceback (most recent call last): File "Train_MBart_reader_DDP.py", line 197, in <module> Traceback (most recent call last): File "Train_MBart_reader_DDP.py", line 197, in <module> Traceback (most recent call last): File "Train_MBart_reader_DDP.py", line 197, in <module> trainer.train() File "/databricks/python/lib/python3.8/site-packages/transformers/trainer.py", line 1316, in train trainer.train() File "/databricks/python/lib/python3.8/site-packages/transformers/trainer.py", line 1316, in train tr_loss_step = self.training_step(model, inputs)trainer.train() File "/databricks/python/lib/python3.8/site-packages/transformers/trainer.py", line 1867, in training_step File "/databricks/python/lib/python3.8/site-packages/transformers/trainer.py", line 1316, in train tr_loss_step = self.training_step(model, inputs) File "/databricks/python/lib/python3.8/site-packages/transformers/trainer.py", line 1867, in training_step loss.backward() File "/databricks/python/lib/python3.8/site-packages/torch/_tensor.py", line 307, in backward tr_loss_step = self.training_step(model, inputs) File "/databricks/python/lib/python3.8/site-packages/transformers/trainer.py", line 1867, in training_step loss.backward()torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "/databricks/python/lib/python3.8/site-packages/torch/_tensor.py", line 307, in backward File "/databricks/python/lib/python3.8/site-packages/torch/autograd/__init__.py", line 154, in backward Variable._execution_engine.run_backward( RuntimeError : torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) CUDA out of memory. Tried to allocate 382.00 MiB (GPU 0; 15.78 GiB total capacity; 14.02 GiB already allocated; 339.50 MiB free; 14.05 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF File "/databricks/python/lib/python3.8/site-packages/torch/autograd/__init__.py", line 154, in backward Variable._execution_engine.run_backward( RuntimeError: CUDA out of memory. Tried to allocate 382.00 MiB (GPU 2; 15.78 GiB total capacity; 14.02 GiB already allocated; 339.50 MiB free; 14.05 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF loss.backward() File "/databricks/python/lib/python3.8/site-packages/torch/_tensor.py", line 307, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "/databricks/python/lib/python3.8/site-packages/torch/autograd/__init__.py", line 154, in backward Variable._execution_engine.run_backward( RuntimeError: CUDA out of memory. Tried to allocate 382.00 MiB (GPU 3; 15.78 GiB total capacity; 14.02 GiB already allocated; 339.50 MiB free; 14.05 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. 
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF trainer.train() File "/databricks/python/lib/python3.8/site-packages/transformers/trainer.py", line 1316, in train tr_loss_step = self.training_step(model, inputs) File "/databricks/python/lib/python3.8/site-packages/transformers/trainer.py", line 1867, in training_step loss.backward() File "/databricks/python/lib/python3.8/site-packages/torch/_tensor.py", line 307, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "/databricks/python/lib/python3.8/site-packages/torch/autograd/__init__.py", line 154, in backward Variable._execution_engine.run_backward( RuntimeError: CUDA out of memory. Tried to allocate 382.00 MiB (GPU 1; 15.78 GiB total capacity; 14.02 GiB already allocated; 339.50 MiB free; 14.05 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF ## To reproduce You could reproduce it using other data like summarization CNN Daily Dataset.
04-11-2022 16:11:59
04-11-2022 16:11:59
transformers
16,704
closed
TF beam search: handle case without past
# What does this PR do? Fixes `tests/vision_encoder_decoder/test_modeling_tf_vision_encoder_decoder.py::TFViT2GPT2ModelIntegrationTest::test_inference_coco_en`, whose root cause was in `beam_search` (it was not correctly handling cases without cache). This one slipped through the cracks; I probably forgot to run a final check on this test file before merging -- my bad 😅
04-11-2022 15:59:18
04-11-2022 15:59:18
_The documentation is not available anymore as the PR was closed or merged._<|||||>(merging as Patrick has approved #16713, which also contains these changes)
transformers
16,703
closed
Don't push checkpoints to hub in `no_trainer` scripts
# Don't push checkpoints to the Hub in `no_trainer` scripts ## What does this add? - Creates a `.gitignore` file in the base folder if `push_to_hub` was passed and one does not exist - During each call to `save_state`, if `push_to_hub` was passed, the checkpoint directory is added to the `.gitignore` ## Why is it needed? Users shouldn't expect all of their checkpoints to be pushed to the Hub as well as saved locally; checkpoints should stay local until they are ready for the final save
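A minimal sketch of the approach described above, assuming a single output directory that both holds checkpoints and backs the Hub repo (the function name and checkpoint-folder prefix are illustrative, not the exact script code):

```python
import os


def ignore_checkpoints_locally(output_dir: str, checkpoint_prefix: str = "step_") -> None:
    """Keep checkpoint folders out of the Hub repo by listing them in .gitignore."""
    gitignore_path = os.path.join(output_dir, ".gitignore")
    # Create the file if it does not exist yet.
    if not os.path.exists(gitignore_path):
        open(gitignore_path, "w").close()
    with open(gitignore_path, "r+") as f:
        existing = f.read()
        entry = f"{checkpoint_prefix}*\n"
        # Only append the pattern once, so repeated saves don't grow the file.
        if entry not in existing:
            f.write(entry)
```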
04-11-2022 15:37:13
04-11-2022 15:37:13
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,702
closed
group_texts function in language-modeling seems to be wrong
During preprocessing for language modeling, the `group_texts` function at https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py#L447-L460 seems to ignore special tokens like [CLS] and [SEP] and simply concatenates the tokenized texts together. An obvious consequence is that a single example may contain multiple [CLS] tokens, which confuses the model, and the [CLS] token is no longer guaranteed to be in the first position.
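A small illustration of the concatenation behaviour described above (token ids 101/102 stand in for [CLS]/[SEP]; this is a toy re-creation, not the actual `run_mlm.py` code):

```python
CLS, SEP = 101, 102  # bert-base-uncased special token ids
tokenized = [[CLS, 11, 12, 13, SEP], [CLS, 21, 22, SEP]]

# group_texts-style preprocessing: concatenate everything, then cut fixed-size blocks.
concatenated = sum(tokenized, [])
block_size = 4
total_length = (len(concatenated) // block_size) * block_size
chunks = [concatenated[i : i + block_size] for i in range(0, total_length, block_size)]

print(chunks)
# [[101, 11, 12, 13], [102, 101, 21, 22]]
# The second block contains a [SEP] followed by a [CLS] in the middle,
# and blocks are not guaranteed to start with exactly one [CLS].
```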
04-11-2022 14:06:08
04-11-2022 14:06:08
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I am confused about the efficiency of this function. Has anyone tested the performance of models trained this way?<|||||>I got the answer I expected from https://github.com/huggingface/transformers/issues/10737, closing this issue.
transformers
16,701
closed
Optional keys in TrainingArguments aren't always labelled as such
Inside `TrainingArguments` there are a few keys such as [tf32](https://github.com/huggingface/transformers/blob/098b0026447271a340d2d7e6bff428c82cb6d744/src/transformers/training_args.py#L594) which are optional and have a default of `None`, but aren't explicitly labelled as such. This can cause problems downstream; for example, OmegaConf will complain that ``` omegaconf.errors.ValidationError: Non optional field cannot be assigned None full_key: tf32 object_type=Seq2SeqTrainingArguments ``` The fix is simple: just make sure every key with a default of `None` has an `Optional` type. I'm busy with another PR at the moment, but can have a look later.
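A sketch of the proposed fix for such fields (the field below mirrors `tf32`; the metadata text is illustrative):

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ExampleTrainingArguments:
    # Before: `tf32: bool = field(default=None, ...)`; a None default without an
    # Optional annotation is what strict schema tools such as OmegaConf reject.
    # After: annotate the field explicitly as Optional.
    tf32: Optional[bool] = field(
        default=None,
        metadata={"help": "Whether to enable TF32 mode (illustrative help text)."},
    )
```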
04-11-2022 13:58:56
04-11-2022 13:58:56
cc @sgugger
transformers
16,700
closed
update decoder_vocab_size when resizing embeds
# What does this PR do? - Update `config.decoder_vocab_size` when resizing the embeddings if the encoder and decoder embeddings are shared. - Use `config.decoder_vocab_size` to reshape the `lm_logits`. Fixes #16670
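A hedged sketch of the idea; attribute names such as `share_encoder_decoder_embeddings` are assumptions based on the PR description, not verified against the final diff:

```python
import torch.nn as nn


class SketchSeq2SeqModel:
    """Illustrative only: shows where decoder_vocab_size would be kept in sync."""

    def __init__(self, config):
        self.config = config
        self.shared = nn.Embedding(config.vocab_size, config.d_model)

    def resize_token_embeddings(self, new_num_tokens: int) -> nn.Embedding:
        old = self.shared
        self.shared = nn.Embedding(new_num_tokens, old.embedding_dim)
        self.config.vocab_size = new_num_tokens
        # If encoder and decoder share the embedding matrix, the decoder's output
        # vocabulary grows too, so keep decoder_vocab_size in sync; the model can
        # then use config.decoder_vocab_size when reshaping lm_logits.
        if getattr(self.config, "share_encoder_decoder_embeddings", True):  # assumed flag name
            self.config.decoder_vocab_size = new_num_tokens
        return self.shared
```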
04-11-2022 12:33:20
04-11-2022 12:33:20
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,699
closed
Fix drop_path_rates argument passed to ConvNextStage
# What does this PR do? - Pass a sub-list of `drop_path_rates` to ConvNextStage instead of a single element. - Fix the following issue: Argument `drop_path_rates (List[float])` passed to `ConvNextStage` is a single float when `drop_path_rate` is specified by the config. Reproducer: ``` from transformers import ConvNextModel, ConvNextConfig configuration = ConvNextConfig(drop_path_rate=0.1) model = ConvNextModel(configuration) ``` Throws: ``` Traceback (most recent call last): File "repro-droprate.py", line 4, in <module> model = ConvNextModel(configuration) File "/home/alexandrec/.local/lib/python3.6/site-packages/transformers/models/convnext/modeling_convnext.py", line 311, in __init__ self.encoder = ConvNextEncoder(config) File "/home/alexandrec/.local/lib/python3.6/site-packages/transformers/models/convnext/modeling_convnext.py", line 221, in __init__ drop_path_rates=drop_path_rates[cur], File "/home/alexandrec/.local/lib/python3.6/site-packages/transformers/models/convnext/modeling_convnext.py", line 197, in __init__ *[ConvNextLayer(config, dim=out_channels, drop_path=drop_path_rates[j]) for j in range(depth)] File "/home/alexandrec/.local/lib/python3.6/site-packages/transformers/models/convnext/modeling_convnext.py", line 197, in <listcomp> *[ConvNextLayer(config, dim=out_channels, drop_path=drop_path_rates[j]) for j in range(depth)] TypeError: 'float' object is not subscriptable ``` Reproduced with Transformers 4.18.0 , Ubuntu 18.04 + Python 3.6.9 Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
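A sketch of the fix described above, assuming the usual stochastic-depth convention of linearly increasing drop rates across all layers (values are illustrative):

```python
import torch

depths = [3, 3, 9, 3]      # config.depths for a ConvNeXt-T-like config
drop_path_rate = 0.1       # config.drop_path_rate

# One rate per layer across the whole network...
rates = torch.linspace(0, drop_path_rate, sum(depths)).tolist()

# ...then each stage receives its own sub-list instead of a single float,
# so ConvNextStage can index drop_path_rates[j] for its j-th layer.
drop_path_rates, cur = [], 0
for depth in depths:
    drop_path_rates.append(rates[cur : cur + depth])
    cur += depth

print([len(r) for r in drop_path_rates])  # [3, 3, 9, 3]
```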
04-11-2022 11:56:04
04-11-2022 11:56:04
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16699). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Thanks for pointing out, I'll remove the unclear `cur` variable and will fix it for the TF implementation as well.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing as this was fixed in #17280
transformers
16,698
closed
Fix TF_MASKED_LM_SAMPLE
# What does this PR do? Fix `TF_MASKED_LM_SAMPLE`: there is currently a dimension issue regarding `mask_token_index` and `predicted_token_id`, which gives different results between PT/TF masked LM code samples PT: `paris` TF: `p a r i s` See below for details. (This is related to #16523) ## ### PT_MASKED_LM_SAMPLE ```python from transformers import BertTokenizer, BertForMaskedLM import torch mask = "[MASK]", checkpoint = "bert-base-uncased" tokenizer = BertTokenizer.from_pretrained(f"{checkpoint}") model = BertForMaskedLM.from_pretrained(f"{checkpoint}") inputs = tokenizer(f"The capital of France is {mask}.", return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # retrieve index of {mask} mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] predicted_token_id = logits[0, mask_token_index].argmax(axis=-1) expected_output = tokenizer.decode(predicted_token_id) print(mask_token_index) # tensor([8]): row dimension from `nonzero()` print(predicted_token_id) # tensor([3000]) print(expected_output) # paris ``` ### TF_MASKED_LM_SAMPLE (on `main`) ```python from transformers import BertTokenizer, TFBertForMaskedLM import tensorflow as tf tokenizer = BertTokenizer.from_pretrained(f"{checkpoint}") model = TFBertForMaskedLM.from_pretrained(f"{checkpoint}") inputs = tokenizer(f"The capital of France is {mask}.", return_tensors="tf") logits = model(**inputs).logits # retrieve index of {mask} mask_token_index = tf.where(inputs.input_ids == tokenizer.mask_token_id)[0][1] predicted_token_id = tf.math.argmax(logits[0, mask_token_index], axis=-1) expected_output = tokenizer.decode(predicted_token_id) print(mask_token_index) # tf.Tensor(8, shape=(), dtype=int64): no row dimension print(predicted_token_id) # tf.Tensor(3000, shape=(), dtype=int64) print(tokenizer.decode(predicted_token_id)) # p a r i s (not good) ``` ### TF_MASKED_LM_SAMPLE (this PR) ```python # retrieve index of {mask} mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0]) selected_logits = tf.gather_nd(logits[0], indices=mask_token_index) predicted_token_id = tf.math.argmax(selected_logits, axis=-1) expected_output = tokenizer.decode(predicted_token_id) print(mask_token_index) # tf.Tensor([[8]], shape=(1, 1), dtype=int64): with row dimension print(predicted_token_id) # tf.Tensor([3000], shape=(1,), dtype=int64) print(tokenizer.decode(predicted_token_id)) # paris ```
04-11-2022 11:03:47
04-11-2022 11:03:47
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,697
closed
ViLT vs VIT Classifier heads question
Is there any specific reason why the classifier head on the ViLT model for tasks such as ```ViltForImagesAndTextClassification``` or ```ViltForQuestionAnswering``` has a ```LayerNorm and GELU``` rather than just linear input and output layers (like below), whereas the classifier head on ViT for ```ViTForImageClassification``` has only a linear layer? Please advise. I.e. this ```python # Classifier head self.classifier = nn.Sequential( nn.Linear(config.hidden_size, config.hidden_size * 2), nn.LayerNorm(config.hidden_size * 2), nn.GELU(), nn.Linear(config.hidden_size * 2, config.num_labels), ) ``` and NOT this? ```python # Classifier head self.classifier = nn.Linear(config.hidden_size, config.num_labels) if config.num_labels > 0 else nn.Identity() ```
04-11-2022 09:08:34
04-11-2022 09:08:34
Hi, thanks for your interest in ViLT. That was the decision of the authors. Maybe you can ask them 😉 <|||||>Ok :-)
transformers
16,696
closed
Handle image_embeds in ViltModel
# What does this PR do? Handle `image_embeds` in `ViltModel` / `ViltForImagesAndTextClassification`. (It looks like `Vilt` is the first model to introduce the `image_embeds` argument.) ## More Info As far as I understand, the `image_embeds` in `ViltForImagesAndTextClassification` should have a `num_images` dimension, just as `pixel_values` does.
04-11-2022 09:07:11
04-11-2022 09:07:11
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for improving this! > > Out of interest: were you experimenting with ViLT? Not for this PR. I tried to fix a CI (vit-mae), which was about `test_torchscript`. It turns out to be related to model main input -> I worked/improved on it -> more models involved including ViLT -> I just took this chance to work on this PR (otherwise I would forget it very quickly)
transformers
16,695
closed
Enable ONNX support for multiple-choice classification heads
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> Currently, the `transformers.onnx` package doesn't support the export of models with a multiple-choice classification head, i.e. all `ModelForMultipleChoice` classes. We should enable this to provide full coverage of our current exports. Implementing this involves: * Adding a `multiple-choice` feature to the `FeaturesManager` * Generating the appropriate dummy inputs * Updating the features of all existing models which have a corresponding head cc @michaelbenayoun
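A hedged sketch of the dummy-input part: multiple-choice heads take inputs with an extra `num_choices` axis, so the generated dummy batch needs shape `(batch_size, num_choices, sequence_length)`. The function and argument names below are illustrative, not the actual `transformers.onnx` API:

```python
def generate_multiple_choice_dummy_inputs(tokenizer, batch_size=2, num_choices=4, seq_length=16):
    """Illustrative only: build (batch, num_choices, seq_len) shaped dummy inputs."""
    dummy_text = " ".join(["hello"] * seq_length)
    encoded = tokenizer([dummy_text] * (batch_size * num_choices), return_tensors="pt")
    # Fold the flat batch back into (batch_size, num_choices, -1), which is the
    # shape the *ForMultipleChoice forward passes expect.
    return {name: tensor.view(batch_size, num_choices, -1) for name, tensor in encoded.items()}
```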
04-11-2022 08:41:45
04-11-2022 08:41:45
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,694
closed
Question: Add Embedding layer to BERT
I am going to add a BERT embedding layer. ```python class CustomBertEmbeddings(nn.Module): def __init__(self, config): super().__init__() self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id) self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size) self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size) ############# THIS PART ################ self.entity_type_embeddings = nn.Embedding(3, config.hidden_size, max_norm=True) ############# THIS PART ################ # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load # any TensorFlow checkpoint file self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.dropout = nn.Dropout(config.hidden_dropout_prob) # position_ids (1, len position emb) is contiguous in memory and exported when serialized self.position_embedding_type = getattr(config, "position_embedding_type", "absolute") self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1))) if version.parse(torch.__version__) > version.parse("1.6.0"): self.register_buffer( "token_type_ids", torch.zeros(self.position_ids.size(), dtype=torch.long), persistent=False, ) ``` This layer works properly as long as it has only two values, 0 and 1. However, I want to use 3 values ​​0,1,2. However, if 3 values ​​are used, the following error occurs. ``` /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [3,0,0] ... /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ``` I can't understand. All I have to do is add a new embedding vector as shown below. ```python entity_type_embeddings = self.token_type_embeddings(entity_loc_ids) embeddings = inputs_embeds + token_type_embeddings + entity_type_embeddings ``` can anyone help me?
04-11-2022 08:37:43
04-11-2022 08:37:43
self.token_type_embeddings(entity_loc_ids) -> self.entity_type_embeddings(entity_loc_ids): there was a typo in the forward pass; the 2-entry token type embedding table was being indexed instead of the new 3-entry entity type embedding table.
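A minimal demonstration of the mistake identified above: indexing the 2-entry token type table with index 2 fails, while the 3-entry entity type table accepts it (on GPU the same mistake surfaces as the `srcIndex < srcSelectDimSize` device-side assert shown in the issue):

```python
import torch
import torch.nn as nn

token_type_embeddings = nn.Embedding(2, 8)   # valid indices: 0, 1
entity_type_embeddings = nn.Embedding(3, 8)  # valid indices: 0, 1, 2

ids = torch.tensor([[0, 1, 2]])
entity_type_embeddings(ids)                  # works
try:
    token_type_embeddings(ids)               # index 2 is out of range for a 2-entry table
except IndexError as err:
    print(err)                               # "index out of range in self" on CPU
```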
transformers
16,693
closed
Rename the method test_torchscript
# What does this PR do? `class ModelTesterMixin` has attribute `test_torchscript` as well as method named `def test_torchscript`. In a few model specific test files, we have `test_torchscript = True` defined, for example, `T5ModelTest`: https://github.com/huggingface/transformers/blob/7c5d79912a21880ce13d77881940458e90d98917/tests/t5/test_modeling_t5.py#L515 `DistilBertModelTest`: https://github.com/huggingface/transformers/blob/7c5d79912a21880ce13d77881940458e90d98917/tests/distilbert/test_modeling_distilbert.py#L214 This actually makes the `test_torchscript` **being a boolean value** instead of a method, therefore `test_torchscript` is **not run** for these places. See the last section for a dummy example. ## Fix Although this could be fixed just by removing `test_torchscript = True`, it would be a better idea to rename the method `def test_torchscript` to something else like `def test_torchscript_simple`. ## Dummy example (to reproduce the issue) ```python class DummyCommonTest: test_me = True def test_me(self): a = 3 print(a) class DummyModelTest1(DummyCommonTest): pass class DummyModelTest2(DummyCommonTest): test_me = True dummy_test = DummyCommonTest() print(dummy_test.test_me) dummy_test.test_me() dummy_model_test_1 = DummyModelTest1() # A method print(dummy_model_test_1.test_me) # can be called dummy_model_test_1.test_me() dummy_model_test_2 = DummyModelTest2() # A boolean print(dummy_model_test_2.test_me) # can't be called: (TypeError: 'bool' object is not callable) dummy_model_test_2.test_me() ```
04-11-2022 07:59:02
04-11-2022 07:59:02
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,692
closed
ViLT Fine-tuning Bug: ValueError: operands could not be broadcast together with shapes
## Environment info - `transformers` version: 4.19.0.dev0 - Platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.31 - Python version: 3.9.12 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): GPU - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No Models: - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, ViLT BEiT, DEiT, DETR, CANINE: @NielsRogge Library: - Vision: @NielsRogge, @sgugger ## Information The model I am using **ViLT**: The problem arises when using: * ViLT official example scripts: (give details below) https://github.com/NielsRogge/Transformers-Tutorials/blob/master/ViLT/Fine_tuning_ViLT_for_VQA.ipynb The tasks I am working on is: * Fine-tune ViLT for QVA on the complete VQAv2 validation dataset ## To reproduce Steps to reproduce the behavior: 1. Download all the required dependencies 2. Change the data size in **Cell [29]** from **questions=questions[:100]** to **questions=questions[0:]** and **annotations=annotations[:100]** to **annotations=annotations[0:]** 3. Run the example notebook **Replace the code below in the example notebook Cell [29] to reproduce the behavior** ``` from transformers import ViltProcessor processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm") dataset = VQADataset(questions=questions[0:], annotations=annotations[0:], processor=processor) ``` **Error Message is shown below:** ``` 1%|▌ | 236/26795 [01:31<2:52:07, 2.57it/s] --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Input In [29], in <cell line: 4>() 4 for epoch in range(50): # loop over the dataset multiple times 5 print(f"Epoch: {epoch}") ----> 6 for batch in tqdm(train_dataloader, total=len(train_dataloader)): 7 # get the inputs; 8 batch = {k:v.to(device) for k,v in batch.items()} 10 # zero the parameter gradients File ~/anaconda3/envs/hugging-face/lib/python3.8/site-packages/tqdm/std.py:1195, in tqdm.__iter__(self) 1192 time = self._time 1194 try: -> 1195 for obj in iterable: 1196 yield obj 1197 # Update and possibly print the progressbar. 1198 # Note: does not call self.update(1) for speed optimisation. 
File ~/anaconda3/envs/hugging-face/lib/python3.8/site-packages/torch/utils/data/dataloader.py:530, in _BaseDataLoaderIter.__next__(self) 528 if self._sampler_iter is None: 529 self._reset() --> 530 data = self._next_data() 531 self._num_yielded += 1 532 if self._dataset_kind == _DatasetKind.Iterable and \ 533 self._IterableDataset_len_called is not None and \ 534 self._num_yielded > self._IterableDataset_len_called: File ~/anaconda3/envs/hugging-face/lib/python3.8/site-packages/torch/utils/data/dataloader.py:570, in _SingleProcessDataLoaderIter._next_data(self) 568 def _next_data(self): 569 index = self._next_index() # may raise StopIteration --> 570 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 571 if self._pin_memory: 572 data = _utils.pin_memory.pin_memory(data) File ~/anaconda3/envs/hugging-face/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py:49, in _MapDatasetFetcher.fetch(self, possibly_batched_index) 47 def fetch(self, possibly_batched_index): 48 if self.auto_collation: ---> 49 data = [self.dataset[idx] for idx in possibly_batched_index] 50 else: 51 data = self.dataset[possibly_batched_index] File ~/anaconda3/envs/hugging-face/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py:49, in <listcomp>(.0) 47 def fetch(self, possibly_batched_index): 48 if self.auto_collation: ---> 49 data = [self.dataset[idx] for idx in possibly_batched_index] 50 else: 51 data = self.dataset[possibly_batched_index] Input In [19], in VQADataset.__getitem__(self, idx) 19 image = Image.open(id_to_filename[annotation['image_id']]) 20 text = questions['question'] ---> 22 encoding = self.processor(image, text, padding="max_length", truncation=True, return_tensors="pt") 23 # remove batch dimension 24 for k,v in encoding.items(): File ~/anaconda3/envs/hugging-face/lib/python3.8/site-packages/transformers/models/vilt/processing_vilt.py:91, in ViltProcessor.__call__(self, images, text, add_special_tokens, padding, truncation, max_length, stride, pad_to_multiple_of, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, return_tensors, **kwargs) 72 encoding = self.tokenizer( 73 text=text, 74 add_special_tokens=add_special_tokens, (...) 88 **kwargs, 89 ) 90 # add pixel_values + pixel_mask ---> 91 encoding_feature_extractor = self.feature_extractor(images, return_tensors=return_tensors) 92 encoding.update(encoding_feature_extractor) 94 return encoding File ~/anaconda3/envs/hugging-face/lib/python3.8/site-packages/transformers/models/vilt/feature_extraction_vilt.py:265, in ViltFeatureExtractor.__call__(self, images, pad_and_return_pixel_mask, return_tensors, **kwargs) 254 images = [ 255 self._resize( 256 image=image, (...) 262 for image in images 263 ] 264 if self.do_normalize: --> 265 images = [self.normalize(image=image, mean=self.image_mean, std=self.image_std) for image in images] 267 if pad_and_return_pixel_mask: 268 # pad images up to largest image in batch and create pixel_mask 269 max_size = self._max_by_axis([list(image.shape) for image in images]) File ~/anaconda3/envs/hugging-face/lib/python3.8/site-packages/transformers/models/vilt/feature_extraction_vilt.py:265, in <listcomp>(.0) 254 images = [ 255 self._resize( 256 image=image, (...) 
262 for image in images 263 ] 264 if self.do_normalize: --> 265 images = [self.normalize(image=image, mean=self.image_mean, std=self.image_std) for image in images] 267 if pad_and_return_pixel_mask: 268 # pad images up to largest image in batch and create pixel_mask 269 max_size = self._max_by_axis([list(image.shape) for image in images]) File ~/anaconda3/envs/hugging-face/lib/python3.8/site-packages/transformers/image_utils.py:186, in ImageFeatureExtractionMixin.normalize(self, image, mean, std) 184 return (image - mean[:, None, None]) / std[:, None, None] 185 else: --> 186 return (image - mean) / std ValueError: operands could not be broadcast together with shapes (384,576) (3,) ``` ## Expected behavior The ViLT model should be fin-tuned with the provided dataset without any error.
04-11-2022 07:43:31
04-11-2022 07:43:31
Hi, Thanks for your interest in ViLT! Could you try adding `.convert("RGB")` when reading an image in the getitem method of the PyTorch dataset?<|||||>> Hi, > > Thanks for your interest in ViLT! Could you try adding `.convert("RGB")` when reading an image in the getitem method of the PyTorch dataset? Thank you for your quick reply! I will have a try now.<|||||>@NielsRogge Problem Solved! Thank you so much!
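For reference, a minimal sketch of the fix (the variable names follow the notebook, but this is illustrative code, not the notebook itself):

```python
from PIL import Image


def load_rgb(path: str) -> Image.Image:
    # Some VQAv2/COCO images are grayscale (H x W instead of H x W x 3);
    # converting to RGB guarantees three channels so the feature extractor's
    # per-channel mean/std normalization can broadcast.
    return Image.open(path).convert("RGB")


# In VQADataset.__getitem__, the change amounts to:
# image = Image.open(id_to_filename[annotation["image_id"]]).convert("RGB")
```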
transformers
16,691
closed
Reduce the memory leak caused by `torch.jit.trace`
# What does this PR do? Reduce the memory leak caused by `torch.jit.trace`. Without this fix, each call to `_create_and_check_torchscript` increases RAM usage by ~20MB. (Even with this cleanup call, there is still a memory leak of ~0.04MB.) ## Remark Since our torchscript tests are slow tests, they are run only in scheduled CI (where the test jobs are organized by model), so this memory leak issue is not critical (memory is released after each job process exits). However, for PRs like #16679, I need to make sure the modified tests will pass. When I ran them in a GCP VM, I got an OOM issue. So the change in this PR still has value, I think. ## More information The method is copied from `torch` https://github.com/pytorch/pytorch/blob/bcf6974c207ac0339bfb8bdfdb0b0ec348f7a22f/torch/testing/_internal/jit_utils.py#L59 This is called in torch/test, including https://github.com/pytorch/pytorch/blob/bcf6974c207ac0339bfb8bdfdb0b0ec348f7a22f/test/test_jit.py#L12962 ```python # clear the class registry as we will be defining foo multiple times jit_utils.clear_class_registry() ```
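For reference, a sketch of what such a cleanup helper looks like, mirroring the linked `jit_utils.clear_class_registry`; note these are private torch internals and may change between releases:

```python
import torch


def clear_torch_jit_class_registry():
    # Private torch internals, copied in spirit from torch.testing._internal.jit_utils;
    # they drop the cached TorchScript class/type state that repeated torch.jit.trace
    # calls accumulate.
    torch._C._jit_clear_class_registry()
    torch.jit._recursive.concrete_type_store = torch.jit._recursive.ConcreteTypeStore()
    torch.jit._state._clear_class_state()
```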
04-11-2022 07:29:40
04-11-2022 07:29:40
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,690
closed
A question about the position of language indicator tokens of mBART
## Environment info - `transformers` version: 4.17.0 - Platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.11.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help Models: - MBART, BART: @patil-suraj ## Information <img width="655" alt="image" src="https://user-images.githubusercontent.com/3585459/162672533-17804a89-a22e-4716-bcea-bc89ea9c1dde.png"> According to Figure 1 of [the mBART paper](https://arxiv.org/pdf/2001.08210.pdf), I think, * `input_ids = [s1, s2, ..., sn, </s>, <SRC_LANG>]` * `decoder_input_ids = [<TGT_LANG>, d1, d2, ..., dm, </s>]` * `labels = [d1, d2, ..., dm, </s>, <TGT_LANG>]` Then, I checked the output of the mBART tokenizer, ```python tokenizer = AutoTokenizer.from_pretrained('facebook/mbart-large-50-many-to-many-mmt', src_lang='en_XX', tgt_lang='ro_RO') src = tokenizer.tokenize('UN Chief Says There Is No Military Solution in Syria', add_special_tokens=True) # ['en_XX', '▁UN', '▁Chief', '▁Say', 's', '▁There', '▁Is', '▁No', '▁Militar', 'y', '▁Solution', '▁in', '▁Syria', '</s>'] with tokenizer.as_target_tokenizer(): tgt = tokenizer.tokenize('Şeful ONU declară că nu există o soluţie militară în Siria', add_special_tokens=True) # ['ro_RO', '▁Şe', 'ful', '▁ONU', '▁de', 'cla', 'ră', '▁că', '▁nu', '▁există', '▁o', '▁solu', 'ţie', '▁militar', 'ă', '▁în', '▁Siria', '</s>'] ``` Seems, the language indicator token is placed at the beginning of each sentence. However, I found `BartForConditionalGeneration` shifted the `labels` and feed it as the `decoder_input_ids` to the decoder while leaving the `input_ids` unchanged. https://github.com/huggingface/transformers/blob/7c5d79912a21880ce13d77881940458e90d98917/src/transformers/models/bart/modeling_bart.py#L1343-L1346 Isn't this procedure changing the sequences to be like? * `input_ids = [<SRC_LANG>, s1, s2, ..., sn, </s>]` * `decoder_input_ids = [</s>, <TGT_LANG>, d1, d2, ..., dm]` * `labels = [<TGT_LANG>, d1, d2, ..., dm, </s>]` I think this is quite different from the original paper, Is this implementation correct?
04-11-2022 06:09:20
04-11-2022 06:09:20
I noticed there is [a relevant issue](https://github.com/huggingface/transformers/issues/16583), @patil-suraj has mentioned > In BART `eos` (`</s>`) token is used as the `decoder_start_token_id` But why do we need `</s>` to be the `decoder_start_token_id`? In my understanding of the original paper, the `<TGT_LANG>` plays this role, according to Figure 1 again.<|||||>Hi! In the code snippet you are using `mbart-50` tokenizer, which uses a different format than mbart-25 (the model mentioned in the paper) as explained in the [doc](https://huggingface.co/docs/transformers/model_doc/mbart#mbart-and-mbart50). If you use the mbart-25 tokenizer in your code you'll see that that it follows the format from the paper. ```python tokenizer = AutoTokenizer.from_pretrained('facebook/mbart-large-cc25', src_lang='en_XX', tgt_lang='ro_RO') ``` Also > But why do we need </s> to be the decoder_start_token_id? In my understanding of the original paper, the <TGT_LANG> plays this role, according to Figure 1 again. This is an artifact of the `fairseq` repo. In fairseq the `labels` are shifted to the right to get the `decoder_input_ids`, which makes the `eos` token the first token in the sequence.<|||||>@patil-suraj Thanks for your reply. I tried `mbart-large-cc25` as you mentioned as the following, ```python from transformers import MBartForConditionalGeneration, AutoTokenizer from transformers.models.mbart.modeling_mbart import shift_tokens_right tokenizer = AutoTokenizer.from_pretrained( 'facebook/mbart-large-cc25', src_lang='en_XX', tgt_lang='ro_RO', ) model = MBartForConditionalGeneration.from_pretrained('facebook/mbart-large-cc25') print(tokenizer.tokenize('UN Chief Says There Is No Military Solution in Syria', add_special_tokens=True)) # ['▁UN', '▁Chief', '▁Say', 's', '▁There', '▁Is', '▁No', '▁Militar', 'y', '▁Solution', '▁in', '▁Syria', '</s>', 'en_XX'] src = tokenizer('UN Chief Says There Is No Military Solution in Syria', return_tensors='pt') with tokenizer.as_target_tokenizer(): tgt = tokenizer('Şeful ONU declară că nu există o soluţie militară în Siria', return_tensors='pt') hidden = model.forward( input_ids=src['input_ids'], attention_mask=src['attention_mask'], decoder_input_ids=shift_tokens_right(tgt['input_ids'], pad_token_id=tokenizer.pad_token_id), decoder_attention_mask=tgt['attention_mask'], ) out = hidden.logits.argmax(dim=-1) print(tokenizer.convert_ids_to_tokens(out[0].detach().tolist())) # ['<s>', 'f', '▁of', '▁Say', '▁Say', 'ry', '▁Say', '▁Say', '▁Say', '▁Say', '▁Say', '▁Say', '▁Say', '▁Say', '▁Say', '▁Say', '▁Say', '▁Chief'] ``` Yes, the tokenizer works fine, the `en_XX` is put at the end of the sentence. However, the predictions from `hidden` are quite different from the ground truth. Especially, the prediction starts with a `<s>` and ends with a `▁Chief`, we are not supposed to use `<s>`, are we? I am not sure if I did something wrong.<|||||>Thanks anyway.
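A pure-string illustration of the two conventions discussed above (no model code involved; the tokens are placeholders):

```python
# mBART-25 / paper convention: the target tokenizer emits [d1, ..., dm, </s>, <TGT_LANG>]
# and shift_tokens_right wraps the trailing language id to the front of decoder_input_ids.
mbart_labels = ["d1", "d2", "</s>", "ro_RO"]
mbart_decoder_input_ids = [mbart_labels[-1]] + mbart_labels[:-1]
print(mbart_decoder_input_ids)  # ['ro_RO', 'd1', 'd2', '</s>']

# BART-style shifting (the fairseq artifact mentioned in the reply above): labels end
# with </s> and the decoder starts from a fixed decoder_start_token_id, which for BART
# is also </s>.
bart_labels = ["d1", "d2", "</s>"]
bart_decoder_input_ids = ["</s>"] + bart_labels[:-1]
print(bart_decoder_input_ids)  # ['</s>', 'd1', 'd2']
```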
transformers
16,689
closed
`"histogram_cpu" not implemented for 'BFloat16'` when using deepspeed and reporting to wandb
## Environment info - `transformers` version: 4.18.0 - Platform: Linux-5.13.0-20-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: True - Using distributed or parallel set-up in script?: Deepspeed ### Who can help @stas00 ## Information Model I am using (Bert, XLNet ...): bart-large The problem arises when using: * [X] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce I'm using a training script adapted from the run_summarization.py example with a model using bart-large architecture and a custom tokenizer. I'm working locally on my workstation with two RTX 3090s. I had been training using deepspeed and fp16, but I saw that the latest transformers update added bf16 support to the deepspeed integration, so I wanted to try that in order to reduce the constant overflow errors I had been getting. But when using deepspeed, bf16, and reporting to wandb, my training crashes. I'm able to reproduce the error using the example scripts: ``` deepspeed run_summarization.py \ --model_name_or_path facebook/bart-large \ --dataset_name cnn_dailymail --dataset_config_name 3.0.0 \ --do_train --per_device_train_batch_size 4 --bf16 \ --overwrite_output_dir --output_dir models/text_summarization \ --deepspeed config/deepspeed_config-zero2-bf16.json ``` with the deepspeed config being: ``` { "bf16": { "enabled": true }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 2, "allgather_partitions": true, "allgather_bucket_size": 2e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 2e8, "contiguous_gradients": true, "cpu_offload": false }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } ``` After 500 steps (when saving the first checkpoint), wandb throws this error: `RuntimeError: "histogram_cpu" not implemented for 'BFloat16'` The error doesn't occur if I run the same script without deepspeed. And no other error gets thrown if I use deepspeed and don't report to wandb. [A very similar issue was reported to wandb last month](https://github.com/wandb/client/issues/3332). The wandb people say it's an issue with pytorch and not wandb, but since everything is working without deepspeed, maybe there's something different about how the deepspeed integration is reporting to wandb? ## Expected behavior The training should continue without crashing, and should report as much info to wandb as possible (not sure if there are limits to that introduced by bf16 )
04-10-2022 23:22:06
04-10-2022 23:22:06
Hey @jncasey, thanks for the report. > since everything is working without deepspeed, maybe there's something different about how the deepspeed integration is reporting to wandb? Could you check that the script still breaks when you set `"bf16"` to `"enabled": false` in the DeepSpeed config? Also, a full traceback would be helpful!<|||||>Updating the DeepSpeed config to set `"bf16" { "enabled": false }` raises this error, as one might expect: ``` ValueError: Please correct the following DeepSpeed config values that mismatch TrainingArguments values: - ds bf16.enabled=False vs hf bf16|bf16_full_eval=True ``` Setting `"bf16" { "enabled": "auto" }` yields the original error I reported. Here's the full traceback: ``` 0%|▎ | 499/107670 [03:39<13:13:26, 2.25it/s]Traceback (most recent call last): File "./bin/run_summarization.py", line 706, in <module> main() File "./bin/run_summarization.py", line 625, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/opt/miniconda3/envs/hf/lib/python3.8/site-packages/transformers/trainer.py", line 1422, in train tr_loss_step = self.training_step(model, inputs) File "/opt/miniconda3/envs/hf/lib/python3.8/site-packages/transformers/trainer.py", line 2027, in training_step loss = self.deepspeed.backward(loss) File "/opt/miniconda3/envs/hf/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 11, in wrapped_fn return func(*args, **kwargs) File "/opt/miniconda3/envs/hf/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1667, in backward self.optimizer.backward(loss) File "/opt/miniconda3/envs/hf/lib/python3.8/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1921, in backward self.loss_scaler.backward(loss.float(), retain_graph=retain_graph) File "/opt/miniconda3/envs/hf/lib/python3.8/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 53, in backward scaled_loss.backward(retain_graph=retain_graph) File "/opt/miniconda3/envs/hf/lib/python3.8/site-packages/torch/_tensor.py", line 363, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "/opt/miniconda3/envs/hf/lib/python3.8/site-packages/torch/autograd/__init__.py", line 173, in backward Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass File "/opt/miniconda3/envs/hf/lib/python3.8/site-packages/wandb/wandb_torch.py", line 266, in <lambda> handle = var.register_hook(lambda grad: _callback(grad, log_track)) File "/opt/miniconda3/envs/hf/lib/python3.8/site-packages/wandb/wandb_torch.py", line 264, in _callback self.log_tensor_stats(grad.data, name) File "/opt/miniconda3/envs/hf/lib/python3.8/site-packages/wandb/wandb_torch.py", line 215, in log_tensor_stats tensor = flat.histc(bins=self._num_bins, min=tmin, max=tmax) RuntimeError: "histogram_cpu" not implemented for 'BFloat16' ```<|||||>Please help me understand this issue - why are we discussing a problem in wandb at `transformers`? Clearly wandb can't handle bf16 inputs in at least one code path - what does it have to do with deepspeed or transformers? The next logical step is to either have `wandb` workaround bf16 if it can't handle it and not use `histogram_cpu` for bf16 inputs or ask pytorch to implement it for bf16/cpu.<|||||>Hi Stas, My thought was that since the same script can report to wandb using bf16 when not using DeepSpeed, there might be something different in how the DeepSpeed integration handles the reporting, and it might be possible to avoid the problem the way the non-DeepSpeed run does. 
But I admit I'm out of my depth here, so maybe my thinking is flawed and there's nothing to be done on the transformers side.<|||||>The difference is that normally w/o deepspeed you're using bf16/amp, which keeps the model and activations in fp32 and downcasts them to bf16 when needed. Deepspeed doesn't use amp and uses a different approach where it keeps the model and activations in half precision from the get-go (fp16 or bf16) (but keeps a fp32 weights copy in its optimizer), and so it trips wandb's code which doesn't expect bf16 tensors. It's not something that can be changed in Deepspeed or the HF integration. Possible solutions: For example, if the bf16 input < 64k, wandb could make a copy and safely convert it to fp16 and run `histogram_cpu` on it (I assume it supports fp16); if not, perhaps it could do the processing on the GPU. But the simplest solution is to request pytorch to implement `histogram_cpu` for `BFloat16` inputs, which you or the wandb folks can ask via a "feature request" at https://github.com/pytorch/pytorch/issues/new/choose but which of course will take time. So possibly both solutions could be used together. <|||||>Got it! Thanks for the super clear explanation.<|||||>Late update: This seems to be fixed with the release of pytorch 1.12<|||||>Super! Thank you for the update, @jncasey <|||||>I'm still having this bug with torch 0.13.0+cu116 (`RuntimeError: "histogram_cpu" not implemented for 'Byte'`)<|||||>> I'm still having this bug with torch 0.13.0+cu116 (`RuntimeError: "histogram_cpu" not implemented for 'Byte'`) @julien-blanchon Did you solve it? I am getting this error when reporting to wandb using `torch==2.0.1`, `wandb==0.15.5` and `accelerate==0.20.3`.<|||||>No, I'm still waiting for a patch here: https://github.com/pytorch/pytorch/issues/75667
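A minimal reproduction sketch of the error discussed above, independent of wandb (on torch versions before 1.12 the CPU bfloat16 path raises; per the later comments it works from 1.12 on):

```python
import torch

grads = torch.randn(1000).to(torch.bfloat16)  # CPU tensor, roughly what wandb sees via grad.data
try:
    grads.histc(bins=64, min=float(grads.min()), max=float(grads.max()))
except RuntimeError as err:
    print(err)  # on torch < 1.12: "histogram_cpu" not implemented for 'BFloat16'
```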
transformers
16,688
closed
Cannot train M2M100 using run_translation.py and DeepSpeed ZeRO stage 3
## Environment info - `transformers` version: 4.18.0 - Platform: Linux - Python version: 3.8.12 - PyTorch version (GPU?): 1.10 - Tensorflow version (GPU?): - - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Deepspeed ZeRO stage 3 Library Versions: - deepspeed 0.6.1 - transformers 4.18.0 - pytorch 1.10 ### Who can help @[stas00](https://github.com/stas00) ## Information The problem arises when: * I try to finetune the Hugging Face `facebook/m2m100_418M` model using the `run_translation.py` script under `transformers/examples/pytorch/translation/run_translation.py` and deepspeed ZeRO stage 3. If I use `t5-small` instead of `facebook/m2m100_418M` then the model trains. Also, if I use `facebook/m2m100_418M` and `ds_config_zero2.json` instead of `ds_config_zero3.json`, then the models trains again. ## To reproduce ``` deepspeed run_translation.py \ --deepspeed ds_config_zero3.json \ --model_name_or_path facebook/m2m100_418M \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 8 \ --output_dir output_dir --overwrite_output_dir \ --fp16 \ --do_train --do_eval --do_predict \ --max_train_samples 500 --max_eval_samples 50 --max_predict_samples 50 \ --num_train_epochs 3 \ --dataset_name wmt16 --dataset_config "ro-en" \ --source_lang en --target_lang ro \ --predict_with_generate --forced_bos_token ro ``` where: - `run_translation.py` is the same file as in `transformers/examples/pytorch/translation/run_translation.py` - `ds_config_zero3.json` is the same file as in `transformers/tests/deepspeed/ds_config_zero3.json` Error: ``` Traceback (most recent call last): File "run_translation.py", line 636, in <module> main() File "run_translation.py", line 553, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1422, in train tr_loss_step = self.training_step(model, inputs) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2011, in training_step loss = self.compute_loss(model, inputs) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2043, in compute_loss outputs = model(**inputs) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/opt/conda/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 11, in wrapped_fn return func(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1556, in forward loss = self.module(*inputs, **kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl result = forward_call(*input, **kwargs) File "/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1306, in forward outputs = self.model( File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl result = forward_call(*input, **kwargs) File "/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1164, in forward encoder_outputs = self.encoder( File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl result = forward_call(*input, **kwargs) File "/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 819, in forward layer_outputs = encoder_layer( File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl result = 
forward_call(*input, **kwargs) File "/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 379, in forward hidden_states = self.self_attn_layer_norm(hidden_states) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1109, in _call_impl result = hook(self, input) File "/opt/conda/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 11, in wrapped_fn return func(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 1411, in _pre_forward_module_hook self.pre_sub_module_forward_function(module) File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context return func(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 1528, in pre_sub_module_forward_function self.param_coordinator.fetch_sub_module(sub_module) File "/opt/conda/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 11, in wrapped_fn return func(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context return func(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 358, in fetch_sub_module raise RuntimeError( RuntimeError: tracing error at step 42: expected the next 2 parameters in the parameter fetch queue to be ({'id': 26, 'status': 'AVAILABLE', 'numel': 1024, 'ds_numel': 1024, 'shape': (1024,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {24}}, {'id': 27, 'status': 'AVAILABLE', 'numel': 1024, 'ds_numel': 1024, 'shape': (1024,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {24}}) but got ({'id': 115, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 1024, 'shape': (0,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': set()}, {'id': 116, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 1048576, 'shape': (0,), 'ds_shape': (1024, 1024), 'requires_grad': True, 'grad_shape': None, 'persist': False, 'active_sub_modules': set()}). 1%|█ | 1/189 [00:01<04:33, 1.45s/it] [2022-04-10 20:34:32,488] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 41615 [2022-04-10 20:34:32,488] [ERROR] [launch.py:184:sigkill_handler] ['/opt/conda/bin/python3.8', '-u', 'run_translation.py', '--local_rank=0', '--deepspeed', 'config/ds_config_zero3.json', '--model_name_or_path', 'facebook/m2m100_418M', '--per_device_train_batch_size', '8', '--per_device_eval_batch_size', '8', '--output_dir', 'output_dir', '--overwrite_output_dir', '--fp16', '--do_train', '--do_eval', '--do_predict', '--max_train_samples', '500', '--max_eval_samples', '50', '--max_predict_samples', '50', '--num_train_epochs', '3', '--dataset_name', 'wmt16', '--dataset_config', 'ro-en', '--source_lang', 'en', '--target_lang', 'ro', '--predict_with_generate', '--forced_bos_token', 'ro'] exits with return code = 1 ``` ## Expected behavior The model trains. 
## Additional info Changing deepspeed version from 0.6.1 to 0.5.10 and transformers version from 4.18.0 to 4.16.2, results in the following error: ``` Traceback (most recent call last): File "run_translation.py", line 636, in <module> main() File "run_translation.py", line 553, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1365, in train tr_loss_step = self.training_step(model, inputs) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1956, in training_step loss = self.deepspeed.backward(loss) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1697, in backward self.optimizer.backward(loss) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 2944, in backward self.loss_scaler.backward(loss.float(), retain_graph=retain_graph) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 53, in backward scaled_loss.backward(retain_graph=retain_graph) File "/opt/conda/lib/python3.8/site-packages/torch/_tensor.py", line 307, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "/opt/conda/lib/python3.8/site-packages/torch/autograd/__init__.py", line 154, in backward Variable._execution_engine.run_backward( File "/opt/conda/lib/python3.8/site-packages/torch/autograd/function.py", line 199, in apply return user_fn(self, *args) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 562, in backward ctx.pre_backward_function(ctx.module) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 1456, in _run_before_backward_function self.pre_sub_module_backward_function(sub_module) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 1551, in pre_sub_module_backward_function self.param_coordinator.prefetch_next_sub_modules(sub_module, File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 358, in prefetch_next_sub_modules params_to_prefetch = self.prefetch_coordinator.get_params_to_prefetch( File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 220, in get_params_to_prefetch if sub_module.id != self.sub_module_trace[self.step_id]: IndexError: list index out of range 1%|█ | 1/189 [00:01<04:02, 1.29s/it] [2022-04-10 20:44:02,482] [INFO] [launch.py:160:sigkill_handler] Killing subprocess 45884 [2022-04-10 20:44:02,482] [ERROR] [launch.py:166:sigkill_handler] ['/opt/conda/bin/python3.8', '-u', 'run_translation.py', '--local_rank=0', '--deepspeed', 'config/ds_config_zero3.json', '--model_name_or_path', 'facebook/m2m100_418M', '--per_device_train_batch_size', '8', '--per_device_eval_batch_size', '8', '--output_dir', 'output_dir', '--overwrite_output_dir', '--fp16', '--do_train', '--do_eval', '--do_predict', '--max_train_samples', '500', '--max_eval_samples', '50', '--max_predict_samples', '50', '--num_train_epochs', '3', '--dataset_name', 'wmt16', '--dataset_config', 'ro-en', '--source_lang', 'en', '--target_lang', 'ro', '--predict_with_generate', '--forced_bos_token', 'ro'] exits with return code = 1 ```
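For context, a minimal sketch of the stochastic layer-skipping (LayerDrop) pattern that the diagnosis further down in the comments points to; because layers are randomly dropped from the forward pass during training, ZeRO-3 can see a different parameter-fetch order from one step to the next (illustrative, not the actual modeling code):

```python
import torch


def encoder_forward(hidden_states, layers, layerdrop: float, training: bool):
    for layer in layers:
        # Each layer is skipped with probability `layerdrop` during training, so the
        # set of parameters touched in the forward pass changes between steps.
        if training and torch.rand(1).item() < layerdrop:
            continue
        hidden_states = layer(hidden_states)
    return hidden_states
```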
04-10-2022 20:50:56
04-10-2022 20:50:56
Thank you, @evros-chris I can reproduce this and I think this might be related to the peculiarity of this model that creates a new Parameter in `forward`, which currently Deepspeed isn't equipped to deal with. It somehow needs to be repartition its flattened tensors with the new Parameter. I have reported this problem [here](https://github.com/microsoft/DeepSpeed/pull/1606). But it could be something else. There is another issue with this model: ``` PYTHONPATH=src deepspeed --master_port 6666 --num_nodes 1 --num_gpus 2 examples/pytorch/translation/run_translation.py --train_file tests/fixtures/tests_samples/wmt_en_ro/train.json --source_lang en --target_lang ro --model_name_or_path hf-internal-testing/tiny-random-m2m_100 --do_train --max_train_samples 4 --per_device_train_batch_size 2 --num_train_epochs 1 --fp16 --report_to none --overwrite_output_dir --deepspeed tests/deepspeed/ds_config_zero3.json --output_dir /tmp/tmpi4k4wz8s --save_steps 1 [...] File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/m2m_100/modeling_m2m_100.py", line 175, in forward self.weights.requires_grad = False File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1176, in __getattr__ self.make_weights(max_pos + self.offset, self.embedding_dim, self.padding_idx) File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/m2m_100/modeling_m2m_100.py", line 134, in make_weights self.weights.requires_grad = False File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1176, in __getattr__ return _parameters[name] File "/mnt/nvme0/code/github/00optimize/deepspeed/deepspeed/runtime/zero/stage3.py", line 150, in __getitem__ if param.ds_status == ZeroParamStatus.NOT_AVAILABLE: AttributeError: 'Parameter' object has no attribute 'ds_status'return _parameters[name] ``` that's the one I reported to deepspeed. But I think they are related. The solution for the latter one is to ensure that when creating the model `config.max_position_embeddings` is set to the longest_seqlen so that it doesn't need to remake the positional embeddings and thus it won't create a new `Parameter` once it started training. I'm trying to figure out where the problem is coming from. I will keep you posted once I make some progress.<|||||>Thanks a lot @stas00!<|||||>OK, it's the `LayerDrop` that causes this problem. 
Here is one of them: https://github.com/huggingface/transformers/blob/69233cf03be5fbce0492f3997e139c4d05499e27/src/transformers/models/m2m_100/modeling_m2m_100.py#L799-L804 To quickly unblock you set the layerdrop probability directly in the model config or the application to `0.0`: ``` diff --git a/examples/pytorch/translation/run_translation.py b/examples/pytorch/translation/run_translation.py index f7e98276d..f5af70417 100755 --- a/examples/pytorch/translation/run_translation.py +++ b/examples/pytorch/translation/run_translation.py @@ -349,6 +349,9 @@ def main(): revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) + #config.max_position_embeddings = 2048 + config.encoder_layerdrop = 0 + config.decoder_layerdrop = 0 model = AutoModelForSeq2SeqLM.from_pretrained( model_args.model_name_or_path, from_tf=bool(".ckpt" in model_args.model_name_or_path), ``` Meanwhile I will work on a workaround - since Deepspeed doesn't expect layers disappearing from `forward` stack.<|||||>OK, please try this PR: https://github.com/huggingface/transformers/pull/16717 It should work now with normal config and `LayerDrop`<|||||>Thanks a lot for your immediate help and explanation of the error @stas00! Setting `config.encoder_layerdrop` = 0 and `config.decoder_layerdrop = 0` works! However, I tried the PR: https://github.com/huggingface/transformers/pull/16717 and I still get the error below. To reproduce: ``` deepspeed examples/pytorch/translation/run_translation.py \ --deepspeed tests/deepspeed/ds_config_zero3.json \ --model_name_or_path facebook/m2m100_418M \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 8 \ --output_dir output_dir --overwrite_output_dir \ --fp16 \ --do_train --do_eval --do_predict \ --max_train_samples 500 --max_eval_samples 50 --max_predict_samples 50 \ --num_train_epochs 3 \ --dataset_name wmt16 --dataset_config "ro-en" \ --source_lang en --target_lang ro \ --predict_with_generate --forced_bos_token ro ``` Error: ``` Traceback (most recent call last): File "examples/pytorch/translation/run_translation.py", line 636, in <module> main() File "examples/pytorch/translation/run_translation.py", line 553, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1422, in train tr_loss_step = self.training_step(model, inputs) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2011, in training_step loss = self.compute_loss(model, inputs) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2043, in compute_loss outputs = model(**inputs) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/opt/conda/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 11, in wrapped_fn return func(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1556, in forward loss = self.module(*inputs, **kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl result = forward_call(*input, **kwargs) File "/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1306, in forward outputs = self.model( File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl result = forward_call(*input, **kwargs) File 
"/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1164, in forward Traceback (most recent call last): File "examples/pytorch/translation/run_translation.py", line 636, in <module> encoder_outputs = self.encoder( File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl result = forward_call(*input, **kwargs) File "/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 819, in forward main() File "examples/pytorch/translation/run_translation.py", line 553, in main layer_outputs = encoder_layer( File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1422, in train result = forward_call(*input, **kwargs) File "/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 379, in forward hidden_states = self.self_attn_layer_norm(hidden_states) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1109, in _call_impl tr_loss_step = self.training_step(model, inputs) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2011, in training_step result = hook(self, input) File "/opt/conda/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 11, in wrapped_fn return func(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 1411, in _pre_forward_module_hook self.pre_sub_module_forward_function(module) File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context loss = self.compute_loss(model, inputs) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2043, in compute_loss return func(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 1528, in pre_sub_module_forward_function self.param_coordinator.fetch_sub_module(sub_module) File "/opt/conda/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 11, in wrapped_fn return func(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context outputs = model(**inputs) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return func(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 358, in fetch_sub_module raise RuntimeError( RuntimeError: tracing error at step 42: expected the next 2 parameters in the parameter fetch queue to be ({'id': 26, 'status': 'AVAILABLE', 'numel': 1024, 'ds_numel': 1024, 'shape': (1024,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {24}}, {'id': 27, 'status': 'AVAILABLE', 'numel': 1024, 'ds_numel': 1024, 'shape': (1024,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {24}}) but got ({'id': 115, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 1024, 'shape': (0,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': set()}, {'id': 116, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 1048576, 'shape': (0,), 'ds_shape': (1024, 1024), 'requires_grad': True, 'grad_shape': None, 'persist': False, 'active_sub_modules': set()}). 
return forward_call(*input, **kwargs) File "/opt/conda/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 11, in wrapped_fn return func(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1556, in forward loss = self.module(*inputs, **kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl result = forward_call(*input, **kwargs) File "/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1306, in forward outputs = self.model( File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl result = forward_call(*input, **kwargs) File "/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1164, in forward encoder_outputs = self.encoder( File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl result = forward_call(*input, **kwargs) File "/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 819, in forward layer_outputs = encoder_layer( File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl result = forward_call(*input, **kwargs) File "/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 379, in forward hidden_states = self.self_attn_layer_norm(hidden_states) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1109, in _call_impl result = hook(self, input) File "/opt/conda/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 11, in wrapped_fn return func(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 1411, in _pre_forward_module_hook self.pre_sub_module_forward_function(module) File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context return func(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 1528, in pre_sub_module_forward_function self.param_coordinator.fetch_sub_module(sub_module) File "/opt/conda/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 11, in wrapped_fn return func(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context return func(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 358, in fetch_sub_module raise RuntimeError( RuntimeError: tracing error at step 42: expected the next 2 parameters in the parameter fetch queue to be ({'id': 26, 'status': 'AVAILABLE', 'numel': 1024, 'ds_numel': 1024, 'shape': (1024,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {24}}, {'id': 27, 'status': 'AVAILABLE', 'numel': 1024, 'ds_numel': 1024, 'shape': (1024,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {24}}) but got ({'id': 115, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 1024, 'shape': (0,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': set()}, {'id': 116, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 1048576, 'shape': (0,), 'ds_shape': (1024, 1024), 'requires_grad': True, 'grad_shape': None, 'persist': False, 'active_sub_modules': set()}). 
1%|█▉ | 1/96 [00:01<03:09, 1.99s/it] [2022-04-13 20:44:34,034] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 598135 [2022-04-13 20:44:34,034] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 598136 [2022-04-13 20:44:34,034] [ERROR] [launch.py:184:sigkill_handler] ['/opt/conda/bin/python3.8', '-u', 'examples/pytorch/translation/run_translation.py', '--local_rank=1', '--deepspeed', 'tests/deepspeed/ds_config_zero3.json', '--model_name_or_path', 'facebook/m2m100_418M', '--per_device_train_batch_size', '8', '--per_device_eval_batch_size', '8', '--output_dir', 'output_dir', '--overwrite_output_dir', '--fp16', '--do_train', '--do_eval', '--do_predict', '--max_train_samples', '500', '--max_eval_samples', '50', '--max_predict_samples', '50', '--num_train_epochs', '3', '--dataset_name', 'wmt16', '--dataset_config', 'ro-en', '--source_lang', 'en', '--target_lang', 'ro', '--predict_with_generate', '--forced_bos_token', 'ro'] exits with return code = 1 ```<|||||>I suspect you are still using the older `transformers`, can you make sure to uninstall it first, ensure it's not there and then install from that branch? Thank you! or alternatively make sure to set `PYTHONPATH` to where the new source is Here is how I normally do this: ``` git clone https://github.com/huggingface/transformers cd transformers git checkout ds-m2m-layerdrop PYTHONPATH=src deepspeed examples/pytorch/translation/run_translation.py [...] ```<|||||>Hmm, no, you're right, I can reproduce the issue. I will get back to you once I get a chance to look at it.<|||||>update, nope, it works just fine. I have just suggested to you to use `PYTHONPATH` and haven't used it myself ;) Try again with: ``` git clone https://github.com/huggingface/transformers cd transformers git checkout ds-m2m-layerdrop PYTHONPATH=src deepspeed examples/pytorch/translation/run_translation.py --deepspeed tests/deepspeed/ds_config_zero3.json --model_name_or_path facebook/m2m100_418M --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --output_dir output_dir --overwrite_output_dir --fp16 --do_train --do_eval --do_predict --max_train_samples 500 --max_eval_samples 50 --max_predict_samples 50 --num_train_epochs 3 --dataset_name wmt16 --dataset_config "ro-en" --source_lang en --target_lang ro --predict_with_generate --forced_bos_token ro ```<|||||>Yes you are right, it does work! Thanks a lot for fixing this @stas00!
transformers
16,687
closed
Can't load pretrained TrOCR model
@NielsRogge I get this error when I try to load a local TrOCR checkpoint. ```python >>> processor = TrOCRProcessor.from_pretrained("./checkpoint-2") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/users/gpupro/gpu_tazi/.local/lib/python3.8/site-packages/transformers/processing_utils.py", line 186, in from_pretrained args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs) File "/usr/users/gpupro/gpu_tazi/.local/lib/python3.8/site-packages/transformers/processing_utils.py", line 230, in _get_arguments_from_pretrained args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs)) File "/usr/users/gpupro/gpu_tazi/.local/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 544, in from_pretrained tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)] File "/usr/users/gpupro/gpu_tazi/.local/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 564, in __getitem__ raise KeyError(key) KeyError: <class 'transformers.models.vision_encoder_decoder.configuration_vision_encoder_decoder.VisionEncoderDecoderConfig'> ``` This is the content of my checkpoint folder: ``` checkpoint-2 |-trainer_state.json |-preprocessor_config.json |-training_args.bin |-scaler.pt |-optimizer.pt |-scheduler.pt |-pytorch_model.bin |-rng_state.pth |-config.json ``` Yet, loading a TrOCR checkpoint from the hub works just fine.
04-10-2022 00:47:15
04-10-2022 00:47:15
I have exactly the same problem. If anyone finds a solution, it would be appreciated. If I manage to solve it, I will post it here ASAP. EDIT: I have not been able to follow the code trace exactly, but I believe the error is as follows. When the model is created from the `hub.py` file, the model class is `DeitConfig`; however, when the model is created from a configuration file, the model appears as `VisionEncoderDecoderConfig`. In the `transformers.models.auto.auto_factory` file, it is validated that the model is among the tokenizers defined in `TOKENIZER_MAPPING`, and if it is not, the error is raised. That is, the behavior of loading the same file from the hub and from local storage differs, and that is what is causing this problem. <|||||>Could be related to https://github.com/huggingface/transformers/issues/14884<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I have the same problem. Is there any solution to this?<|||||>@NouamaneTazi If the new version 4.19 still doesn't work with your local checkpoint, could you please upload your checkpoint/config files to your HF Hub repo, and provide a link to it? @emigomez Do you use a local checkpoint or a checkpoint from the Hub? Could you try with v4.19?<|||||>Hi @emigomez , if the solution proposed by @ydshieh doesn't work for you, what I did was to train the model as in the tutorials and, once trained, load the model from the generated checkpoint: ``` model = VisionEncoderDecoderModel.from_pretrained("/path/to/local/checkpoint") ``` And I load the preprocessor from the hub (using the same checkpoint as the pretrained model), for example, if you have finetuned the model from `microsoft/trocr-base-printed`: ``` preprocessor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-printed") ``` This way the combination model + preprocessor works for me, allowing me to use the models to predict. I hope it helps you! <|||||>Hi @ydshieh, I am using my local checkpoints obtained after fine-tuning, and yes, I am using v4.19. I have generated this pretrained model with ``` trainer = Seq2SeqTrainer(.....) 
trainer.train() trainer.save_model("./models") ``` Hi @CuarteroAlvaro I was trying to make the inference with my model with: ``` MODEL_PATH = "./models/checkpoints/checkpoint_10000/" # option 1 MODEL_PATH = "./models/" # option 2 processor = TrOCRProcessor.from_pretrained(MODEL_PATH) model = VisionEncoderDecoderModel.from_pretrained(MODEL_PATH) ``` My error appears when I was trying to load the processor, so as you suggested I load the one used in my training: ` processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-printed') ` This is the complete code that I'm using to infer: ``` from transformers import TrOCRProcessor, VisionEncoderDecoderModel import requests from PIL import Image import time import torch MODEL_PATH = "./models/checkpoints/checkpoint-10000/" # option 1 MODEL_PATH = "./models/" # option 2 processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-printed') model = VisionEncoderDecoderModel.from_pretrained(MODEL_PATH) device = 'cuda' if torch.cuda.is_available() else 'cpu' print("Running in device:", device) model.to(device) image = Image.open("0_0.tif").convert("RGB") timeini = time.time() pixel_values = processor(image, return_tensors="pt").pixel_values generated_ids = model.generate(pixel_values) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] timeend = time.time() - timeini print("\nResult: ", generated_text) print("Execution time: ", timeend) ``` With both MODEL_PATH options I obtain the next error: ``` $ python trocr_infer.py Running in device: cuda Traceback (most recent call last): File "trocr_infer.py", line 21, in <module> generated_ids = model.generate(pixel_values) File "C:\Users\MSI\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "C:\Users\MSI\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\generation_utils.py", line 1172, in generate model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation( File "C:\Users\MSI\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\generation_utils.py", line 525, in _prepare_encoder_decoder_kwargs_for_generation model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs) File "C:\Users\MSI\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\MSI\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\models\vit\modeling_vit.py", line 572, in forward embedding_output = self.embeddings( File "C:\Users\MSI\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\MSI\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\models\vit\modeling_vit.py", line 135, in forward embeddings = self.patch_embeddings(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding) File "C:\Users\MSI\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\MSI\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\models\vit\modeling_vit.py", line 191, in forward x = self.projection(pixel_values).flatten(2).transpose(1, 2) File "C:\Users\MSI\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl return 
forward_call(*input, **kwargs) File "C:\Users\MSI\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\conv.py", line 447, in forward return self._conv_forward(input, self.weight, self.bias) File "C:\Users\MSI\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\conv.py", line 443, in _conv_forward return F.conv2d(input, weight, bias, self.stride, RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor ``` And these are the files that I have under the previous folders: models/: `checkpoints/ config.json preprocessor_config.json pytorch_model.bin runs/ training_args.bin` models/checkpoints/checkpoint-10000/: `config.json optimizer.pt preprocessor_config.json pytorch_model.bin rng_state.pth scaler.pt scheduler.pt trainer_state.json training_args.bin` Do you know how to fix this problem? Thank you both for your quick reply!! <|||||>@emigomez Would you mind to share a complete code snippet of your training script and arguments, so we can reproduce the issue quickly in order to identify the cause? Without training, I couldn't reproduce the issue (by just loading/saving/loading-again) ``` from transformers import TrOCRProcessor, VisionEncoderDecoderModel processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-printed') model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-base-printed') model.save_pretrained("./local_checkpoint") loaded_model = VisionEncoderDecoderModel.from_pretrained('./local_checkpoint') print(loaded_model) ```<|||||>The problem was that my model is on the GPU, but my data is on the CPU. So, I need to send my data to GPU changing on the previous code that I have shared: `generated_ids = model.generate(pixel_values.to(device))` And that is working. Thank you!!<|||||>Yes, i was just writing the answer! <|||||>@ydshieh, The problem appears when trying to load the preprocessor from a local checkpoint. I think this code will reproduce the issue: ```python from transformers import TrOCRProcessor, VisionEncoderDecoderModel processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-printed') model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-base-printed') model.save_pretrained("./local_checkpoint") loaded_preprocessor = TrOCRProcessor.from_pretrained('./local_checkpoint') loaded_model = VisionEncoderDecoderModel.from_pretrained('./local_checkpoint') print(loaded_preprocessor, loaded_model) ```<|||||>Hi, @CuarteroAlvaro In you code snippet, there is 1 line missing ``` processor.save_pretrained('./local_checkpoint') # <-- this is required ``` The following will work ``` from transformers import TrOCRProcessor, VisionEncoderDecoderModel processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-printed') model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-base-printed') model.save_pretrained("./local_checkpoint") processor.save_pretrained('./local_checkpoint') # <-- this is required loaded_preprocessor = TrOCRProcessor.from_pretrained('./local_checkpoint') loaded_model = VisionEncoderDecoderModel.from_pretrained('./local_checkpoint') print(loaded_preprocessor, loaded_model) ``` But I can't reproduce the error shown in @NouamaneTazi 's original issue. ``` KeyError: <class 'transformers.models.vision_encoder_decoder.configuration_vision_encoder_decoder.VisionEncoderDecoderConfig'> ```<|||||>I don't seem to be able to reproduce the problem, so I'll mark this as resolved! 
Thanks for your help 🤗
transformers
16,686
closed
fixed crash when deleting older checkpoint and files with name f"{checkpoint_prefix}-*" exist
What does this PR do? I create an archive of older checkpoints during training; the archive has a name of the form `f"{checkpoint_prefix}-*"` plus a `.zip`/`.tar` extension. Previously, `glob(f"{checkpoint_prefix}-*")` picked up all files and folders whose names start with the checkpoint prefix, while `shutil.rmtree(checkpoint)` later expects a folder name; since at some point it may get a zip file, it crashes training. Adding an `if os.path.isdir(x)` check keeps only folders in `glob_checkpoints` (a standalone sketch of this filtering is shown after this description). Let's say the output folder structure looks like this (with `save_limit=5`):
```
checkpoint-36000
checkpoint-35000
checkpoint-34000
checkpoint-33000
checkpoint-33000.zip
```
The code then attempts to remove the oldest checkpoint; since a file (checkpoint-33000.zip) is present and gets passed to `shutil.rmtree(checkpoint)`, the deletion will fail. Keeping files out of `glob_checkpoints` fixes this (checking that every entry is a folder, since checkpoints are folders, not single files). **Before submitting:** - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? Who can review? @sgugger
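Here is the filtering idea in standalone sketch form (directory and variable names are placeholders, not the exact `Trainer` code):

```python
import os
from glob import glob

output_dir = "output_dir"  # placeholder for the Trainer output directory
checkpoint_prefix = "checkpoint"

# keep folders only, so archives such as checkpoint-33000.zip are never passed to shutil.rmtree
glob_checkpoints = [
    path
    for path in glob(os.path.join(output_dir, f"{checkpoint_prefix}-*"))
    if os.path.isdir(path)
]
print(glob_checkpoints)
```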
04-10-2022 00:36:26
04-10-2022 00:36:26
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger
transformers
16,685
closed
Translate index.mdx (to ES) and add Spanish models to quicktour.mdx examples
# What does this PR do? Related to issue #15947 1. Translate `index.mdx` to Spanish 2. Replace English models and a dataset in `quicktour.mdx` with Spanish versions. ## Relevant - Translation of `index.mdx` included the list of compatible models (not the papers´ and models´ names). Since this should not be updated manually, I can come back to the original text if required. - Spanish models selected for quicktour are: - clasificador = pipeline('sentiment-analysis', model="pysentimiento/robertuito-sentiment-analysis") - reconocedor_de_voz = pipeline("automatic-speech-recognition", model="jonatasgrosman/wav2vec2-large-xlsr-53-spanish", device=0) - Spanish ASR dataset selected is: - dataset = datasets.load_dataset("PolyAI/minds14", name="es-ES", split="train") ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
04-09-2022 22:08:39
04-09-2022 22:08:39
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger, I ran `make style` and made some manual debug but cannot get `check_code_quality` to pass. Any idea why could it be? I think the error is in[ these lines](https://github.com/huggingface/transformers/pull/16685/files#diff-517e793a1e18859abaf368b0c3f4d344231747ebc1ceb5154ca94159a7207bf1R110-R116): ` A continuación, carga el dataset (ve 🤗 Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart.html) para más detalles) sobre el que quisieras iterar. Por ejemplo, vamos a cargar el dataset [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14): ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="es-ES", split="train") # doctest: +IGNORE_RESULT ``` `<|||||>Make sure you have the latest version of `hf-doc-builder` installed (`pip install hf-doc-builder -U`) then run `make style`.<|||||>EDIT: Fixed by @osanseviero in PR #17197. Sorry @sgugger, even with updating `hf-doc-builder` to the latest version, `0.4.0`, and running `make style` the `check_code_quality` error continues appearing. I am not able to find the error and the [feedback in CircleCI](https://app.circleci.com/pipelines/github/huggingface/transformers/39893/workflows/35203812-3c00-42fa-b7e6-2d1789c21b12/jobs/449693) is not revealing (`ValueError: 1 files should be restyled!`). When running `make style` I get this error (I tried to debug in other ways but the error continues): ![Screen Shot 2022-05-11 at 23 45 35](https://user-images.githubusercontent.com/4755430/167993881-683ab4e8-7662-4697-9d1e-8e5e929ddd21.png) In the meantime, I merged the PR with the error so we can get the index and quicktour before the release tomorrow. However, please let me know if we should proceed in another way. Sorry for this cumbersome merge. They are looking fine in the docs: - [Index](https://huggingface.co/docs/transformers/main/es/index); - [Quicktour](https://huggingface.co/docs/transformers/main/es/quicktour).
transformers
16,684
closed
`FlaxBartForConditionalGeneration` has a `.encode` method but `BartForConditionalGeneration` does not
## Environment info - `transformers` version: 4.18.0 - Platform: Linux-5.11.0-1018-gcp-x86_64-with-glibc2.31 - Python version: 3.10.4 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0+cu102 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.4.1 (tpu) - Jax version: 0.3.5 - JaxLib version: 0.3.5 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patil-suraj ## Information Model I am using: BART ## To reproduce Flax version (working): ```python from transformers import BartTokenizer, FlaxBartForConditionalGeneration tokenizer = BartTokenizer.from_pretrained('facebook/bart-base') model = FlaxBartForConditionalGeneration.from_pretrained('facebook/bart-base') inputs = tokenizer('Travelers wait about an hour and a half to cross the Tower.', return_tensors='jax') outputs = model.encode(**inputs) print(outputs) # OK ``` PyTorch version (not working): ```python from transformers import BartTokenizer, BartForConditionalGeneration tokenizer = BartTokenizer.from_pretrained('facebook/bart-base') model = BartForConditionalGeneration.from_pretrained('facebook/bart-base') inputs = tokenizer('Travelers wait about an hour and a half to cross the Tower.', return_tensors='pt') outputs = model.encode(**inputs) # AttributeError: 'BartForConditionalGeneration' object has no attribute 'encode' ``` ## Expected behavior The PyTorch version should also have a `.encode` method to generate the encoder output, as the Flax version does. ## Actual behavior The PyTorch version does not a `.encode` method.
04-09-2022 16:08:52
04-09-2022 16:08:52
Hi! The `encode` method does not exist in the PT model because it's possible to access the encoder using `model.get_encoder` and then call it to run the encoder. Flax does not allow accessing modules like this, so we need to expose explicit methods like `encode`. This is not required for PT.<|||||>Thanks for the explanation!
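Here is a minimal sketch of that PyTorch approach, reusing the example sentence from the issue (illustrative only):

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
inputs = tokenizer("Travelers wait about an hour and a half to cross the Tower.", return_tensors="pt")

# fetch the encoder module explicitly and run it on its own
encoder = model.get_encoder()
with torch.no_grad():
    encoder_outputs = encoder(**inputs)
print(encoder_outputs.last_hidden_state.shape)
```

The resulting `last_hidden_state` should correspond to what the Flax `encode` method returns.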
transformers
16,683
closed
Option to change Ray's gridsearch scope
# 🚀 Feature request It would be great if we could get more control over how Ray selects the best trial in a hyperparameter search. Currently, the default value for [`get_best_trial`](https://docs.ray.io/en/latest/tune/api_docs/analysis.html#ray.tune.ExperimentAnalysis.get_best_trial) is used here: https://github.com/huggingface/transformers/blob/7c5d79912a21880ce13d77881940458e90d98917/src/transformers/integrations.py#L299 namely `scope="last"`. So for each trial, it will simply take the _last_ checkpoint of that trial, and use its performance to compare with the other trials. This is not always ideal, as it may very well be that some trials converge sooner than others and then overfit, leading to poor evaluation scores in their last checkpoints. Fortunately, Ray allows other options, such as `"all"`, which takes the best checkpoint of each trial instead of the last one, and compares those. ## Motivation Currently there is no way to pass this through to Ray from within `transformers` as far as I can tell, yet it is an important aspect of hyperparameter search. ## Your contribution I can work on this if requested, although I am still looking for input on how to best tackle this. One could simply add an argument to TrainingArguments, e.g. `ray_scope`, and then change this line https://github.com/huggingface/transformers/blob/7c5d79912a21880ce13d77881940458e90d98917/src/transformers/integrations.py#L299 to ```python best_trial = analysis.get_best_trial(metric="objective", mode=direction[:3], scope=trainer.args.ray_scope) ```
04-09-2022 13:24:07
04-09-2022 13:24:07
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Yes, I'd still like to see this added. I can do a quick PR if needed.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Another bump to remind myself that I'll do a PR for this in the coming month.
transformers
16,682
closed
Type hint complete Albert model file.
# What does this PR do? Type hint the Albert Model file for PyTorch Model. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Part of #16059 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @Rocketknight1
04-09-2022 11:35:37
04-09-2022 11:35:37
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi, I'm sorry for the delay with this one! We really appreciate it, and I'm trying to get a chance to look through it all ASAP!<|||||>Thanks a lot, and I am glad that you liked the work. More of these will be coming in a few days for other models, so it would be great to know what level of detail is needed for the type annotations.<|||||>@karthikrangasai I spoke to the team and the conclusion was that we should just use `Union[AlbertForPreTrainingOutput, Tuple]` - as well as being easier for you, when these type hints are copied into the documentation, it'll be a lot more readable that way. That said, I respect the dedication and precision that went into making it, so I'm a little sad to see it go.<|||||>Hello @Rocketknight1 , Sure, no worries. I can make the changes and update the Pull Request in a while. Although I am not sure what the reason is for having a Tuple return type, a suggestion is to maybe remove the "Tuple" return type and make it the respective output type for all models. This would give keyword-based output values like ``` output = AlbertModel(**inputs) output.last_hidden_state output.pooler_output ``` and this might make for a cleaner API. <|||||>Seeing some code quality issues, I'm guessing because we changed our versions for code formatting tools and might need to rebase. Let me check!<|||||>It seems like you'll need to pull commits from our repo to the main branch of your repo, then rebase your branch, then force push to update. After that, the files should be updated and the error should go away!<|||||>Hello @Rocketknight1, I have updated the PR with the main branch. All tests are passing.<|||||>Great job. Thank you for this PR, it's much appreciated!
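For reference, the convention agreed on above looks roughly like the following sketch (a hypothetical, abbreviated signature, not the actual Albert implementation):

```python
from typing import Optional, Tuple, Union

import torch

from transformers.models.albert.modeling_albert import AlbertForPreTrainingOutput


class AlbertForPreTrainingSketch:
    # abbreviated, illustrative signature only; the real model takes many more arguments
    def forward(
        self,
        input_ids: Optional[torch.LongTensor] = None,
        attention_mask: Optional[torch.FloatTensor] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[AlbertForPreTrainingOutput, Tuple]:
        ...
```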
transformers
16,681
closed
`LongT5`: Efficient Text-To-Text Transformer for Long Sequences
# 🌟 New model addition -- LongT5: Efficient Text-To-Text Transformer for Long Sequences ## Model description LongT5 is an extension of the [T5 model](https://github.com/google-research/text-to-text-transfer-transformer) that handles long sequence inputs more efficiently. We integrated attention ideas from long-input transformers [ETC](https://arxiv.org/abs/2004.08483), and adopted pre-training strategies from summarization pre-training [PEGASUS](https://arxiv.org/abs/1912.08777) into the scalable T5 architecture. The result is a new attention mechanism we call Transient Global (TGlobal), which mimics ETC’s local/global attention mechanism, but without requiring additional side-inputs. We are able to achieve state-of-the-art results on several summarization and question answering tasks, as well as outperform the original T5 models on these tasks. *Description copied from https://github.com/google-research/longt5/blob/master/README.md.* The full paper is currently available on arXiv -- [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916). ## Open source status The model has its own repository available [here](https://github.com/google-research/longt5). * [x] the model implementation is available - the model implementation is available at the [Google FlaxFormer repo](https://github.com/google/flaxformer/tree/main/flaxformer/architectures/longt5). * [x] the model weights are available: Currently, Google has released five checkpoints listed in the [LongT5 repo](https://github.com/google-research/longt5) - **LongT5-Local-Base** (250 million parameters) - **LongT5-TGlobal-Base** (250 million parameters) - **LongT5-Local-Large** (780 million parameters) - **LongT5-TGlobal-Large** (780 million parameters) - **LongT5-TGlobal-XL** (3 billion parameters) * [x] who are the authors: @mandyguo-xyguo, Joshua Ainslie, @duthus, @santiontanon, @nijianmo, @yhsung, @yinfeiy, (not sure about some GitHub names, so I will be happy if anyone can complete it :] ) ### Additional context If none of the original authors are interested in porting the model into `transformers`, I'll be more than happy to work on this :].
04-09-2022 10:33:37
04-09-2022 10:33:37
Thanks a lot for opening the issue @stancld ! I'm quite busy with other projects at the moment so I'd be more than happy to guide you here! Do you want to give it a try?<|||||>Also cc @stefan-it @peregilk @versae <|||||>This might also be helpful: https://github.com/patrickvonplaten/t5-mtf-to-hf-converter<|||||>Here some info for a T5X -> HF conversion script: https://github.com/google-research/t5x/issues/198<|||||>Feel free to start working on it - I'm more than happy to help you if you're stuck :-) Also cc @patil-suraj @LysandreJik @craffel for notification<|||||>https://github.com/google-research/longt5/issues/2#issue-1198955086<|||||>This is super cool! Happy to help if anyone wants to give it a try :) <|||||>@patrickvonplaten @patil-suraj I'm gonna give it a try and will try to open a draft PR as soon as I have some progress! :] Also @patrickvonplaten, thanks a lot for all the useful links you have posted here! :]
transformers
16,680
closed
Trying to Train Lonformer but from standard transfomer file, error AttributeError: module 'wandb' has no attribute 'run'. Even when I have not install Wandb
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.18.0 - Platform: Windows - Python version: 3.7.13 - PyTorch version (GPU?): Yes - Tensorflow version (GPU?): No - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. @sgugger Models: - Longformer, BigBird: @ydshieh Library: - Tokenizers: @SaulLu - Trainer: @sgugger --> ## Information I am using the Longformer model in a fresh environment with the packages below: torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 tqdm mlflow pandas transformers datasets PyYAML boto3 matplotlib sklearn python-dotenv s3fs The problem starts when the Trainer API is executed. In the Transformers Python file an error comes up stating there is no module wandb, but I haven't installed wandb. My understanding is that this integration should only trigger if I have installed wandb. I am getting this error in integrations.py in the Transformers package: File "D:\Virtual_Env\GitHub_projects\AIOPS\DVC\Fact_Checking_Health_related_Claims\env\lib\site-packages\transformers\integrations.py", line 592, in setup if self._wandb.run is None: AttributeError: module 'wandb' has no attribute 'run' My code that triggers the Trainer API can be found here: https://github.com/nabarunbaruaAIML/Fact_Checking_Health_related_Claims/blob/master/src/stage_03_train.py The task I am working on is: * Text classification on a publicly available dataset ## To reproduce Steps to reproduce the behaviour: 1. Set up the environment 2. Execute the Python files stage_01_load_save & stage_02_prepare_dataset, which download the public dataset and convert it to a Dataset 3. Lastly, execute the Python file to start training via the Trainer API: python src/stage_03_train.py **Github:** https://github.com/nabarunbaruaAIML/Fact_Checking_Health_related_Claims/tree/master/src ![image](https://user-images.githubusercontent.com/64695833/162567205-ec7f3584-9f32-42cf-9f8c-00ee452ebffa.png) ## Expected behavior Training should happen
04-09-2022 10:06:57
04-09-2022 10:06:57
Got the solution. In a previous development setup I had set the wandb global environment variable to true, and I suppose the Transformers library checks for that variable and, if found, tries to set up the integration, which is why this issue happened. To fix it, if you're not using wandb, execute `export WANDB_DISABLED=true` or add it to your env file (a short sketch of both options follows). This issue can be closed now.
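A minimal sketch of both options (the output directory is just a placeholder):

```python
import os

# option 1: the environment variable mentioned above, set from Python before the Trainer is created
os.environ["WANDB_DISABLED"] = "true"

# option 2: opt out of all reporting integrations through the training arguments
from transformers import TrainingArguments

training_args = TrainingArguments(output_dir="output", report_to="none")
```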
transformers
16,679
closed
Enable more test_torchscript
# What does this PR do? Enable more `test_torchscript` (in 30 files) by updating `_create_and_check_torchscript`. (There are still 21 files with 23 places being `False` at this moment - they still give errors.) The main place to review is in `test_modeling_common.py`. The changes in model specific test files are removing lines regarding `test_torchscript = True/False`.
04-09-2022 07:55:57
04-09-2022 07:55:57
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,678
closed
`FlaxBartForConditionalGeneration` should not require `input_ids` when `encoder_output` is provided
## Environment info - `transformers` version: 4.18.0 - Platform: Linux-5.11.0-1018-gcp-x86_64-with-glibc2.31 - Python version: 3.10.4 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0+cu102 (False) - Tensorflow version (GPU?): 2.8.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.4.1 (cpu) - Jax version: 0.3.5 - JaxLib version: 0.3.5 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patil-suraj ## Information Model I am using: BART ## To reproduce ```python from transformers import BartTokenizer, BartForConditionalGeneration, FlaxBartForConditionalGeneration tokenizer = BartTokenizer.from_pretrained('facebook/bart-base') model_flax = FlaxBartForConditionalGeneration.from_pretrained('facebook/bart-base') inputs = tokenizer('Travelers wait about an hour and a half to cross the Tower.', return_tensors='jax') outputs_flax = model_flax.encode(**inputs) generate_ids_flax = model_flax.generate(attention_mask=inputs.attention_mask, encoder_output=outputs_flax) # TypeError: FlaxGenerationMixin.generate() missing 1 required positional argument: 'input_ids' import numpy as onp import torch from transformers.modeling_outputs import BaseModelOutput def jax2pt(a): return torch.from_numpy(onp.asarray(a)) model_pt = BartForConditionalGeneration.from_pretrained('facebook/bart-base') outputs_pt = BaseModelOutput(last_hidden_state=jax2pt(outputs_flax.last_hidden_state)) generate_ids_pt = model_pt.generate(attention_mask=jax2pt(inputs.attention_mask), encoder_outputs=outputs_pt) print(generate_ids_pt) # OK ``` ## Expected behavior The Flax model should work as the PyTorch model. ## Actual behavior `FlaxBartForConditionalGeneration` requires `input_ids`, even if `encoder_output` is provided.
04-09-2022 06:47:41
04-09-2022 06:47:41
ping @patil-suraj<|||||>Hey @ayaka14732 ! The flax generate method does not support passing in `encoder_outputs`, so `input_ids` is a required input.<|||||>Thanks! Why does the flax generate method not support passing in `encoder_outputs`? Is that a bug or a feature? I am trying to generate with `encoder_outputs`, but I don't have the `input_ids`. That's because the `encoder_outputs` are produced from a customized encoder rather than the original one. What approach would you suggest to achieve this?<|||||>We can support passing `encoder_outputs` in flax generate. Would you like to open a PR for this ? Happy to help with it. We'll need to modify this method to skip calling `.encode` if the `encoder_outputs` are passed as a kwarg. https://github.com/huggingface/transformers/blob/bae9b6458cb4aebaf3a2eea1ab5d47904062f30f/src/transformers/generation_flax_utils.py#L142-L149<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I am going to make a PR<|||||>@patil-suraj Although the modification seems easy, I didn't find similar code in `generation_utils.py` to be used as a reference. In other words, the PyTorch version does not check `encoder_outputs` in `_prepare_encoder_decoder_kwargs_for_generation`: https://github.com/huggingface/transformers/blob/b9bb417324c0d9013c505dc39c016ab9ca0e23c8/src/transformers/generation_utils.py#L507-L527 Why is that?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I believe this is not completed
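For context, here is a rough sketch of the guard discussed above, written as a hypothetical helper; the method name comes from the linked snippet, but the exact signature in the Flax file is assumed rather than verified:

```python
def encode_unless_provided(model, input_ids, model_kwargs):
    """Hypothetical helper sketching the proposed change: skip the encoder pass
    when the caller already supplied `encoder_outputs`."""
    if model.config.is_encoder_decoder and model_kwargs.get("encoder_outputs") is None:
        # assumed signature -- the real Flax method may take additional arguments such as `params`
        model_kwargs = model._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
    return model_kwargs
```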
transformers
16,677
closed
The accuracy of test set is different in training and evaluating Bert
Hi friends, I've met a problem which has puzzled me for a couple of days. I want to train a BERT model for a two-class classification task on my custom data. I chose the BertForSequenceClassification model and I use the `Trainer` from `transformers`; the tokenizer, model, metrics and Trainer code are as follows:
```python
tokenizer = BertTokenizerFast.from_pretrained("bert_base_chinese")
model = BertForSequenceClassification.from_pretrained("bert-base-chinese", return_dict=True)

def computed_metric(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    precision, recall, f1, _ = precision_recall_fscore_support(labels, predictions, average="macro")
    acc = accuracy_score(labels, predictions)
    return {"accuracy": acc, "precision": precision, "f1": f1, "recall": recall}

trainer = Trainer(
    model,
    args,
    train_dataset=dataset_train,
    eval_dataset=dataset_test,
    tokenizer=tokenizer,
    compute_metrics=computed_metric,
)
```
The problem is: in the training process I get "eval_accuracy: 0.82"; however, after the training, I tested the model separately, and the measured accuracy was only 0.74. The code for the test is as follows:
```python
def tokenize(x):
    return tokenizer([x[0]], [x[1]], truncation=True, padding="max_length", max_length=512)

answer_model = []  # record the answers from the model
model.eval()
for i in range(len(data_text)):
    encoded_input = tokenize(data_text[i])
    encoded_input["input_ids"] = torch.tensor(encoded_input["input_ids"])
    encoded_input["token_type_ids"] = torch.tensor(encoded_input["token_type_ids"])
    encoded_input["attention_mask"] = torch.tensor(encoded_input["attention_mask"])
    output = model(**encoded_input)
    print(i)
    print(output)  # [-2.9663, 2.3750]
    if output["logits"][0][0] < output["logits"][0][1]:
        answer_model.append(1)
    else:
        answer_model.append(0)

acc = accuracy_score(data_label, answer_model)
print("accuracy ", acc)  # only 0.74
```
I have carefully checked the code for two days and retrained it, but the result is still different. I don't know where the problem might be. I hope to get your help!
04-09-2022 05:06:13
04-09-2022 05:06:13
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,676
closed
Adding new tokens to RobertaTokenizer gives very strange results - probably a bug.
I am trying to add new tokens to the Roberta tokenizer, but the results are rather strange.
```python
from transformers import RobertaTokenizerFast

t = RobertaTokenizerFast.from_pretrained('roberta-base')
print(t.tokenize("Apples are too_big for turtles."))
t.add_tokens(["too_big", "turtles"])
print(t.tokenize("Apples are too_big for turtles."))
```
The result before adding the tokens is: `['App', 'les', 'Ġare', 'Ġtoo', '_', 'big', 'Ġfor', 'Ġturtles', '.']` and this is OK, but after I add the new tokens the result is: `['App', 'les', 'Ġare', 'Ġ', 'too_big', 'Ġfor', 'Ġ', 'turtles', '.']` when it should be: `['App', 'les', 'Ġare', 'Ġtoo_big', 'Ġfor', 'Ġturtles', '.']` Is this a bug? Normally I doubt the tokenizer ever has Ġ alone without anything else as a token.
04-09-2022 00:58:24
04-09-2022 00:58:24
For additional context, the non-fast `RobertaTokenizer` returns the following results. ```python >>> print(tokenizer.tokenize("Apples are too_big for turtles.")) ['App', 'les', 'Ġare', 'Ġtoo', '_', 'big', 'Ġfor', 'Ġturtles', '.'] >>> tokenizer.add_tokens(["too_big","turtles"]) >>> print(tokenizer.tokenize("Apples are too_big for turtles.")) ['App', 'les', 'Ġare', 'too_big', 'for', 'turtles', '.'] ``` cc @SaulLu<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @Oxi84! Sorry for the late reply. As far as added tokens are concerned, you can set attributes when using a fast tokenizer (for the moment it won't work with a slow one). https://github.com/huggingface/transformers/blob/31616b8d613dcb7ac69b562d51b42d0db379f72f/src/transformers/tokenization_utils_base.py#L77-L91 So if the default behaviour doesn't suit you, you can for example change it to get the same behavior as the slow tokenizer: ```python from transformers import RobertaTokenizerFast, AddedToken t = RobertaTokenizerFast.from_pretrained('roberta-base') t.add_tokens([AddedToken("too_big", lstrip=True, rstrip=True), AddedToken("turtles", lstrip=True, rstrip=True)]) print(t.convert_ids_to_tokens(t.encode("Apples are too_big for turtles."))) # ['<s>', 'App', 'les', 'Ġare', 'too_big', 'for', 'turtles', '.', '</s>'] ``` Or if you want to be close to your initial proposal, you'll have to add a space at the beginning of the added tokens: ```python from transformers import RobertaTokenizerFast, AddedToken t = RobertaTokenizerFast.from_pretrained('roberta-base') t.add_tokens([" too_big", " turtles", "too_big","turtles"]) print(t.convert_ids_to_tokens(t.encode("Apples are too_big for turtles."))) # ['<s>', 'App', 'les', 'Ġare', ' too_big', 'Ġfor', ' turtles', '.', '</s>'] ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,675
closed
[Doctest] added doctest changes for electra
# What does this PR do? Adds doctests for Electra (PyTorch). Issue: #16292 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @ydshieh @patrickvonplaten
04-08-2022 19:07:01
04-08-2022 19:07:01
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi, I also got an error for `ElectraForPreTraining`. ~~I am still checking it.~~ Regarding `ElectraForPreTraining`, could you update that part with ``` >>> from transformers import ElectraForPreTraining, ElectraTokenizerFast >>> import torch >>> discriminator = ElectraForPreTraining.from_pretrained("google/electra-base-discriminator") >>> tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-base-discriminator") >>> sentence = "The quick brown fox jumps over the lazy dog" >>> fake_sentence = "The quick brown fox fake over the lazy dog" >>> fake_tokens = tokenizer.tokenize(fake_sentence, add_special_tokens=True) >>> fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") >>> discriminator_outputs = discriminator(fake_inputs) >>> predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) >>> fake_tokens ['[CLS]', 'the', 'quick', 'brown', 'fox', 'fake', 'over', 'the', 'lazy', 'dog', '[SEP]'] >>> predictions.squeeze().tolist() [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] ``` `google/electra-small-discriminator` gives all `0` - not a very inspiring example :-)<|||||>sure<|||||>Hi @bhadreshpsavani , please ping me once you think it's ready for the next review 🙏 Thank you!<|||||>Hi @ydshieh, Its ready, I have made the suggested changes and few more to fix the doctests<|||||>@patrickvonplaten Regarding `p a r i s`, check the fix in this PR https://github.com/huggingface/transformers/pull/16698 I will help @bhadreshpsavani to finalize this PR regarding this part :)<|||||>Hello @ydshieh, Since this fix for that issue is merged shall I pull the changes and make the required changes from `p a r i s` to `paris` ?<|||||>Looks good to me, tested locally and indeed rebasing on https://github.com/huggingface/transformers/pull/16698 solves the issue.<|||||>> Hello @ydshieh, Since this fix for that issue is merged shall I pull the changes and make the required changes from `p a r i s` to `paris` ? Would be great if you can pull, rebase, and change the expected value to `paris` (after a verification) 😄 . Thanks<|||||>Hi @ydshieh, I have updated the changes<|||||>Super nice PR! Thank you, @bhadreshpsavani , especially for the patience 💯 ! Merged (tested locally again -> all pass)<|||||>Hi @ydshieh, Thank you for all the help and guidance! Shall I create PR for other model as well?
transformers
16,674
closed
[Trainer] tf32 arg doc
As discussed at https://github.com/huggingface/transformers/issues/16588#issuecomment-1093056995 expanding the TF32 Trainer arg doc to define the default value and where to find more information. @sgugger
04-08-2022 17:24:05
04-08-2022 17:24:05
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,673
closed
Load finetuned state dict without loading pretrained weights
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #16672 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-08-2022 17:07:44
04-08-2022 17:07:44
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,672
closed
Can't load a local finetuned state dict anymore without loading the official pretrained weights first
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.18.0 - Platform: Ubuntu & Mac - Python version: 3.9.7 ### Who can help @sgugger ## Information Issue first reported [here](https://github.com/unitaryai/detoxify/issues/48) Model I am using (Bert, XLNet ...): Bert, Roberta The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: The code below worked before version 4.18.0. 1. cannot load a finetuned state dict (can download from [here](https://github.com/unitaryai/detoxify/releases/download/v0.3-alpha/toxic_debiased-c7548aa0.ckpt)) without loading the official pretrained HF weights (which worked by having `pretrained_model_name_or_path` as None): ```python model = RobertaForSequenceClassification.from_pretrained( pretrained_model_name_or_path=None, config="roberta-base", num_labels=16, state_dict=state_dict, ) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Stack trace: ``` Exception has occurred: TypeError (note: full exception trace is shown but execution is paused at: _run_module_as_main) expected str, bytes or os.PathLike object, not NoneType File "[/detoxify/toxic-env/lib/python3.9/site-packages/torch/serialization.py]()", line 308, in _check_seekable f.seek(f.tell()) During handling of the above exception, another exception occurred: File "[/detoxify/toxic-env/lib/python3.9/site-packages/transformers/modeling_utils.py]()", line 349, in load_state_dict return torch.load(checkpoint_file, map_location="cpu") File "[/detoxify/toxic-env/lib/python3.9/site-packages/torch/serialization.py]()", line 594, in load with _open_file_like(f, 'rb') as opened_file: File "[/detoxify/toxic-env/lib/python3.9/site-packages/torch/serialization.py]()", line 235, in _open_file_like return _open_buffer_reader(name_or_buffer) File "[/detoxify/toxic-env/lib/python3.9/site-packages/torch/serialization.py]()", line 220, in __init__ _check_seekable(buffer) File "[/detoxify/toxic-env/lib/python3.9/site-packages/torch/serialization.py]()", line 311, in _check_seekable raise_err_msg(["seek", "tell"], e) File "[/detoxify/toxic-env/lib/python3.9/site-packages/torch/serialization.py]()", line 304, in raise_err_msg raise type(e)(msg) ``` ## Expected behavior This seems to only be an issue since #16343 was introduced and seems to be related to this [change](https://github.com/huggingface/transformers/pull/16343/files#diff-6b72b98c4c2dcfc6cc606843917733f5d858374fbc22a735ff483bbc0c1e63eaL1444-R1796) (L1444-R1796) What would solve this would be to have `if not is_sharded and state_dict is None:` on L1797.
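As a stopgap until this is fixed, one workaround (a sketch, assuming `state_dict` is the checkpoint dictionary already loaded above and that its keys match the model's parameter names) is to skip `from_pretrained` entirely and load the weights onto a config-initialized model:

```python
from transformers import RobertaConfig, RobertaForSequenceClassification

config = RobertaConfig.from_pretrained("roberta-base", num_labels=16)
model = RobertaForSequenceClassification(config)
# strict=False tolerates missing/unexpected keys, but inspect the returned lists
missing, unexpected = model.load_state_dict(state_dict, strict=False)
```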
04-08-2022 16:37:15
04-08-2022 16:37:15
transformers
16,671
closed
ASR Pipeline: End of transcripts missing when chunking enabled
## Environment info - `transformers` version: 4.18.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.10.0+cu111 (False) - Tensorflow version (GPU?): 2.8.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes & no - Using distributed or parallel set-up in script?: no ### Who can help @Narsil, @patrickvonplaten, @jonatasgrosman, @anton-l ## Information If you use the chunking feature implemented in the asr pipeline, in some cases it cuts off the end of the audio transcripts. Model I am using: `facebook/wav2vec2-large-960h-lv60-self` The problem arises when using: * [x] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce The issue can be reproduced using the following colab notebook: https://colab.research.google.com/drive/1SSWr-2X2nnbKLa5dUSEm_Y_WFTju2-fN?usp=sharing ## Expected behavior `chunk_length_s=None` and `chunk_length_s=10` should yield the same (or similar) results, without cutting off the end of the transcript.
04-08-2022 16:14:23
04-08-2022 16:14:23
Hey @nkaenzig-aifund, Thanks a lot for the reproducible bug report! I'm looking into it and will try to submit a fix today. cc @Narsil ```python #!/usr/bin/env python3 from transformers import pipeline pipe = pipeline(model='facebook/wav2vec2-large-960h-lv60-self') result_1 = pipe('47.wav', chunk_length_s=None) result_2 = pipe('47.wav', chunk_length_s=10) print("Correct", result_1) print("Wrong", result_2) ``` with: !wget https://public-a2d129863a16ad26b0deda49d22c64b8.s3.us-west-2.amazonaws.com/47.wav
transformers
16,670
closed
Bug in Marian model (or tokenizer) in transformers==4.18.0
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.18.0 - Platform: Google Colab / Linux & Conda - Python version: 3.7.13 - PyTorch version (GPU?): 1.10.0 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @patil-suraj <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - T5, Pegasus, EncoderDecoder: @patrickvonplaten - Blenderbot, MBART, BART, Marian, Pegasus: @patil-suraj - Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten - Longformer, BigBird: @ydshieh - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patil-suraj, @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten @narsil - Tokenizers: @SaulLu - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): Marian The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: Oscar & ALT - Standard MT task * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Extend the tokenizer using a target one 2. Add tokens 3. Run forward with model in training mode. 4. 
script and error reported here: https://colab.research.google.com/drive/1utS-L1iO1paiwKKPNqVHW5ARvprfRgG2?usp=sharing **Traceback below:** ``` [/usr/local/lib/python3.7/dist-packages/transformers/models/marian/modeling_marian.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict) 1452 if labels is not None: 1453 loss_fct = CrossEntropyLoss() -> 1454 masked_lm_loss = loss_fct(lm_logits.view(-1, self.target_vocab_size), labels.view(-1)) 1455 1456 if not return_dict: RuntimeError: shape '[-1, 65001]' is invalid for input of size 8320768 ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Standard Marian training output. No issue with `transformers==4.17.0` <!-- A clear and concise description of what you would expect to happen. -->
04-08-2022 15:18:11
04-08-2022 15:18:11
Good catch! Fix is here #16700<|||||>Thank you!
transformers
16,669
closed
Fix example logs repeating themselves
# Fix Loggers repeating themselves during the Examples tests ## What does this add? This PR refactors slightly how the `stream_handler` is made and passed to `logger` during all of the examples tests ## Why is it needed? When running the `no_trainer` tests, I was noticing that the output would repeat itself: after test 1 a line was logged once, after test 2 twice, etc. For example something like the following was printed: ``` ***** Running training ***** ***** Running training ***** ***** Running training ***** Num examples = 10 Num examples = 10 Num examples = 10 ``` After further investigation, I found that this was due to the `stream_handler` being added at the start of each test. As discussed in this [SO post I found](https://stackoverflow.com/questions/6729268/log-messages-appearing-twice-with-python-logging), since the loggers are global we keep adding more handlers without actually removing them. As a result we kept getting multiple sys.stdout handlers added, leading to these duplicated print statements each time. The fix is to add our handler once, before the test case class is defined (e.g.): ```python stream_handler = logging.StreamHandler(sys.stdout) logger.addHandler(stream_handler) class SomeTestCase(TestCasePlus): ... ``` Also unsure why this didn't quite happen during CircleCI, but this was what occurred locally for me
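For illustration, a stripped-down sketch of the failure mode (names here are made up, not the actual test module):

```python
import logging
import sys

logger = logging.getLogger("examples")  # module-level, shared across all tests

def run_one_test():
    # adding a fresh handler on every test call is what caused the duplication
    logger.addHandler(logging.StreamHandler(sys.stdout))
    logger.warning("***** Running training *****")

run_one_test()  # line printed once
run_one_test()  # now printed twice, and so on for every extra test
```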
04-08-2022 15:07:30
04-08-2022 15:07:30
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,668
closed
Generate: min length can't be larger than max length
# What does this PR do? Adds a simple check, as described in the PR title. Closes #16622
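In spirit, the added check amounts to something like this (a sketch, not the exact diff):

```python
def check_length_args(min_length, max_length):
    # reject impossible length constraints before generation starts
    if min_length is not None and max_length is not None and min_length > max_length:
        raise ValueError(
            f"Unfeasible length constraints: the minimum length ({min_length}) is "
            f"larger than the maximum length ({max_length})"
        )

check_length_args(min_length=5, max_length=20)   # fine
check_length_args(min_length=50, max_length=20)  # raises ValueError
```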
04-08-2022 14:47:36
04-08-2022 14:47:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,667
closed
Only call get_output_embeddings when tie_word_embeddings is set
# What does this PR do? This PR just changes the order in which two conditions in `tie_weights` in the `PreTrainedModel` class are checked. This would allow us (and probably others) to avoid problems when initializing child classes of models. In our (maybe somewhat specific) use case we encounter a problem when initializing a child class of a model. During the call of the parent constructor (`PreTrainedModel`) `get_output_embeddings` of the child class is called when it is not completely initialized. This also happens when `tie_word_embeddings=False` and the output embeddings are not even needed. Changing the order of the conditions fixes the problem without changing the functionality. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten, @thomwolf, @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: , @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
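A simplified before/after sketch of the reordering (illustrative, not the verbatim library code):

```python
# before (sketch): get_output_embeddings() runs even when the weights will never be tied
output_embeddings = self.get_output_embeddings()
if output_embeddings is not None and self.config.tie_word_embeddings:
    self._tie_or_clone_weights(output_embeddings, self.get_input_embeddings())

# after (sketch): check the config flag first, so get_output_embeddings() is only
# called when tying is actually requested
if getattr(self.config, "tie_word_embeddings", True):
    output_embeddings = self.get_output_embeddings()
    if output_embeddings is not None:
        self._tie_or_clone_weights(output_embeddings, self.get_input_embeddings())
```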
04-08-2022 14:38:28
04-08-2022 14:38:28
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,666
closed
Fix some doc examples in task summary
# What does this PR do? Fix some doctest CI failure found [here](https://github.com/huggingface/transformers/runs/5877708951?check_suite_focus=true) A few failures: - ``` Expected nothing Got: '_unknown_' ``` - ``` Expected nothing Got: 'MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL' ``` - ``` Expected: <pad> prosecutors say the marriages were part of an immigration scam. if convicted, barrientos faces two criminal counts of "offering a false instrument for filing in the first degree" she has been married 10 times, nine of them between 1999 and 2002. Got: <pad> prosecutors say the marriages were part of an immigration scam. if convicted, barrientos faces two criminal counts of "offering a false instrument for filing in the first degree" she has been married 10 times, nine of them between 1999 and 2002.</s><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad> ``` There is one error I haven't fixed yet ``` No such file or directory: 'jfk_moon_speech.wav' ``` Do you know where to get this file for testing purpose ..?
04-08-2022 14:34:20
04-08-2022 14:34:20
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for fixing these! > > The `jfk_moon_speech.wav` is a local file I used to test some code examples, so maybe we can replace it with an audio example from the Hub? Sure! Let me try. Haven't tried audio dataset from the Hub.<|||||>@sgugger @stevhliu I was able to use a file from dataset. One example of change is ```py >>> from transformers import pipeline >>> from datasets import load_dataset >>> import torch >>> torch.manual_seed(42) # doctest: +IGNORE_RESULT >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> dataset = dataset.sort("id") >>> audio_file = dataset[0]["audio"]["path"] >>> audio_classifier = pipeline( ... task="audio-classification", model="ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition" ... ) >>> predictions = audio_classifier(audio_file) >>> predictions = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in predictions] >>> predictions [{'score': 0.1315, 'label': 'calm'}, {'score': 0.1307, 'label': 'neutral'}, {'score': 0.1274, 'label': 'sad'}, {'score': 0.1261, 'label': 'fearful'}, {'score': 0.1242, 'label': 'happy'}] ``` I ran the tests, and `task_summary.mdx` is fine, except the one in #16644 (it is not merged yet). Let me know if you have further comments :-)
transformers
16,665
closed
add dataset metadata to model card
# 🚀 Feature request When pulling a dataset from the hub, it would be useful to have some metadata about the specific dataset and version that is used. The metadata could then be passed to the `Trainer` which could then be saved to a model card. ## Motivation This is useful for people who run many experiments on different versions (commits/branches) of the same dataset. I opened an issue in `datasets` related to this because the metadata is currently unavailable for `Trainer`: https://github.com/huggingface/datasets/issues/4129
04-08-2022 14:25:48
04-08-2022 14:25:48
Might be of interest to @sgugger <|||||>Once the issue is solved on the Datasets side, the work on the Trainer can begin.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@sgugger I just realized that in addition to tracking the dataset version, it would be important to also track the model version. Most pre-trained models shouldn't be changing, but this would be useful to have for the private hub. Would it make sense to include the revision in the model config file? That way the information would stay with the model files even if `push_to_hub` wasn't used. I guess it could still be lost if someone saves the state_dict instead of calling `save_pretrained` but I don't think that happens very often.<|||||>The revision is accessible whenever a user downloads a model so we know it. We can add it as a private variable in the config once it's loaded, but it shouldn't be saved as the revision will change every time the config is pushed.<|||||>When you say "the revision will change" are you referring to the revision associated with the model that just got fine-tuned or the revision associated with the pre-trained model that was used as the starting point for the fine-tuning?<|||||>The revision will change at each commit, so it's not something we can store and then push to the Hub as it will instantly mismatch the actual commit sha.<|||||>I'm not sure I follow what you are saying. Here is an example representing what I had in mind. I want to fine-tune bert-base-cased on squad. When I call from_pretrained, that pulls the most recent commit for the bert-base-cased repo, a8d257ba9925ef39f3036bfc338acf5283c512d9. Likewise, when I pull squad it pulls the most recent commit, d04f25d7823ef492730bfcf7ae02f363c2373a28. I proceed to fine-tune the model and then when I save it, I would like it to be known that I started with bert-base-cased at revision = a8d257ba9925ef39f3036bfc338acf5283c512d9 and the model was trained on squad at d04f25d7823ef492730bfcf7ae02f363c2373a28. If I then push a README.md file, there will be a new revision for the newly fine-tuned model. I don't need to track that. As long as I don't modify the model weights, I would like it to be tracked that I started with bert-base-cased at revision = a8d257ba9925ef39f3036bfc338acf5283c512d9 and it was trained on squad at d04f25d7823ef492730bfcf7ae02f363c2373a28 Does that make sense?<|||||>Yes, I'm just saying there is no point saving in the config file of your bert-base-cased model the sha of the commit (which we won't know before pusing).<|||||>Ok got it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,664
closed
MT5ForConditionalGeneration has model.config.max_length=20 by default. Why?
- `transformers` version: 4.6.1 - Platform: Ubuntu 18 - Python version: 3.6 I spent one week training a T5 model with this package and couldn't figure out why my sequences obtained with Trainer.evaluate were only yielding a maximum of 20 tokens. I passed the `max_length` argument to the tokenizer to encode the input/output. After a long time I found out that this happens: ``` model = MT5ForConditionalGeneration.from_pretrained('google/mt5-small') model.config.max_length Out: 20 ``` The generate method was being used in Trainer because I used `predict_with_generate=True`. **Please change this behaviour; this was a very hard bug to find. `model.config.max_length` should be set to `None` by default if the model does not have such a limitation.**
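Until then, the limit can simply be overridden explicitly (a hedged sketch; `model` is the MT5 model from the snippet above, `input_ids` is some tokenized input, and `generation_max_length` only exists in more recent `transformers` releases):

```python
# per-call override of the config default of 20
outputs = model.generate(input_ids, max_length=256)

# or, when evaluating with predict_with_generate=True, via the training arguments
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="out",
    predict_with_generate=True,
    generation_max_length=256,
)
```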
04-08-2022 13:31:08
04-08-2022 13:31:08
Hi @JoaoLages 👋 Tagging @patrickvonplaten for the discussion. Context for this discussion: We have had similar problems in the past, related to a small default `max_length` (e.g. https://github.com/huggingface/transformers/issues/16622). We are now setting proper checks for the `min_length` arguments (it must be smaller than `max_length`, https://github.com/huggingface/transformers/pull/16668). The issue is a double-edged sword. With no default, we risk generating beyond the model's input size, which may be undesirable. If we set `max_length` to the model's input size by default, we avoid the previous problem but `generate()` will still likely need significant resources (compute and memory), which may be frustrating to new users. With a small default, we run into the problem you described. I agree that the current state is not good enough, but increasing the default `max_length` may open a pandora's box of problems. Would it help if the console showed why generation stopped? (which is one of the following: max length reached, all sentences finished and early stopping was active, or no further score improvement was possible) Alternatively, we can increase the default to the model's input size, and rely on early stopping to ensure generation does not drag for long by default -- WDYT @patrickvonplaten? (we could add a tqdm-like progress bar so users don't think the process died)<|||||>> Would it help if the console showed why generation stopped? Would still be hard to find why `max_length` was reached. I still think it's best to always have `max_length` defined by default to the maximum value that the model can handle. A warning could be shown to know that this behaviour is in usage and that it may consume a lot of resources.<|||||>Sadly we cannot change this default anymore due to backward compatibility. Always having the model generate up to maximum allowed tokens can also be tricky - multiple models will always error out due to memory, some models like T5 have no max length really, ... so think we'll have to leave it at 20. Maybe we can improve the docs somehow<|||||>I have the idea that no one is using early stopping in these models because 'it doesn't work' and it is due to this silent behaviour 😅 <|||||>These legacy choices are painful, but changing them also causes many issues for downstream users 💔 @JoaoLages I've added three actions points for me: 1. Keep an eye on related issues, and bring up the discussion around this argument's default if this becomes a routine problem; 2. Try to log the stopping criteria (as discussed above), which may help to raise awareness around this argument; 3. Work on documentation, which I was going to anyway :D Is there anything else you believe we can do to improve `generate()`?<|||||>Not really, `generate` works perfectly, it was just this silent small max_length setting :) Some warnings/log messages would be better than nothing!<|||||>(going to keep the issue open to backlink in a future improved logging PR)<|||||>People that are familiar with `generate()` should know that `max_length` can and should be overwritten. I'll try to make the docs better here, but I don't think we should add a warning as this will literally be shown everytime someone calls generate without defining `max_length`<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,663
closed
Enabling `Tapex` in table question answering pipeline.
# What does this PR do? - Enables newly output `Tapex` models in the pipeline - Use a `self.type` flag, since `Tapas` (the previously used models) have some non trivial defaults which do not apply to `Tapex`.(Ideally those `type` are not model specific like `tapas` but `encoder` vs `encoder-decoder` when applicable. - Added a slow test - Ideally we need a fast test, but there doesn't seem to be a small random model yet: https://huggingface.co/hf-internal-testing @NielsRogge @LysandreJik <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
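A rough usage sketch of what this enables (the checkpoint name is just an example of a TAPEX model fine-tuned for table QA, and the table values are made up):

```python
from transformers import pipeline

table_qa = pipeline(
    task="table-question-answering",
    model="microsoft/tapex-base-finetuned-wtq",  # example checkpoint
)

table = {
    "Repository": ["Transformers", "Datasets", "Tokenizers"],
    "Stars": ["36542", "4512", "3934"],
}
print(table_qa(table=table, query="How many stars does the transformers repository have?"))
```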
04-08-2022 11:01:00
04-08-2022 11:01:00
_The documentation is not available anymore as the PR was closed or merged._<|||||>Gentle ping @NielsRogge if we want to go live next week.
transformers
16,662
closed
Fix error in doc of `DataCollatorWithPadding`
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> The default value of `padding` in `DataCollatorWithPadding` is `True`, not `False`. The doc is wrong. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
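For reference, a small sketch of the behaviour the corrected docstring describes (tokenizer checkpoint chosen arbitrarily):

```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# `padding` defaults to True, i.e. pad to the longest sequence in the batch
collator = DataCollatorWithPadding(tokenizer=tokenizer)

features = [tokenizer("short"), tokenizer("a somewhat longer example sentence")]
batch = collator(features)
print(batch["input_ids"].shape)  # both rows are padded to the same length
```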
04-08-2022 11:00:24
04-08-2022 11:00:24
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,661
closed
Fix #16660 (tokenizers setters of ids of special tokens)
# What does this PR do? Fixes #16660 @LysandreJik It seems no one was using those setters because the code is quite old, but here's a fix if you want to merge it.
04-08-2022 01:31:48
04-08-2022 01:31:48
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you very much for sharing your solution, it looks great! :hugs: Just to be able to merge this change on main I think it would be good to add a test to make sure that future changes don't break the behaviour you just fixed - I let @LysandreJik confirm. This new test would surely have its place in `tests/test_tokenization_common.py`. If you ever need more help or don't have time to do it, don't hesitate to tell us!<|||||>Sure! I've just added the checks into an existing test. Some things could be done differently depending on your preferences: a) Simplify and assume that `len(tokenizer) > 0` always. b) Add a dummy token before testing the setters when `len(tokenizer) == 0`. I've chosen not to do it because you may want to have tests as independent as possible, and this change would make the test fail when the method to add tokens fails. Please let me know.<|||||>I ended up creating a different method for this test because I've seen at least two models where the ids shouldn't be set and I needed to disable only this test for those tokenizers.
transformers
16,660
closed
Tokenizers setter of ids of special tokens don't work
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help - Tokenizers: @SaulLu ## Information The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Create an instance of a pretrained tokenizer 2. Try to set the pad_token_id For instance: ``` tokenizer = AutoTokenizer.from_pretrained('gpt2') tokenizer.pad_token_id = tokenizer.eos_token_id ``` Output: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /tmp/ipykernel_33/1516894257.py in <module> 1 tokenizer = AutoTokenizer.from_pretrained('gpt2') ----> 2 tokenizer.pad_token_id = tokenizer.eos_token_id /opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in pad_token_id(self, value) 1173 @pad_token_id.setter 1174 def pad_token_id(self, value): -> 1175 self._pad_token = self.convert_tokens_to_ids(value) 1176 1177 @cls_token_id.setter /opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py in convert_tokens_to_ids(self, tokens) 248 249 ids = [] --> 250 for token in tokens: 251 ids.append(self._convert_token_to_id_with_added_voc(token)) 252 return ids TypeError: 'int' object is not iterable ``` ## Expected behavior Set the `pad_token` appropriately. I've fixed this in a branch and I'm submitting a PR.
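Until the setter is fixed, a workaround (a sketch) is to assign the token string instead of the id, which goes through a code path that handles it correctly:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # assign the token, not the id
print(tokenizer.pad_token_id)  # 50256, same id as eos_token_id
```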
04-08-2022 01:26:56
04-08-2022 01:26:56
Thank you very much for sharing this problem (and a solution)! :hugs: You are right, this behaviour is not desirable
transformers
16,659
closed
Adding GPT-NeoX-20B
# What does this PR do? Adds GPT-NeoX-20B model and tokenizers. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes https://github.com/huggingface/transformers/issues/15642 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @LysandreJik Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-07-2022 22:25:07
04-07-2022 22:25:07
_The documentation is not available anymore as the PR was closed or merged._<|||||>Incredible work! I have tested the model and seems to work as intended. I did discover one problem with the tokenizer though: Here is the full script: ``` !git clone https://github.com/zphang/transformers !cd transformers !git checkout neox20b !pip install -e . !cd .. from transformers import AutoModelForCausalLM, GPTNeoXTokenizer model_name = r"EleutherAI/gpt-neox-20b" model = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = GPTNeoXTokenizer.from_pretrained(model_name) input_ids=tokenizer.encode("This is the input text", return_tensors="pt",add_special_tokens=False) beam_output = model.generate( input_ids=input_ids, max_length=input_ids.shape[1]+30, min_length=input_ids.shape[1]+5, early_stopping=True, num_return_sequences=4, do_sample=True ) for j in range(4): output = tokenizer.decode(beam_output[j][input_ids.shape[1]:], skip_special_tokens=False) ``` I got the following error: File "testing/testDecoderOnly.py", line 104, in testModelSample ran = tokenizer.decode(beam_output[j][input_ids.shape[1]:], skip_special_tokens=False) File "/home/ec2-user/t5-regression3/transformers/src/transformers/tokenization_utils_base.py", line 3308, in decode **kwargs, File "/home/ec2-user/t5-regression3/transformers/src/transformers/tokenization_utils.py", line 946, in _decode sub_texts.append(self.convert_tokens_to_string(current_sub_text)) File "/home/ec2-user/t5-regression3/transformers/src/transformers/models/gpt2/tokenization_gpt2.py", line 266, in convert_tokens_to_string text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors) File "/home/ec2-user/t5-regression3/transformers/src/transformers/models/gpt2/tokenization_gpt2.py", line 266, in <listcomp> text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors) KeyError: ' ' <|||||>Hm yea, I can replicate that issue too. I'm not too familiar with the tokenization code. The fast tokenizer seems to work just fine, but the Python one (which I'm basing of GPT-2's tokenizer) seems to have some issues. 
Here's the a minimal reproducible version: ```python import transformers model_name = "EleutherAI/gpt-neox-20b" tokenizer_slow = transformers.GPTNeoXTokenizer.from_pretrained(model_name) tokenizer_fast = transformers.GPTNeoXTokenizerFast.from_pretrained(model_name) print("Fast", repr(tokenizer_fast.decode([50274]))) print("Slow", repr(tokenizer_slow.decode([50274]))) ``` ``` Fast ' ' --------------------------------------------------------------------------- KeyError Traceback (most recent call last) Input In [78], in <cell line: 2>() 1 print("Fast", repr(tokenizer_fast.decode([50274]))) ----> 2 print("Slow", repr(tokenizer_slow.decode([50274]))) File ~/code/transformers/src/transformers/tokenization_utils_base.py:3304, in PreTrainedTokenizerBase.decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, **kwargs) 3301 # Convert inputs to python lists 3302 token_ids = to_py_obj(token_ids) -> 3304 return self._decode( 3305 token_ids=token_ids, 3306 skip_special_tokens=skip_special_tokens, 3307 clean_up_tokenization_spaces=clean_up_tokenization_spaces, 3308 **kwargs, 3309 ) File ~/code/transformers/src/transformers/tokenization_utils.py:946, in PreTrainedTokenizer._decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, spaces_between_special_tokens, **kwargs) 944 current_sub_text.append(token) 945 if current_sub_text: --> 946 sub_texts.append(self.convert_tokens_to_string(current_sub_text)) 948 if spaces_between_special_tokens: 949 text = " ".join(sub_texts) File ~/code/transformers/src/transformers/models/gpt2/tokenization_gpt2.py:266, in GPT2Tokenizer.convert_tokens_to_string(self, tokens) 264 """Converts a sequence of tokens (string) in a single string.""" 265 text = "".join(tokens) --> 266 text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors) 267 return text File ~/code/transformers/src/transformers/models/gpt2/tokenization_gpt2.py:266, in <listcomp>(.0) 264 """Converts a sequence of tokens (string) in a single string.""" 265 text = "".join(tokens) --> 266 text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors) 267 return text KeyError: ' ' ``` I believe the NeoX tokenizer handles spaces a little differently (it has special tokens for single, double, triple spaces, etc). Do you know if someone who's more familiar with tokenization code might be able to chime in?<|||||>Great that the fast version works. Gently pinging @SaulLu and @Narsil if they have any answers.<|||||>Hey @LysandreJik, thanks for taking a look! I'll look into getting the tests to pass today. Re: the model script, I did use the new model templating script, but many parts of it seemed to make the assumption that the model with be an encoder-decoder model (e.g. mentioning cross attention). I removed most of other model implementations aside from CasualLM as that's the primary format that NeoX-20B would be used for, but it looks like I missed out some other references to the other model implementations. Other than that, the script was very useful in setting up the boilerplate.<|||||>added a PR to the PR to support `AutoTokenizer` initialization from `pretrained_model_name_or_path`: https://github.com/zphang/transformers/pull/1<|||||>Tests are passing now, though the non-fast tokenizer still needs looking at I think<|||||>Hi @zphang and @ViktorThink Thank you very much for working on the addition of this model! 
:hugs: As you mentioned several times in this PR, I took a closer look at the error currently returned by the slow tokenizer with the snippet: ```python import transformers model_name = "EleutherAI/gpt-neox-20b" tokenizer_slow = transformers.GPTNeoXTokenizer.from_pretrained(model_name) tokenizer_fast = transformers.GPTNeoXTokenizerFast.from_pretrained(model_name) print("Fast", repr(tokenizer_fast.decode([50274]))) print("Slow", repr(tokenizer_slow.decode([50274]))) ``` Looking at the vocabulary file ([here](https://huggingface.co/EleutherAI/gpt-neox-20b/resolve/main/vocab.json)), it seems that the id `50274` corresponds to `" "` (3 spaces). For me this token does not correspond to a token that could have been produced by learning a byte-level BPE - and thus it's not surprising that the GPT2 tokenizer raises an error . To my knowledge, the GPT2 byte-level BPE uses a little trick: instead of working on the pure byte sequence, some bytes are shifted to match visible characters - e.g. the space `" "` (`\u0020`) is replaced by `"Ġ"` (`\u0120`). This offset is removed at the very end during decoding. Therefore, this trick implies that we can't have non-visible character in the vocabulary like the space `"\u0020"` used in the previous example. On the contrary, a token of which byte-level BPE could have been aware is `"ĠĠĠ"`. Then, maybe `" "` is an added token that should be preserved from the beginning of the tokenization? (If it's the case we need to add them as added tokens in GPTNeoX's tokenizer). **TL:DR:** I think that to solve this problem it would be necessary to understand how this vocabulary was obtained. Do you have more information about this? :blush: <|||||>Let me check in with the folks who prepared the tokenizer<|||||>@SaulLu we added some custom tokens to the tokenizer before training it corresponding to various numbers of spaces. There is a single token representation of one space, two spaces, three spaces, ..., twenty-four spaces. We created a custom tokenizer file that you can find [here](https://mystic.the-eye.eu/public/AI/models/GPT-NeoX-20B/slim_weights/20B_tokenizer.json) which should be consistent with the Tokenizers library.<|||||>Thank you very much for the information @StellaAthena! It makes sense! So if the original tokenizer is based on the `tokenizers` rust backend, maybe it is not necessary to provide a slow version? WDYT? In short, I think it would be difficult to align the behavior of slow and fast version which these different space sets added as added tokens. And my opinion is that if the original tokenizer is based on the rust backend, the specific development required to create the slow tokenizer may not be "cost"-effective. --------------------------- :point_down: More details on why I think adding the different space sets as added tokens won't give the same tokenization between the slow and the fast versions 1. In slow tokenizers added tokens cannot be added by specifying `lstrip`, `rstrip` etc arguments as it is possible to do in fast because of this line: https://github.com/huggingface/transformers/blob/e6f00a11d7fa34215184e3c797e19e6c7debe0fe/src/transformers/tokenization_utils.py#L410 2. For the slow tokens, the added tokens therefore have the behaviour of `rstrip=True` and `lstrip=True` around the added token because of this part of the code (we don't go into `if isinstance(tok_extended, AddedToken):` because of the point 1. 
above but into the second block `else: `): https://github.com/huggingface/transformers/blob/e6f00a11d7fa34215184e3c797e19e6c7debe0fe/src/transformers/tokenization_utils.py#L515-L536 Concretely, let's look on an example at the solution which would consist in adding the space sets as added tokens. It would give these 2 different tokenizations: ```python from transformers import GPT2Tokenizer, GPT2TokenizerFast model_name = "gpt-neox-20b-with-added-space-tokens" # Assuming that the tokens discussed above are added as tokens and not in the vocabulary tokenizer_s = GPT2Tokenizer.from_pretrained(model_name) tokenizer_f = GPT2TokenizerFast.from_pretrained(model_name) text = ''' def add(a,b): return a + b''' print(tokenizer_s.convert_ids_to_tokens(tokenizer_s.encode(text))) print(tokenizer_f.convert_ids_to_tokens(tokenizer_f.encode(text))) print(tokenizer_s.convert_ids_to_tokens(tokenizer_s.encode(text))) print(tokenizer_f.convert_ids_to_tokens(tokenizer_f.encode(text))) # [' ', 'def', 'Ġadd', '(', 'a', ',', 'b', '):', ' ', 'return', 'Ġa', 'Ġ+', 'Ġb'] # [' ', 'def', 'Ġadd', '(', 'a', ',', 'b', '):', 'Ċ', ' ', 'return', 'Ġa', 'Ġ+', 'Ġb'] ``` You can see that the line break is eaten with the slow version.<|||||>@SaulLu That makes sense to me. If a slow tokenizer isn’t necessary I don’t see a reason to try to force it<|||||>Ok! I wasn't sure if it was always standard/required to provide the slow tokenizer. If we are okay with only providing a fast tokenizer, I can go ahead and remove the slow one. I'll also try to address the rest of the comments this weekend (have been a little busy). Thanks everyone for all your feedback!<|||||>@zphang if you give me push access to the repo I am happy to help<|||||>I have resolved the merge conflicts in the config files, but I am not confidant in my understanding of how these various configs are supposed to work. I would appreciate it if someone double checked that I didn't do anything stupid.<|||||>Are there any further blockers to merging? It would be nice to have this merged in time for ACL next week :)<|||||>Apologies, I must have missed the previous comments. I've pushed an update with the desired changes.<|||||>There are still four open comment on the modeling file, if you could have a look.<|||||>I think I got to all of them now (is there an easy way to check on the GitHub web interface?), let me know if I'm missing any.<|||||>I see there are closed but not addressed, maybe you forgot to push your commit?<|||||>Terribly sorry! Pushed now.<|||||>Thanks again for all your wok on this model!
transformers
16,658
closed
Add type hints for XLM TensorFlow
# What does this PR do? This PR adds type hints to the XLM model for TensorFlow. @LysandreJik
04-07-2022 22:19:21
04-07-2022 22:19:21
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16658). All of your documentation changes will be reflected on that endpoint.<|||||>@elusenji you might have unintentionally added commits from other contributors. You would want to remove other's changes.<|||||>@kumar-abhishek does that fix the issue you highlighted?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Is this still being reviewed? All the checks were successfully passed.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry, this PR completely flew over the radar! Thanks for working on it, and please ping @Rocketknight1 for the type hints PRs :)<|||||>@elusenji It looks good now, but unfortunately, things moved on since the PR was filed and now we have a bunch of new errors! Can you pull changes from our repo to your fork, and then rebase your branch? That should hopefully resolve these errors, and then this PR should be ready to merge.<|||||>> @elusenji It looks good now, but unfortunately, things moved on since the PR was filed and now we have a bunch of new errors! Can you pull changes from our repo to your fork, and then rebase your branch? That should hopefully resolve these errors, and then this PR should be ready to merge. Done, but not sure if I did this correctly.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,657
closed
[modeling utils] revamp `from_pretrained(..., low_cpu_mem_usage=True)` + tests
The initial `from_pretrained(..., low_cpu_mem_usage=True)` implementation was a quick hack to enable loading GPT-J models on low CPU memory setups. It didn't work with all models. This PR takes it one step further: it revamps the implementation and delegates all the work to the normal `from_pretrained` code path except the final step of the `state_dict` => model param overwrite, which adds support for many features the original version lacked. This PR: 1. revamps `low_cpu_mem_usage=True` 2. adds a functional test that checks `from_pretrained(mname, low_cpu_mem_usage=True)` works with sharded and non-sharded checkpoints 3. adds a quality test that measures CPU memory and checks that indeed `low_cpu_mem_usage=True` uses less memory 4. adds various testing utils helper functions to support the new tests The low CPU memory usage code path is still not 100% complete feature-wise, but it's getting there. I'm also contemplating a different approach to solving the low CPU memory issue: introducing several new `from_pretrained` args that should allow loading the model and/or `state_dict` directly on GPU for single GPU or DDP. But that's for another PR. @sgugger, @LysandreJik, @patrickvonplaten
04-07-2022 21:55:15
04-07-2022 21:55:15
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hmm, so trying to write a test that shows the memory saving proved to be a puzzle. Getting inconsistent results between my desktop and the CI. That was using `process.memory_info().rss` then I also tried `tracemalloc`, but I think that one is problematic if pytorch uses some kernels that don't go through python memory allocation. I think I may try `/usr/bin/time -f %M` via an external process. As it gives me an RSS max peak for the whole independent process. The results are very peculiar: - for a 1GB model there is no saving - for a 10GB model, low_mem saves 1/4 of memory, - for a 40GB model, low_mem saves 1/2 of memory, but at least that explains why my memory tracking wasn't showing the saving consistently since I was using a 0.5GB model for the test. So what I'm doing is: ``` # 420 MB https://huggingface.co/bert-base-uncased /usr/bin/time -f %M python -c 'from transformers import AutoModel; AutoModel.from_pretrained("bert-base-uncased", low_cpu_mem_usage=True)' 1139516 /usr/bin/time -f %M python -c 'from transformers import AutoModel; AutoModel.from_pretrained("bert-base-uncased", low_cpu_mem_usage=False)' 1140324 # 1.25GB https://huggingface.co/bert-large-uncased/ /usr/bin/time -f %M python -c 'from transformers import AutoModel; AutoModel.from_pretrained("bert-large-uncased", low_cpu_mem_usage=True)' 2906584 /usr/bin/time -f %M python -c 'from transformers import AutoModel; AutoModel.from_pretrained("bert-large-uncased", low_cpu_mem_usage=False)' 2908236 # 10.6 GB https://huggingface.co/bigscience/T0_3B /usr/bin/time -f %M python -c 'from transformers import AutoModel; AutoModel.from_pretrained("bigscience/T0_3B", low_cpu_mem_usage=True)' 16122900 /usr/bin/time -f %M python -c 'from transformers import AutoModel; AutoModel.from_pretrained("bigscience/T0_3B", low_cpu_mem_usage=False)' 22299560 # 41.5 GB https://huggingface.co/bigscience/T0 /usr/bin/time -f %M python -c 'from transformers import AutoModel; AutoModel.from_pretrained("bigscience/T0", low_cpu_mem_usage=True)' 43788452 /usr/bin/time -f %M python -c 'from transformers import AutoModel; AutoModel.from_pretrained("bigscience/T0", low_cpu_mem_usage=False)' 86765944 ``` **update**: The culprit proved be that my original low_cpu_mem code was not able to handle models with a custom prefix in its keys like `bert.` - this PR fixes it. <|||||>The quality test will not work as the original implementation doesn't work with bert or any other model with its custom `bert.` prefix. It doesn't put the model to the meta device and thus doesn't save any memory. I totally hear you about the complexity and that the PR is difficult to review in several places. So I propose this plan: 1. a new PR that just refactors `_find_mismatched_keys` 2. merge (1) then re-base this PR and revisit? does that sound OK?<|||||>That sounds right, thanks for understanding!<|||||>Step 1 is ready: https://github.com/huggingface/transformers/pull/16706<|||||>@sgugger, as planned I rebased on https://github.com/huggingface/transformers/pull/16706, so the diff should be much easier to read now.
transformers
16,656
closed
Add tests for no_trainer and fix existing examples
# New tests for the `no_trainer` scripts

## What does this add?

- Adds test cases for each of the `no_trainer` scripts, modeled on how the Transformers counterparts work
- Fixes a small variety of bugs inside the `no_trainer` scripts, discovered while writing these tests
- Introduces the ability to write a JSON file at the end of training, so that tests can check the results, similar to the Transformers tests (a sketch of this pattern follows below)
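As a rough illustration of that last point, the pattern looks something like the following; the `all_results.json` filename and the metric keys are illustrative guesses on my part, not necessarily the exact ones the scripts end up using:

```python
import json
import os

def save_results(output_dir: str, eval_metric: dict) -> None:
    """At the end of a no_trainer run: dump the final metrics so tests can read them back."""
    with open(os.path.join(output_dir, "all_results.json"), "w") as f:
        json.dump({f"eval_{k}": v for k, v in eval_metric.items()}, f)

def get_results(output_dir: str) -> dict:
    """In a test: after running the script, load the JSON and assert on the metrics."""
    with open(os.path.join(output_dir, "all_results.json")) as f:
        return json.load(f)

# e.g. inside a test case:
# results = get_results(tmp_dir)
# self.assertGreaterEqual(results["eval_accuracy"], 0.75)
```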
04-07-2022 16:46:57
04-07-2022 16:46:57
_The documentation is not available anymore as the PR was closed or merged._<|||||>CI failures were fixed by removing:

```python
if torch_device != "cuda":
    testargs.append("--no_cuda")
```

from `clm`, `mlm`, and `ner`. From what I could *see* they were unused, so I didn't duplicate them from the transformers tests. Let me know if they should be added back in, with special behavior on those tests 😄<|||||>For information, here are the durations:

```
61.24s call examples/pytorch/test_accelerate_examples.py::ExamplesTests::test_run_swag
60.97s call examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_squad
51.48s call examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_speech_recognition_seq2seq
44.82s call examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_swag
40.75s call examples/pytorch/test_accelerate_examples.py::ExamplesTests::test_run_squad
32.17s call examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_squad_seq2seq
27.32s call examples/pytorch/test_accelerate_examples.py::ExamplesTests::test_run_ner
26.55s call examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_speech_recognition_ctc
26.51s call examples/pytorch/test_accelerate_examples.py::ExamplesTests::test_run_clm
21.61s call examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_ner
18.85s call examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_clm
17.32s call examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_glue
16.42s call examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_wav2vec2_pretraining
15.14s call examples/pytorch/test_accelerate_examples.py::ExamplesTests::test_run_glue
14.38s call examples/pytorch/test_accelerate_examples.py::ExamplesTests::test_run_mlm
14.05s call examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_mlm
3.41s call examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_audio_classification
1.05s call examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_clm_config_overrides
0.76s call examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_generation
56 durations < 0.05 secs were omitted
```

Could the `run_swag_no_trainer` test be made a bit faster? The other ones look okay.<|||||>Changed the checkpointing tests to checkpoint by epoch, and also skipped saving with swag.
Reduced the overall time by almost 40%. Here were those times locally for me:

**Before**

```
======================================================================================= slowest durations ========================================================================================
15.11s call examples/pytorch/test_accelerate_examples.py::ExamplesTestsNoTrainer::test_run_swag_no_trainer
9.99s call examples/pytorch/test_accelerate_examples.py::ExamplesTestsNoTrainer::test_run_ner_no_trainer
9.70s call examples/pytorch/test_accelerate_examples.py::ExamplesTestsNoTrainer::test_run_squad_no_trainer
7.90s call examples/pytorch/test_accelerate_examples.py::ExamplesTestsNoTrainer::test_run_clm_no_trainer
6.33s call examples/pytorch/test_accelerate_examples.py::ExamplesTestsNoTrainer::test_run_glue_no_trainer
4.39s call examples/pytorch/test_accelerate_examples.py::ExamplesTestsNoTrainer::test_run_mlm_no_trainer
```

**After**

```
======================================================================================= slowest durations ========================================================================================
7.47s call examples/pytorch/test_accelerate_examples.py::ExamplesTestsNoTrainer::test_run_clm_no_trainer
6.30s call examples/pytorch/test_accelerate_examples.py::ExamplesTestsNoTrainer::test_run_squad_no_trainer
5.33s call examples/pytorch/test_accelerate_examples.py::ExamplesTestsNoTrainer::test_run_ner_no_trainer
5.13s call examples/pytorch/test_accelerate_examples.py::ExamplesTestsNoTrainer::test_run_glue_no_trainer
4.06s call examples/pytorch/test_accelerate_examples.py::ExamplesTestsNoTrainer::test_run_swag_no_trainer
3.89s call examples/pytorch/test_accelerate_examples.py::ExamplesTestsNoTrainer::test_run_mlm_no_trainer
```
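For reference, the reports above come from pytest's built-in slowest-durations summary; a small sketch of reproducing it locally (the test-file path and the cutoff of 20 are just examples):

```python
import pytest

# Print the slowest test phases (setup/call/teardown) for the no_trainer example tests.
# Use --durations=0 to show every duration instead of only the top 20.
pytest.main(["--durations=20", "examples/pytorch/test_accelerate_examples.py"])
```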
transformers
16,655
closed
Add tests for no_trainer + fixes
# New tests for the `no_trainer` scripts

## What does this add?

- Adds test cases for each of the `no_trainer` scripts, modeled on how the Transformers counterparts work
- Fixes a small variety of bugs inside the `no_trainer` scripts, discovered while writing these tests
- Introduces the ability to write a JSON file at the end of training, so that tests can check the results, similar to the Transformers tests
04-07-2022 16:36:04
04-07-2022 16:36:04
transformers
16,654
closed
[feat] Add FLAVA model
This PR aims to add the [FLAVA](https://arxiv.org/abs/2112.04482) model to the transformers repo. The following checklist delineates what needs to be done for this PR to be complete:

- [x] Flava init
- [x] Flava base models
- [x] Flava layers
- [x] Flava Configs
- [x] Flava encoders
- [x] Flava pretraining models
- [x] Flava codebook
- [x] Flava feature extractors
- [ ] Flava classification/retrieval models (in progress)
- [x] Documentation updates
- [x] Encoding utilities
- [x] Imports updates
- [x] Argstring updates
- [x] Flava pretrained checkpoints
- [x] Flava unit tests
- [x] Flava integration tests
- [x] Sanity check
- [x] Lint
- [x] Flava Processors
04-07-2022 16:00:44
04-07-2022 16:00:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger Thanks for your review on the PR. About lowercasing things, I followed what I saw in the repo, as not all models are lower case. An example of this would be CLIP, where classes are named `CLIPConfig`, `CLIPTextEncoder` and so on. Would it be possible to keep `FLAVA` as uppercase?<|||||>`CLIP` was a mistake IMO, and it makes it harder on users to guess the right names of the models if we have different casings all the time. Having made a mistake once doesn't mean we should repeat it again and again just because there is a precedent ;-)<|||||>Hi! I was wondering if there are any updates on this and if this will be merged soon?<|||||>@kshitijkg This is almost ready and should land in the core soon. @sgugger @patrickvonplaten This is ready for another round of review. I have resolved all of your comments except a few where I have asked for clarification. Hoping to land it soon.
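For readers landing here later, a rough sketch of what using the model might look like once merged. The class names follow the lowercased `Flava*` convention discussed above, and the checkpoint id, processor call, and output attribute names are all assumptions on my part rather than the PR's confirmed API:

```python
from PIL import Image
from transformers import FlavaModel, FlavaProcessor  # class names assumed, see note above

# Hypothetical checkpoint id; the PR's checklist mentions pretrained checkpoints
# but does not name them.
processor = FlavaProcessor.from_pretrained("facebook/flava-full")
model = FlavaModel.from_pretrained("facebook/flava-full")

image = Image.new("RGB", (224, 224))  # dummy image, just to show the call shape
inputs = processor(text=["a photo of two cats"], images=[image], return_tensors="pt", padding=True)

outputs = model(**inputs)
# FLAVA produces unimodal and multimodal representations (per the paper);
# the exact output attribute names here are assumptions.
image_embeddings = outputs.image_embeddings
text_embeddings = outputs.text_embeddings
multimodal_embeddings = outputs.multimodal_embeddings
```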