repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 21,673 | closed | Added Type Hints for modeling_tf_encoder_decoder.py | # What does this PR do?
This pull request adds type hints for `modeling_tf_encoder_decoder.py`, as outlined in issue #16059. The changes live on a dedicated branch rather than my main branch.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Rocketknight1 | 02-17-2023 02:11:45 | 02-17-2023 02:11:45 | @Rocketknight1 I think this is ready, though I am getting an odd error saying "module 'tensorflow' has no attribute 'tensor'" which I am not sure how to resolve. Thanks and let me know if there is anything I need to fix! <|||||>@Batese2001 The issue was that one of your hints was `tf.tensor` instead of `tf.Tensor` - it was case-sensitive! I just changed it there, let's see if that fixes the tests. Note that because I made the change in the PR branch, you should `pull` those changes before committing/pushing any further updates, or you might get a branch conflict.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Tests look good - that Torch error is totally unrelated to this PR. Are you happy for me to merge the PR at this point?<|||||>Wonderful! If you think it is ready, then I am all for merging! Thank you for your help
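(Illustrative aside, not part of the original thread: the case-sensitivity fix discussed above, `tf.tensor` vs `tf.Tensor`, in a minimal self-contained form; the function below is made up for demonstration.)
```python
from typing import Optional
import tensorflow as tf

# Annotations must use tf.Tensor (capital T); `tf.tensor` is not an attribute
# of the tensorflow module and raises AttributeError when the annotation is evaluated.
def softmax_scores(logits: Optional[tf.Tensor] = None) -> tf.Tensor:
    return tf.nn.softmax(logits, axis=-1)

print(softmax_scores(tf.constant([[1.0, 2.0, 3.0]])))
```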
<|||||>Merged, and thanks for your help! |
transformers | 21,672 | closed | Trainer state visualization in TrOCR checkpoint | Hello, everyone, I am trying to build one dictionary in Python that collects all the necessary data from `trainer_state.json` of a TrOCR model checkpoint produced by the Trainer class (I am not using Colab, so I need to build my own dict and visualize CER, WER, steps, epochs, ...).
Here is the Python code I wrote:
```python
import pandas as pd

df = pd.read_json('/checkpoint-2000/trainer_state.json')
# print(df.head())
# print(df.to_string())
column_names = list(df.columns.values)
print(column_names)
# log_history = column_names[7]
# print(log_history[0])

import json

# Opening JSON file
with open('/checkpoint-2000/trainer_state.json') as json_file:
    data = json.load(json_file)

# print("Type:", type(data))
# print('show log_history', data['log_history'])
log_history = data['log_history']
# print('\nlog_history\n', log_history[0]['epoch'])

odd_dict, even_dict = {}, {}
log_history_dict = {}
for count, value in enumerate(log_history):
    log_history_dict[count] = value
print('\nlog_history_dict \n', log_history_dict)

for k, v in log_history_dict.items():
    if k % 2 == 0:
        even_dict[k] = v
    else:
        odd_dict[k] = v
# print('\n even_dict', even_dict, '\nodd_dict', odd_dict)

# log_history_clean = {}
# for v in odd_dict.values():
#     log_history_clean['epoch'] = v['epoch']
#     log_history_clean['learning_rate'] = v['learning_rate']
#     log_history_clean['loss'] = v['loss']
#     log_history_clean['step'] = v['step']
#     # for key, value in v.items():
#     #     log_history_clean[key] = value
#     #     print(key, value)
# print(log_history_clean)

# ---------
# {
# "best_metric": null,
# "best_model_checkpoint": null,
# "epoch": 1.4265335235378032,
# "global_step": 2000,
# "is_hyper_param_search": false,
# "is_local_process_zero": true,
# "is_world_process_zero": true,
# "log_history":[
# {
# "epoch": 0.36,
# "learning_rate": 3.94339514978602e-05,
# "loss": 0.5516,
# "step": 500
# },
# {
# "epoch": 0.36,
# "eval_cer": 4.407666576772222,
# "eval_loss": 0.25193867087364197,
# "eval_runtime": 1338.5651,
# "eval_samples_per_second": 13.973,
# "eval_steps_per_second": 0.583,
# "eval_wer": 17.79562559983836,
# "step": 500
# },
# ]
# }
```
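(Illustrative aside, not part of the original issue: assuming the `trainer_state.json` layout shown above, where a training entry is always followed by its matching evaluation entry, a minimal sketch of merging each pair into one record of the target format described further below; the checkpoint path is a placeholder.)
```python
import json

# Placeholder checkpoint path, adjust to your own run.
with open("/checkpoint-2000/trainer_state.json") as f:
    log_history = json.load(f)["log_history"]

merged = []
# Assumes entries strictly alternate: a training log (loss / learning_rate)
# followed by the matching evaluation log, as in the sample above.
for i in range(0, len(log_history) - 1, 2):
    record = {"index": i // 2}
    record.update(log_history[i])      # epoch, learning_rate, loss, step
    record.update(log_history[i + 1])  # eval_cer, eval_wer, eval_loss, ...
    merged.append(record)

with open("merged_log_history.json", "w") as f:
    json.dump(merged, f, indent=2)
```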
The expected new JSON file should look like this:
```
# Goal : { 'index' : 0
# 'epoch': 0.36 ,
# 'learning_rate': 3.94339514978602e-05,
# 'loss': 0.5516,
# 'step': 500 ,
# 'epoch': 0.36
# 'eval_cer': 4.407666576772222,
# 'eval_loss': 0.25193867087364,
# 'eval_runtime': 1338.5651,
# 'eval_samples_per_second': 13.973,
# 'eval_steps_per_second': 0.583,
# 'eval_wer': 17.79562559983836,
# 'step': 500,
# }
```
| 02-17-2023 00:37:22 | 02-17-2023 00:37:22 | You should ask questions like this on the [forums](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,671 | closed | Allows to use `decoder_inputs_embeds` for `model.generate` | # What does this PR do?
Allows to use `decoder_inputs_embeds` for `model.generate` in VisionEncoderDecoderModel
## Who can review?
Vision Model
@amyeroberts
| 02-16-2023 23:35:24 | 02-16-2023 23:35:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @gante <|||||>And how about the EncoderDecoderModel like T5?
I tried to override only the `prepare_inputs_for_generation` method, guided by [#6535](https://github.com/huggingface/transformers/issues/6535), but it does not work ...
```
class CustomT5ForConditionalGeneration(T5ForConditionalGeneration):
def prepare_inputs_for_generation(self,
input_ids,
past_key_values=None,
attention_mask=None,
head_mask=None,
decoder_head_mask=None,
cross_attn_head_mask=None,
use_cache=None,
encoder_outputs=None,
**kwargs):
res = super().prepare_inputs_for_generation(input_ids,
past_key_values,
attention_mask,
head_mask,
decoder_head_mask,
cross_attn_head_mask,
use_cache,
encoder_outputs,
**kwargs)
# maybe another solution :https://github.com/huggingface/transformers/pull/21671
# add decoder embeddings and mask
if "decoder_inputs_embeds" in kwargs.keys():
res["decoder_inputs_embeds"] = kwargs["decoder_inputs_embeds"]
if "decoder_attention_mask" in kwargs.keys():
res["decoder_attention_mask"] = kwargs["decoder_attention_mask"]
# if `inputs_embeds` are passed, we only want to use them in the 1st generation step
if past_key_values is None:
del res["decoder_input_ids"]
else:
# only last token for inputs_ids if past is defined in kwargs
res['decoder_input_ids'] = res['decoder_input_ids'][:, -1].unsqueeze(-1)
del res["decoder_inputs_embeds"]
return res
```
<|||||>Hey @YiandLi 👋
My suggestion would be to open a separate issue for the support of a `decoder_input_embeds` input, like #6535, so the issue becomes clear and visible to everyone. Like in #6535, I'd be happy to a) share a temporary solution b) push a permanent solution if the issue acquires sufficient traction.
Normally, I would not provide support for custom tasks, as my bandwidth is very limited, but according to this closed PR you are not the first person asking the question :) |
transformers | 21,670 | closed | [`CLAP`] Fix few broken things | # What does this PR do?
This PR fixes the forward pass that was broken in the `main` branch of `transformers` for `ClapModel`. To reproduce:
```python
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor
dataset = load_dataset('ashraq/esc50')
input_text = ["Sound of a dog", "Sound of vaccum cleaner"]
audio_sample = dataset["train"]["audio"][-1]['array']
model_id = "ybelkada/clap-htsat-unfused"
processor = ClapProcessor.from_pretrained(model_id)
model = ClapModel.from_pretrained(model_id)
input_text = processor.tokenizer(input_text, return_tensors="pt", padding=True)
input_sample = processor.feature_extractor(audio_sample, return_tensors="pt")
out = model(input_ids=input_text.input_ids, attention_mask=input_text.attention_mask, input_features=input_sample.input_features, is_longer=input_sample.is_longer)
print(out.logits_per_audio.softmax(dim=-1)[0])
```
This PR also fixes a few other nits that went missing during the bad rebase.
This PR also fixes the doctest and the failing slow tests
cc @ArthurZucker @sgugger | 02-16-2023 21:10:27 | 02-16-2023 21:10:27 | There are also 2 more lines that got erased, shared that with you on slack ๐ <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Local doctests and tests pass:
```
============================================================ 119 passed, 39 skipped, 38 warnings in 75.88s (0:01:15) ============================================================
``` |
transformers | 21,669 | closed | TypeError: 'NoneType' object is not callable. While using run_clm.py, while trying to load dataset. | ### System Info
```
- `transformers` version: 4.26.1
- Platform: Linux-6.0.0-6-amd64-x86_64-with-glibc2.36
- Python version: 3.9.12
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
Trace:
```
Traceback (most recent call last)

run_clm.py:621 in <module>

     618
     619
     620 if __name__ == "__main__":
 >   621     main()
     622

run_clm.py:286 in main

     283     # download the dataset.
     284     if data_args.dataset_name is not None:
     285         # Downloading and loading a dataset from the hub.
 >   286         raw_datasets = load_dataset(
     287             data_args.dataset_name,
     288             data_args.dataset_config_name,
     289             cache_dir=model_args.cache_dir,

anaconda3/lib/python3.9/site-packages/datasets/load.py:1735 in load_dataset

    1732     ignore_verifications = ignore_verifications or save_infos
    1733
    1734     # Create a dataset builder
 >  1735     builder_instance = load_dataset_builder(
    1736         path=path,
    1737         name=name,
    1738         data_dir=data_dir,

anaconda3/lib/python3.9/site-packages/datasets/load.py:1519 in load_dataset_builder

    1516         raise ValueError(error_msg)
    1517
    1518     # Instantiate the dataset builder
 >  1519     builder_instance: DatasetBuilder = builder_cls(
    1520         cache_dir=cache_dir,
    1521         config_name=config_name,
    1522         data_dir=data_dir,
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. run `run_clm.py` with the following parameters.
```
--model_type gpt2 \
--output_dir ./models \
--do_train \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--save_total_limit 2 \
--save_steps 2000 \
--per_gpu_train_batch_size 16 \
--seed 42 \
--validation_file test.txt \
--do_eval \
--train_file text.txt \
--dataset_name test \
--tokenizer tokeniser2.py
```
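(Aside, not part of the original report: the failure happens inside `load_dataset`, so it can be reproduced in isolation, anticipating the maintainer suggestion in the reply below; the file names mirror the command above, and dropping `--dataset_name` in favour of `--train_file`/`--validation_file` is likely what was intended for local text files.)
```python
from datasets import load_dataset

# Reproduce the failure in isolation, as suggested in the reply below.
# Note the command above passes both `--dataset_name test` (treated as a Hub
# dataset) and local text files; for local files, --train_file/--validation_file
# alone is usually what is intended.
raw = load_dataset("text", data_files={"train": "text.txt", "validation": "test.txt"})
print(raw)
```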
### Expected behavior
The program does not crash. | 02-16-2023 20:34:47 | 02-16-2023 20:34:47 | Hi @Norfaisbest, thanks for raising this issue.
It seems this issue is arising from the dataset `test` being loaded with `load_dataset` and isn't a `transformers` issue. I would suggest starting by running just `load_dataset('dataset_name')` outside of the script to debug it. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,668 | open | Add SeaFormer model | ### Model description
The computational cost and memory requirement render many computer vision models unsuitable on the mobile device, especially for the high-resolution per-pixel semantic segmentation task. SeaFormer (Squeeze-enhanced Axial Transformer) designed a generic attention block characterized by the formulation of squeeze Axial and detail enhancement. Coupled with a light segmentation head, they achieve the best trade-off between segmentation accuracy and latency on the ARM-based mobile devices on the ADE20K and Cityscapes datasets. They beat both the mobile-friendly rivals and Transformer-based counterparts with better performance and lower latency.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Paper: https://arxiv.org/pdf/2301.13156.pdf
Code and weights: https://github.com/fudan-zvg/SeaFormer
Authors: @wwqq @lzrobots @speedinghzl
cc: @NielsRogge @alaradirik | 02-16-2023 19:29:32 | 02-16-2023 19:29:32 | Hi. I would like to work on this.<|||||>Hi @inderpreetsingh01 thanks for opening the issue, SeaFormer definitely seems like a good addition to the library!
Are you planning to work on this model? If not, @strankid could start working on it or you two could collaborate on a PR. In either case, you could take a look at our [model addition guidelines](https://huggingface.co/docs/transformers/add_new_model), as well as the transformers code of other segmentation models such as [SegFormer](https://github.com/huggingface/transformers/tree/main/src/transformers/models/segformer), [MaskFormer](https://github.com/huggingface/transformers/tree/main/src/transformers/models/maskformer), [Mask2Former](https://github.com/huggingface/transformers/tree/main/src/transformers/models/mask2former) and [OneFormer](https://github.com/huggingface/transformers/tree/main/src/transformers/models/oneformer).<|||||>hello, fancy to make [SETR](https://github.com/fudan-zvg/SETR) on board๏ผ <|||||>@alaradirik should I begin with a WIP PR? <|||||>Hi @alaradirik thanks for sharing the resources. I will be working on adding this model. @strankid if you want we can collaborate on this. @lzrobots i saw both SeaFormer and SETR use mmseg, we can look into it.<|||||>@inderpreetsingh01 i'm down to collaborate! <|||||>Great :) @strankid @inderpreetsingh01 you can ping me if you have questions about the library or need help with anything (e.g. model conversion).
It'd be great if you could open a WIP PR, as it'd make it easier to ask / answer questions and do a preliminary review later down the road.<|||||>thanks @alaradirik will do it. @strankid can you share your mail id so that we can connect on slack?<|||||>@inderpreetsingh01 my email is [email protected]. Should I create the PR or would you like to? <|||||>@strankid sure you can create the pr.<|||||>@inderpreetsingh01 Saw you created the wip pr. Since you have my email, just contact me and let me know how you want to split the work. |
transformers | 21,667 | closed | T5 Multi-GPU FSDP evaluation loop raises RuntimeError when predict_with_generate is True | ### System Info
Transformers version 4.27.0-dev
Python version 3.8.12
### Who can help?
@gante
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
after #21604 I tried optimizing my code a bit more and I read about deepspeed and FSDP and decided to try FSDP since it seemed simpler.
here's a link to the new code:
https://pastebin.com/n9Su4AiL
torchrun train_model.py --dataset_path ./data/HF_HE_AR_Dataset.json --tokenizer_path ./T5Tokenizer/ --max_length=128 --batch_size=4 --logging_steps 10 --save_steps 1000 --model google/t5-v1_1-large --validation_path ./data/dev.json --test_path ./data/test.json --weight_decay 0.0
when the code reaches the number of logging steps I defined (here is 10), it crashes when the final error is:
```
RuntimeError: The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy al
location, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
```
full traceback can be found here:
https://pastebin.com/ucZ021EQ
it happens with and without ``fsdp_transformer_layer_cls_to_wrap`` and with any fsdp option with and without ``auto_wrap`` and both ``shard_grap_op`` and ``full_shard`` and with and without ``fp16=True``
when ``predict_with_generate=True``
if ``predict_with_generate=False`` it works fine
### Expected behavior
running fsdp with predict_with_generate successfully | 02-16-2023 18:52:34 | 02-16-2023 18:52:34 | Hey @eyalmazuz 👋
Looking at the exception, it does not look like a generate error, but rather a pytorch/trainer-related issue (it fails in the embedding layer). I'm not very knowledgeable there, so I'm tagging @sgugger for a comment.
BTW, without a short reproducible script, our ability to help is limited :)<|||||>> Hey @eyalmazuz 👋
>
> Looking at the exception, it does not look like a generate error, but rather a pytorch/trainer-related issue (it fails in the embedding layer). I'm not very knowledgeable there, so I'm tagging @sgugger for a comment.
>
> BTW, without a short reproducible script, our ability to help is limited :)
Hi @gante
I created a repository with all the code here:
https://github.com/eyalmazuz/T5-Translation
I think I uploaded everything needed
It is possible to use the validation file for training as well, the problem still persists
and as I mentioned at the end of the issue, it only happens when ``predict_with_generate=True``, so I assumed it's an issue with it, in the way it is handled when generating outputs as part of the evaluation vs predicting<|||||>cc @pacman100 <|||||>Hello, with FSDP it isn't supported as mentioned here: https://huggingface.co/docs/accelerate/usage_guides/fsdp#a-few-caveats-to-be-aware-of
```
This feature is incompatible with --predict_with_generate in the run_translation.py script of 🤗 Transformers library.
```<|||||>@eyalmazuz, the reason is the `generate` of transformers bypasses the FSDP module's `forward` and directly calls internal model's encoder which isn't wrapped in FSDP unit, because of this the parameters required for the forward pass aren't gathered leading to the error you notice above.
Related PRs to make `generate` work with FSDP, some hack is required:
https://github.com/pytorch/pytorch/issues/82461<|||||>Even if one manually wraps encoder and decoder in separate FSDP units, it will still produce errors because shared parameters should be part of same FSDP unit which would now be broken because shared embedding layers of encoder and decoder will be in separate FSDP units: https://github.com/pytorch/pytorch/issues/79605<|||||>> Related PRs to make generate work with FSDP, some hack is required:
https://github.com/pytorch/pytorch/issues/82461
A hacky way proposed in above issue with PyTorch team is currently the only way to get `generate` to work with FSDP.<|||||>@pacman100 thank you for your reply
If I understood [https://github.com/pytorch/pytorch/issues/82461](https://github.com/pytorch/pytorch/issues/82461), then the issue occurs because FSDP wraps the entire T5 but not sub modules so when calling forward on T5 it works but calling directly on T5.encoder will not work since it's specifically not wrapped in FSDP.
But isn't adding ``auto_wrap`` to the FSDP params supposed to recursively wrap all layers in FSDP and thus solve the issue?
as the documentation says
```
To automatically recursively wrap layers with FSDP using default_auto_wrap_policy,
add --fsdp "full_shard auto_wrap" or --fsdp "shard_grad_op auto_wrap" to the command line arguments.
```
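(For concreteness, my own sketch and not from the thread: the quoted CLI flags correspond roughly to the `TrainingArguments` below. Whether `fsdp_transformer_layer_cls_to_wrap` is available depends on the transformers version, `output_dir` is a placeholder, and the resulting wrapping behaviour is discussed just below.)
```python
from transformers import Seq2SeqTrainingArguments

# Rough equivalent of `--fsdp "full_shard auto_wrap"` from the quoted docs.
# With auto wrap, only the transformer blocks (T5Block here) end up in nested
# FSDP units; everything else stays in the single outer FSDP unit.
args = Seq2SeqTrainingArguments(
    output_dir="out",                               # placeholder
    fsdp="full_shard auto_wrap",
    fsdp_transformer_layer_cls_to_wrap="T5Block",
    predict_with_generate=False,                    # generate + FSDP is the incompatibility discussed here
)
```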
Or is it only wrapping T5Block in this case?
I changed the seq2seq_trainer file and added a small dummy forward pass before ``model.generate`` as mentioned in [https://github.com/huggingface/accelerate/issues/570](https://github.com/huggingface/accelerate/issues/570)
```
model_inputs = self.tokenizer(
"ูู ููู
", text_target="ืืืื ืฉื ื, ืืืขื ืื ืืืืช ืืกืคืจ", max_length=10, return_tensors='pt', truncation=True
)
outputs = self.model(**model_inputs)
gen_kwargs["synced_gpus"] = True
generated_tokens = self.model.generate(
generation_inputs,
**gen_kwargs,
)
```
is ``synced_gpus=True`` needed?
it works without it, but it'll keep it anyways<|||||>@eyalmazuz, transformer auto wrap only wraps T5Block modules in nested FSDP units
the encoder, decoder, lm_head and shared are part of the global FSDP unit and this is important too because embedding layers which are shared need to be part of the same FSDP unit, in this case the global one.
If one puts encoder and decoder modules in different nested FSDP units, shared embedding weights are no longer in same FSDP units leading to another error as mentioned in above comments <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,666 | closed | NeoX underperforming on A100 | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.15.0-50-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
When running EleutherAI/pythia-12b-deduped on A100, generation slows down from 15tok/s to 3tok/s when sending 1024 tokens as input. This behaviour does not occur using A6000 with the same setup (14.04 tok/s, 1024 tok input). Ruled out "machine" issue as this happens on multiple servers on multiple locations (runpod, vast.ai, google...)
Do any of you have an idea what the root cause of this issue could be? I am running pure Python with no accelerate or anything that could "interfere" with the speed of HF, via a standard KoboldAI application.
Tagging: @ArthurZucker @younesbelkada @StellaAthena
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Clone https://github.com/henk717/KoboldAI + install dependencies.
2. Start the KoboldAI using model EleutherAI/pythia-12b-deduped
3. Let it generate 100 tokens until it reaches 1024 tokens.
On A100, this would slowly start to slow down token generation. On A6000, this has no effect.
### Expected behavior
Expected behaviour would be that the token generation would be same, regardless of amount inserted. | 02-16-2023 15:50:10 | 02-16-2023 15:50:10 | Are you using a 40 GB or 80 GB A100? A 40 GB A100 has slightly less VRAM than an A6000 (48 GB), and itโs possible that youโre tripping a failsafe such as CPU-offload to avoid an OOM error.
12 billion params * 3 Bytes per parameter = 36 GB of VRAM. My general rule of thumb is to add 20% overhead space for a full context length of 2048, which would push the model over the 40 GB limit.
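(Illustrative aside, not from the thread: the LLM.int8 route suggested just below would look roughly like this, assuming `bitsandbytes` and `accelerate` are installed; whether it resolves the slowdown should still be verified by monitoring memory as suggested.)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# 8-bit weights cut the footprint to roughly 1 byte per parameter, which should
# keep a 12B model comfortably inside 40 GB and avoid silent CPU offload.
model_id = "EleutherAI/pythia-12b-deduped"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```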
One way to test this theory would be to try running it with LLM.int8. Another would be to carefully monitor GPU and CPU usage during inference.<|||||>This might be the issue indeed. It's automatically offloading to CPU but not generating any overflow warnings, making it difficult to pinpoint where the issue would lie. Going for the 80 GB did indeed solve the issue.
transformers | 21,665 | closed | Fix typos in contrastive-image-text example README | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes typos in https://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/README.md.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-16-2023 13:30:25 | 02-16-2023 13:30:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,664 | closed | AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key' | ## Environment info
transformers version: '4.26.1'
Platform: databricks
```
from transformers import pipeline
model_path = "cardiffnlp/twitter-xlm-roberta-base-sentiment"
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
sentiment_task("T'estimo!")
```
## Who can help
@Narsil @ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import pipeline
model_path = "cardiffnlp/twitter-xlm-roberta-base-sentiment"
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
sentiment_task("T'estimo!")
```
### Expected behavior
[{'label': 'Positive', 'score': 0.6600581407546997}] | 02-16-2023 12:58:03 | 02-16-2023 12:58:03 | Hi @k3ybladewielder
It seems that this is related to your environment. Can you create a fresh environment, install `transformers` (`pip install transformers`) and run the script again? Or alternatively uninstall the package that is causing the issue?
Also please share with us the full trace of the error in your issue and not in the title, as it is hard to understand what is going on with very few details! Thanks!<|||||>Sorry, I forgot to share the full trace of the error.
I did it and now, the error is:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<command-3729796333060543> in <module>
5 #model_path = 'distilbert-base-uncased-finetuned-sst-2-english' #para usar no target_lang
6 # tokenizer = AutoTokenizer.from_pretrained(model_path)
----> 7 sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
8 sentiment_task("T'estimo!")
/databricks/python/lib/python3.7/site-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, model_kwargs, **kwargs)
500
501 tokenizer = AutoTokenizer.from_pretrained(
--> 502 tokenizer_identifier, revision=revision, use_fast=use_fast, _from_pipeline=task, **tokenizer_kwargs
503 )
504
/databricks/python/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
496 tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]
497 if tokenizer_class_fast and (use_fast or tokenizer_class_py is None):
--> 498 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
499 else:
500 if tokenizer_class_py is not None:
/databricks/python/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1747 *init_inputs,
1748 use_auth_token=use_auth_token,
-> 1749 **kwargs,
1750 )
1751
/databricks/python/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, *init_inputs, **kwargs)
1869 # Instantiate tokenizer.
1870 try:
-> 1871 tokenizer = cls(*init_inputs, **init_kwargs)
1872 except OSError:
1873 raise OSError(
/databricks/python/lib/python3.7/site-packages/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py in __init__(self, vocab_file, tokenizer_file, bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, mask_token, **kwargs)
142 pad_token=pad_token,
143 mask_token=mask_token,
--> 144 **kwargs,
145 )
146
/databricks/python/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py in __init__(self, *args, **kwargs)
116 else:
117 raise ValueError(
--> 118 "Couldn't instantiate the backend tokenizer from one of: \n"
119 "(1) a `tokenizers` library serialization file, \n"
120 "(2) a slow tokenizer instance to convert or \n"
ValueError: Couldn't instantiate the backend tokenizer from one of:
(1) a `tokenizers` library serialization file,
(2) a slow tokenizer instance to convert or
(3) an equivalent slow tokenizer class to instantiate and convert.
You need to have sentencepiece installed to convert a slow tokenizer to a fast one.
```<|||||>Thanks !
I think if you install `sentencepiece` the error should disappear:
`pip install sentencepiece`<|||||>It works, thank you @younesbelkada
transformers | 21,663 | closed | CUBLAS_STATUS_INVALID_VALUE when generating with OPT models | ### System Info
Hi,
I'm encountering an error when trying to do text generation with the OPT models. Here are the specs and the error, and below the steps to reproduce.
Ubuntu 18.04
Conda env:
torch 1.13.1 pypi_0 pypi
transformers 4.26.1 pypi_0 pypi
Python 3.9.7
The error:
```python
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-1-45f0d829e48e> in <module>
6 prompt = "Hello, I am conscious and"
7 input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
----> 8 generated_ids = model.generate(input_ids)
9 text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
10 print(text)
~/anaconda3/envs/torch/lib/python3.9/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
25 def decorate_context(*args, **kwargs):
26 with self.clone():
---> 27 return func(*args, **kwargs)
28 return cast(F, decorate_context)
29
~/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/generation/utils.py in generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, **kwargs)
1389
1390 # 11. run greedy search
-> 1391 return self.greedy_search(
1392 input_ids,
1393 logits_processor=logits_processor,
~/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/generation/utils.py in greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, **model_kwargs)
2177
2178 # forward pass to get next token
-> 2179 outputs = self(
2180 **model_inputs,
2181 return_dict=True,
~/anaconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py in forward(self, input_ids, attention_mask, head_mask, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
930
931 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
--> 932 outputs = self.model.decoder(
933 input_ids=input_ids,
934 attention_mask=attention_mask,
~/anaconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py in forward(self, input_ids, attention_mask, head_mask, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
695 else:
696
--> 697 layer_outputs = decoder_layer(
698 hidden_states,
699 attention_mask=attention_mask,
~/anaconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py in forward(self, hidden_states, attention_mask, layer_head_mask, output_attentions, use_cache, past_key_value)
324
325 # Self Attention
--> 326 hidden_states, self_attn_weights, present_key_value = self.self_attn(
327 hidden_states=hidden_states,
328 past_key_value=past_key_value,
~/anaconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py in forward(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, output_attentions)
206
207 src_len = key_states.size(1)
--> 208 attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))
209
210 if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling `cublasGemmStridedBatchedExFix( handle, opa, opb, m, n, k, (void*)(&falpha), a, CUDA_R_16F, lda, stridea, b, CUDA_R_16F, ldb, strideb, (void*)(&fbeta), c, CUDA_R_16F, ldc, stridec, num_batches, CUDA_R_32F, CUBLAS_GEMM_DEFAULT_TENSOR_OP)`
```
### Who can help?
@ArthurZucker, @younesbelkada, @sgugger, @sgugger, @muellerzr, @gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In an environment with PyTorch and Transformers, open an IPython console and paste the example [from the documentation](https://huggingface.co/facebook/opt-66b) (here with a 1.3b model, but I first tried the 6.7b):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b", torch_dtype=torch.float16).cuda() # I also tried without half-precision
# the fast tokenizer currently does not work correctly
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b", use_fast=False)
prompt = "Hello, I am conscious and"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
generated_ids = model.generate(input_ids)
text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(text)
```
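(Aside, not part of the original report: a quick environment sanity check that is often useful for `CUBLAS_STATUS_INVALID_VALUE` errors, which, per the replies below, turned out to come from a PyTorch/CUDA mismatch here.)
```python
import torch

# Quick sanity check that the installed PyTorch build matches the local CUDA setup.
print("torch:", torch.__version__)            # e.g. 1.13.1+cu117
print("built with CUDA:", torch.version.cuda)
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```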
### Expected behavior
The model should do inference and generate text out of the box.
Any help would be greatly appreciated, thanks for reading! | 02-16-2023 12:41:24 | 02-16-2023 12:41:24 | Hey @jchwenger ๐
I was able to run the script you shared without bugs on my end. Looking at other threads online, it may be due to an incorrect environment on your end -- check this thread from the comment I link [here](https://discuss.pytorch.org/t/runtimeerror-cuda-error-cublas-status-invalid-value-when-calling-cublassgemm-handle-opa-opb-m-n-k-alpha-a-lda-b-ldb-beta-c-ldc/124544/18) 🤗 <|||||>Hi @gante,
Oh, I see, thanks a lot for this reference, I'll investigate.<|||||>Hi again @gante, thanks for the help: I had a PyTorch/CUDA mismatch, and after reinstalling using the command from the website it works!
transformers | 21,662 | closed | Remote code is loaded from `main` even when revision is provided | ### System Info
Specifying a branch to load a model with remote code, as follows, fails because there is no modeling file on `main`. Is this a bug or the expected behaviour?
### Who can help?
The one and only _**@sgugger**_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
model = transformers.AutoModelForCausalLM.from_pretrained("bigcode/santacoder-fast-inference", revision="main_custom", trust_remote_code=True)
```
The following error shows that the code file is attempted to be loaded from `main` instead of `main_custom` (where a modeling file is present):
```bash
Could not locate the configuration_gpt_bigcode.py inside bigcode/santacoder-fast-inference.
Traceback (most recent call last):
File "/work/arjunguha-research-group/arjun/venvs/bigcode/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 264, in hf_raise_for_status
response.raise_for_status()
File "/shared/centos7/python/3.8.1/lib/python3.8/site-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/bigcode/santacoder-fast-inference/resolve/main/configuration_gpt_bigcode.py
```
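(Aside, not part of the original report: as noted in the replies below, once the linked fix lands the config and its remote code resolve from the requested revision; a rough check, using the repo and branch from this issue.)
```python
from transformers import AutoConfig

# Repo and branch taken from this issue; trust_remote_code pulls the custom
# configuration code, which should resolve from the requested revision once
# the linked fix is in.
config = AutoConfig.from_pretrained(
    "bigcode/santacoder-fast-inference",
    revision="main_custom",
    trust_remote_code=True,
)
print(type(config).__name__)
```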
### Expected behavior
Loading without error. | 02-16-2023 11:26:07 | 02-16-2023 11:26:07 | Will have a look even if you didn't properly tag me :-p <|||||>The PR linked above fixes the config problem (I can load it with `AutoConfig`). The model still won't load, however, as the auto map of that config doesn't contain an entry for `AutoModelForCausalLM`.
transformers | 21,661 | closed | gpt2 can't be trained for QA ? | ### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-3.10.0-1160.81.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
code: similar to [link](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering)
but the model is changed to gpt2,
```
python run_qa.py \
--model_name_or_path gpt2 \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
```
or
```
python run_seq2seq_qa.py \
--model_name_or_path gpt2 \
--dataset_name squad_v2 \
--context_column context \
--question_column question \
--answer_column answers \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_seq2seq_squad/
```
ValueError: Unrecognized configuration class <class 'transformers.models.gpt2.configuration_gpt2.GPT2Config'> for this kind of AutoModel: AutoModelForQuestionAnswering.
Model type should be one of AlbertConfig, BartConfig, BertConfig, BigBirdConfig, BigBirdPegasusConfig, BloomConfig, CamembertConfig, CanineConfig, ConvBertConfig, Data2VecTextConfig, DebertaConfig, DebertaV2Config, DistilBertConfig, ElectraConfig, ErnieConfig, FlaubertConfig, FNetConfig, FunnelConfig, GPTJConfig, IBertConfig, LayoutLMv2Config, LayoutLMv3Config, LEDConfig, LiltConfig, LongformerConfig, LukeConfig, LxmertConfig, MarkupLMConfig, MBartConfig, MegatronBertConfig, MobileBertConfig, MPNetConfig, MvpConfig, NezhaConfig, NystromformerConfig, OPTConfig, QDQBertConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, SplinterConfig, SqueezeBertConfig, XLMConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, YosoConfig.
### Expected behavior
looking forward to your kind reply
thx | 02-16-2023 09:04:23 | 02-16-2023 09:04:23 | As said multiple times in the past, please use the [forums](https://discuss.huggingface.co/) for questions like this.<|||||>@sgugger this is a bug, not question<|||||>I believe it's because `gpt2` doesn't have a `QuestionAnswering` head(like `GPTJForQuestionAnswering`), I would be happy to implement that if @sgugger approves the addition. <|||||>I don't see a bug. GPT-2 is not meant to be used for question-answering. You can find the list of architectures that support this task by reading the error message or having a look at the question-answering [task page](https://huggingface.co/docs/transformers/main/en/tasks/question_answering) in the doc (first tip in green).
@susnato Decoder models perform really poorly on this task, so there is no point adding GPT2ForQuestionAnswering IMO.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> I don't see a bug. GPT-2 is not meant to be used for question-answering. You can find the list of architectures that support this task by reading the error message or having a look at the question-answering [task page](https://huggingface.co/docs/transformers/main/en/tasks/question_answering) in the doc (first tip in green).
>
> @susnato Decoder models perform really poorly on this task, so there is no point adding GPT2ForQuestionAnswering IMO.
Is it worth including in the library for completeness? I'm trying to use the Cerebras-GPT model suite for some Question Answering tasks and they inherit from the GPT2Model class. Could we still include it?<|||||>> I don't see a bug. GPT-2 is not meant to be used for question-answering. You can find the list of architectures that support this task by reading the error message or having a look at the question-answering [task page](https://huggingface.co/docs/transformers/main/en/tasks/question_answering) in the doc (first tip in green).
>
> @susnato Decoder models perform really poorly on this task, so there is no point adding GPT2ForQuestionAnswering IMO.
question answer task page mentions support for GPT2Model class, is that a bug?? <|||||>@kumaramit003 Support for question-answering for the GPT-2 model was added recently in #23030 |
transformers | 21,660 | closed | make opt checkpoint dir name correct |
# What does this PR do?
I found I can't load the converted checkpoint with tp_8 pp_1 dp_1 or tp_4 pp_2 dp_1; only tp_2 pp_2 dp_2 works. I checked the code and found it might be the OPT checkpoint directory name issue, so I just fixed it, and it works on my side.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
https://github.com/huggingface/accelerate/issues/1088
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-16-2023 06:46:53 | 02-16-2023 06:46:53 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21660). All of your documentation changes will be reflected on that endpoint.<|||||>cc @pacman100 <|||||>Friendly ping @pacman100 <|||||>@pacman100 ?<|||||>Hello, I will look into this tomorrow, thank you for your patience and sorry for the delay. |
transformers | 21,659 | closed | KeyError: 'eval_f1' when hyperparameter tuning distilroberta with raytune and population based training | ## Who can help:
ray/raytune: @richardliaw, @amogkam
trainer: @sgugger
## Information
I'm trying to hyperparameter tune DistilRoberta with RayTune using PBT and HuggingFace Trainer API. Im using Google Colab with 1 GPU to tune the model.
## Error Message
All my trials have the same error, this is just the error for the 3rd trial
```
(_objective pid=42097)
 91%|█████████ | 50/55 [00:05<00:00, 8.73it/s]
(_objective pid=42097)
 93%|██████████| 51/55 [00:05<00:00, 8.72it/s]
(_objective pid=42097)
 95%|██████████| 52/55 [00:05<00:00, 8.72it/s]
(_objective pid=42097)
 96%|██████████| 53/55 [00:05<00:00, 8.73it/s]
(_objective pid=42097)
 98%|██████████| 54/55 [00:06<00:00, 8.73it/s]
 25%|███       | 438/1752 [02:43<07:14, 3.02it/s]
100%|██████████| 55/55 [00:06<00:00, 8.73it/s]
2023-02-16 05:38:41,958 ERROR trial_runner.py:1088 -- Trial _objective_f0650_00002: Error processing event.
ray.exceptions.RayTaskError(KeyError): ray::ImplicitFunc.train() (pid=42097, ip=172.28.0.12, repr=_objective)
File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable/trainable.py", line 367, in train
raise skipped from exception_cause(skipped)
File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable/function_trainable.py", line 335, in entrypoint
return self._trainable_func(
File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable/function_trainable.py", line 652, in _trainable_func
output = fn()
File "/usr/local/lib/python3.8/dist-packages/transformers/integrations.py", line 332, in dynamic_modules_import_trainable
return trainable(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable/util.py", line 386, in inner
return trainable(config, **fn_kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/integrations.py", line 233, in _objective
local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1543, in train
return inner_training_loop(
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1883, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2132, in _maybe_log_save_evaluate
self._report_to_hp_search(trial, self.state.global_step, metrics)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1229, in _report_to_hp_search
self.objective = self.compute_objective(metrics.copy())
File "<ipython-input-27-3decc80fdc30>", line 3, in my_objective
KeyError: 'eval_f1'
(_objective pid=42097) precision recall f1-score support
(_objective pid=42097)
(_objective pid=42097) 0 0.812 0.745 0.777 145
(_objective pid=42097) 1 0.650 0.988 0.784 173
(_objective pid=42097) 2 0.614 0.906 0.732 128
(_objective pid=42097) 3 0.850 0.936 0.891 109
(_objective pid=42097) 4 0.593 0.273 0.374 187
(_objective pid=42097) 5 0.818 0.115 0.202 78
(_objective pid=42097) 6 0.584 0.653 0.617 239
(_objective pid=42097) 7 0.606 0.434 0.506 99
(_objective pid=42097) 8 0.596 0.738 0.659 408
(_objective pid=42097) 9 0.564 0.175 0.267 126
(_objective pid=42097) 10 0.731 0.831 0.778 59
(_objective pid=42097)
(_objective pid=42097) accuracy 0.644 1751
(_objective pid=42097) macro avg 0.674 0.618 0.599 1751
(_objective pid=42097) weighted avg 0.647 0.644 0.612 1751
(_objective pid=42097)
(_objective pid=42097) {'eval_loss': 1.00838041305542, 'eval_macro_f1': 0.598748987658475, 'eval_macro_precision': 0.6744155267629177, 'eval_macro_recall': 0.6175753130931809, 'eval_balanced_accuracy': 0.6175753130931809, 'eval_runtime': 6.2857, 'eval_samples_per_second': 278.568, 'eval_steps_per_second': 8.75, 'epoch': 1.0}
(pid=43454) 2023-02-16 05:38:44.739485: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
(pid=43454) 2023-02-16 05:38:44.739641: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
(pid=43454) 2023-02-16 05:38:44.739655: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
== Status ==
Current time: 2023-02-16 05:38:47 (running for 00:08:54.32)
Memory usage on this node: 13.8/83.5 GiB
PopulationBasedTraining: 0 checkpoints, 0 perturbs
Resources requested: 10.0/12 CPUs, 1.0/1 GPUs, 0.0/49.72 GiB heap, 0.0/24.86 GiB objects
Result logdir: /content/ray_results/tune_transformer_pbt
Number of trials: 20/50 (3 ERROR, 16 PENDING, 1 RUNNING)
+------------------------+----------+-------------------+-----------------+--------------------+----------------+
| Trial name | status | loc | learning_rate | num_train_epochs | weight_decay |
|------------------------+----------+-------------------+-----------------+--------------------+----------------|
| _objective_f0650_00003 | RUNNING | 172.28.0.12:43454 | 4.87964e-05 | 4 | 0.0102922 |
| _objective_f0650_00004 | PENDING | | 1.00312e-05 | 5 | 0.469276 |
| _objective_f0650_00005 | PENDING | | 2.21697e-05 | 5 | 0.0917023 |
| _objective_f0650_00006 | PENDING | | 2.16492e-05 | 6 | 0.215973 |
| _objective_f0650_00007 | PENDING | | 1.18666e-05 | 4 | 0.19993 |
| _objective_f0650_00008 | PENDING | | 2.82428e-05 | 5 | 0.183181 |
| _objective_f0650_00009 | PENDING | | 4.93292e-05 | 4 | 0.191231 |
| _objective_f0650_00010 | PENDING | | 3.43018e-05 | 2 | 0.0232252 |
| _objective_f0650_00011 | PENDING | | 1.05306e-05 | 6 | 0.22525 |
| _objective_f0650_00012 | PENDING | | 4.23359e-05 | 2 | 0.482816 |
| _objective_f0650_00013 | PENDING | | 1.92358e-05 | 2 | 0.00798313 |
| _objective_f0650_00014 | PENDING | | 1.48815e-05 | 5 | 0.220076 |
| _objective_f0650_00015 | PENDING | | 1.69346e-05 | 6 | 0.416597 |
| _objective_f0650_00016 | PENDING | | 3.65009e-05 | 2 | 0.12939 |
| _objective_f0650_00017 | PENDING | | 1.83177e-05 | 3 | 0.212578 |
| _objective_f0650_00018 | PENDING | | 4.87834e-05 | 5 | 0.0924272 |
| _objective_f0650_00019 | PENDING | | 2.5806e-05 | 3 | 0.224877 |
| _objective_f0650_00000 | ERROR | 172.28.0.12:39382 | 3.92798e-05 | 5 | 0.475357 |
| _objective_f0650_00001 | ERROR | 172.28.0.12:40740 | 2.78333e-05 | 6 | 0.298425 |
| _objective_f0650_00002 | ERROR | 172.28.0.12:42097 | 2.33483e-05 | 4 | 0.229624 |
+------------------------+----------+-------------------+-----------------+--------------------+----------------+
Number of errored trials: 3
+------------------------+--------------+---------------------------------------------------------------------------------------------------------------------+
| Trial name | # failures | error file |
|------------------------+--------------+---------------------------------------------------------------------------------------------------------------------|
| _objective_f0650_00000 | 1 | /content/ray_results/tune_transformer_pbt/_objective_f0650_00000_0_num_train_epochs=5_2023-02-16_05-29-52/error.txt |
| _objective_f0650_00001 | 1 | /content/ray_results/tune_transformer_pbt/_objective_f0650_00001_1_num_train_epochs=6_2023-02-16_05-32-48/error.txt |
| _objective_f0650_00002 | 1 | /content/ray_results/tune_transformer_pbt/_objective_f0650_00002_2_num_train_epochs=4_2023-02-16_05-35-45/error.txt |
+------------------------+--------------+---------------------------------------------------------------------------------------------------------------------+
```
## Hyperparameter tuning code
```
def compute_metrics(p):
...
return {
'macro_f1' : macro_f1,
}
#hyperparameter tuning configs
def my_objective(metrics):
return metrics["eval_f1"]
training_args = TrainingArguments(
...
metric_for_best_model="eval_f1",
)
def model_init():
return AutoModelForSequenceClassification.from_pretrained(model_checkpoint,num_labels=11,)
trainer = Trainer(
...
compute_metrics=compute_metrics,
)
tune_config = {
"per_device_train_batch_size": 32,
}
scheduler = PopulationBasedTraining(
...
metric="eval_f1",
...
#hyperparameter search
best_run = trainer.hyperparameter_search(
hp_space=lambda _: tune_config,
compute_objective = my_objective,
...
)
```
Any help would be greatly appreciated! | 02-16-2023 06:11:54 | 02-16-2023 06:11:54 | Please use the [forums](https://discuss.huggingface.co/) to help debug your code as we keep issues for bugs and feature requests only. Here the metric you are using with `metric_for_best_model="eval_f1"` does not exist as your `compute_metrics` function only returns the following keys:
```py
{
'macro_f1' : macro_f1,
'macro_precision': macro_precision,
'macro_recall': macro_recall,
'balanced_accuracy': acc
}
```
`eval_macro_f1` would work better.
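For concreteness, a minimal sketch of lining the names up (the metric computation below is illustrative, not your exact function):
```python
import numpy as np
from sklearn.metrics import f1_score
from transformers import TrainingArguments

def compute_metrics(p):
    preds = np.argmax(p.predictions, axis=-1)
    # the Trainer logs each returned key with an "eval_" prefix
    return {"macro_f1": f1_score(p.label_ids, preds, average="macro")}

training_args = TrainingArguments(
    output_dir="out",                       # placeholder
    metric_for_best_model="eval_macro_f1",  # prefixed name, matching the key above
)

def my_objective(metrics):
    return metrics["eval_macro_f1"]
```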
<|||||>Thank you and sorry about that! |
transformers | 21,658 | closed | Bump werkzeug from 2.0.3 to 2.2.3 in /examples/research_projects/decision_transformer | Bumps [werkzeug](https://github.com/pallets/werkzeug) from 2.0.3 to 2.2.3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pallets/werkzeug/releases">werkzeug's releases</a>.</em></p>
<blockquote>
<h2>2.2.3</h2>
<p>This is a fix release for the 2.2.x release branch.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.2.x/changes/#version-2-2-3">https://werkzeug.palletsprojects.com/en/2.2.x/changes/#version-2-2-3</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/26?closed=1">https://github.com/pallets/werkzeug/milestone/26?closed=1</a></li>
</ul>
<p>This release contains security fixes for:</p>
<ul>
<li><a href="https://github.com/pallets/werkzeug/security/advisories/GHSA-xg9f-g7g7-2323">https://github.com/pallets/werkzeug/security/advisories/GHSA-xg9f-g7g7-2323</a></li>
<li><a href="https://github.com/pallets/werkzeug/security/advisories/GHSA-px8h-6qxv-m22q">https://github.com/pallets/werkzeug/security/advisories/GHSA-px8h-6qxv-m22q</a></li>
</ul>
<h2>2.2.2</h2>
<p>This is a fix release for the <a href="https://github.com/pallets/werkzeug/releases/tag/2.2.0">2.2.0</a> feature release.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.2.x/changes/#version-2-2-2">https://werkzeug.palletsprojects.com/en/2.2.x/changes/#version-2-2-2</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/25?closed=1">https://github.com/pallets/werkzeug/milestone/25?closed=1</a></li>
</ul>
<h2>2.2.1</h2>
<p>This is a fix release for the <a href="https://github.com/pallets/werkzeug/releases/tag/2.2.0">2.2.0</a> feature release.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.2.x/changes/#version-2-2-1">https://werkzeug.palletsprojects.com/en/2.2.x/changes/#version-2-2-1</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/24?closed=1">https://github.com/pallets/werkzeug/milestone/24?closed=1</a></li>
</ul>
<h2>2.2.0</h2>
<p>This is a feature release, which includes new features and removes previously deprecated features. The 2.2.x branch is now the supported bugfix branch, the 2.1.x branch will become a tag marking the end of support for that branch. We encourage everyone to upgrade, and to use a tool such as <a href="https://pypi.org/project/pip-tools/">pip-tools</a> to pin all dependencies and control upgrades.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.2.x/changes/#version-2-2-0">https://werkzeug.palletsprojects.com/en/2.2.x/changes/#version-2-2-0</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/20?closed=1">https://github.com/pallets/werkzeug/milestone/20?closed=1</a></li>
</ul>
<h2>2.1.2</h2>
<p>This is a fix release for the <a href="https://github.com/pallets/werkzeug/releases/tag/2.1.0">2.1.0</a> feature release.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.1.x/changes/#version-2-1-2">https://werkzeug.palletsprojects.com/en/2.1.x/changes/#version-2-1-2</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/22?closed=1">https://github.com/pallets/werkzeug/milestone/22?closed=1</a></li>
</ul>
<h2>2.1.1</h2>
<p>This is a fix release for the <a href="https://github.com/pallets/werkzeug/releases/tag/2.1.0">2.1.0</a> feature release.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.1.x/changes/#version-2-1-1">https://werkzeug.palletsprojects.com/en/2.1.x/changes/#version-2-1-1</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/19?closed=1">https://github.com/pallets/werkzeug/milestone/19?closed=1</a></li>
</ul>
<h2>2.1.0</h2>
<p>This is a feature release, which includes new features and removes previously deprecated features. The 2.1.x branch is now the supported bugfix branch, the 2.0.x branch will become a tag marking the end of support for that branch. We encourage everyone to upgrade, and to use a tool such as <a href="https://pypi.org/project/pip-tools/">pip-tools</a> to pin all dependencies and control upgrades.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.1.x/changes/#version-2-1-0">https://werkzeug.palletsprojects.com/en/2.1.x/changes/#version-2-1-0</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/16?closed=1">https://github.com/pallets/werkzeug/milestone/16?closed=1</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pallets/werkzeug/blob/main/CHANGES.rst">werkzeug's changelog</a>.</em></p>
<blockquote>
<h2>Version 2.2.3</h2>
<p>Released 2023-02-14</p>
<ul>
<li>Ensure that URL rules using path converters will redirect with strict slashes when
the trailing slash is missing. :issue:<code>2533</code></li>
<li>Type signature for <code>get_json</code> specifies that return type is not optional when
<code>silent=False</code>. :issue:<code>2508</code></li>
<li><code>parse_content_range_header</code> returns <code>None</code> for a value like <code>bytes */-1</code>
where the length is invalid, instead of raising an <code>AssertionError</code>. :issue:<code>2531</code></li>
<li>Address remaining <code>ResourceWarning</code> related to the socket used by <code>run_simple</code>.
Remove <code>prepare_socket</code>, which now happens when creating the server. :issue:<code>2421</code></li>
<li>Update pre-existing headers for <code>multipart/form-data</code> requests with the test
client. :issue:<code>2549</code></li>
<li>Fix handling of header extended parameters such that they are no longer quoted.
:issue:<code>2529</code></li>
<li><code>LimitedStream.read</code> works correctly when wrapping a stream that may not return
the requested size in one <code>read</code> call. :issue:<code>2558</code></li>
<li>A cookie header that starts with <code>=</code> is treated as an empty key and discarded,
rather than stripping the leading <code>==</code>.</li>
<li>Specify a maximum number of multipart parts, default 1000, after which a
<code>RequestEntityTooLarge</code> exception is raised on parsing. This mitigates a DoS
attack where a larger number of form/file parts would result in disproportionate
resource use.</li>
</ul>
<h2>Version 2.2.2</h2>
<p>Released 2022-08-08</p>
<ul>
<li>Fix router to restore the 2.1 <code>strict_slashes == False</code> behaviour
whereby leaf-requests match branch rules and vice
versa. :pr:<code>2489</code></li>
<li>Fix router to identify invalid rules rather than hang parsing them,
and to correctly parse <code>/</code> within converter arguments. :pr:<code>2489</code></li>
<li>Update subpackage imports in :mod:<code>werkzeug.routing</code> to use the
<code>import as</code> syntax for explicitly re-exporting public attributes.
:pr:<code>2493</code></li>
<li>Parsing of some invalid header characters is more robust. :pr:<code>2494</code></li>
<li>When starting the development server, a warning not to use it in a
production deployment is always shown. :issue:<code>2480</code></li>
<li><code>LocalProxy.__wrapped__</code> is always set to the wrapped object when
the proxy is unbound, fixing an issue in doctest that would cause it
to fail. :issue:<code>2485</code></li>
<li>Address one <code>ResourceWarning</code> related to the socket used by
<code>run_simple</code>. :issue:<code>2421</code></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pallets/werkzeug/commit/22a254fca2ad0130adbbcbd11d3de51bcb04a08b"><code>22a254f</code></a> release version 2.2.3</li>
<li><a href="https://github.com/pallets/werkzeug/commit/517cac5a804e8c4dc4ed038bb20dacd038e7a9f1"><code>517cac5</code></a> Merge pull request from GHSA-xg9f-g7g7-2323</li>
<li><a href="https://github.com/pallets/werkzeug/commit/babc8d9e8c9fa995ef26050698bc9b5a92803664"><code>babc8d9</code></a> rewrite docs about request data limits</li>
<li><a href="https://github.com/pallets/werkzeug/commit/09449ee77934a0c883f5959785864ecae6aaa2c9"><code>09449ee</code></a> clean up docs</li>
<li><a href="https://github.com/pallets/werkzeug/commit/fe899d0cdf767a7289a8bf746b7f72c2907a1b4b"><code>fe899d0</code></a> limit the maximum number of multipart form parts</li>
<li><a href="https://github.com/pallets/werkzeug/commit/cf275f42acad1b5950c50ffe8ef58fe62cdce028"><code>cf275f4</code></a> Merge pull request from GHSA-px8h-6qxv-m22q</li>
<li><a href="https://github.com/pallets/werkzeug/commit/8c2b4b82d0cade0d37e6a88e2cd2413878e8ebd4"><code>8c2b4b8</code></a> don't strip leading = when parsing cookie</li>
<li><a href="https://github.com/pallets/werkzeug/commit/7c7ce5cb73f3f7d3b9c09340e4f322aeb583dbc5"><code>7c7ce5c</code></a> [pre-commit.ci] pre-commit autoupdate (<a href="https://github-redirect.dependabot.com/pallets/werkzeug/issues/2585">#2585</a>)</li>
<li><a href="https://github.com/pallets/werkzeug/commit/19ae03e6a39b3f63fd08fef4fddae4385cdddf25"><code>19ae03e</code></a> [pre-commit.ci] auto fixes from pre-commit.com hooks</li>
<li><a href="https://github.com/pallets/werkzeug/commit/a83d3b8bf070810874c8e8d03dcce270666e10fe"><code>a83d3b8</code></a> [pre-commit.ci] pre-commit autoupdate</li>
<li>Additional commits viewable in <a href="https://github.com/pallets/werkzeug/compare/2.0.3...2.2.3">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 02-16-2023 05:27:53 | 02-16-2023 05:27:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,657 | closed | [Examples] TPU-based training of a language model using TensorFlow | This PR adds an example of performing (masked) language model training using TensorFlow and TPUs. The example is meant to act as a reference for the community on this topic. The following are the main components of the PR:
* Tokenizer training script (for completeness)
* TFRecords preparation script (recommended practice when using TPUs)
* Training script
* Evaluation / inference
The purpose of this separation (as opposed to having everything in a single script) is to allow the community to have isolated reference points for performing TPU-based training of our models, which I think is beneficial.
The artifacts produced during this project can be found here: https://huggingface.co/tf-tpu.
* Tokenizer (trained from scratch): https://huggingface.co/tf-tpu/unigram-tokenizer-wikitext
* Model: https://huggingface.co/tf-tpu/roberta-base-epochs-500-no-wd
Cc: @Rocketknight1 @gante @amyeroberts | 02-16-2023 05:12:07 | 02-16-2023 05:12:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@Rocketknight1 I incorporated the `group_texts()` utility that we discussed over Slack. Let me know if the changes look good to you. Most of it is copy-pasted from [here](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
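For reference, the utility follows the usual chunking pattern from that notebook — roughly (the block size shown is just an example):
```python
block_size = 512  # example value

def group_texts(examples):
    # Concatenate all tokenized texts, then split the result into fixed-size blocks.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    # Drop the small remainder at the end so every block has exactly block_size tokens.
    total_length = (total_length // block_size) * block_size
    return {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
```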
[Here's](https://colab.research.google.com/gist/sayakpaul/adfaa9b45c9b56f222487995d0971645/scratchpad.ipynb) Colab Notebook where I verified these. <|||||>@Rocketknight1 I took a deeper look into the TFRecord preparation script. I don't understand why there's a discrepancy in the following.
While serializing the TFRecords, I am making sure each TFRecord shard has a specific number of samples. When a TFRecord shard ends up with fewer samples than the specified amount, that's fine.
But when I load the TFRecords back and create a `tf.data.Dataset` out of them, the number of entries in the dataset (before batching) is much smaller.
Here is a minimal Colab Notebook that demonstrates the issue: https://colab.research.google.com/gist/sayakpaul/b4b02f3f656c0041c93f6ba78c8e65fd/scratchpad.ipynb.
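A quick way to sanity-check the raw record counts (the glob pattern below is a placeholder):
```python
import tensorflow as tf

# Count raw records across the serialized shards, before any parsing or batching.
files = tf.io.gfile.glob("gs://my-bucket/train/*.tfrecord")  # placeholder path
dataset = tf.data.TFRecordDataset(files)
num_records = sum(1 for _ in dataset)
print(f"{len(files)} shards, {num_records} records")
```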
When you get a moment, could you take a look? <|||||>Thanks @Rocketknight1 for your help in debugging https://github.com/huggingface/transformers/pull/21657#issuecomment-1468086926 (discussed internally via Slack). I am currently regenerating the TFRecord shards. I will update here once that's done.<|||||>@Rocketknight1 corrected TFRecord shards have been pushed to `gs://tf-tpu-training-resources`.
Here are the record counts per split:
* Train: 300917
* Validation: 626
* Test: 722
The TFRecords were generated with a block size of 512. <|||||>@Rocketknight1 the training code looks good to me, except for a few things:
* Maybe we should scale the LR with the batch size?
* Take `mlm_probability` as a CLI arg?
* Modularize the dataset preparation code a bit?
But all these are non-blockers. Let's do 4 - 5 training runs varying the number of epochs and the learning rate. <|||||>@sayakpaul MLM probability added as an arg and I modularized the loading!<|||||>@Rocketknight1 started a training run with:
```bash
python3 train_model.py \
--tokenizer tf-tpu/unigram-tokenizer-wikitext \
--per_replica_batch_size 64 \
--tpu_name local --tpu_zone us-central1 --gcp_project huggingface-ml --bfloat16 \
--train_dataset gs://tf-tpu-training-resources/train --eval_dataset gs://tf-tpu-training-resources/validation \
--num_epochs 100 \
--output_dir roberta-base-epochs-100 --hub_model_id tf-tpu/roberta-base-epochs-100
```<|||||>@Rocketknight1 here's the final model trained with the command from [here](https://github.com/huggingface/transformers/pull/21657#issuecomment-1483729424):
https://huggingface.co/tf-tpu/roberta-base-epochs-100
When you try out examples in the widget of the model page ^, pass `[MASK]` instead of the default `<mask>`. The results are far from perfect (evident from the validation accuracy), though. <|||||>@Rocketknight1 could you review [this PR](https://huggingface.co/tf-tpu/roberta-base-epochs-500-no-wd/discussions/1)? <|||||>@sgugger thanks!
I addressed your comments. For https://github.com/huggingface/transformers/pull/21657#discussion_r1164017322, I will defer to @Rocketknight1. <|||||>Merging since the failing tests are unrelated. |
transformers | 21,656 | closed | T5 int8 inference is not compatible with nvidia/apex | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.14.0a0+44dac51 (True)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.4 (gpu)
- Jax version: 0.4.3
- JaxLib version: 0.4.3
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Copy paste int8 inference code from flan-t5-xxl page:
```python
# pip install bitsandbytes accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl", device_map="auto", load_in_8bit=True)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
Error message:
```
/usr/local/lib/python3.8/dist-packages/apex/normalization/fused_layer_norm.py:69 in forward

   66        ctx.eps = eps
   67        input_ = input.contiguous()
   68        weight_ = weight.contiguous()
❱  69        output, invvar = fused_layer_norm_cuda.rms_forward_affine(
   70            input_, ctx.normalized_shape, weight_, ctx.eps)
   71        ctx.save_for_backward(input_, weight_, invvar)
   72        return output
RuntimeError: expected scalar type Float but found Half
```
If I understand correctly, apex is triggered automatically, ref T5 doc:
> If you'd like a faster training and inference performance, install [apex](https://github.com/NVIDIA/apex#quick-start) and then the model will automatically use apex.normalization.FusedRMSNorm instead of T5LayerNorm. The former uses an optimized fused kernel which is several times faster than the latter.
Any work around?
Should I turn off apex during `load_in_8bit` to use default layernorm? How?
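(As a quick, illustrative check of which implementation was picked up — this assumes the standard T5 module layout and reuses the `model` loaded in the snippet above:)
```python
# Prints apex's FusedRMSNorm when apex is installed, transformers' T5LayerNorm otherwise.
# `model` is the T5ForConditionalGeneration loaded above.
print(type(model.encoder.final_layer_norm))
```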
### Expected behavior
I want int8 inference to work. Is this a bug, or did I miss something? Thanks.
Ideally, I'd still keep apex during training and only turn it off for int8 inference.
Remove it definitely works, but -10% tflops at training from a quick benchmark. | 02-16-2023 03:35:11 | 02-16-2023 03:35:11 | Hi @lukaemon
Yes, this is a known issue: int8 and apex are currently not supported together. The fix I can propose for now is to disable apex by uninstalling it, until we find a proper fix!
Thanks a lot<|||||>Will do. Thanks. |
transformers | 21,655 | closed | [bloom] gradient_checkpointing fix | The `BloomBlock.forward`'s args signature is:
https://github.com/huggingface/transformers/blob/9d1116e9951686f937d17697820117636bfc05a5/src/transformers/models/bloom/modeling_bloom.py#L417-L425
but when it's called in `gradient_checkpointing` code this is used:
https://github.com/huggingface/transformers/blob/9d1116e9951686f937d17697820117636bfc05a5/src/transformers/models/bloom/modeling_bloom.py#L772-L778
so unless I'm mistaken `head_mask` is passed as `layer_past`.
This PR re-injects the missing `layer_past` arg.
I see that there are tests that test the overall feature, but I haven't looked closely at what they test. Perhaps it happens that `head_mask` isn't being used, so it happens to work w/ it.
@younesbelkada, could you please check if this is an omission or it wasn't passed for a specific reason? | 02-16-2023 01:53:41 | 02-16-2023 01:53:41 | |
transformers | 21,653 | closed | [WhisperModel] fix bug in reshaping labels | Currently, in the Whisper model's forward pass, the target `labels` are reshaped using the `view` method before being passed into the loss function:
https://github.com/huggingface/transformers/blob/1567bef3b35c51b7a3cc6b4edf243b208279155d/src/transformers/models/whisper/modeling_whisper.py#L1214
The view method requires the Torch Tensor to be contiguous, and certain operations are commonly performed on the labels that might cause them not to be contiguous.
So using the `view` can cause problems during model training. This issue has already been fixed on another model by the @sanchit-gandhi in a [PR](https://github.com/huggingface/transformers/pull/16748), and I'm just replicating the same solution (using `reshape()` instead of `view()`) here for the Whisper model.
| 02-15-2023 23:47:26 | 02-15-2023 23:47:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Also cc @ArthurZucker |
transformers | 21,652 | closed | refactor: Make direct_transformers_import util | # What does this PR do?
Is this wanted? Will just close out if not.
Moves the common process of directly importing `transformers` to a utility function.
Related [PR](https://github.com/huggingface/transformers/pull/21651)
Related to this issue: #21645
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? Happy to write if wanted
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-15-2023 22:22:25 | 02-15-2023 22:22:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,651 | closed | Update deprecated load_module | # What does this PR do?
This PR updates the uses of `load_module` (which is going to be dropped in Python 3.12) to a non-deprecated API (this should work starting Python 3.5, so all good for Transformers).
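For reference, the standard replacement pattern looks roughly like this (the file path and module name below are placeholders, not the exact code used in this PR):
```python
import importlib.util

def load_module_from_path(name, path):
    # Build a module from a spec and execute it, instead of the deprecated load_module().
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Example usage (placeholder path):
# utils = load_module_from_path("check_repo", "utils/check_repo.py")
```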
Fixes #21645 | 02-15-2023 19:50:12 | 02-15-2023 19:50:12 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,650 | closed | Converting Megatron_T5 to HF_T5 | Currently both [Megatron_BERT conversion](https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py) and [Megatron_GPT conversion](https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py) are supported. Do you have any plan to support T5 models conversion? | 02-15-2023 19:49:00 | 02-15-2023 19:49:00 | Hello guys,
I saw you closed this issue.
Did you find any way to convert megatron t5 <-> hf T5?<|||||>I would be interested too.
Did you find any solution? |
transformers | 21,649 | closed | Fix axial positional encoding calculations for reformer.mdx | Fix axial positional encoding calculations
# What does this PR do?
This PR corrects the calculations for Axial Positional Encoding in Reformer model documentation.
- Since d = d_1 + d_2,
- if d = 2^10 = 1024,
- then d_1 and d_2 cannot both be equal to 2^5, because 2^5 + 2^5 = 32 + 32 = 64, which is not 1024.
- d_1 and d_2 should sum to 2^10, so I fixed both dimensions to 2^9,
- that is, 2^9 + 2^9 = 512 + 512 = 1024,
and fixed the subsequent calculations.
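In equation form, the corrected split reads:
```latex
d = d_1 + d_2, \quad d = 2^{10} = 1024 \;\Rightarrow\; d_1 = d_2 = 2^{9} = 512, \quad 2^{9} + 2^{9} = 1024
```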
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-15-2023 19:44:35 | 02-15-2023 19:44:35 | cc @ArthurZucker <|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,648 | closed | Generate: PT Dynamo without graph breaks in the main greedy/sample loop | # What does this PR do?
This PR is part of our PT Dynamo + `.generate()` readiness.
Let's start with the basics:
1 - Calling generate after `torch.compile()` doesn't explode -- this PR fixes the error seen in https://github.com/pytorch/pytorch/issues/93042
2 - There are no graph breaks in the main generation loop, for greedy search and sample. [Graph breaks are a major source of slowdowns](https://pytorch.org/docs/master/dynamo/faq.html#why-am-i-not-seeing-speedups).
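As a rough usage sketch of the pattern being exercised (model, prompt, and generation settings here are illustrative, not the benchmark setup):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model = torch.compile(model)  # compilation is lazy; the first generate call triggers it

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```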
A quick run on GPT2 shows that we gain a ~1.5x speed with `torch.compile()` on `.generate()`, after these changes (~4 mins of compilation time). Please note that this is a quick check, and not a proper benchmark ;) | 02-15-2023 17:44:47 | 02-15-2023 17:44:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,647 | closed | Skipping more high mem tests - Wav2Vec2 Hubert | # What does this PR do?
In #21643 there's some tests which I forgot to skip e.g. for Wav2Vec2 I skipped the high memory tests for `TFWav2Vec2ModelTest` but didn't added them to the other class `TFWav2Vec2RobustModelTest`. This means some tests on circleci still fail with processes crashing - cf [this run](https://app.circleci.com/pipelines/github/huggingface/transformers/57822/workflows/479318b1-d4b2-4f72-8d98-7b23dde142e8/jobs/702252). This PR adds a `unittest.skip` decorator to the missed tests.
On this run, `tests/models/wav2vec2_phoneme/test_tokenization_wav2vec2_phoneme.py::Wav2Vec2PhonemeCTCTokenizerTest::test_number_of_added_tokens` also failed with a crashing process. After inspecting, the following tests were also run on this process (`gw2`):
```
tests/models/wav2vec2_phoneme/test_tokenization_wav2vec2_phoneme.py
TFBartModelTest.test_keras_fit
TFBartModelTest.test_keras_save_load
TFGPTJModelTest.test_keras_save_load
TFMBartModelTest.test_keras_save_load
TFOpenAIGPTModelTest.test_keras_save_load
TFRemBertModelTest.test_keras_save_load
TFSwinModelTest.test_keras_save_load
Wav2Vec2PhonemeCTCTokenizerTest.test_add_tokens_tokenizer
Wav2Vec2PhonemeCTCTokenizerTest.test_added_token_are_matched_longest_first
Wav2Vec2PhonemeCTCTokenizerTest.test_batch_encode_plus_batch_sequence_length
Wav2Vec2PhonemeCTCTokenizerTest.test_batch_encode_plus_overflowing_tokens
Wav2Vec2PhonemeCTCTokenizerTest.test_batch_encode_plus_padding
Wav2Vec2PhonemeCTCTokenizerTest.test_call
Wav2Vec2PhonemeCTCTokenizerTest.test_case_insensitive
Wav2Vec2PhonemeCTCTokenizerTest.test_change_phonemizer_lang
Wav2Vec2PhonemeCTCTokenizerTest.test_encode
Wav2Vec2PhonemeCTCTokenizerTest.test_encode_decode
Wav2Vec2PhonemeCTCTokenizerTest.test_encode_decode_with_del
Wav2Vec2PhonemeCTCTokenizerTest.test_encode_decode_with_del_filter
Wav2Vec2PhonemeCTCTokenizerTest.test_encode_plus_with_padding
Wav2Vec2PhonemeCTCTokenizerTest.test_encode_with_del
```
None of these models - PyTorch Wav2Vec2 Phoneme, TFOpenAIGPT, TFMBart, TFGPTJ, TFRemBert, TFSwin should have been affected by the PR #21502 which has cause the recent memory issues with Hubert and Wav2Vec2. I'm therefore unsure if resolving upstream issues with wav2vec2 and hubert will resolve this unfortunately.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 02-15-2023 15:39:28 | 02-15-2023 15:39:28 | cc @ydshieh <|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,646 | closed | Fix dynamic module import error | # What does this PR do?
### Issue
We have a failing test:
```bash
FAILED tests/models/auto/test_modeling_auto.py::AutoModelTest::test_from_pretrained_dynamic_model_distant
ModuleNotFoundError: No module named 'transformers_modules.local.modeling'
```
The full trace is given at the end.
After a long debug process, it turns out that, when reloading from the saved model
```python
model = AutoModel.from_pretrained("hf-internal-testing/test_dynamic_model", trust_remote_code=True)
with tempfile.TemporaryDirectory() as tmp_dir:
model.save_pretrained(tmp_dir)
reloaded_model = AutoModel.from_pretrained(tmp_dir, trust_remote_code=True)
```
if `configuration.py` appears in the dynamic module directory (here `transformers_modules/local`), it sometimes interferes with the import of `transformers_modules.local.modeling`. I have no clear explanation for this situation, however.
### What this PR fixes
This PR therefore tries to avoid the appearance of other module files while the code imports a specific module file, around this line
```
def get_class_in_module():
...
module = importlib.import_module(module_path)
...
```
### Result
Running the reproduction code snippet (provided in the comment below) in a loop 300 times:
- with this PR: this issue doesn't appear, [job run](https://app.circleci.com/pipelines/github/huggingface/transformers/57824/workflows/fb3b74ed-9231-41f8-80c9-7d43fb871a35/jobs/702257/steps)
- without the fix: this issue appears with 50% probability [job run](https://app.circleci.com/pipelines/github/huggingface/transformers/57826/workflows/1dee1636-9aa3-433c-a044-0c4c4c9dcbff/jobs/702277/steps)
#### Full traceback
```bash
Traceback (most recent call last):
...
reloaded_model = AutoModel.from_pretrained(tmp_dir, trust_remote_code=True)
File "/home/circleci/.pyenv/versions/3.7.12/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 463, in from_pretrained
pretrained_model_name_or_path, module_file + ".py", class_name, **hub_kwargs, **kwargs
File "/home/circleci/.pyenv/versions/3.7.12/lib/python3.7/site-packages/transformers/dynamic_module_utils.py", line 367, in get_class_from_dynamic_module
return get_class_in_module(class_name, final_module.replace(".py", ""))
File "/home/circleci/.pyenv/versions/3.7.12/lib/python3.7/site-packages/transformers/dynamic_module_utils.py", line 147, in get_class_in_module
module = importlib.import_module(module_path)
File "/home/circleci/.pyenv/versions/3.7.12/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'transformers_modules.local.modeling'
``` | 02-15-2023 15:08:50 | 02-15-2023 15:08:50 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Run the following commpand
```python
python run_debug.py
```
with the 2 files
### run_debug.py
```python
import os
for i in range(300):
print(i)
with open("output.txt", "a+") as fp:
fp.write(str(i) + "\n")
os.system("python3 debug.py")
```
(We need to run the debugging code `foo` (contained in the file `debug.py`) in a different process each time, instead of running the script `debug.py` with a for loop defined inside it - otherwise everything would run in the same process.)
### debug.py
```python
import time, traceback, tempfile, os
from transformers.utils import HF_MODULES_CACHE
def foo():
from transformers import AutoModel
model = AutoModel.from_pretrained("hf-internal-testing/test_dynamic_model", trust_remote_code=True)
# Test model can be reloaded.
with tempfile.TemporaryDirectory() as tmp_dir:
model.save_pretrained(tmp_dir)
try:
reloaded_model = AutoModel.from_pretrained(tmp_dir, trust_remote_code=True)
except Exception as e:
print(e)
with open("output.txt", "a+") as fp:
fp.write(f"{traceback.format_exc()}" + "\n")
if __name__ == "__main__":
timeout = os.environ.get("PYTEST_TIMEOUT", 10)
timeout = int(timeout)
for i in range(1):
time.sleep(1)
print(i)
with open("output.txt", "a+") as fp:
fp.write(str(i) + "\n")
try:
os.system(f'rm -rf "{HF_MODULES_CACHE}"')
except:
pass
foo()
print("=" * 80)
with open("output.txt", "a+") as fp:
fp.write("=" * 80 + "\n")
```<|||||>Thanks for working on this! I was going to have a look at it when back from vacation but if you beat me to it ;-)
My solution would have been to change the way the local module works: for now I dumb every file there without structure, I wanted to add a folder per model (so given by `pretrained_model_name_or_path`) which would also fix this issue I believe.<|||||>@sgugger I am open to explore further, but I have a bit doubt regarding
> I wanted to add a folder per model (so given by `pretrained_model_name_or_path`) which would also fix this issue I believe.
While I am debugging (this single test), the only model appears
```
transformers_modules/hf-internal-testing/test_dynamic_model/12345678901234567890.../
transformers_modules/local/
```
so I don't see multiple models sharing the same folder, but the issue still occurs. So I am not sure how to proceed with the solution you mentioned above.<|||||>Hmm, this seems to affect other related tests as well. I will have to take a look 😭 <|||||>I believe the conflict is between two files in `local` being written/deleted concurrently (but I might be wrong), hence making sure we have things like
```
transformers_modules/local/123456...
transformers_modules/local/777888...
```
might fix the issue.<|||||>> I believe the conflict is between two files in local being written/deleted concurrently
On (circleci) CI, we have `pytest -n 8`, which might cause the situation you mentioned. But I am debugging by running the following function in a loop (and the issue still appears), so I don't think the issue comes from concurrent read/write/delete operations:
```python
def foo():
from transformers import AutoModel
model = AutoModel.from_pretrained("hf-internal-testing/test_dynamic_model", trust_remote_code=True)
# Test model can be reloaded.
with tempfile.TemporaryDirectory() as tmp_dir:
model.save_pretrained(tmp_dir)
reloaded_model = AutoModel.from_pretrained(tmp_dir, trust_remote_code=True)
```
I could explore anyway - but maybe let me finalize the current PR (make CI green) first <|||||>Finally get it:
- we don't need to remove other files (config, `__init__.py`) or the `__pycache__` folder
- the point is: we need to remove the `module_file_name` in a subprocess, then copy it back (see the sketch after this list)
- os.system("rm -rf ...") works, as it runs in another process
- os.system(f"python3 -c '{cmd}'"): same, but without a Linux-specific command --> the way to go
- os.remove(...): not working! I could not explain why (I don't know the reason behind it) 😢
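A rough sketch of that idea (the path below is a placeholder; this is an illustration, not the code in the PR):
```python
import os
import shutil
import sys
import tempfile

module_file_path = "/path/to/transformers_modules/local/modeling.py"  # placeholder

# Back the file up, delete it from a child process, then copy it back.
backup_path = shutil.copy(module_file_path, tempfile.mkdtemp())
cmd = f"import os; os.remove(r'{module_file_path}')"
os.system(f'{sys.executable} -c "{cmd}"')  # removal happens in a separate process
shutil.copy(backup_path, module_file_path)
```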
<|||||>Don't know why we get an error where a module is not a python file, but a package. See below.
Can't reproduce so far, but the fix works for the auto model dynamic loading test.
```bash
FAILED tests/models/auto/test_image_processing_auto.py::AutoImageProcessorTest::test_from_pretrained_dynamic_image_processor
- ModuleNotFoundError: No module named 'transformers_modules.local__tmp_tmpkcj_lb5j'
```<|||||>This PR is ready for review.
There is one failure thtat I can't reproduce with the same code snippet. See [this comment](https://github.com/huggingface/transformers/pull/21646#issuecomment-1433336303). It seems this happens much rarely. And probably we can investigate it if it happens again.
<|||||>Thanks for investigating so deeply this issue! |
transformers | 21,645 | closed | load_module will be removed in Python 3.12 | Several of our scripts call `spec.loader.load_module()`. Although this mostly affects standalone utils scripts, it's also called at the top level of `processing_utils.py`, which will be executed by all `Processor` classes.
`load_module()` has been deprecated for a while and will be fully deleted in Python 3.12, which is entering beta soon. We need to replace this code or the library will not be usable in Py3.12.
I can investigate and try to find a suitable replacement when I have time, but if anyone is more familiar with that code and can think of an obvious replacement that achieves the same goal, let me know!
cc @sgugger | 02-15-2023 14:59:30 | 02-15-2023 14:59:30 | Thanks for pointing this out! Will have a look. |
transformers | 21,644 | closed | Mask2Former - ValueError: cost matrix is infeasible | ### System Info
```
- `transformers` version: 4.26.0
- Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.27
- Python version: 3.10.9
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: 4 x RTX 2080 Ti
- Using distributed or parallel set-up in script?: Single node DistributedDataParallel setup
```
### Who can help?
@amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am fine-tuning Mask2Former for a semantic segmentation task. I sometimes get the error `ValueError: cost matrix is infeasible`. With the learning rate of `1e-4` that I use, it usually takes many thousands of batches with the loss happily dropping to get this error. From my experience, the higher the learning rate, the more often it will happen. The error happens during the forward pass, which roughly looks like this:
```py
image_processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-small-ade-semantic")
inputs = image_processor.preprocess(batch, mask_labels, return_tensors="pt")
batch, mask, class_labels = inputs["pixel_values"], inputs["mask_labels"], inputs["class_labels"]
batch = batch.to(device)
mask = [x.to(device) for x in mask]
class_labels = [x.to(device) for x in class_labels]
with torch.cuda.amp.autocast():
out = model(
pixel_values = batch,
mask_labels = mask,
class_labels = class_labels,
)
```
Unfortunately, I don't have the time to set up a full minimally reproducible example, but I have tracked the error down to `cost_matrix` containing `torch.inf` [here](https://github.com/huggingface/transformers/blob/48327c57182fdade7f7797d1eaad2d166de5c55b/src/transformers/models/mask2former/modeling_mask2former.py#L491) (or see stack trace below).
Full stack trace
```
Traceback (most recent call last):
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
fn(i, *args)
File "/path/to/my/code/train/run.py", line 36, in _train_wrapper
train(rank, world_size, job, dgpm)
File "/path/to/my/code/train/train.py", line 219, in train
out = model(
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1040, in forward
output = self._run_ddp_forward(*inputs, **kwargs)
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1000, in _run_ddp_forward
return module_to_run(*inputs[0], **kwargs[0])
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/path/to/my/code/model/mask2former.py", line 35, in forward
return self.model(pixel_values=pixel_values, mask_labels=mask_labels, class_labels=class_labels)
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/transformers/models/mask2former/modeling_mask2former.py", line 2464, in forward
loss_dict = self.get_loss_dict(
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/transformers/models/mask2former/modeling_mask2former.py", line 2351, in get_loss_dict
loss_dict: Dict[str, Tensor] = self.criterion(
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/transformers/models/mask2former/modeling_mask2former.py", line 792, in forward
indices = self.matcher(masks_queries_logits, class_queries_logits, mask_labels, class_labels)
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/transformers/models/mask2former/modeling_mask2former.py", line 496, in forward
assigned_indices: Tuple[np.array] = linear_sum_assignment(cost_matrix.cpu())
ValueError: cost matrix is infeasible
```
### Expected behavior
From what I can tell, this is not expected behavior and is caused by how `scipy.optimize.linear_sum_assignment` handles infinite values. Replacing these with very large numbers seems to fix the issue, as proposed [here](https://stackoverflow.com/questions/42035999/why-does-linear-sum-assignment-in-scipy-optimize-never-return-if-one-of-the-assi) (though for a slightly different issue). This is achieved by adding the following two lines above the call to `linear_sum_assignment` in the line linked earlier.
```py
cost_matrix = torch.minimum(cost_matrix, torch.tensor(1e10))
cost_matrix = torch.maximum(cost_matrix, torch.tensor(-1e10))
```
I have found that the error becomes more common as the learning rate is increased, so it could be related to diverging loss. However, I first discovered the error using the same learning rate as in the Mask2Former paper, `1e-4`, so I would not expect this to be too high, especially since it had happily chugged along for over 11k batches with dropping or steady loss before throwing the error.
If this is expected behavior, I at least think the error message should be improved.
Edit: I have fixed this locally by overwriting the `Mask2FormerHungarianMatcher` class with the aforementioned fix. I have not seen any diverging loss since then over many runs of thousands of epochs. | 02-15-2023 14:10:38 | 02-15-2023 14:10:38 | Cc @alaradirik <|||||>Hi @asgerius, thanks for opening the issue, I'm looking into this and will try to replicate the error first<|||||>Friendly ping @alaradirik :) <|||||>Hi @asgerius, I tried fine-tuning Mask2Former on the semantic segmentation subset of the Scene Parsing dataset and couldn't replicate the issue.
Is it possible that you are using a buggy version of scipy (the bug in scipy.optimize.linear_sum_assignment is fixed in this [PR](https://github.com/scipy/scipy/pull/7031)) or there are issues with the data preprocessing?
You can refer to [Fine-tuning MaskFormer](https://pyimagesearch.com/2023/03/13/train-a-maskformer-segmentation-model-with-hugging-face-transformers/) blog post (exactly the same steps for Mask2Former) on PyImageSearch. I can take another look if the issue still persists but I'd need a minimal reproducible example to pinpoint the exact issue.<|||||>Hi
I put together a small example (see attached), and the results are somewhat contradictory to what I wrote in the original post. The error does indeed seem to be caused by diverging loss, which often recovers after a few batches. If I implement the described fix, the code does not crash, but instead just produces a high loss. However, I have never seen this behavior in my actual project, where the loss remains well-behaved when the fix is implemented. The only major difference is the data source, as the data in this example is simply random noise. I should also mention that my use case is two-class semantic segmentation.
Further, amp seems to be another important factor in addition to the learning rate. All my trainings have been run with it enabled. If disable in the example (controlled by the `use_amp` variable), the error becomes significantly harder to reproduce, indicating to me that the error is caused by overflowing floats.
To run the code, just put the files in the same directory and run `python example.py`. My versions of the dependencies are `torch==1.13.1 numpy==1.24.2 scipy==1.10.0 transformers==4.26.0`. With `use_amp = True` and `lr = 1e-4`, I usually get the error within the first 10-20 batches.
I changed the file types to .txt, as github does not allow .py and .json as attachments, so you'll have to change them back.
[example.txt](https://github.com/huggingface/transformers/files/11040427/example.txt)
[facebook.mask2former-swin-small-ade-semantic.config.txt](https://github.com/huggingface/transformers/files/11040428/facebook.mask2former-swin-small-ade-semantic.config.txt)
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@alaradirik I am facing the same issue, where I am using MaskFormer
```
914 cost_matrix = self.cost_mask * cost_mask + self.cost_class * cost_class + self.cost_dice * cost_dice
915 # do the assigmented using the hungarian algorithm in scipy
--> 916 assigned_indices: Tuple[np.array] = linear_sum_assignment(cost_matrix.cpu())
917 indices.append(assigned_indices)
919 # It could be stacked in one tensor
ValueError: cost matrix is infeasible
```
I was following @NielsRogge tutorial for fine tuning on semantic masks where I only changed training to :
```
with torch.cuda.amp.autocast():
outputs = model(
pixel_values=data["pixel_values"].to(device),
mask_labels=[labels.to(device) for labels in data["mask_labels"]],
class_labels=[labels.to(device) for labels in data["class_labels"]],
)
loss = outputs.loss
optimizer.zero_grad()
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```
and my ground truths are only 0s and 1s i.e. binary masks
@alaradirik I don't know how to replicate the issue as it don't know the real cause just occurs sometimes.<|||||>Small update: I have also seen the error during inference (amp enabled) when running on my trained model. However, this seems to be incredibly rare, as I have only ever experienced it once. I have not seen it after I implemented the fix described above in the inference code. |
transformers | 21,643 | closed | Skip wav2vec2 hubert high mem tests | # What does this PR do?
Skips some more troublesome Hubert and Wav2Vec2 tests. `dataset_conversion` for Hubert is now occasionally causing errors - see [this comment](https://github.com/huggingface/transformers/pull/20725#issuecomment-1430442813) until resolved. Skipping all tests that I've seen hit high memory and have changed because of #21502.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 02-15-2023 12:58:46 | 02-15-2023 12:58:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,642 | closed | ONNX export fails for TFSegformerForSemanticSegmentation | ### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
Additionally, the versions of some relevant packages are
```
transformers @ git+https://github.com/huggingface/transformers@762dda44deed29baab049aac5324b49f134e7536
onnx==1.13.0
onnxruntime==1.14.0
tf2onnx==1.13.0
```
### Who can help?
@gante, @Rocketknight1
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Run this notebook (https://github.com/deep-diver/segformer-tf-transformers/blob/main/notebooks/TFSegFormer_ONNX.ipynb) in Colab. ONNX export apparently worked there as of July 25 2022, but it fails now.
The error message is
```
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.0.0/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.0.1/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.0.2/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.0/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.1/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.2/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.3/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.4/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.5/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.0/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.1/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.2/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.3/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.4/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.5/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.6/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.7/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.8/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.9/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.10/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.11/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.12/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.13/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.14/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.15/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.16/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.17/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.18/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.19/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.20/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.21/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.22/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.23/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.24/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.25/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.26/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.27/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.28/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.29/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.30/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.31/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.32/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.33/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.34/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.35/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.36/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.37/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.38/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.39/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.3.0/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.3.1/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.3.2/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Unsupported ops: Counter({'PartitionedCall': 52})
```
### Expected behavior
ONNX export of TFSegformerForSemanticSegmentation works. | 02-15-2023 12:47:23 | 02-15-2023 12:47:23 | cc @sayakpaul, any idea what the cause might be?<|||||>Maybe this issue is better suited for https://github.com/onnx/tensorflow-onnx
Could you try downgrading the `tf2onnx` version to 1.11.1?<|||||>@sayakpaul
>Could you try downgrading the tf2onnx version to 1.11.1?
I tried this, unfortunately there is still an error that `PartitionedCall` is not supported.
I also tried to downgrade tensorflow and ONNX export worked with `tensorflow==2.8.4`, here is an example: https://colab.research.google.com/gist/OutSorcerer/c8cd27a455091b57d9ea90ab3450035e/tfsegformer_onnx.ipynb
>Maybe this issue is better suited for https://github.com/onnx/tensorflow-onnx
There are already issues there about `PartitionedCall` support e.g. https://github.com/onnx/tensorflow-onnx/issues/1864.
However, since export works with a previous version of TensorFlow, it seems that `PartitionedCall` operation is not essential for a model to work. This is a low-level operation automatically added by TensorFlow and another workaround with new versions of TensorFlow could be to disable its insertion into an operation graph, but I was not able to quickly find a way to do it.
Also, regardless of the error messages printed, an ONNX file is still generated (which obviously fails at inference time), so yet another workaround could be to remove the `PartitionedCall` nodes from the ONNX file.
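For reference, a minimal sketch of the export path being discussed; the checkpoint, input shape and opset below are assumptions, not values taken from the original notebook.
```python
# Sketch of exporting a TF Segformer checkpoint with tf2onnx (assumed checkpoint/opset).
import tensorflow as tf
import tf2onnx
from transformers import TFSegformerForSemanticSegmentation

model = TFSegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
spec = [tf.TensorSpec([1, 3, 512, 512], tf.float32, name="pixel_values")]
# tf2onnx traces the Keras model and writes the ONNX graph to disk
onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature=spec, opset=13, output_path="segformer.onnx")
```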
<|||||>Thanks for investigating. With your workaround, does the model work during inference as expected?
If so, I guess we can safely close the issue here? <|||||>>Thanks for investigating. With your workaround, does the model work during inference as expected?
Yes, I re-ran the cells that compare the outputs of the TF model and the ONNX model, and the outputs match.
>If so, I guess we can safely close the issue here?
Well, from my perspective ideally one of workarounds would be applied in `transformers` and `TFSegformerForSemanticSegmentation` would work with the most recent releases of TF and other packages, but I also understand that eventually `tf2onnx` developers should do something with `PartitionedCall` export and this issue would be solved too.
<|||||>In fact, `PartitionedCall` may not be the root cause of the problem.
I looked at the ONNX file produced with TF 2.11.0 [in the notebook above](https://github.com/deep-diver/segformer-tf-transformers/blob/main/notebooks/TFSegFormer_ONNX.ipynb) by doing
```
onnx_model = onnx.load(onnx_model_path)
with open("model.txt", "w") as f:
f.write(str(onnx_model))
```
It has the following node
```
node {
input: "tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.0/mlp/dwconv/Reshape:0"
input: "tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.0/mlp/dwconv/dwconv/ReadVariableOp:0"
output: "tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.0/mlp/dwconv/dwconv/PartitionedCall:0"
name: "tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.0/mlp/dwconv/dwconv/PartitionedCall"
op_type: "PartitionedCall"
...
attribute {
name: "f"
s: "__inference__jit_compiled_convolution_op_6171"
type: STRING
}
}
```
The issue is that the node `__inference__jit_compiled_convolution_op_6171` is referenced, but its definition is nowhere to be found. So it is likely that tf2onnx failed to convert that operation in the first place.
There was a similar issue, where one of tf2onnx contributors [said](https://github.com/onnx/tensorflow-onnx/issues/1093#issuecomment-707239545):
>StatefulPartitionedCall is an op that does a simple function call in TF. Our converter doesn't normally have to deal with it since the optimizer we run before conversion automatically inlines most function calls. If it shows up in the optimized graph there is usually some reason that will prevent conversion from working.
I created an issue with the details above in tf2onnx GitHub: https://github.com/onnx/tensorflow-onnx/issues/2127
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,641 | closed | Confusing documentation in T5 | ### System Info
latest
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Not sure if this is a bug exactly, but the way the documentation reads for T5 doesn't seem correct in the context of Flan-T5. Specifically, the configuration parameter `d_kv` it states:
```
d_kv (int, optional, defaults to 64) — Size of the key, query, value projections per attention head. d_kv has to be equal to d_model // num_heads.
```
However, if you look at `flan-t5-small`, d_kv is 64 while the hidden size is 512 and the number of heads is 6, which obviously doesn't match the statement in the docs.
Is the T5 behavior correct as is? Are the docs just wrong?
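For reference, a quick way to check the flan-t5-small values mentioned above (minimal sketch):
```python
# Quick check of the flan-t5-small configuration values.
from transformers import T5Config

config = T5Config.from_pretrained("google/flan-t5-small")
print(config.d_model, config.num_heads, config.d_kv)  # 512, 6, 64 -> d_kv != d_model // num_heads
```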
### Expected behavior
Looking at the implementation of T5 it seems generic w.r.t to `d_kv`, `num_heads`, and `d_model` and `inner_dim` is actually the size of the k/v/q projection | 02-15-2023 12:43:40 | 02-15-2023 12:43:40 | cc @ArthurZucker and @younesbelkada <|||||>The T5 behaviour is correct, and as pointed out the doc is probably not! I'll open a PR to fix this ๐๐ป thanks |
transformers | 21,640 | closed | [WIP] Move X-MOD models to facebook organization | # What does this PR do?
As discussed in https://github.com/huggingface/transformers/pull/20939, the new models https://huggingface.co/jvamvas/xmod-base etc. should be moved to the [facebook](https://huggingface.co/facebook) organization.
This PR changes the hardcoded model names in the code.
Next steps:
- [ ] Someone please add me to the facebook org
- [ ] I move the models
- [ ] PR can be merged
- [ ] I leave the facebook org
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/huggingface/transformers/pull/20939
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
| 02-15-2023 12:24:47 | 02-15-2023 12:24:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,639 | closed | DataCollatorForTokenClassification pads labels incorrectly for LukeModel | ### System Info
- `transformers` version: 4.26.0
- Platform: Linux-4.18.0-348.12.2.el8_5.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am using a LUKE based workflow with my own text dataset. This works perfectly fine using a standard workflow (e.g. the pretrained luke-large-finetuned-conll-2003 LukeTokenizer with padding and truncation in combination with a standard Trainer instance), but when trying to implement dynamic padding strange behavior was observed. Still using the same tokenizer, but now only with truncation, and then using the DataCollatorForTokenClassification to pad batches during Trainer.train(), the batch size was reportedly wrong (ValueError: Expected input batch_size (30) to match target batch_size (46)).
When comparing the output of the working workflow with padding and truncation in the Tokenizer, it is observed that the labels, entity_ids, entity_position_ids, entity_start_positions, entity_end_positions and entity_attention_mask are of the same size, and the input_ids and attention_mask are also of the same (but possibly different) size. As mentioned, this works.
When checking the output after the DataCollatorForTokenClassification, the labels are the same size as the input_ids and attention_mask. This is incorrect for the selected tokenizer, and makes it such that the mentioned error is given.
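For illustration, a hypothetical sketch of a collator that pads the labels to the entity sequence length instead of the token sequence length; the field names follow the LukeTokenizer outputs, but the helper and padding values are assumptions, not the research-project code.
```python
# Hypothetical collator sketch: labels follow the entity dimension, not the token dimension.
import torch

def pad_to(seqs, max_len, pad_value):
    return torch.tensor([list(s) + [pad_value] * (max_len - len(s)) for s in seqs])

def luke_ner_collator(features, tokenizer, label_pad_id=-100):
    max_entities = max(len(f["entity_ids"]) for f in features)
    # pad the token-level inputs with the tokenizer as usual
    batch = tokenizer.pad(
        [{k: f[k] for k in ("input_ids", "attention_mask")} for f in features],
        return_tensors="pt",
    )
    # pad the entity-level inputs and the labels to the max number of entities in the batch
    batch["entity_ids"] = pad_to([f["entity_ids"] for f in features], max_entities, 0)
    batch["entity_attention_mask"] = pad_to([f["entity_attention_mask"] for f in features], max_entities, 0)
    batch["labels"] = pad_to([f["labels"] for f in features], max_entities, label_pad_id)
    # the remaining entity_* fields (position ids, start/end positions) need analogous padding
    return batch
```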
### Expected behavior
labels are padded according to the entity_ids, not according to the input_ids. | 02-15-2023 12:03:13 | 02-15-2023 12:03:13 | Did you tried the `DataCollatorForLukeTokenClassification` :thinking:
For LUKE and NER there's this demo project available:
https://github.com/huggingface/transformers/tree/main/examples/research_projects/luke<|||||>Thanks, I did not know that exists, as it is not part of the normal datacollators. Good chance that that works.
Any chance this will become part of the transformers library? Instead of just importing it we have to copy over the utils file. <|||||>No this is too specific to be in the library proper.<|||||>The data collator in the script pointed out by @stefan-it works, when removing the original_entity_spans part of that collator that is not used or outputted by the LukeTokenizer. So thanks for that!
Is it too specific? All the other LUKE parts are available as imports; having this one thing only available as a custom code import feels like a lost opportunity.
Anyways, thanks for the quick help!<|||||>Keep in mind that the Transformers library is primarily a library of models, not data collators ;-)<|||||>Good point, you tend to forget that when the whole solution works so seamlessly :) I guess I was spoiled ;) |
transformers | 21,638 | open | CLIP image processor fails when resizing a 1x1 image | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.12
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following:
```python
from PIL import Image
import requests
from transformers import AutoProcessor, CLIPModel
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image = image.resize((1,1))
print(image.mode, image.size)
inputs = processor(
text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
```
The issue appears to be caused by `infer_channel_dimension_format(image)`: for a numpy array of shape 1x1x3 corresponding to a 1x1 RGB image, it returns `<ChannelDimension.FIRST: 'channels_first'>`, which is incorrect in this case.
Gives the error:
```
Traceback (most recent call last):
File "hf_bug.py", line 15, in <module>
text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
File "/some/path/.venv/lib/python3.7/site-packages/transformers/models/clip/processing_clip.py", line 102, in __call__
image_features = self.image_processor(images, return_tensors=return_tensors, **kwargs)
File "/some/path/.venv/lib/python3.7/site-packages/transformers/image_processing_utils.py", line 446, in __call__
return self.preprocess(images, **kwargs)
File "/some/path/.venv/lib/python3.7/site-packages/transformers/models/clip/image_processing_clip.py", line 327, in preprocess
images = [self.normalize(image=image, mean=image_mean, std=image_std) for image in images]
File "/some/path/.venv/lib/python3.7/site-packages/transformers/models/clip/image_processing_clip.py", line 327, in <listcomp>
images = [self.normalize(image=image, mean=image_mean, std=image_std) for image in images]
File "/some/path/.venv/lib/python3.7/site-packages/transformers/models/clip/image_processing_clip.py", line 211, in normalize
return normalize(image, mean=mean, std=std, data_format=data_format, **kwargs)
File "/some/path/.venv/lib/python3.7/site-packages/transformers/image_transforms.py", line 334, in normalize
raise ValueError(f"mean must have {num_channels} elements if it is an iterable, got {len(mean)}")
ValueError: mean must have 1 elements if it is an iterable, got 3
```
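A minimal sketch reproducing the inference step mentioned above (it uses an internal utility, so the import path may change between versions):
```python
# Reproduces the ambiguous channel-dimension inference on a 1x1 RGB image.
import numpy as np
from transformers.image_utils import infer_channel_dimension_format

image = np.zeros((1, 1, 3))  # a 1x1 RGB image in height-width-channel layout
print(infer_channel_dimension_format(image))  # ChannelDimension.FIRST, i.e. interpreted as channels-first
```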
### Expected behavior
the 1x1 input image should be resized to 224x224 | 02-15-2023 10:38:35 | 02-15-2023 10:38:35 | Thanks for raising this issue @justinpinkney and for the detailed snippet and trackback!
Indeed, you're right, the issue is arising from trying to infer the image channel dimension. As it's possible to have images with a single channel, and images with 3 channels, an image with shape `(1, 1, 3)` could be either a 1x3 single channel image or 1x1 3 channel image. This ambiguity in dimensions causes many issues and it's one that I'm currently trying to address.
Depending on the input data format you're feeding to the image processor (torch/pil/tf/np/jax and batched/single image/list of images), the fastest way around this would be tiling the pixels to create a compatible shape e.g. 2x2x3 image, as this will result in the same image after resizing as the original 1x1. However, this is quite hacky and the bug will still persist in cases when the dimensions cannot be confidently inferred e.g. a 3x3x3 image.
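A minimal sketch of the tiling workaround described above, assuming a numpy height-width-channel array:
```python
# Tile the 1x1 image to 2x2 so the channel dimension can be inferred unambiguously.
import numpy as np

image = np.random.rand(1, 1, 3)    # ambiguous: 1x1 RGB or 1x3 single-channel
tiled = np.tile(image, (2, 2, 1))  # 2x2x3
print(tiled.shape)                 # (2, 2, 3); resizing this gives the same result as the 1x1 original
```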
I'll make sure to keep this issue updated with changes to the code to address this. <|||||>I was hoping I could specify the data format using the `data_format` argument, but that turned out to be just for the output images, not specifying the inputs. In my case these 1xn and 1x1 images were just bad samples, so I could filter them out in the data loading pipeline.
Thanks for the quick response though!<|||||>Yes, at the moment it just controls the output format. I think being able to specify the input data format is a good solution however! I'll draft something up :) |
transformers | 21,637 | closed | Fix Blip-2 CI again | # What does this PR do?
The fix added in #21566 have to be applied to a later merged commit #21624 | 02-15-2023 09:23:59 | 02-15-2023 09:23:59 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@gante Don't worry. You added 2 new tests, and we just need to use FP16 (for those 2 new tests) to avoid GPU OOM. The original 2 tests are not undo by your PR.<|||||>Oh, I see! haha that makes more sense now :) |
transformers | 21,636 | closed | Pass parent exception as context exception to provide clearer stack trace | # What does this PR do?
Passes the parent exception as the context exception so that it is clearer what the original cause of the exception was. Currently the traceback shows the confusing message: "During handling of the above exception, another exception occurred:"
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
@ArthurZucker small improvement
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-15-2023 08:20:41 | 02-15-2023 08:20:41 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,635 | closed | How to finetune mt0-xxl-mt(13B parameters) seq2seq_qa with deepspeed | ### System Info
```shell
I tried to finetune mt0-xxl-mt with the script examples/pytorch/question-answering/run_qa_seq2seq_qa.py,the machine has 8 x V100(32GB) GPU and 250GB CPU memory, but failed with OOM. Anyone can help me?
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
this is the command:
`deepspeed --num_gpus 8 run_seq2seq_qa.py --model_name_or_path bigscience/mt0-xxl-mt --output_dir ./output --dataset_name squad_v2 --context_column context --question_column question --answer_column answers
--do_train --auto_find_batch_size --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 512 --deepspeed ./ds_config.json`
this is the ds_config.json:
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"sub_group_size": 1e9,
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
### Expected behavior
```shell
Time to load utils op: 0.0003857612609863281 seconds
[2023-02-15 11:00:40,462] [INFO] [utils.py:831:see_memory_usage] DeepSpeedZeRoOffload initialize [begin]
[2023-02-15 11:00:40,463] [INFO] [utils.py:832:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2023-02-15 11:00:40,463] [INFO] [utils.py:840:see_memory_usage] CPU Virtual Memory: used = 226.3 GB, percent = 90.0%
Parameter Offload: Total persistent parameters: 503808 in 124 params
[2023-02-15 11:00:40,595] [INFO] [utils.py:831:see_memory_usage] DeepSpeedZeRoOffload initialize [end]
[2023-02-15 11:00:40,595] [INFO] [utils.py:832:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2023-02-15 11:00:40,596] [INFO] [utils.py:840:see_memory_usage] CPU Virtual Memory: used = 226.32 GB, percent = 90.0%
[2023-02-15 11:00:40,700] [INFO] [utils.py:831:see_memory_usage] Before creating fp16 partitions
[2023-02-15 11:00:40,701] [INFO] [utils.py:832:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2023-02-15 11:00:40,702] [INFO] [utils.py:840:see_memory_usage] CPU Virtual Memory: used = 226.33 GB, percent = 90.0%
[2023-02-15 11:00:43,709] [INFO] [utils.py:831:see_memory_usage] After creating fp16 partitions: 12
[2023-02-15 11:00:43,710] [INFO] [utils.py:832:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2023-02-15 11:00:43,711] [INFO] [utils.py:840:see_memory_usage] CPU Virtual Memory: used = 226.3 GB, percent = 90.0%
[2023-02-15 11:00:43,807] [INFO] [utils.py:831:see_memory_usage] Before creating fp32 partitions
[2023-02-15 11:00:43,807] [INFO] [utils.py:832:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2023-02-15 11:00:43,808] [INFO] [utils.py:840:see_memory_usage] CPU Virtual Memory: used = 226.32 GB, percent = 90.0%
Traceback (most recent call last):
File "/data/yckj1358/projects/nlg/jobs/mt0-xxl-mt/../..//tools/hf_run_seq2seq_qa.py", line 767, in <module>
main()
File "/data/yckj1358/projects/nlg/jobs/mt0-xxl-mt/../..//tools/hf_run_seq2seq_qa.py", line 703, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/data/yckj1358/.virtualenvs/transformers-pytorch-gpu/lib/python3.9/site-packages/transformers-4.27.0.dev0-py3.9.egg/transformers/trainer.py", line 1571, in train
return inner_training_loop(
File "/data/yckj1358/.virtualenvs/transformers-pytorch-gpu/lib/python3.9/site-packages/accelerate/utils/memory.py", line 122, in decorator
raise RuntimeError("No executable batch size found, reached zero.")
RuntimeError: No executable batch size found, reached zero.
[2023-02-15 11:01:24,284] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 115133
```
### Checklist
- [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [X] I checked if a related official extension example runs on my machine. | 02-15-2023 03:03:15 | 02-15-2023 03:03:15 | Please use the [forums](https://discuss.huggingface.co/) to help debug your code as we keep issues for bugs and feature requests only.<|||||>Oh sorry, I will close this issue and move to forums.<|||||>I closed this issue because this is not about bugs and feature requests |
transformers | 21,634 | closed | Remove extra "`max_length` is reached." from InfNaNLogitsProcessor documentation | # What does this PR do?
Remove extra "`max_length` is reached." from InfNaNLogitsProcessor documentation
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 02-14-2023 21:00:38 | 02-14-2023 21:00:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,633 | closed | Add SwinIR for Image Super Resolution | ### Model description
SwinIR: Image Restoration Using Swin Transformer
- This paper presents an Image Super Resolution / Image Restoration model inspired by the SwinTransformer architecture.
- It demonstrates superior performance on various vision tasks including classical/lightweight Image Super Resolution, Image Denoising, and JPEG compression artifact reduction
This issue is primarily focused on the Image Super Resolution model only. I would love to see this model on HuggingFace.
## Contribution: I would love to work on this!
I am new to HuggingFace and open-source in general. I am open to comments/suggestions/feedback.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Paper Link: https://arxiv.org/pdf/2108.10257v1.pdf
Implementation Link: https://github.com/JingyunLiang/SwinIR
Weights Link: https://github.com/JingyunLiang/SwinIR/releases/tag/v0.0 | 02-14-2023 19:23:18 | 02-14-2023 19:23:18 | Just wanna note that Swin2SR is already integrated, which improves upon SwinIR: https://huggingface.co/docs/transformers/main/model_doc/swin2sr<|||||>Ohh okay, thank you, I'll close this issue then.<|||||>Hi @NielsRogge thanks for adding this model to HF.
Do you have plans to add the training process too?
transformers | 21,632 | closed | Fix typo in documentation. | # What does this PR do?
Replaces "the value used to module the next token probabilities" with "the value used to modulate the next token probabilities", which I think is what was originally meant.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 02-14-2023 18:41:38 | 02-14-2023 18:41:38 | |
transformers | 21,631 | closed | [XLN] Fix XLN | # What does this PR do?
[This commit](https://github.com/huggingface/transformers/commit/87e6e4fe5c7e65cb69e70306f22de6daf16b6e14) broke the XLNet docstring explaining what the output of the `create_mask` function should be.
| 02-14-2023 16:53:41 | 02-14-2023 16:53:41 | Still a draft, will be fixing #21626 via a deprecation cycle<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21631). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,630 | closed | Fix generation config for empty state dict | # What does this PR do?
This PR follows up on #21542 which didn't fix the problem for generative models. For those, the generation config can't be generated properly and returns another type of error than the one intercepted in `from_pretrained`. The fix is thus easy.
To make sure to catch the problem on all models, the test added in #21542 is graduated as a common test.
Fixes #21610 | 02-14-2023 15:29:49 | 02-14-2023 15:29:49 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,629 | closed | Update data_collator.py | Fix 10% random token replacement.
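For context, a simplified sketch of the standard 80/10/10 masking scheme used in masked-language-modeling collators; it is illustrative only, not the library's actual implementation, which also handles special tokens and padding.
```python
# 15% of tokens are selected; of those, 80% become [MASK], 10% become a random token, 10% stay unchanged.
import torch

def mask_tokens(inputs, mask_token_id, vocab_size, mlm_probability=0.15):
    labels = inputs.clone()
    masked_indices = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~masked_indices] = -100  # only compute loss on masked tokens

    # 80% of the selected positions are replaced with the mask token
    indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    inputs[indices_replaced] = mask_token_id

    # 10% (half of the remaining 20%) are replaced with a random token
    indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
    inputs[indices_random] = torch.randint(vocab_size, labels.shape, dtype=torch.long)[indices_random]

    # the remaining 10% keep the original token
    return inputs, labels
```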
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-14-2023 15:19:16 | 02-14-2023 15:19:16 | Ah, got it! Thank you!<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21629). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,628 | closed | Donut base-sized model, pre-trained only for a new language tutorial | ### Feature request
I'm trying to pre-train a new Donut model from scratch using Romanian-language documents.
I have about 100k scanned documents and want to create a Romanian pre-trained base model that I can then fine-tune on different document types for parsing and classification.
Can anyone share or create a tutorial on how to generate a new pre-trained Donut model from scratch?
Thanks
### Motivation
Donut only has a few language models
### Your contribution
I can share the pretrained model i create for Romanian language | 02-14-2023 14:59:38 | 02-14-2023 14:59:38 | Those questions are most suited to the [forums](https://discuss.huggingface.co/) where the whole community will be able to help. We keep issues for bugs and feature requests only.<|||||>OK, will do that!
Thanks |
transformers | 21,627 | closed | Error (also in original) model, scaling only q matrix not qk.T dot product (qk.T/sqrt(dim_per_head)) | As per Vaswani et al, 2017 p.4
Shouldn't the scaling be applied as torch.matmul(q, k.transpose(2, 3)) / math.sqrt(dim_per_head), rather than q / math.sqrt(dim_per_head)? (cf. https://arxiv.org/pdf/1912.05372.pdf)
Error was in original FlauBERT repo and effectively scales queries but not keys cf. https://github.com/getalp/Flaubert/pull/45/commits/6d176880ca3a1a8dfa2b76c97030bb51c5e917b8
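For reference, the scaled dot-product attention from Vaswani et al. (2017), where the full dot product is divided by the square root of the per-head dimension:
```math
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
```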
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-14-2023 14:31:59 | 02-14-2023 14:31:59 | cc @ArthurZucker and @younesbelkada
Also note that the same change would need to be applied to XLM.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @younesbelkada ok I'm on it thanks<|||||>@younesbelkada curious `make fixup` executed fine and passing `make repo_consistency` locally but not remotely any clue ?<|||||>This comment might help you: https://github.com/huggingface/transformers/pull/20939#issuecomment-1423974311
Looking closer at your code, maybe a bad rebase happened on the `xlm` file, can you revert the changes there, and just modify the line as you did for flaubert, then run `make fix-copies` ?<|||||>@younesbelkada on a side note, the sqrt could be computed only once at init as self.sqrt_d and also with torch.sqrt() which is about 29x faster than math.sqrt() cf. benchmark https://twitter.com/k_saifullaah/status/1430510295257030658/photo/1<|||||>This is interesting, thanks for sharing!
Feel free to add this change inside [`MultiHeadAttention`](https://github.com/huggingface/transformers/blob/d3b1adf59fff726f1c8f324728e562237f080ce6/src/transformers/models/xlm/modeling_xlm.py#L104) and then apply `make fix-copies` so that users can benefit from interesting speedups as you are suggesting
Otherwise, this can be addressed in a follow up PR too<|||||>Will do next in new PR to ensure consistency with above title |
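A minimal sketch of the side suggestion above, precomputing the scaling factor once at init instead of recomputing the square root on every forward pass; this is illustrative, not the actual FlauBERT/XLM module.
```python
import math
import torch
import torch.nn as nn

class ScaledDotProductScores(nn.Module):
    def __init__(self, dim, n_heads):
        super().__init__()
        # computed once at init, as suggested above
        self.scale = 1.0 / math.sqrt(dim // n_heads)

    def forward(self, q, k):
        # scale the full q.k^T dot product
        return torch.matmul(q, k.transpose(2, 3)) * self.scale
```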
transformers | 21,626 | closed | XLNet fails with attn_type "uni" | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.10.104-linuxkit-aarch64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@thomwolf
### Information
- My own modified scripts
### Tasks
- An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
### Reproduction
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("xlnet-base-cased")
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
# Set attention type
model.transformer.attn_type = "uni"
inputs = tokenizer(["Hello, my dog is cute", "Hello, my dog is cute too"], return_tensors="pt", padding=True)
print(inputs)
outputs = model(**inputs)
```
Error:
```python-traceback
{'input_ids': tensor([[ 5, 17, 11368, 19, 94, 2288, 27, 10920, 4, 3],
[ 17, 11368, 19, 94, 2288, 27, 10920, 269, 4, 3]]), 'token_type_ids': tensor([[3, 0, 0, 0, 0, 0, 0, 0, 0, 2],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 2]]), 'attention_mask': tensor([[0, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
Traceback (most recent call last):
File "xlnet.py", line 70, in <module>
outputs = model(**inputs)
File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/home/vscode/.local/lib/python3.8/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1547, in forward
transformer_outputs = self.transformer(
File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/home/vscode/.local/lib/python3.8/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1161, in forward
attn_mask += data_mask[:, :, :, None]
RuntimeError: output with shape [10, 10, 1, 1] doesn't match the broadcast shape [10, 10, 2, 1]
```
### Expected behavior
Successful forward pass with the appropriate attention masks applied. | 02-14-2023 14:26:07 | 02-14-2023 14:26:07 | cc @ArthurZucker and @younesbelkada <|||||>This is a fairly old model ๐
It does make sense to drop `uni` (first because it is not working and did not bother anyone) but also let's just redirect to the new [TransformerXL](https://huggingface.co/docs/transformers/model_doc/transfo-xl). Thanks for reporting<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,625 | closed | Add OPT resources to the transformers documentation | # What does this PR do?
Fixes #20055 (partially)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
## Who can review?
@stevhliu
Thanks in advance, if I miss anything please let me know :) | 02-14-2023 13:48:57 | 02-14-2023 13:48:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot! I think when I saved the file in my editor it automatically changed the formatting. But anyway, I've reverted the formatting changes ๐ <|||||>Thanks again for the changes, everything looks great! Pinging @sgugger for a final review, and then we can merge ๐ |
transformers | 21,624 | closed | Generate: input expansion for any model input | # What does this PR do?
Fixes #21599
In line with #21603, this PR aims at generalizing `.generate()` for any-to-text models. In particular, it rewrites the function that expands the inputs when `num_beams>1` or `num_return_sequences>1` -- instead of expanding certain keywords within `model_kwargs`, it expands any tensor therein. This assumes that all tensors in `model_kwargs` are per-row inputs, but that seems to be the case so far.
The TF case had a more complex change, as we had two functions to expand the inputs (depending on whether we wanted a new dimension or not). This PR also standardizes that distinction.
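For illustration, a minimal sketch of the generalized expansion described above; it is simplified and not the exact library code, which handles more cases.
```python
# Expand every per-row tensor in model_kwargs by repeating along the batch dimension.
import torch

def expand_inputs_for_generation(expand_size, input_ids=None, **model_kwargs):
    if expand_size == 1:
        return input_ids, model_kwargs
    if input_ids is not None:
        input_ids = input_ids.repeat_interleave(expand_size, dim=0)
    for key, value in model_kwargs.items():
        if isinstance(value, torch.Tensor):
            model_kwargs[key] = value.repeat_interleave(expand_size, dim=0)
    return input_ids, model_kwargs
```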
Slow tests were run for:
- [x] GPT2 (both frameworks)
- [x] T5 (both frameworks)
- [x] BLIP2 (PT) | 02-14-2023 11:21:08 | 02-14-2023 11:21:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,622 | closed | Don't run tests that hit CTC loss calculation | # What does this PR do?
Any test which hits CTC loss calculation - i.e. passes labels into the model's `call` method - results in OOM errors. For these tests, the CTC models are removed to prevent CI from failing.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 02-14-2023 09:38:26 | 02-14-2023 09:38:26 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21622). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,621 | closed | Update document of WhisperDecoderLayer | # What does this PR do?
Fix the documentation of the `hidden_states` and `encoder_hidden_states` inputs in `WhisperDecoderLayer.forward`.
According to the documentation of `WhisperDecoder`, the shape should be `(batch, seq_len, embed_dim)`, not `(seq_len, batch, embed_dim)`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sanchit-gandhi | 02-14-2023 08:53:48 | 02-14-2023 08:53:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Perfect, thanks again! |
transformers | 21,620 | closed | Unable to disable the `do_resize` option in the CLIPImageProcessor | ### System Info
```
- `transformers` version: 4.26.1
- Platform: Linux-3.10.0-1160.53.1.el7.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@amyeroberts @NielsRogge @arthur
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have a situation where I convert my inputs to tensors and resize them before passing them to CLIPImageProcessor. Hence, I want to disable this resize operation inside the CLIPImageProcessor. However, when I pass `False` to the `do_resize` flag, the tensors that it returns are still resized to the default 224x224 size. Here is reproducible code:
```
from transformers import CLIPImageProcessor
import torch
batched_tensor = torch.rand((2,3,512,512))
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
processed_image = processor(list(batched_tensor), return_tensors='pt', padding=True, do_resize=False)
print(processed_image['pixel_values'].shape)
Output:
torch.Size([2, 3, 224, 224])
```
I need to use a `DataLoader` which does not accept variable sizes input (which is the case with my data) and hence I need to resize them before sending it to the CLIPImageProcessor.
### Expected behavior
I would expect the output of the last line to be `torch.Size([2,3,512,512])` | 02-14-2023 08:11:44 | 02-14-2023 08:11:44 | Hi @adhakal224, thanks for raising! The reason the output images are still being returned with 224x224 shape is that there's two operations which modify the images' size: resizing and center cropping. In order for the image to be returned in the original dimension, cropping will also have to be disabled.
```
from transformers import CLIPImageProcessor
import torch
batched_tensor = torch.rand((2,3,512,512))
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
processed_image = processor(
    list(batched_tensor),
    return_tensors='pt',
    padding=True,
    do_resize=False,
    do_center_crop=False
)
print(processed_image.pixel_values.shape)
Output:
torch.Size([2, 3, 512, 512])
```
A few things to note:
* If working from the dev branch, you don't need to convert the input batch of images into a list (this is handled in the image processor)
* You will only be able to return a batch of torch arrays (`return_tensors="pt"`) if all of the input images are of the same dimension - in this case 512x512 - and `do_resize=False` and `do_center_crop=False`.
* You mentioned `"I need to resize them before sending it to the CLIPImageProcessor."`. If you're resizing just before passing into CLIPImageProcessor, and it's a standard resizing operation e.g. [torch's resize ](https://pytorch.org/vision/main/generated/torchvision.transforms.Resize.html) you could keep `do_resize=True` instead. Resizing is the first transformation the image processor so would be equivalent. However, it might be slower going through the image processor as it converts to PIL.Image.Image for resizing. <|||||>Thanks for the reply @amyeroberts. Currently I have a large dataset that is saved in `WebDataset` format. It has a very large volume of images and many of them are of different sizes. When iterating through the webdataset I read the images as numpy arrays. I am using `pytorch_lightning` and so I need to wrap the `WebDataset` with a `DataLoader` (for which I need to resize them all to a fixed size) before sending them to the `trainer`. In my current pipeline, I am using `torch.transform.ToTensor()` and `torch.transform.Resize()` to convert the images to tensor and resize them and then sending them to CLIPImageProcessor. Is there a more efficient way I can be doing this?
Below is the class that creates and returns me the `webdataset`
```
import webdataset as wds
from torchvision import transforms

class MultiData:
    def __init__(self, wds_path):
        self.img_size = 224
        self.wds_path = wds_path
        self.dataset = wds.WebDataset(self.wds_path)
        print('Initializing dataset')

    def get_ds(self):
        self.dataset = self.dataset.shuffle(1000).decode('rgb').to_tuple("groundlevel.jpg", "overhead.jpg", "metadata.json")
        self.dataset = self.dataset.map(self.do_transforms)
        return self.dataset

    def do_transforms(self, sample):
        img, imo, json = sample
        self.transforms_img = transforms.Compose([
            transforms.ToTensor(),
            transforms.Resize(size=(224,224), interpolation=transforms.InterpolationMode.BILINEAR)
        ])
        self.transforms_imo = transforms.Compose([
            transforms.ToTensor()
        ])
        img = self.transforms_img(img)
        imo = self.transforms_imo(imo)
        return img, imo, json
```
Both `img` and `imo` are later passed to the processor as:
`processed_img = self.image_processor(list(img), return_tensors='pt', padding=True).to(self.device)`
`processed_imo = self.image_processor(list(imo), return_tensors='pt', padding=True).to(self.device)`<|||||>In the case of using very large datasets in PyTorch, I would recommend using torchvision transformations for the whole preprocessing pipeline in place of the image processors. The image processors are great for getting started, however they unfortunately aren't fast. We have examples of pipelines using the data stored in image processor and how to integrate them with torchvision e.g. [this one](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py).
In order to replicate the CLIP processing pipeline, I recommend looking at the [image processor code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/image_processing_clip.py) and the [corresponding configuration](https://huggingface.co/openai/clip-vit-base-patch32/blob/main/preprocessor_config.json) you're trying to emulate.
In the example you posted above, it would look something like this:
```
from transformers import AutoImageProcessor
from torchvision import transforms
import webdataset as wds

class MultiData:
    def __init__(self, wds_path, image_size, image_mean, image_std):
        self.wds_path = wds_path
        self.dataset = wds.WebDataset(self.wds_path)
        self.transforms_img = transforms.Compose([
            transforms.Resize(size=image_size, interpolation=transforms.InterpolationMode.BILINEAR),
            transforms.CenterCrop(size=image_size),
            transforms.ToTensor(),
            transforms.Normalize(mean=image_mean, std=image_std),
        ])
        # the overhead (imo) images only need converting to tensors here
        self.transforms_imo = transforms.Compose([transforms.ToTensor()])
        print('Initializing dataset')

    def get_ds(self):
        self.dataset = self.dataset.shuffle(1000).decode('rgb').to_tuple("groundlevel.jpg", "overhead.jpg", "metadata.json")
        self.dataset = self.dataset.map(self.do_transforms)
        return self.dataset

    def do_transforms(self, sample):
        img, imo, json = sample
        img = self.transforms_img(img)
        imo = self.transforms_imo(imo)
        return img, imo, json

image_processor = AutoImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
image_size = image_processor.size["shortest_edge"]
multidata = MultiData(wds_path, image_size, image_processor.image_mean, image_processor.image_std)
```
Additional things to note:
* In your [example above](https://github.com/huggingface/transformers/issues/21620#issuecomment-1429359464), the images are being resized to (224, 224). So even if center cropping was disabled as I suggested earlier, the output images wouldn't be of dimension (512, 512).
* For [torchvision](https://pytorch.org/vision/main/generated/torchvision.transforms.Resize.html), the resulting output size of the image will be different for `Resize(a)` versus `Resize((a, a))`. The image processor emulates the first behaviour `Resize(a)`, where the shortest edge is resized to `a` and the longest edge resized to preserve the aspect ratio.
* For `processed_img = self.image_processor(list(img), return_tensors='pt', padding=True).to(self.device)` - `padding` isn't a defined argument in the `image_process.preprocess` method. As such, it won't do anything and can be removed. <|||||>I'm closing this issue, as `do_resize` was behaving as expected. |
transformers | 21,619 | closed | Fix passing kwargs to TFBertTokenizer | # What does this PR do?
This PR fixes passing `kwargs` when creating a `TFBertTokenizer.from_pretrained()`. Currently, a `kwarg` is passed the following error is raised:
```
>>> from transformers import TFBertTokenizer
>>> tokenizer = TFBertTokenizer.from_pretrained("distilbert-base-cased", do_lower_case=False)
TypeError: transformers.models.bert.tokenization_bert_tf.TFBertTokenizer() got multiple values for keyword argument 'vocab_list'
```
By popping the arguments from `kwargs` we avoid the ambiguity.
@ArthurZucker You might be interested in this PR. Not clear to me which `kwargs` should actually be allowed when calling `from_pretrained()`. For example, `do_lower_case` makes sense but not sure if `vocab_list` or `cls_token_id` should be even allowed?
Thanks!
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-14-2023 07:52:41 | 02-14-2023 07:52:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Also cc @Rocketknight1 |
transformers | 21,618 | closed | what's the format of my own datasets when running language-modeling with gpt2 | ### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-3.10.0-1160.81.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@vanpelt @pvl @arfon @xeb @kashif @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
want to know how to construct my own data,
and does it support chinese words with gpt2,
my dataset is Chinese sentence , the format is txt, and no label
could you pls help me ?
### Expected behavior
must have [CLS] in the beginning of the sentence ?
and what's the meaning of the "==" as follow (from wiki.test.raw) :
= Robert Boulter =
Robert Boulter is an English film , television and theatre actor . He had a guest @-@ starring role on the television series The Bill in 2000 . This was followed by a starring role in the play Herons written by Simon Stephens , which was performed in 2001 at the Royal Court Theatre . He had a guest role in the television series Judge John Deed in 2002 . In 2004 Boulter landed a role as " Craig " in the episode " Teddy 's Story " of the television series The Long Firm ; he starred alongside actors Mark Strong and Derek Jacobi . He was cast in the 2005 theatre productions of the Philip Ridley play Mercury Fur , which was performed at the Drum Theatre in Plymouth and the Menier Chocolate Factory in London . He was directed by John Tiffany and starred alongside Ben Whishaw , Shane Zaza , Harry Kent , Fraser Ayres , Sophie Stanton and Dominic Hall .
In 2006 , Boulter starred alongside Whishaw in the play Citizenship written by Mark Ravenhill . He appeared on a 2006 episode of the television series , Doctors , followed by a role in the 2007 theatre production of How to Curse directed by Josie Rourke . How to Curse was performed at Bush Theatre in the London Borough of Hammersmith and Fulham . Boulter starred in two films in 2008 , Daylight Robbery by filmmaker Paris Leonti , and Donkey Punch directed by Olly Blackburn . In May 2008 , Boulter made a guest appearance on a two @-@ part episode arc of the television series Waking the Dead , followed by an appearance on the television series Survivors in November 2008 . He had a recurring role in ten episodes of the television series Casualty in 2010 , as " Kieron Fletcher " . Boulter starred in the 2011 film Mercenaries directed by Paris Leonti .
= = Career = =
| 02-14-2023 05:38:58 | 02-14-2023 05:38:58 | Please use the [forums](https://discuss.huggingface.co/) for questions like this as we keep the issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,617 | closed | [WHISPER] Unreliable timestamp with whisper for videos under 30 seconds | ### System Info
Hey, I noticed that there's an unreliable timestamp thing happening which whisper through transformers that doesn't show up in original whisper.
In this example:
https://targum.video/v/47160791e0e305ff7f22e84203f1b196
The "people on streches.." was said at the end of the video, but the timestamp was placed around 8 seconds in.
Here's the same video translated with large-2 whisper from OpenAI
https://targum.video/v/8c5e21ff6da8947c02cdb40097eadf50
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Can send the exact link where this is happens over DM, but basically the video from this tweet:
https://twitter.com/wxkaitlin/status/1625326828264071168
Is getting the wrong subtitles (some are missing) + a wrong timestamp
### Expected behavior
Expected the transformers whisper to behave exactly like (but faster) the openAI whisper | 02-14-2023 04:02:40 | 02-14-2023 04:02:40 | Can you provide a reproduction script to make sure we are running with the same parameters? ๐
Also this might ring some bels to @Narsil.
I know we interacted before, but just want to make sure which transformers version you are using and which calls. Then I'll be able to dig ! Thanks for the issue ๐
<|||||>@altryne @ArthurZucker .
While deep diving into whisper, I've notived `openai/whisper` uses timestamp ALL the time, while `transformers` doesn't (you have to ask for timestamps for us to use them).
I have seen BIG discrepancies on some examples, I am guessing because training was somehow biased with timestamps for whisper.
Could that be it ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,616 | closed | Different inference results from local transformer vs inference API | ### System Info
I am getting two slightly different probability values when comparing inference results from the local transformer and inference API on the same sentence. I am wondering why this is happening? It only occurs for some sentences.
<img width="1617" alt="Screen Shot 2023-02-13 at 7 46 51 PM" src="https://user-images.githubusercontent.com/49734611/218634176-73911bbc-26a0-443c-8aac-96329a3d613f.png">
Moreover, the local transformer seems to select the highest probability result and return it alone compared to the API that returns a score for each label. Sometimes a score from the API is greater than 1 (have seen 9) and I am wondering why that is and am if it invalidates the results?
Cheers!
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
<img width="1612" alt="Screen Shot 2023-02-13 at 7 53 26 PM" src="https://user-images.githubusercontent.com/49734611/218635058-6322388b-2f50-48c3-8c5d-e3357a125002.png">
### Expected behavior
Naturally I expect each version of the model to produce the same score. | 02-14-2023 03:54:37 | 02-14-2023 03:54:37 | cc @Narsil <|||||>Small differences in numbers can be explained by hardware, torch version etc... Nothing can be done about it.
For the difference in output the API uses a different default from the pipeline `pipe = pipeline(..., topk=None)` as it makes more sense for the widget to see multiple proposition.
In addition the results are sorted for the API (again for UX).
Are you able to reproduce larger than 1 results ? Seems like a pretty bad bug if true !<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,615 | closed | run run_language_modeling got bug | ### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-3.10.0-1160.81.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@vanpelt @pvl @arfon @xeb @kashif @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
link:https://github.com/huggingface/transformers/blob/main/examples/legacy/run_language_modeling.py
```
export TRAIN_FILE=/path/to/dataset/wiki.train.raw
export TEST_FILE=/path/to/dataset/wiki.test.raw
python run_language_modeling.py \
--output_dir=output \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE
```
error:
Traceback (most recent call last):
File "/data/transformers/examples/legacy/run_language_modeling.py", line 375, in <module>
main()
File "/data/transformers/examples/legacy/run_language_modeling.py", line 291, in main
data_args.block_size = tokenizer.max_len
AttributeError: 'GPT2TokenizerFast' object has no attribute 'max_len'
### Expected behavior
looking forward to kind reply
and solve the problem | 02-14-2023 03:25:44 | 02-14-2023 03:25:44 | This is an unmaintained example that won't work with the last version of transformers.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,614 | closed | fix: Race Condition when using Sagemaker Checkpointing and Model Repository | # What does this PR do?
Fixes #21586
With the following changes:
- Added `_add_sm_patterns_to_gitignore()` as a helper method in the Trainer class that will add the patterns used by the SageMaker Checkpointing feature in the .gitignore file when initializing the Model Repository.
- A condition in the `init_git_repo()` to check if we have an important SageMaker environment variable
It also includes a fix in the huggingface_hub library in order to consider excluding patterns when defining large files: https://github.com/huggingface/huggingface_hub/pull/1339
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger as we discussed this issue in the https://github.com/huggingface/transformers/issues/21586 | 02-14-2023 02:36:35 | 02-14-2023 02:36:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Perfect, you just need to run `make style` on your branch with our quality tools installed and we should be good to merge!<|||||>> Perfect, you just need to run `make style` on your branch with our quality tools installed and we should be good to merge!
ok, let me do that so.<|||||>@sgugger sorry for the delay, some meetings here :/
it's done |
transformers | 21,613 | closed | `pipeline` does not load from local folder, instead, it always downloads models from the internet. | ### System Info
I create `pipeline` and called `save_pretrained(...)` to save to some local directory. However, when I load it back using `pipeline(model="local_folder")`, it either load from cache or try to start downloading from the internet.
However, if I do the following, it works. I am using the latest `transformers`. am I misused it or misunderstood something?
```
mypipeline.save_pretrained(save_directory=model_path)
mypipeline.model.config.use_pretrained_backbone = False
mypipeline.model.config.save_pretrained(save_directory=model_path)
```
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. run the code below
```
from transformers import pipeline
vision_classifier = pipeline(task="image-classification", model="google/vit-base-patch16-224")
vision_classifier.save_pretrained("./huggingface")
```
2. now delete the cache
3. load the model now
```
vision_classifier = pipeline("./huggingface")
```
and it will start download the pretrained model again.
### Expected behavior
I expect it loads the model from the local folder | 02-14-2023 01:08:49 | 02-14-2023 01:08:49 | cc @Narsil <|||||>Seems to be working fine on my end, but there needs to be a few modifications (I'm super suprises it *can* download anything, your included code just crashes normally).
```python
from transformers import pipeline
pipe = pipeline(task="image-classification", model="google/vit-base-patch16-224")
pipe.save_pretrained("./local_vit")
pipe = pipeline(task="image-classification", model="./local_vit")
```
You need to specify the `task` for local, since that information is contained in the HUB, not in the config directly. So if you specify it, it works with everything locally.
On what version of transformers are you on ?<|||||>Hi, sorry I specified the task for local, and it still does not work. I will check the version. <|||||>the version is 4.24.0<|||||>I cannot reproduce even on 4.24.0 Can you include the full script + error ? <|||||>sure. I am still seeing the error. I will post the code later.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,612 | closed | fix: Change is_last chunk calc and add conditional break in chunk_iter | # What does this PR do?
Fixes #21568
With three functional changes
- Updates the is_last calc to include the right_stride
- Conditionally breaks at the end of the loop block if is_last is true
- Adds to a test
And two non-functional changes
- Renamed `i` to `chunk_start_idx` and put `chunk_end_idx` in a variable
- Removed a comment and added another
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
[Issue](https://github.com/huggingface/transformers/issues/21568)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Narsil
| 02-13-2023 23:46:28 | 02-13-2023 23:46:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ArthurZucker Could you please review this ?
I am ok with this PR, but since I made the proposed change, I'm too heavily biased to review properly.
I will take another look still now that I might be able to think straighter now.
<|||||>Just thinking we should update the values of the whisper timestamps (so try running the slow tests for the ASR pipeline, only those related to whisper)<|||||>> Just thinking we should update the values of the whisper timestamps (so try running the slow tests for the ASR pipeline, only those related to whisper)
Updated the timestamps after running the below slow asr pipeline tests:
`test_return_timestamps_in_preprocess`
`test_torch_whisper`
`test_find_longest_common_subsequence`
`test_whisper_timestamp_prediction` (only update was here)
`test_simple_whisper_asr`
`test_simple_whisper_translation`<|||||>Thank you very much for this ! And detecting this bug !
<|||||>Thanks again for your contribution! |
transformers | 21,611 | closed | Add in big model inference to issue template | # What does this PR do?
Adds in @sgugger and myself as @'s on the big model inference as part of the issue template
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 02-13-2023 21:08:37 | 02-13-2023 21:08:37 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,610 | closed | from_pretrained() breaks with empty state_dict and model path as None | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @youne
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
To reproduce the behavior, you can run this snippet of code:
```py
from transformers import GPTJConfig, AutoModelForCausalLM
from collections import OrderedDict
config = GPTJConfig(
n_positions=128,
n_embd=16,
n_layer=2,
n_head=2
)
AutoModelForCausalLM.from_pretrained(
None, config=config, state_dict=OrderedDict()
)
```
### Expected behavior
The expected behavior from this is to allow a model to be initialized from scratch with an empty `state_dict` and `None` as the pretrained model. There is a [tool that I am working with](https://github.com/coreweave/tensorizer) that is broken due to a regression that was introduced around version 4.23.1. From the trace of running the reproduction code, a PR that handles the case when `resolved_archive_file` and the pretrained model path are `None` would fix this issue.
It also seems that this issue is related: https://github.com/huggingface/transformers/issues/21526 | 02-13-2023 20:38:12 | 02-13-2023 20:38:12 | This was fixed by #21542. You will need to install from source while the fix makes it way to the next release.<|||||>The PR didn't seem to fix the issue, I get a different error this time but the reproduction code still throws an error with the latest changes from source<|||||>Indeed, the fix was incomplete for generative models. The PR linked above should fully fix it. |
transformers | 21,609 | closed | Fix env. variable type issue in testing | # What does this PR do?
Fix env. variable type issue in testing.
If `PYTEST_TIMEOUT` is set by `export PYTEST_TIMEOUT=...` or `PYTEST_TIMEOUT=xxx python3 -m pytest ...`, we actually get a `string` instead of `int`, and the test fails.
On our CI (within docker), we don't have this issue, probably due to the way of docker dealing with env. variable.
Let's try to avoid unexpected error though.
| 02-13-2023 19:10:41 | 02-13-2023 19:10:41 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,608 | closed | Fix typo in QA task guide | Removes random link to ALBERT model doc in the question-answering task guide. | 02-13-2023 18:32:59 | 02-13-2023 18:32:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,607 | closed | Clarify available pipelines in quicktour | Addresses feedback from #21557 to make it super clear the table doesn't contain all available pipelines and redirects users to the pipeline API reference docs instead. This PR also swaps out some of the NLP tasks for some more multimodal ones :) | 02-13-2023 18:24:34 | 02-13-2023 18:24:34 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,606 | closed | Fix TF CTC tests | # What does this PR do?
(see title) | 02-13-2023 15:42:26 | 02-13-2023 15:42:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh before @amyeroberts work in #21502 these tests had no overwrite, correct. However, see the note in Amy's PR -- the labels were not correctly handled, which meant that `test_dataset_conversion` was probably terminating early [here](https://github.com/huggingface/transformers/blob/edc1e734bfc01109b8c66881d950ebbda032a6d2/tests/test_modeling_tf_common.py#L1815), which would explain the ausence of crashes prior to the PR :)<|||||>Thanks for the fix @gante !
@ydshieh Yes, Joao's correct. There's three different cases where the state of the tests changed:
* For `test_loss_computation`, the previous test was being skipped. [The given reason](https://github.com/huggingface/transformers/blob/6f79d264422245d88c7a34032c1a8254a0c65752/tests/models/hubert/test_modeling_tf_hubert.py#L327) was incorrect shapes - however, at the time of adding #21502 the returned loss was actually `None`.
* Some tests were being skipped by overloading [with an empty test](https://github.com/huggingface/transformers/blob/6f79d264422245d88c7a34032c1a8254a0c65752/tests/models/hubert/test_modeling_tf_hubert.py#L313) rather than `unittest.skip` and so previously showed as passing.
* The branch calculating loss wasn't being touched when fitting the model - `input_values` passed in as a dictionary - as `labels` was [taken from keyword arguments, rather than `outputs['labels']`](https://github.com/huggingface/transformers/blob/6f79d264422245d88c7a34032c1a8254a0c65752/src/transformers/models/hubert/modeling_tf_hubert.py#L1640). This meant some tests e.g. [`test_keras_fit` in TFModelTesterMixin](https://github.com/huggingface/transformers/blob/6f79d264422245d88c7a34032c1a8254a0c65752/tests/test_modeling_tf_common.py#L1526) previously passed as the memory intensive operation was skipped. |
transformers | 21,605 | closed | Huge JSON file causes error in run_summarization.py | I tried to run transformers\examples\pytorch\summarization\run_summarization.py with my own data files (in JSON Lines format). The problem arises only when using huge JSON file (> 20 GB). With smaller size files it works normally.
Any suggestions how to use the code with large size JSON files?
**Framework:**
Transformers 4.20.1
Pytorch 1.11.0+cu113
Datasets 2.3.2
Tokenizers 0.12.1 | 02-13-2023 14:18:03 | 02-13-2023 14:18:03 | Hi @AtheerAlgherairy This is not a `transformers` issue, and it's very likely some memory issue due to the huge file size. [Hugging Face Forum](https://discuss.huggingface.co/) is the place for such kind of questions.
Without going into the specific detail, for large dataset like this, you should try to use iterator to avoid loading the whole file into the memory from the beginning.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,604 | closed | Seq2SeqTrainer with predict_with_generate prints config every step | ### System Info
Transformers version 4.26.1
Python version 3.8.12
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm working on a Hebrew-Arabic machine translation task by training the T5 model from scratch
The issue occurs with the Translation code from the tutorial in the huggingface's course:
https://huggingface.co/course/chapter7/4?fw=pt
I have a slightly modified version of it (extra metrics and a slightly different dataset)
here's a link to my code:
https://pastebin.com/LaCAPsCF
``python3 train_model.py --dataset_path ./data/HF_HE_AR_Dataset.json --tokenizer_path ./T5Tokenizer/ --max_length=128 --batch_size=16 --logging_steps 100 --save_steps 100 --model t5-base``
this is was the command that I ran but it doesn't matter much since the problem is with ``predict_with_generate=True,`` in the Seq2SeqTrainingArguments. If it's false the problem do not occurs
This is what happens when it reaches the evaluation loop

### Expected behavior
Only the TQDM bar is printed | 02-13-2023 13:32:31 | 02-13-2023 13:32:31 | cc @gante <|||||>Hey @eyalmazuz ๐ There is a chance that the issue is sorted already (see [here](https://github.com/huggingface/transformers/pull/21385)). To try it out, install `transformers` from `main` -- let me know if it works!
`pip install --upgrade git+https://github.com/huggingface/transformers.git`<|||||>Hi @gante there's no extra printing when using transformers-main I'll close the issue
thank you for your help
|
transformers | 21,603 | closed | Generate: filter encoder inputs when its signature does not accept wildcards | # What does this PR do?
Now that we are moving towards any-to-text modalities, `generate` should be strengthened to work as-is without throwing exceptions just because a user has designed a slightly different architecture.
This PR enables the case where the model has some kwargs for operations between the encoder and the decoder -- i.e. for kwargs that can't be used in the encoder, but are also not decoder inputs. Normally, when a kwarg is not an encoder input, a `decoder_` prefix is added to its name, which is not the right argument naming in this case. Fortunately, the solution is simple :D
This PR is a soft requirement to integrate [MM-CoT](https://github.com/amazon-science/mm-cot/tree/main), the alternative being the incorrect renaming of a few arguments to `decoder_(...)`.
| 02-13-2023 13:03:38 | 02-13-2023 13:03:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Tests for the feature added ๐
The failing CI tests are two known failures, merging. |
transformers | 21,602 | closed | [MINOR] Fix link in timeseries transformer docs | I'm not sure this will also fix the currently broken link in the docs (Specifically here: https://huggingface.co/docs/transformers/model_doc/time_series_transformer) whereby clicking on `kashif` attempts to link to the following non-existent URL: https://huggingface.co/docs/transformers/model_doc/%3Chttps://huggingface.co/kashif
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes a broken link in the above-linked documentation.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
I'm not sure. @sgugger or @osanseviero maybe?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-13-2023 12:36:35 | 02-13-2023 12:36:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Link seems to work in the staging docs |
transformers | 21,601 | closed | CI: skip failing TF hubert test | # What does this PR do?
| 02-13-2023 11:41:25 | 02-13-2023 11:41:25 | cc @amyeroberts <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@gante is this issue fixed? I am asking because I am still getting this error [here](https://app.circleci.com/pipelines/github/huggingface/transformers/57650/workflows/dcdbb1bd-f61c-445a-b276-4f649416931e/jobs/699951?invite=true#step-111-4137) in this [PR](https://github.com/huggingface/transformers/pull/21349) . I did rebase to `upstream/main` .<|||||>@susnato see #21606 (I skipped the wrong one here)<|||||>Hi @gante thanks for the fix but now there seems to be a new issue popping up regarding that same model,
```
tests/models/hubert/test_modeling_tf_hubert.py::TFHubertRobustModelTest::test_keras_fit
```
This test is failing in the same PR I mentioned above. I did rebase before pushing. |
transformers | 21,600 | closed | [Pipeline] Add zero shot audio classificatoin pipeline | # What does this PR do?
Add the `zero_shot_audio_classification_pipeline` for the `CLAP` models. See #21370 | 02-13-2023 11:15:18 | 02-13-2023 11:15:18 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21600). All of your documentation changes will be reflected on that endpoint.<|||||>LGTM. I'm confused by the error in tests which doesn't seem linked to this PR.<|||||>LGTM ! |
transformers | 21,599 | closed | BLIP-2 batch generate error | ### System Info
`transformers` version: 4.27.0.dev0
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The following error happens when running BLIP-2 model's generate() with num_beams>1 and input batch_size>1
<img width="892" alt="Screenshot 2023-02-13 at 6 53 28 pm" src="https://user-images.githubusercontent.com/13638455/218443029-8bc45f00-4884-4785-95a9-228aa08d1266.png">
### Expected behavior
The model should be able to do batch generate. | 02-13-2023 11:14:13 | 02-13-2023 11:14:13 | Can confirm that:
1. The issue only happens with `batch_size>1`
2. The issue happens with both `num_beams>1` and `num_return_sequences>1` (they both rely on input replication, which is my suspicion)
3. https://github.com/huggingface/transformers/pull/21580, which addresses some BLIP2 `.generate()` issues does not fix this issue<|||||>I think that [this](https://github.com/salesforce/LAVIS/blob/3ac397aa075c3e60b9521b012dda3660e3e35f1e/lavis/models/blip2_models/blip2_opt.py#L213) is not incorporated in our current implementation. It seems the authors only defined this for OPT but not for T5.<|||||>@NielsRogge yeah, that's the fix ๐ However, I'm generalizing the PR to correctly expand any model input, as it is a bit limited at the moment (it is expanding tensors with specific names, as opposed to all tensors that might be used as model input).<|||||>FYI, I have tried to do repeat_interleave for the inputs_embeds, but that results in another error.
T5 does not seem need this because of the encoder-decoder architecture.<|||||>@LiJunnan1992 that's correct, only non-encoder inputs need the expansion. In a nutshell, in the generation loop, we have a 1:1 input-output row correspondence, so we need to expand the model inputs before the loop accordingly.
T5-BLIP2 sends `inputs_embeds` to the (text) encoder, whereas OPT-BLIP2 has no (text) encoder at all. In the former, the encoder outputs need to be expanded, whereas in the latter the `inputs_embeds` need the expansion treatment.
#21624 takes care of all those cases for any model input in `model_kwargs`, which should future-proof `.generate()`<|||||>@LiJunnan1992 if you install `transformers` from `main`, it should be working ๐ <|||||>@gante I can verify that this is working now, thanks! When will I be able to pip install this version?<|||||>@LiJunnan1992 we aim at monthly releases, so 1-2 weeks from now :) |
transformers | 21,598 | closed | Enable `requires_grad` on input embedding to train on top of frozen layers | # What does this PR do?
## Motivation
In the context of `peft`, users currently needs to manually add a forward hook that enables gradient computation to the input after computing the embedding, for e.g. for `t5` one needs to call:
```python
def make_inputs_require_grad(module, input, output):
output.requires_grad_(True)
model.get_input_embeddings().register_forward_hook(make_inputs_require_grad)
```
This PR makes the life easy for the users by wrapping this protocol in a single method `enable_input_require_grads` | Related: https://github.com/huggingface/peft/issues/80
cc @pacman100
Wdyt @sgugger ? Maybe there is a better solution but not sure here, would love some guidance! | 02-13-2023 11:07:52 | 02-13-2023 11:07:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks everyone! Merging! |
transformers | 21,597 | closed | [`bnb`] Let's make the daily CI green ๐ | # What does this PR do?
This PR fixes a test that is currently failing on the `main` branch.
Link to failing test: https://github.com/huggingface/transformers/actions/runs/4154270129/jobs/7186572507
Since the introduction of https://github.com/huggingface/transformers/pull/21524 - when loading `bigscience/bloom-1b7` it detects `fp32` weights when using `torch_dtype="auto"`. Therefore the expected relative difference of the memory footprint is different than the expected one.
This PR fixes the test by forcing `torch_dtype=torch.float16` when loading the fp16 model
cc @ydshieh @sgugger | 02-13-2023 10:56:52 | 02-13-2023 10:56:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Yeah I am unsure too, `auto` shold give `float16` according to the weights size: https://huggingface.co/bigscience/bloom-1b7/tree/main (1b7 parameters = 3.4GB in fp16) |
transformers | 21,595 | closed | Fix Blip-2 CI | # What does this PR do?
Avoid GPU OOM by using FP16. | 02-13-2023 09:51:06 | 02-13-2023 09:51:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>since we have put `_keep_in_fp32_modules` [in a previous PR](https://github.com/huggingface/transformers/pull/21574), I think it should work!<|||||>@sgugger I never merge my PR without running on a CI runner-like machines: ran it 3 times and all pass.<|||||>Yes I think we can be confident to say that @sgugger |
transformers | 21,594 | closed | Remove trailing 'extractive' word from en documentation | # What does this PR do?
Removes an extra word from the documentation.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @sgugger, @stevhliu and @MKhalusova
| 02-13-2023 07:05:53 | 02-13-2023 07:05:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,593 | closed | Region error | Endregion was not causing error as region was not recognizing.
| 02-13-2023 07:01:43 | 02-13-2023 07:01:43 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21593). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for suggestion <|||||>Yes, if you could replace `# Metrics` with `# region Metrics` instead that would be great!
Alternatively I could just ditch all of these region tags, since I think I'm the only person who likes them.<|||||>@Rocketknight1 I will definitely replace all the metrics with region metrics but any reasons of doing that ?? I am a Beginner so just wanted to understand .<|||||>@Aniketsingh12 Using region tags allows IDEs like PyCharm and VS Code ([with an addon](https://marketplace.visualstudio.com/items?itemName=maptz.regionfolder)) to fold the areas of code inside the `# region` and `# endregion`, which can make the code easier to read in a long script. They don't serve any function other than that!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,592 | closed | Removes duplicate computations in DETR post processing | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Removes duplicate softmax computation, change variable names accordingly.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-13-2023 06:10:23 | 02-13-2023 06:10:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,591 | closed | how to fine tune with the gpt2?what's the dataset format with my own data | ### System Info
transformers-cli env
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.26.0.dev0
- Platform: Linux-3.10.0-1160.81.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger @Rocketknight1
@gmftbyGMFTBY
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation
python run_generation.py \
--model_type=gpt2 \
--model_name_or_path=gpt2
???
--train_file ?
--valid_file?
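For context, `run_generation.py` only samples from an already-trained model; fine-tuning on your own text files is what the causal language-modeling example covers. Below is a minimal Trainer-based sketch of that setup (the file names `train.txt`/`valid.txt`, the 512 max length, and the output path are assumptions, not part of this report):
```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

raw = load_dataset("text", data_files={"train": "train.txt", "validation": "valid.txt"})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned", num_train_epochs=1),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```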
### Expected behavior
Fine-tuned GPT-2 will generate Chinese well. | 02-13-2023 02:55:21 | 02-13-2023 02:55:21 | Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep the issues for bugs in the library and feature requests only.<|||||>@ucas010 The script you linked is not for training/fine-tuning however. It's only for generation with a provided prompt.<|||||>@ydshieh yeah, I have tried it with Chinese, but the result is not good, you see
02/14/2023 09:52:08 - INFO - __main__ - Namespace(model_type='gpt2', model_name_or_path='gpt2', prompt='', length=20, stop_token=None, temperature=1.0, repetition_penalty=1.0, k=0, p=0.9, prefix='', padding_text='', xlm_language='', seed=42, no_cuda=False, num_return_sequences=1, fp16=False, device=device(type='cuda'), n_gpu=4)
Model prompt >>> 刑事责任，民事责任，刑事诉讼案件，简易程序，人民法院
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
=== GENERATED SEQUENCE 1 ===
刑事责任，民事责任，刑事诉讼案件，简易程序，人民法院，ๅณ๏ผๆๆๆฐ่ฆไน๏ฟฝ<|||||>@ucas010 , as sgugger mentioned, this kind of question is for [Hugging Face Forums](https://discuss.huggingface.co/).
The GitHub repository is only for issues, bugs or features in the library.
I am going to close this issue. But FYI: GPT-2 is only trained on English corpus, and you can't use it for other languages (not even with just fine-tuning).
|
transformers | 21,590 | open | Project_Test01 | ### Model description
This model takes as parameters training data (optional) and rubrics (a list of strings) for training. It gives zero-shot results for scoring logical-reasoning answers provided by students.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://arxiv.org/pdf/2301.08771.pdf
transformers | 21,589 | closed | [i18n-fr] Translate quicktour page to French | # What does this PR do?
Translated the `quicktour.mdx` file of the documentation to French.
Part of #21456
Thank you in advance for your review.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger, could you review this PR?
| 02-12-2023 18:30:31 | 02-12-2023 18:30:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,588 | closed | Add missing argument to run_clip.py | # Missing test_file argument in run_clip.py
## Bug Experience
- Including the test_file parameter during runtime causes `HfArgumentParser` to throw an error for extra parameters.
- Not including the test_file parameter during runtime causes an NPE [here](https://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/run_clip.py#L299)
| 02-12-2023 17:40:11 | 02-12-2023 17:40:11 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,587 | closed | [Whisper] ASR pipeline ignores/ rejects generate_kwargs on inference | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
pipeline: @Narsil
whisper/ speech: @ArthurZucker / @sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The failure mode is specific to `Whisper` model in the ASR pipeline. Potentially introduced after the introduction of the `return_timestamps` argument.
Repro can be found in the [colab here](https://colab.research.google.com/drive/1koqO7Tjos5vGlFBCugGeCRKJm5gf2hfj?usp=sharing) or on [GitHub here](https://github.com/Vaibhavs10/scratchpad/blob/main/Whisper_return_timestamp_bug_report.ipynb)
Essentially, when the pipeline is invoked and inferred with:
```python
pipe = pipeline("automatic-speech-recognition", model="openai/whisper-small", generate_kwargs={"task": "transcribe", "language": "german"})
pipe(test_sample, chunk_length_s=30)
```
It rejects to accept the `generate_kwargs` and throws an error:
```
ValueError: The following `model_kwargs` are not used by the model: ['task', 'language'] (note: typos in the generate arguments will also show up in this list)
```
The interesting bit, if I run inference whilst explicitly setting `return_timestamps=True` it does work however, 'translates' instead of `transcribe`.
```python
pipe(test_sample, return_timestamps=True, chunk_length_s=30, stride_length_s=[6,0])
```
I went more step further and tried with loading a vanilla pipeline and ran the pipeline by explicitly setting the `forced_decoder_ids` and it worked well:
```python
pipe = pipeline("automatic-speech-recognition", model="openai/whisper-small")
pipe.model.config.forced_decoder_ids = (
pipe.tokenizer.get_decoder_prompt_ids(
language="de", task="transcribe"
)
)
pipe(test_sample, chunk_length_s=30)
```
However, if I now pass the `return_timestamps=True` it again rejects the original decoder_ids and `translates` in English:
```python
pipe(test_sample, return_timestamps=True, chunk_length_s=30, stride_length_s=[6,0])
```
As said above, the [colab](https://colab.research.google.com/drive/1koqO7Tjos5vGlFBCugGeCRKJm5gf2hfj?usp=sharing) or on [GitHub](https://github.com/Vaibhavs10/scratchpad/blob/main/Whisper_return_timestamp_bug_report.ipynb) has a proper repro, do give it a look.
### Expected behavior
The expected behaviour of the pipeline would be to respect the `generate_kwargs` and throw potentially a more meaningful error message. | 02-12-2023 17:36:29 | 02-12-2023 17:36:29 | You should use the main branch!
The following
```python
def test_simple_whisper_translation(self):
speech_recognizer = pipeline(
task="automatic-speech-recognition",
model="openai/whisper-large",
framework="pt",
)
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation").sort("id")
filename = ds[40]["file"]
output = speech_recognizer(filename)
self.assertEqual(output, {"text": " A man said to the universe, Sir, I exist."})
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large")
tokenizer = AutoTokenizer.from_pretrained("openai/whisper-large")
feature_extractor = AutoFeatureExtractor.from_pretrained("openai/whisper-large")
speech_recognizer_2 = AutomaticSpeechRecognitionPipeline(
model=model, tokenizer=tokenizer, feature_extractor=feature_extractor
)
output_2 = speech_recognizer_2(filename)
self.assertEqual(output, output_2)
# either use generate_kwargs or set the model's generation_config
# model.generation_config.task = "transcribe"
# model.generation_config.lang = "<|it|>"
speech_translator = AutomaticSpeechRecognitionPipeline(
model=model,
tokenizer=tokenizer,
feature_extractor=feature_extractor,
generate_kwargs={"task": "transcribe", "language": "<|it|>"},
)
output_3 = speech_translator(filename)
self.assertEqual(output_3, {"text": " Un uomo ha detto all'universo, Sir, esiste."})
```
is used in our testing suite and I can confirm that it works.<|||||>In my case, your script outputs the following:
```python
/home/arthur_huggingface_co/transformers/src/transformers/models/whisper/modeling_whisper.py:1345 in generate

   1342
   1343         if hasattr(generation_config, "is_multilingual") and generation_config.is_multil…
   1344             if hasattr(generation_config, "language"):
❱  1345                 forced_decoder_ids.append((1, generation_config.lang_to_id[generation_co…
   1346             else:
   1347                 forced_decoder_ids.append((1, None))
   1348
KeyError: 'german'
```
Because
```python
In [7]: pipe.model.generation_config.language
Out[7]: 'german'
```
It should be `<|XX|>`. There was an issue opened to make it so that the actual language code is used here, but we have not addressed it yet. <|||||>Ha! I am invoking the `pipeline` the same way as you are in your first comment. The only difference is that on `4.26.1` it doesn't work and neither does it raise any errors.
It works perfectly on `main`: https://github.com/Vaibhavs10/scratchpad/blob/main/Whisper_return_timestamp_bug_report_w_main.ipynb
Do we have any ETA on when this would make it to a release?<|||||>Think it was part of the patch release see [here ](https://github.com/huggingface/transformers/releases/tag/v4.26.1) |
transformers | 21,586 | closed | Race Condition when using Sagemaker Checkpointing and Model Repository | ### System Info
transformers version: 4.26.0
huggingface_hub version: 0.12.0
Platform: SageMaker
pytorch version: 1.10.2+cuda113
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Looks like we have a race condition when using the SageMaker Checkpointing feature together with Model Repository (`push_to_hub=True` in the TrainingArguments).
Basically, SageMaker creates temporary files inside the checkpoint dir. When using a Model Repository, these files get picked up by git, which leads to a `FileNotFoundError` when the file is deleted by SageMaker later.
I tested several executions and it always fails, except when I used another `output_dir` path such as `./output`, which isn't a SageMaker local checkpoint directory.
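A minimal sketch of the race itself (file names are illustrative): a transient `*.sagemaker-uploading` marker can be listed by `git ls-files -mo` and then deleted by SageMaker's sync process before `auto_track_large_files` reads its size.
```python
import os

# Files as returned by `git ls-files -mo` at the moment of listing; the second
# entry is a transient SageMaker upload marker that may already be gone when
# its size is checked.
files_to_stage = [
    "last-checkpoint/special_tokens_map.json",
    "last-checkpoint/special_tokens_map.json.sagemaker-uploading",
]

for path in files_to_stage:
    try:
        size_in_mb = os.path.getsize(path) / (1024 * 1024)
    except FileNotFoundError:
        print(f"{path} vanished between listing and stat - exactly the failure above")
```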
## Reproduction
## train.py
```python
...
trainer_args = TrainingArguments(
output_dir="opt/ml/checkpoints",
overwrite_output_dir=True if get_last_checkpoint(
"opt/ml/checkpoints"
) is not None else False,
evaluation_strategy="epoch",
save_strategy="epoch",
save_total_limit=self.args.early_stopping_patience+1,
load_best_model_at_end=True,
push_to_hub=True,
hub_token=self.env.HUGGINGFACE_HUB_TOKEN,
hub_model_id=self.args.hub_model_id,
hub_strategy="checkpoint",
metric_for_best_model="f1",
num_train_epochs=self.args.num_train_epochs,
seed=self.args.seed
)
trainer = Trainer(
model=self.model,
args=trainer_args,
train_dataset=self.dataset["train"],
eval_dataset=self.dataset[self.args.eval_dataset],
tokenizer=self.tokenizer,
compute_metrics=lambda p: compute_metrics(p, threshold=self.args.threshold),
callbacks=[
EarlyStoppingCallback(early_stopping_patience=self.args.early_stopping_patience)
] if self.args.early_stopping_patience is not None else None
)
# check if checkpoint existing if so continue training
last_checkpoint = get_last_checkpoint("opt/ml/checkpoints")
if last_checkpoint is not None:
_logger.info(f"Resuming training from checkpoint: {last_checkpoint}")
trainer.train(resume_from_checkpoint=last_checkpoint)
...
```
## SageMaker Estimator
```python
...
import logging
from sagemaker.huggingface import HuggingFace
checkpoint_s3_uri = f"s3://{bucket_name}/{prefix}/checkpoints"
instance_type = "ml.g4dn.xlarge"
estimator = HuggingFace(
entry_point="train.py",
source_dir="ml",
base_job_name=params.mlflow_experiment_name,
container_log_level=logging.DEBUG,
role=params.sagemaker_execution_role_arn,
sagemaker_session=sagemaker_session,
py_version="py38",
pytorch_version="1.10.2",
transformers_version="4.17.0",
instance_count=1,
instance_type=instance_type,
use_spot_instances=True,
max_wait=10800,
max_run=10800,
checkpoint_s3_uri=checkpoint_s3_uri,
checkpoint_local_path="/opt/ml/checkpoints",
environment={
"MLFLOW_TRACKING_URI": params.mlflow_tracking_uri,
"MLFLOW_EXPERIMENT_NAME": params.mlflow_experiment_name,
"MLFLOW_TRACKING_USERNAME": params.mlflow_tracking_username,
"MLFLOW_TRACKING_PASSWORD": params.mlflow_tracking_password,
"MLFLOW_TAGS": params.mlflow_tags,
"MLFLOW_RUN_ID": mlflow.active_run().info.run_id,
"MLFLOW_FLATTEN_PARAMS": "True",
"HF_MLFLOW_LOG_ARTIFACTS": "True",
"HUGGINGFACE_HUB_TOKEN": params.huggingface_hub_token
},
hyperparameters={
"push_to_hub": "True",
"hub_model_id": f"dougtrajano/{params.mlflow_experiment_name}",
"num_train_epochs": params.num_train_epochs,
"early_stopping_patience": params.early_stopping_patience,
"batch_size": params.batch_size,
"seed": params.seed,
"concat_validation_set": "True",
"eval_dataset": "test"
}
)
estimator.fit(inputs, wait=False)
```
Full code is available in [DougTrajano/ToChiquinho](https://github.com/DougTrajano/ToChiquinho)
## Logs
The file that raises the error always has "sagemaker-uploading" or "sagemaker-uploaded" in its name.
```log
Traceback (most recent call last):
File ""train.py"", line 29, in <module>
experiment.run()
File ""/opt/ml/code/experiments/toxic_comment_classification.py"", line 199, in run
trainer.train(resume_from_checkpoint=last_checkpoint)
File ""/opt/conda/lib/python3.8/site-packages/transformers/trainer.py"", line 1543, in train
return inner_training_loop(
File ""/opt/conda/lib/python3.8/site-packages/transformers/trainer.py"", line 1883, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File ""/opt/conda/lib/python3.8/site-packages/transformers/trainer.py"", line 2135, in _maybe_log_save_evaluate"
1676172693307,"self._save_checkpoint(model, trial, metrics=metrics)
File ""/opt/conda/lib/python3.8/site-packages/transformers/trainer.py"", line 2279, in _save_checkpoint"
1676172693307,"self._push_from_checkpoint(output_dir)
File ""/opt/conda/lib/python3.8/site-packages/transformers/trainer.py"", line 3443, in _push_from_checkpoint"
1676172693308,"_, self.push_in_progress = self.repo.push_to_hub(
File ""/opt/conda/lib/python3.8/site-packages/huggingface_hub/repository.py"", line 1366, in push_to_hub"
1676172693308,"self.git_add(auto_lfs_track=True)
File ""/opt/conda/lib/python3.8/site-packages/huggingface_hub/repository.py"", line 1046, in git_add"
1676172693308,"tracked_files = self.auto_track_large_files(pattern)
File ""/opt/conda/lib/python3.8/site-packages/huggingface_hub/repository.py"", line 970, in auto_track_large_files"
1676172693308,"size_in_mb = os.path.getsize(path_to_file) / (1024 * 1024)
File ""/opt/conda/lib/python3.8/genericpath.py"", line 50, in getsize
return os.stat(filename).st_size
FileNotFoundError
[Errno 2] No such file or directory: '/opt/ml/checkpoints/toxic-comment-classification-2023-02-12-03-19-37-149/model/last-checkpoint/special_tokens_map.json.sagemaker-uploading'
3%|โ | 1408/42240 [03:37<1:45:03, 6.48it/s]
2023-02-12 03:31:34,706 sagemaker-training-toolkit INFO Waiting for the process to finish and give a return code.
2023-02-12 03:31:34,706 sagemaker-training-toolkit INFO Done waiting for a return code. Received 1 from exiting process.
2023-02-12 03:31:34,707 sagemaker-training-toolkit ERROR Reporting training FAILURE
2023-02-12 03:31:34,707 sagemaker-training-toolkit ERROR ExecuteUserScriptError:
ExitCode 1
ErrorMessage ""FileNotFoundError
[Errno 2] No such file or directory: '/opt/ml/checkpoints/toxic-comment-classification-2023-02-12-03-19-37-149/model/last-checkpoint/special_tokens_map.json.sagemaker-uploading'
3%|โ | 1408/42240 [03:37<1:45:03, 6.48it/s]""
Command ""/opt/conda/bin/python3.8 train.py --batch_size 8 --early_stopping_patience 5 --eval_dataset test --hub_model_id dougtrajano/toxic-comment-classification --num_train_epochs 30 --push_to_hub True --seed 1993"""
1676172695312,"2023-02-12 03:31:34,707 sagemaker-training-toolkit ERROR Encountered exit_code 1
```
## Proposed Solution
In my opinion, the issue happens because SageMaker doesn't use a good sync mechanism for the checkpoint folder, but I don't know if they will change it because of this :( However, I think that there're some options from our side we can do.
One of the possible solutions I thought of is to add `*.sagemaker-uploading` and `*.sagemaker-uploaded` in the `.gitignore` file in the `Trainer().init_repo()` when we know that we are running inside SageMaker.
https://github.com/huggingface/transformers/blob/c836f77266be9ace47bff472f63caf71c0d11333/src/transformers/trainer.py#L3357-L3398
Additionally, we need to add the `--exclude-standard` flag to the `git ls-files` command called inside the `auto_track_large_files()` function.
I tested it by adding the following code between the `Trainer()` object creation and the execution of the `Trainer().train()` function.
```python
with open(os.path.join(trainer.repo.local_dir, ".gitignore"), "a") as f:
f.write("\n*.sagemaker-uploading")
f.write("\n*.sagemaker-uploaded")
trainer.repo.git_add(".gitignore")
trainer.repo.git_commit("Add *.sagemaker patterns to .gitignore.")
```
and in the [huggingface/huggingface_hub](https://github.com/huggingface/huggingface_hub).
```python
def patched_files_to_be_staged(
pattern: str = ".", folder: Union[str, Path, None] = None
) -> List[str]:
"""
Returns a list of filenames that are to be staged.
Args:
pattern (`str` or `Path`):
The pattern of filenames to check. Put `.` to get all files.
folder (`str` or `Path`):
The folder in which to run the command.
Returns:
`List[str]`: List of files that are to be staged.
"""
try:
# --exclude-standard
p = run_subprocess("git ls-files --exclude-standard -mo".split() + [pattern], folder)
if len(p.stdout.strip()):
files = p.stdout.strip().split("\n")
else:
files = []
except subprocess.CalledProcessError as exc:
raise EnvironmentError(exc.stderr)
_logger.debug(f"Files to be staged: {files}")
return files
# Monkey patching huggingface_hub.repository.files_to_be_staged
from huggingface_hub import repository
repository.files_to_be_staged = patched_files_to_be_staged
```
<details>
<summary>files_to_be_staged() without --exclude-standard arg</summary>
2023-02-13 11:32:32 :: DEBUG :: train :: patched_files_to_be_staged :: Files to be staged: ['.gitattributes.sagemaker-uploaded', '.gitignore.sagemaker-uploaded', 'README.md.sagemaker-uploaded', 'config.json', 'config.json.sagemaker-uploaded', 'last-checkpoint/config.json', 'last-checkpoint/config.json.sagemaker-uploaded', 'last-checkpoint/optimizer.pt', 'last-checkpoint/optimizer.pt.sagemaker-uploading', 'last-checkpoint/pytorch_model.bin', 'last-checkpoint/pytorch_model.bin.sagemaker-uploading', 'last-checkpoint/rng_state.pth', 'last-checkpoint/rng_state.pth.sagemaker-uploaded', 'last-checkpoint/rng_state.pth.sagemaker-uploading', 'last-checkpoint/scheduler.pt', 'last-checkpoint/scheduler.pt.sagemaker-uploading', 'last-checkpoint/special_tokens_map.json', 'last-checkpoint/special_tokens_map.json.sagemaker-uploaded', 'last-checkpoint/special_tokens_map.json.sagemaker-uploading', 'last-checkpoint/tokenizer.json', 'last-checkpoint/tokenizer.json.sagemaker-uploaded', 'last-checkpoint/tokenizer_config.json', 'last-checkpoint/tokenizer_config.json.sagemaker-uploaded', 'last-checkpoint/trainer_state.json', 'last-checkpoint/trainer_state.json.sagemaker-uploaded', 'last-checkpoint/training_args.bin', 'last-checkpoint/training_args.bin.sagemaker-uploaded', 'last-checkpoint/vocab.txt', 'last-checkpoint/vocab.txt.sagemaker-uploaded', 'pytorch_model.bin', 'pytorch_model.bin.sagemaker-uploading', 'special_tokens_map.json', 'special_tokens_map.json.sagemaker-uploaded', 'tokenizer.json', 'tokenizer.json.sagemaker-uploading', 'tokenizer_config.json', 'tokenizer_config.json.sagemaker-uploaded', 'training_args.bin', 'training_args.bin.sagemaker-uploaded', 'vocab.txt']
</details>
<details>
<summary>files_to_be_staged() with --exclude-standard arg</summary>
2023-02-13 11:42:35 :: DEBUG :: train :: patched_files_to_be_staged :: Files to be staged: ['config.json', 'last-checkpoint/config.json', 'last-checkpoint/optimizer.pt', 'last-checkpoint/pytorch_model.bin', 'last-checkpoint/rng_state.pth', 'last-checkpoint/scheduler.pt', 'last-checkpoint/special_tokens_map.json', 'last-checkpoint/tokenizer.json', 'last-checkpoint/tokenizer_config.json', 'last-checkpoint/trainer_state.json', 'last-checkpoint/training_args.bin', 'last-checkpoint/vocab.txt', 'pytorch_model.bin', 'special_tokens_map.json', 'tokenizer.json', 'tokenizer_config.json', 'training_args.bin', 'vocab.txt']
</details>
If you agree with this solution, I'll be very happy to submit a PR to implement this.
## Some links
- [Run training on Amazon SageMaker](https://huggingface.co/docs/sagemaker/train)
- [Renate/file.py at main ยท awslabs/Renate](https://github.com/awslabs/Renate/blob/main/src/renate/utils/file.py#L98-L116)
### Expected behavior
I expected that I can use SageMaker Checkpointing with Model Repository with no errors. | 02-12-2023 14:11:23 | 02-12-2023 14:11:23 | Thanks for diving into this and offering solutions! I think your plan sounds sensible, woudl you like to open a PR with it?<|||||>> Thanks for diving into this and offering solutions! I think your plan sounds sensible, woudl you like to open a PR with it?
yeah! I'll do that and submit a PR soon.<|||||>**Update:** I merged https://github.com/huggingface/huggingface_hub/pull/1339. Will not be released before a few weeks but you can always install it from git if it's urgent :) <|||||>> **Update:** I merged [huggingface/huggingface_hub#1339](https://github.com/huggingface/huggingface_hub/pull/1339). Will not be released before a few weeks but you can always install it from git if it's urgent :)
no problem, I monkey-patched it in my training script. :)
If someone also needs it before the new release. The monkey patch code is in the issue description. |
transformers | 21,585 | closed | How to correct TypeError: zip argument #1 must support iteration when training on multiple GPUs | I am creating a custom PyTorch layer and training the model with the `Trainer` API on top of a `Hugging Face` model.
When I run on a single GPU, it trains fine. But when I train it on multiple GPUs it throws an error:
`TypeError: zip argument #1 must support iteration`
**Training Code**
# Imports assumed for this snippet (not shown in the original post):
import torch
import torch.nn as nn
from torch.nn.functional import log_softmax as log_soft
from torchcrf import CRF  # from the pytorch-crf package (assumed)
from transformers import BertForTokenClassification, Trainer, TrainingArguments

bert_model = BertForTokenClassification.from_pretrained(model_checkpoint, id2label=id2label, label2id=label2id)
bert_model.config.output_hidden_states = True

class BERT_CUSTOM(nn.Module):
    def __init__(self, bert_model, id2label, num_labels):
        super(BERT_CUSTOM, self).__init__()
        self.bert = bert_model
        self.config = self.bert.config
        self.dropout = nn.Dropout(0.25)
        self.classifier = nn.Linear(768, num_labels)
        self.crf = CRF(num_labels, batch_first=True)

    def forward(self, input_ids, attention_mask, labels=None, token_type_ids=None):
        outputs = self.bert(input_ids, attention_mask=attention_mask)
        # average of the last four hidden states
        sequence_output = torch.stack((outputs[1][-1], outputs[1][-2], outputs[1][-3], outputs[1][-4])).mean(dim=0)
        sequence_output = self.dropout(sequence_output)
        emission = self.classifier(sequence_output)  # [32, 256, 21] logits
        if labels is not None:
            labels = labels.reshape(attention_mask.size()[0], attention_mask.size()[1])
            loss = -self.crf(log_soft(emission, 2), labels, mask=attention_mask.type(torch.uint8), reduction='mean')
            prediction = self.crf.decode(emission, mask=attention_mask.type(torch.uint8))
            return [loss, prediction]
        else:
            prediction = self.crf.decode(emission, mask=attention_mask.type(torch.uint8))
            prediction = [id2label[k] for k in prediction]
            return prediction
**Training API**
model = BERT_CUSTOM(bert_model, id2label, num_labels=len(label2id))
model.to(device)

args = TrainingArguments(
    "model",
    save_strategy="epoch",
    learning_rate=2e-5,
    num_train_epochs=2,
    weight_decay=0.01,
    per_device_train_batch_size=32,
    fp16=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_data,
    tokenizer=tokenizer,
)
trainer.train() | 02-12-2023 13:34:45 | 02-12-2023 13:34:45 | Hi @pratikchhapolika, without a full traceback of the errors, we can't tell if this is a `transformers` issue or an issue in your custom modeling code.<|||||>> Hi @pratikchhapolika, without a full traceback of the errors, we can't tell if this is a `transformers` issue or an issue in your custom modeling code.
This is the only error I get @ydshieh
`TypeError: zip argument #1 must support iteration training in multiple GPU`<|||||>Hi @pratikchhapolika
What @ydshieh is requesting is the full output from your terminal printed out between the line of execution and the error e.g. something like this:
```
(ml) amyroberts:transformers $ python ../scripts/dummy_traceback.py
Traceback (most recent call last):
File "/Users/amyroberts/code/transformers/../scripts/dummy_traceback.py", line 12, in <module>
foo()
File "/Users/amyroberts/code/transformers/../scripts/dummy_traceback.py", line 7, in foo
c = a / b
ZeroDivisionError: division by zero
```
This gives the full context of what lines of code have been called and exactly the line where the error is being triggered. Without this it's very difficult for us to be able to help debug this issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,584 | closed | Update setup.py | I was tweaking m4's `setup.py` with some goodies from here and saw a few minor improvements that could be made here (an illustrative sketch follows the list below):
- clarify the license and description
- add `jax` to the keywords
- add recent python versions
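For illustration only, the kind of metadata entries meant here might look like the following (these values are placeholders, not the exact ones in the PR):
```python
from setuptools import setup

setup(
    name="transformers",
    description="State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow",
    license="Apache 2.0 License",
    keywords="deep learning transformer pytorch tensorflow jax BERT GPT-2",
    classifiers=[
        "Programming Language :: Python :: 3.8",
        "Programming Language :: Python :: 3.9",
        "Programming Language :: Python :: 3.10",
    ],
)
```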
If anything doesn't fit please let me know and I will remove it. | 02-12-2023 05:03:51 | 02-12-2023 05:03:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,583 | closed | Correct Markdown bullets indentation | This is a minor change to correct Markdown indentations of this file gpt2/CONVERSION.md | 02-12-2023 03:53:11 | 02-12-2023 03:53:11 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,582 | closed | Fix for non-contiguous label tensors in VisionEncoderDecoder | # What does this PR do?
Fixes a loss calculation error caused by non-contiguous label tensors by replacing `labels.view(-1)` with `labels.reshape(-1)`.
When passing a slice of a labels tensor to the `labels` argument the following error would get triggered:
`RuntimeError: view size is not compatible with input tensor's size and stride`
```
outputs = model(pixel_values,
decoder_input_ids=decoder_input_ids[:, :-1],
labels=labels[:, 1:])
```
Using `reshape` instead of `view` means that if a `view` is not possible, the call falls back to using `reshape`, as per the [PyTorch `reshape` documentation here](https://pytorch.org/docs/stable/generated/torch.reshape.html#torch.reshape)
From the docs:
> When possible, the returned tensor will be a view of input. Otherwise, it will be a copy
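A minimal sketch of the difference (shapes are arbitrary):
```python
import torch

labels = torch.arange(12).reshape(3, 4)
sliced = labels[:, 1:]   # slicing like `labels[:, 1:]` makes the tensor non-contiguous

print(sliced.reshape(-1))  # works: reshape falls back to a copy when a view is impossible
try:
    sliced.view(-1)        # raises the "view size is not compatible ..." RuntimeError
except RuntimeError as e:
    print(e)
```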
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- vision models: @amyeroberts and @NielsRogge
-->
@amyeroberts @NielsRogge
| 02-11-2023 22:26:32 | 02-11-2023 22:26:32 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@morganmcg1 - thanks for the fix!
Change looks good to me, just two small things:
* Could you give more details about the error e.g. a code snippet? This will help with documentation for future us to track and resolve any similar issues.
* I can see the `check_code_quality` checks fail. I'm not sure why it's hitting an issue with flax electra, as this PR doesn't touch it. Could you add the latest changes from main and run `make fixup` in the top level of your local `transformers`? If it persists I'll do some more digging. <|||||>@amyeroberts The quality issue is fixed on main, the PR just needs a quick rebase :-) <|||||>hey @amyeroberts , sure! rebased and updated the description, let me know if you need any more details <|||||>Awesome fix, @morganmcg1 ! Thank you, @sgugger for the quick review and merge!
|
transformers | 21,581 | closed | [Flax] adding support for batch norm layers | # What does this PR do?
Adding support for batch norm layers in flax. Several models like ResNets, MobileNets have BatchNorm layers which (for now) cannot be ported to HuggingFace Flax due to Flax requiring an extra `batch_stats` key along with `params` to perform training and inference.
The changes here are tested locally with my work-in-progress resnet PR [https://github.com/huggingface/transformers/pull/21472](https://github.com/huggingface/transformers/pull/21472), ~~though more testing is needed.~~ LGTM :rocket:
### Design Choices
_Mentioning a few design decisions and some notes to ease the review process_ :)
- To detect whether the flax model contains batch norm layers, we use a simple if statement
`if "params" in flax_model.params and "batch_stats" in flax_model.params`
An alternative could have been adding a `contains_batchnorm` attribute to `FlaxPreTrainedModel` class but I decided not to go with it since found the first option simpler and kept no. of class attributes as low as possible unless strictly needed.
- `num_batches_tracked` is not transferred from pt to flax weight conversion and are newly initialized when doing flax to pt weight conversion.
- ~~`pt_flax_cross_test` in `test_modeling_flax_common.py` currently does not support comparing the weights between `batch_stats` and PyTorch equivalent BatchNorm mean and var. I am still currently looking into supporting it.~~
It does now :)
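A minimal sketch of why the `batch_stats` check above works — initialising any Flax module that contains `nn.BatchNorm` yields both collections (the toy module here is illustrative, not the actual ResNet port):
```python
import flax.linen as nn
import jax
import jax.numpy as jnp


class TinyBlock(nn.Module):
    @nn.compact
    def __call__(self, x, train: bool = False):
        x = nn.Dense(4)(x)
        x = nn.BatchNorm(use_running_average=not train)(x)
        return x


variables = TinyBlock().init(jax.random.PRNGKey(0), jnp.ones((1, 8)))
print(sorted(variables.keys()))  # contains both 'batch_stats' and 'params'
```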
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- Flax: @sanchit-gandhi | 02-11-2023 18:29:35 | 02-11-2023 18:29:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @sanchit-gandhi, the PR is now ready for your review. I'm sorry if my previous ping was confusing since I mentioned it when the PR was in WIP. Next time, I will make sure to ping you only when it's ready for review. Thank you so much for taking the time to review the PR.
[Flax Convnext PR](https://github.com/huggingface/transformers/pull/21485) is also ready for your review. <|||||>HI, @sgugger I have been trying to mention @sanchit-gandhi but unfortunately have not received a response. I was hoping you could assist me in finding the right person to reach out to for a review. Any help would be greatly appreciated.<|||||>The right person is @sanchit-gandhi <|||||>Cool! Such a pleasure to work with :hugs: team & transformers. Along with ResNets, I have tested this PR on flax implementation of RegNet and EfficientNet ( which uses batch norm ) locally. Both of which are nearly complete and will make PR once this get's merged.<|||||>Super fun working with you too @Shubhamai! Thank you for your contribution and high quality PR! Very excited to see the follow-ups with ResNet, RegNet and EfficientNet, these will be very nice additions to the library. Feel free to ping me when they're in a review ready state and I'll take a look as quickly as possible! Likewise if you have any Flax questions / queries I'm more than happy to help! |
transformers | 21,580 | closed | Generate: correct default model input creation for decoder-only models | # What does this PR do?
Fixes #21578 and addresses concerns in #21575
## Context
Support for `.generate()` from `inputs_embeds` with selected decoder-only models was added recently (#21405). This feature enables a `.generate(inputs_embeds=inputs_embeds)` call, i.e. without a `input_ids`.
This specific call strategy, under the hood, implies that `.generate()` is in charge of creating a) an `input_ids` for later use (to concatenate the generated tokens) and b) the corresponding `attention_mask`. This automated creation was not working properly for `batch_size>1`.
## Changes
The changes in the PR respect the following desiderata (which required moving a few things around):
1. The `attention_mask` can be automatically inferred (with all ones) regardless of the shape of `inputs_embeds`;
2. When `inputs_embeds` is passed and `input_ids` is not, the automated `input_ids` has a sequence length of 1. This is particularly relevant for BLIP, as we don't want `input_ids` to start with the embeddings' sequence length.
This PR also adds/enhances tests, to ensure we don't regress on this capability.
โ ๏ธ if approved, I will make the corresponding TF changes before merging. | 02-11-2023 16:51:06 | 02-11-2023 16:51:06 | cc @dimitry12 this fixes the error you reported in #21575 (GPT2 was throwing the same error as GPTJ, and gets fixed here)
cc @NielsRogge please don't forget to add batched tests on models with generation capabilities, generate+batching is surprisingly tricky๐ <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>But it does seem like it breaks a lot of tests ๐
<|||||>(Merging -- the failing test is a known failing test)<|||||>@dimitry12 lmk if you see the error when using GPT-J :)<|||||>> @dimitry12 lmk if you see the error when using GPT-J :)
@gante GPT-J generation without dummy `input_ids` using only `inputs_embeds` works without errors now. Thank you! |
transformers | 21,579 | closed | [`bnb`] Introducing `BitsAndBytesConfig` | # What does this PR do?
This PR introduces a new feature termed as `BitsandbytesConfig` - enabling users to play more flexibly with `transformers` + `bitsandbytes` API.
This is also a first step of a larger refactor we are planning in order to possibly support more quantization features.
This PR also addresses: https://github.com/huggingface/transformers/pull/20281#issuecomment-1409894311
With this PR it will also be possible to cover advanced use cases for `bnb` models, such as offloading parameters across CPU & GPU so that some modules run in int8 on GPU while the offloaded ones run on CPU (but in `fp32`).
Draft for now; I will update the docs and fix the CI tests after a first pass.
One comment from my side: we should keep the `load_in_8bit` argument as it is for now (I think this argument is quite powerful) and progressively replace it entirely with the config in the future if more quantization methods get supported in `bitsandbytes`.
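A rough usage sketch of what the config-based API enables (argument and checkpoint names here are placeholders and may still change while the PR is in draft):
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,  # offloaded modules run on CPU in fp32
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # placeholder checkpoint
    device_map="auto",    # lets accelerate split weights across GPU and CPU
    quantization_config=quantization_config,
)
```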
cc @sgugger | 02-11-2023 10:00:25 | 02-11-2023 10:00:25 | @younesbelkada great job again. :)
I really like the refactored API which supports passing a whole config object containing all relevant BnB options in one place.
I am curious are the calculations during inference on the offloaded parameters (in fp32) to CPU and disk actually executed on the CPU or do they get transferred to one of the GPUs for calculations during every pass?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>This PR is now ready for review! Would love to have a round of review @sgugger <|||||>Thanks for the extensive review! I should have addressed the comments now ๐ช <|||||>Thanks ! Exaclty! Looking forward to it! |
transformers | 21,578 | closed | `RuntimeError` when running batched inference for `Salesforce/blip2-opt-2.7b` | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.15.0-57-generic-x86_64-with-glibc2.10
- Python version: 3.8.12
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.11.0a0+b6df043 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
I'm trying to run batched inference using BLIP v2 with a slight modification from the [example script](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). Anything above a batch size of 1 produces a `RuntimeError` due to a shape mismatch in a `torch.cat` operation. The error stems from the `.generate()` function inside the language model component of BLIP v2.
cc @NielsRogge @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
## Code To Reproduce
```python
from PIL import Image
import requests
from transformers import Blip2Processor, Blip2ForConditionalGeneration
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
"Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
)
model.to(device)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# ---------------------- Change made here ---------------------- #
# Passing `images=[image, image]` instead of `images=image` for testing batched inference
inputs = processor(images=[image, image], return_tensors="pt").to(device, torch.float16)
# ---------------------------------------------------------------- #
generated_ids = model.generate(**inputs)
```
## Error Stack Trace
```python
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ โ
โ /tmp/ipykernel_1000/1893006406.py:18 in <module> โ
โ โ
โ [Errno 2] No such file or directory: '/tmp/ipykernel_1000/1893006406.py' โ
โ /opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py:28 in decorate_context โ
โ โ
โ 25 โ โ @functools.wraps(func) โ
โ 26 โ โ def decorate_context(*args, **kwargs): โ
โ 27 โ โ โ with self.__class__(): โ
โ โฑ 28 โ โ โ โ return func(*args, **kwargs) โ
โ 29 โ โ return cast(F, decorate_context) โ
โ 30 โ โ
โ 31 โ def _wrap_generator(self, func): โ
โ โ
โ /home/synopsis/git/transformers/src/transformers/models/blip_2/modeling_blip_2.py:1421 in โ
โ generate โ
โ โ
โ 1418 โ โ inputs_embeds = self.language_model.get_input_embeddings()(input_ids) โ
โ 1419 โ โ inputs_embeds = torch.cat([language_model_inputs, inputs_embeds], dim=1) โ
โ 1420 โ โ โ
โ โฑ 1421 โ โ outputs = self.language_model.generate( โ
โ 1422 โ โ โ inputs_embeds=inputs_embeds, โ
โ 1423 โ โ โ attention_mask=attention_mask, โ
โ 1424 โ โ โ **generate_kwargs, โ
โ โ
โ /opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py:28 in decorate_context โ
โ โ
โ 25 โ โ @functools.wraps(func) โ
โ 26 โ โ def decorate_context(*args, **kwargs): โ
โ 27 โ โ โ with self.__class__(): โ
โ โฑ 28 โ โ โ โ return func(*args, **kwargs) โ
โ 29 โ โ return cast(F, decorate_context) โ
โ 30 โ โ
โ 31 โ def _wrap_generator(self, func): โ
โ โ
โ /home/synopsis/git/transformers/src/transformers/generation/utils.py:1386 in generate โ
โ โ
โ 1383 โ โ โ โ ) โ
โ 1384 โ โ โ โ
โ 1385 โ โ โ # 11. run greedy search โ
โ โฑ 1386 โ โ โ return self.greedy_search( โ
โ 1387 โ โ โ โ input_ids, โ
โ 1388 โ โ โ โ logits_processor=logits_processor, โ
โ 1389 โ โ โ โ stopping_criteria=stopping_criteria, โ
โ โ
โ /home/synopsis/git/transformers/src/transformers/generation/utils.py:2224 in โ
โ greedy_search โ
โ โ
โ 2221 โ โ โ โ next_tokens = next_tokens * unfinished_sequences + pad_token_id * โ
โ 2222 โ โ โ โ
โ 2223 โ โ โ # update generated ids, model inputs, and length for next step โ
โ โฑ 2224 โ โ โ input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1) โ
โ 2225 โ โ โ model_kwargs = self._update_model_kwargs_for_generation( โ
โ 2226 โ โ โ โ outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_d โ
โ 2227 โ โ โ ) โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 1 but got size
2 for tensor number 1 in the list.
```
### Expected behavior
I'd expect to be able to run batched inference for a batch size > 1 | 02-11-2023 08:04:42 | 02-11-2023 08:04:42 | Hey there @rsomani95! I believe I know what the root cause is, on it :)<|||||>@rsomani95 if you install from `main`, it should be working now :)
Note that batched generation + beam search is still not working, and is being tracked [here](https://github.com/huggingface/transformers/issues/21599)<|||||>@gante great, thanks a bunch for the quick work! |
transformers | 21,577 | closed | Bump ipython from 8.1.1 to 8.10.0 in /examples/research_projects/decision_transformer | Bumps [ipython](https://github.com/ipython/ipython) from 8.1.1 to 8.10.0.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/ipython/ipython/commit/15ea1ed5a886d6c19c1cc4856f2cf04a2a547c57"><code>15ea1ed</code></a> release 8.10.0</li>
<li><a href="https://github.com/ipython/ipython/commit/560ad109197c0f8373865896af369bb3b36fd229"><code>560ad10</code></a> DOC: Update what's new for 8.10 (<a href="https://github-redirect.dependabot.com/ipython/ipython/issues/13939">#13939</a>)</li>
<li><a href="https://github.com/ipython/ipython/commit/7557ade0ed927475d5ab5b573d0ea4febfb22683"><code>7557ade</code></a> DOC: Update what's new for 8.10</li>
<li><a href="https://github.com/ipython/ipython/commit/385d69325319a5972ee9b5983638e3617f21cb1f"><code>385d693</code></a> Merge pull request from GHSA-29gw-9793-fvw7</li>
<li><a href="https://github.com/ipython/ipython/commit/e548ee23ac460a99901f1cd43b94ae84a35ec393"><code>e548ee2</code></a> Swallow potential exceptions from showtraceback() (<a href="https://github-redirect.dependabot.com/ipython/ipython/issues/13934">#13934</a>)</li>
<li><a href="https://github.com/ipython/ipython/commit/0694b08b436203817059ec7e7136cf8561a6f013"><code>0694b08</code></a> MAINT: mock slowest test. (<a href="https://github-redirect.dependabot.com/ipython/ipython/issues/13885">#13885</a>)</li>
<li><a href="https://github.com/ipython/ipython/commit/865591252a67c6907fe03228b4053305715286e6"><code>8655912</code></a> MAINT: mock slowest test.</li>
<li><a href="https://github.com/ipython/ipython/commit/a011765b44febfb11bae122d2ed7db763621ac8f"><code>a011765</code></a> Isolate the attack tests with setUp and tearDown methods</li>
<li><a href="https://github.com/ipython/ipython/commit/c7a9470e540392c575aac46c3ee5cf4fe5123eb1"><code>c7a9470</code></a> Add some regression tests for this change</li>
<li><a href="https://github.com/ipython/ipython/commit/fd34cf5f1f6e243243c738c6e0cf62eb682c4d68"><code>fd34cf5</code></a> Swallow potential exceptions from showtraceback()</li>
<li>Additional commits viewable in <a href="https://github.com/ipython/ipython/compare/8.1.1...8.10.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 02-11-2023 01:34:19 | 02-11-2023 01:34:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,576 | closed | Auto-conversion of FP16 tensors to FP32 causes OOM error starting in transformers 4.26.0/1 releases | ### System Info
```
transformers version: 4.26.0 and newer
platform: Ubuntu 20.04.5 (ec2 instance, 5.15.0-1028-aws)
python version: 3.8.13
other dependencies: boto3 1.26.68, awscli 1.26.68, datasets[s3] 2.5.2, sagemaker 2.132.0 + TrainingCompilerConfig, and xla
juggingface_hub version: 0.12.0
PyTorch version: 1.12.0
Using GPU in script?: yes
GPU: 1 NVIDIA T4 Tensor Core GPU with 16GB of VRAM
CPU: 8 Virtual Cascade Lake P-8259L CPUs
Using distributed or parallel set-up in script?: no
```
### Who can help?
@ArthurZucker @younesbelkada, possibly @sgugger given I'm using Trainer
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
### Background ###
**Model Used:** roberta-base
**Dataset:** SST-2
My team at AWS works on the [SageMaker Training Compiler](https://docs.aws.amazon.com/sagemaker/latest/dg/training-compiler.html) that uses XLA to provide compiler-level optimizations. Since the last two transformers releases, we've noticed that datasets with fp16 tensors are being auto-converted to fp32 tensors. This is despite some of our training scripts specifying the `fp16` hyperparameter to `True`.
### Problem ###
In some of our models, we were seeing memory footprints double, leading to a lot of OOM errors, especially on ec2 instances with GPUs with 16GB of VRAM. We've gotten around this by fixing the transformers version to 4.25.1 in all of our models, but we're concerned about this because our customers are just going to run `pip install transformers` and wonder why they're seeing OOM errors that they haven't seen before.
### Verifying and Trying to Fix the Problem ###
I've tried setting `fp16` to `False` in `transformers 4.25.1` using the `roberta-base` model with `sst-2` dataset, and the memory footprint was the same as when setting `fp16` to `True` in `transformers >= 4.26.0`. I spent some time trying to bisect the commit that made this change, but I wasn't able to narrow down the commit.
I thought maybe [`keep_in_fp32_modules`](https://github.com/huggingface/transformers/blob/3b7ed25da9ce8291b9dabd6943f5b928352f4a16/src/transformers/modeling_utils.py#L628-L634) was being enabled under the hood, but I still observed this OOM error. I also tried reverting some of the T5 and RoBERTa-specific commits made in the last release, and wasn't able to restore the old behavior.
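A quick way to check whether the weights are actually kept in fp16 or silently upcast is to count parameter dtypes directly; a minimal sketch (the model name matches this report, the rest is illustrative):
```python
from collections import Counter

import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", torch_dtype=torch.float16
)
print(Counter(p.dtype for p in model.parameters()))
# A large float32 count here would match the doubled memory footprint described above.
```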
Please let me know what I can do to fix this problem. I've seen this bug on multiple models using 4.26.0, 4.26.1, and 4.27.0.dev0 (using commit [`1efe9c0`](https://github.com/huggingface/transformers/commit/1efe9c0b24064acb3971260f505e46caa94617e9)). All the code needed to reproduce should be below - paste the estimator code into one file and the training script code in another, and modify the training script filename/path as needed. Thanks a lot for all the work the team does - I'm a long time user but first-time bug reporter.
### Code ###
**Estimator**
```
!pip install --upgrade "boto3==1.26.68" "awscli==1.27.68"
!pip install "datasets[s3]==2.5.2" "torch==1.12.0" "fsspec<=2022.7.1"--upgrade
!pip install "transformers==4.26.0"
!pip install -U "sagemaker==2.132.0" "protobuf"
from datasets import Dataset
from transformers import AutoTokenizer
import os
import pandas as pd
import sagemaker
import botocore
# Imports assumed for the estimator below (not shown in the original snippet);
# the exact TrainingCompilerConfig import path may differ across sagemaker SDK versions.
from sagemaker.pytorch import PyTorch, TrainingCompilerConfig
tokenizer_name = 'roberta-base'
!aws s3 sync "s3://sagemaker-sample-files/datasets/text/SST2/" "./"
# download tokenizer
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
# tokenizer helper function
def tokenize(batch):
return tokenizer(batch['text'], padding='max_length', truncation=True)
# load dataset
test_df = pd.read_csv('sst2.test', sep='delimiter', header=None, engine='python', names=['line'])
train_df = pd.read_csv('sst2.train', sep='delimiter', header=None, engine='python', names=['line'])
test_df[['label', 'text']] = test_df['line'].str.split(' ', 1, expand=True)
train_df[['label', 'text']] = train_df['line'].str.split(' ', 1, expand=True)
test_df.drop('line', axis=1, inplace=True)
train_df.drop('line', axis=1, inplace=True)
test_df['label'] = pd.to_numeric(test_df['label'], downcast='integer')
train_df['label'] = pd.to_numeric(train_df['label'], downcast='integer')
train_dataset = Dataset.from_pandas(train_df)
test_dataset = Dataset.from_pandas(test_df)
# tokenize dataset
train_dataset = train_dataset.map(tokenize, batched=True)
test_dataset = test_dataset.map(tokenize, batched=True)
# set format for pytorch
train_dataset = train_dataset.rename_column("label", "labels")
train_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels'])
test_dataset = test_dataset.rename_column("label", "labels")
test_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels'])
save_path = "/opt/ml/model"
optimized_estimator = PyTorch(
    entry_point="trainer_script.py",  # see the training script below
source_dir="./scripts",
instance_type="ml.g4dn.2xlarge",
instance_count=1,
role="admin",
py_version="py38",
framework_version="1.12.0",
volume_size=volume_size,
hyperparameters={
'epochs': 3,
        'max_steps': 5,
'train_batch_size': 24,
'model_name': 'roberta-base',
'optim': 'adamw_torch_xla',
} ,
disable_profiler=True,
debugger_hook_config=False,
max_retry_attempts=3,
compiler_config=TrainingCompilerConfig(debug=True),
environment={
"XLA_FLAGS": f"--xla_dump_to={save_path} --xla_dump_hlo_as_text",
"XLA_SAVE_TENSORS_FILE": f"{save_path}/XLA_SAVE_TENSORS_FILE.hlo",
"XLA_METRICS_FILE": f"{save_path}/XLA_METRICS_FILE.txt",
"XLA_SAVE_HLO_FILE": f"{save_path}/XLA_SAVE_HLO_FILE.hlo",
},
distribution={'pytorchxla': {'enabled': True}},
)
# starting the train job with our uploaded datasets as input
optimized_estimator.fit({"train": training_input_path, "test": test_input_path}, wait=True)
```
**Training Script**
```
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments, AutoTokenizer, TrainerCallback
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from datasets import load_from_disk
import random
import logging
import sys
import argparse
import os
import torch
import numpy as np
def _mp_fn(index):
    # For xla_spawn (TPUs)
    main()
def main():
parser = argparse.ArgumentParser()
# hyperparameters sent by the client are passed as command-line arguments to the script.
parser.add_argument("--epochs", type=int, default=50)
parser.add_argument("--max_steps", type=int, default=-1)
parser.add_argument("--train_batch_size", type=int, default=32)
parser.add_argument("--eval_batch_size", type=int, default=64)
parser.add_argument("--warmup_steps", type=int, default=500)
parser.add_argument("--model_name", type=str)
parser.add_argument("--learning_rate", type=str, default=5e-5)
# Data, model, and output directories
parser.add_argument("--output_data_dir", type=str, default=os.environ["SM_OUTPUT_DATA_DIR"])
parser.add_argument("--model_dir", type=str, default=os.environ["SM_MODEL_DIR"])
parser.add_argument("--n_gpus", type=str, default=os.environ["SM_NUM_GPUS"])
parser.add_argument("--training_dir", type=str, default=os.environ["SM_CHANNEL_TRAIN"])
parser.add_argument("--test_dir", type=str, default=os.environ["SM_CHANNEL_TEST"])
args, _ = parser.parse_known_args()
# The original LR was set for a batch of 32. We are scaling learning rate with batch size.
args.learning_rate = float(args.learning_rate)
args.learning_rate = float('5e-5')/32*args.train_batch_size
# Set up logging
logger = logging.getLogger(__name__)
logging.basicConfig(
level=logging.getLevelName("INFO"),
handlers=[logging.StreamHandler(sys.stdout)],
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
)
# load datasets
train_dataset = load_from_disk(args.training_dir)
test_dataset = load_from_disk(args.test_dir)
# compute metrics function for binary classification
def compute_metrics(pred):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average="binary")
acc = accuracy_score(labels, preds)
return {"accuracy": acc, "f1": f1, "precision": precision, "recall": recall}
# download model from model hub
model = AutoModelForSequenceClassification.from_pretrained(args.model_name)
tokenizer = AutoTokenizer.from_pretrained(args.model_name)
# define training args
training_args = TrainingArguments(
output_dir=args.model_dir,
num_train_epochs=args.epochs,
max_steps=args.max_steps,
per_device_train_batch_size=args.train_batch_size,
per_device_eval_batch_size=args.eval_batch_size,
warmup_steps=args.warmup_steps,
logging_dir=f"{args.output_data_dir}/logs",
learning_rate=args.learning_rate,
fp16=True, #note that this is True
dataloader_drop_last=True,
disable_tqdm=True,
evaluation_strategy="no",
save_strategy="no",
save_total_limit=1,
logging_strategy="epoch",
)
# create Trainer instance
trainer = Trainer(
model=model,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_dataset,
eval_dataset=test_dataset,
tokenizer=tokenizer,
)
# train model
trainer.train()
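
# Script entry point: kept after the definition of main() so the name resolves when SageMaker runs this file directly.
if __name__ == "__main__":
    main()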
```
### Expected behavior
**Expected:** On a GPU with 16GB of VRAM, the largest XLA module (based on the buffer-assignment/disassembly files) should be Module 3, with a total memory footprint of 7.859GB. Training should finish without going out of memory. The buffer assignment file shows a mix of fp16 and fp32 tensors. This is the behavior on transformers versions older than 4.26.0; I tested 4.21.0, 4.24.0, and 4.25.1 and observed it on all three.
**Actual:** Module 3 has a total memory footprint of 15.463GB, which leads to an out-of-memory error on a 16GB GPU. Looking at the buffer assignment file for Module 3, all tensors are fp32. This happens on 4.26.0, 4.26.1, and 4.27.0.dev0.
So, I took a pass at using git bisect to narrow down the issue, but I wasn't able to find the correct commit(s) to revert. There may be more than one commit contributing to this bug, or (more likely) I simply didn't find the right one. With that said, here are some commits I was looking at:
`fe65657de112531a2d5303491f245f9e7534ae8d` - maybe `torch_dtype` is not returning the proper type for my model? It could be defaulting to `fp32` instead of `fp16`, whereas the `dtype` argument passed to the preprocess function may be `fp16` because of the override I'm passing in my estimator.
`accad48e5b4a98302ea396b9f15c5f1c987b6f7f` - There's an explicit conversion to `torch.float32` in the `_load_state_dict_into_meta_model` function. Maybe that's what I'm seeing.
`1af4bee8965c80e54c0e21aa8aadd035fd1f4189` - Maybe a bug in `keep_in_fp32_modules`? It could be that if `fp16` is set to `True` in an estimator, `use_keep_in_fp32_modules` ends up `True` when it should be `False`.<|||||>You are not using any `torch_dtype` to build your model and the model you are using is saved in float32, so none of those commits should change anything. `fp16=True` is mixed precision training in float16, so the model is still in float32 (there are actually two copies of the model, one in float16 and one in float32).<|||||>So `torch_dtype` is not set based on the `fp16` value. Hmmm, ok. I realize that setting fp16 is for mixed precision and doesn't convert fp32 tensors to fp16. I was thinking that the fp16 copy might be getting converted to fp32 at some point, but I don't know for sure if that's happening. I'm also not sure which other commits could have caused the issue, other than that it's something committed during the 4.26.0 release.
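For my own notes, here's a minimal sketch of that point (assumes a CUDA device; `roberta-base` is just the checkpoint from the repro above):
```
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("roberta-base").cuda()
print({p.dtype for p in model.parameters()})  # {torch.float32}: the stored weights stay fp32

# roughly what Trainer(fp16=True) does around the forward pass
with torch.cuda.amp.autocast(dtype=torch.float16):
    logits = model(input_ids=torch.tensor([[0, 2]], device="cuda")).logits
print(logits.dtype)  # torch.float16: the compute/activations run in half precision
```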
<|||||>Seems like this is still a problem we're observing. The largest XLA module is doubling in VRAM for any mixed precision training, as fp16 tensors are being converted to fp32. This is with setting `fp16=True`, and the behavior has been different since transformers `v4.26.0`. Is there a special thing we should do for our notebooks to ensure this doesn't happen? We want to use the latest Transformers, but our customers need to avoid the increased memory footprint. Please advise. Thanks!<|||||>@awskila can you please share the whole example so we are able to reproduce. We are missing the `requirements.txt` for the libraries you installed in the SageMaker training job also does you "Estimator" code have errors. <|||||>Sure, give me a bit more time. I will share everything very soon.<|||||>I experience similar OOM errors when calling `model.generate()` at inference time, after the upgrade from 4.25 to 4.26.<|||||>Same issue here, `model.generate()` gave me OOM with 4.26.1, so I rolled back to 4.25.1 to fix it.<|||||>This has nothing to do with this issue which relates to mixed precision training, so please open a new one with a reproducer.<|||||>Good to hear this isn't just me. I will try and post a full working sample. Have to modify the model code in a way that allows it to stay open source, and I was using a customized sst-2 dataset that had differences from the open-source sst-2 dataset. My apologies for the delays.
As far as how this relates to mixed precision training - I was showing that behavior is not working as it did in a prior release. It's very possible (and likely imo) that the real issue is occurring elsewhere, and I just noticed this with mixed precision training. I don't think the issue is at the `model.generate` level given that code has been [stable since 4.25.1](https://github.com/huggingface/transformers/commit/f270b960d6b755d5355a0f193ba71f47131c86b8).
Essentially my issue was 4.26.0+ mixed precision training had a memory footprint double of <4.25.1 mixed precision training. And the resulting disassembly is similar to <4.25.1 fp32 training.
I was running training off a pre-trained model, and it looks like this has been seen on inference too per @pgmikhael. It may not be two separate issues, but maybe it's a more general issue. I would welcome a new issue with a reproducer though, as I have been getting delayed for compliance reasons.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,575 | closed | Add `inputs_embeds` support when generating with GPT-J | # What does this PR do?
This PR extends https://github.com/huggingface/transformers/pull/21405 by @gante to GPT-J, making it accept `inputs_embeds` when generating.
This is generally useful for soft-prompting but I am specifically using this with https://github.com/jmerullo/limber by @jmerullo
Importantly, I find that dummy `input_ids` are still required. Sample code using this feature with GPT-J:
```
import torch
from transformers import GPTJForCausalLM, AutoTokenizer
model_name = "hf-internal-testing/tiny-random-GPTJModel"; revision="main"
model = GPTJForCausalLM.from_pretrained(model_name, revision=revision)
tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs_embeds = torch.rand((1, 144, 32,)) # 144 dummy soft-prompt token embeddings
filler_input_ids = torch.zeros((1, inputs_embeds.shape[1]), dtype=torch.long).to(model.device)
filler_input_ids += model.config.bos_token_id # setting dummy input_ids to bos tokens
model.generate(filler_input_ids, inputs_embeds=inputs_embeds, max_length=300)
```
| 02-10-2023 23:45:49 | 02-10-2023 23:45:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@dimitry12 to clarify, `input_ids` are not a required argument when `input_embeds` is passed :) Or should not be, let me know if you're getting errors in that case.
If you don't pass them, the output of `.generate()` should only contain the newly generated tokens.<|||||>@dimitry12 oh I see, the case when the batch size is larger than one is not handled, it is the same issue as in #21578! I'll open a PR soon that fixes it<|||||>@gante thank you for the prompt review! Is merging done by HF staff?<|||||>Hey @dimitry12 yes it is, but we still need a review from our transformers master @sgugger :)<|||||>@gante
> @dimitry12 to clarify, `input_ids` are not a required argument when `input_embeds` is passed :) Or should not be, let me know if you're getting errors in that case.
>
> If you don't pass them, the output of `.generate()` should only contain the newly generated tokens.
Yes, it fails without dummy `input_ids`, but to be clear, it fails differently compared to BLIP2 (https://github.com/huggingface/transformers/issues/21578).
To replicate, a slightly modified `test_generate_from_input_embeds_decoder_only` is sufficient:
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-gpt2")
tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-gpt2")
text = "Hello world"
input_ids = tokenizer.encode(text, return_tensors="pt")
# Same thing, but from input embeddings
inputs_embeds = model.transformer.wte(input_ids)
outputs_from_embeds = model.generate(inputs_embeds=inputs_embeds)
```
results in:
```
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ in <module> โ
โ โ
โ 8 โ
โ 9 # Same thing, but from input embeddings โ
โ 10 inputs_embeds = model.transformer.wte(input_ids) โ
โ โฑ 11 outputs_from_embeds = model.generate(inputs_embeds=inputs_embeds) โ
โ 12 โ
โ โ
โ /home/dzmitry/miniconda3/envs/limber.py310/lib/python3.10/site-packages/torch/autograd/grad_mode โ
โ .py:27 in decorate_context โ
โ โ
โ 24 โ โ @functools.wraps(func) โ
โ 25 โ โ def decorate_context(*args, **kwargs): โ
โ 26 โ โ โ with self.clone(): โ
โ โฑ 27 โ โ โ โ return func(*args, **kwargs) โ
โ 28 โ โ return cast(F, decorate_context) โ
โ 29 โ โ
โ 30 โ def _wrap_generator(self, func): โ
โ โ
โ /home/dzmitry/miniconda3/envs/limber.py310/lib/python3.10/site-packages/transformers/generation/ โ
โ utils.py:1386 in generate โ
โ โ
โ 1383 โ โ โ โ ) โ
โ 1384 โ โ โ โ
โ 1385 โ โ โ # 11. run greedy search โ
โ โฑ 1386 โ โ โ return self.greedy_search( โ
โ 1387 โ โ โ โ input_ids, โ
โ 1388 โ โ โ โ logits_processor=logits_processor, โ
โ 1389 โ โ โ โ stopping_criteria=stopping_criteria, โ
โ โ
โ /home/dzmitry/miniconda3/envs/limber.py310/lib/python3.10/site-packages/transformers/generation/ โ
โ utils.py:2181 in greedy_search โ
โ โ
โ 2178 โ โ โ model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) โ
โ 2179 โ โ โ โ
โ 2180 โ โ โ # forward pass to get next token โ
โ โฑ 2181 โ โ โ outputs = self( โ
โ 2182 โ โ โ โ **model_inputs, โ
โ 2183 โ โ โ โ return_dict=True, โ
โ 2184 โ โ โ โ output_attentions=output_attentions, โ
โ โ
โ /home/dzmitry/miniconda3/envs/limber.py310/lib/python3.10/site-packages/torch/nn/modules/module. โ
โ py:1194 in _call_impl โ
โ โ
โ 1191 โ โ # this function, and just call forward. โ
โ 1192 โ โ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o โ
โ 1193 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ
โ โฑ 1194 โ โ โ return forward_call(*input, **kwargs) โ
โ 1195 โ โ # Do not call functions when jit is used โ
โ 1196 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ
โ 1197 โ โ if self._backward_hooks or _global_backward_hooks: โ
โ โ
โ /home/dzmitry/miniconda3/envs/limber.py310/lib/python3.10/site-packages/transformers/models/gpt2 โ
โ /modeling_gpt2.py:1073 in forward โ
โ โ
โ 1070 โ โ """ โ
โ 1071 โ โ return_dict = return_dict if return_dict is not None else self.config.use_return โ
โ 1072 โ โ โ
โ โฑ 1073 โ โ transformer_outputs = self.transformer( โ
โ 1074 โ โ โ input_ids, โ
โ 1075 โ โ โ past_key_values=past_key_values, โ
โ 1076 โ โ โ attention_mask=attention_mask, โ
โ โ
โ /home/dzmitry/miniconda3/envs/limber.py310/lib/python3.10/site-packages/torch/nn/modules/module. โ
โ py:1194 in _call_impl โ
โ โ
โ 1191 โ โ # this function, and just call forward. โ
โ 1192 โ โ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o โ
โ 1193 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ
โ โฑ 1194 โ โ โ return forward_call(*input, **kwargs) โ
โ 1195 โ โ # Do not call functions when jit is used โ
โ 1196 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ
โ 1197 โ โ if self._backward_hooks or _global_backward_hooks: โ
โ โ
โ /home/dzmitry/miniconda3/envs/limber.py310/lib/python3.10/site-packages/transformers/models/gpt2 โ
โ /modeling_gpt2.py:793 in forward โ
โ โ
โ 790 โ โ if token_type_ids is not None: โ
โ 791 โ โ โ token_type_ids = token_type_ids.view(-1, input_shape[-1]) โ
โ 792 โ โ if position_ids is not None: โ
โ โฑ 793 โ โ โ position_ids = position_ids.view(-1, input_shape[-1]) โ
โ 794 โ โ โ
โ 795 โ โ if past_key_values is None: โ
โ 796 โ โ โ past_length = 0 โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
RuntimeError: shape '[-1, 5]' is invalid for input of size 1
```
@gante I can open a draft PR with an updated failing test, and see if I can figure it out or if your planned fix for https://github.com/huggingface/transformers/issues/21578 will also fix it. Please advise what the best process is, definitely willing to help here.<|||||>(For context, the fix to the issue above is in #21580. Merging :D ) |
transformers | 21,574 | closed | [`Blip2`] Add int8 support for `blip2-flan-t5-xxl` | # What does this PR do?
Similarly to https://github.com/huggingface/transformers/pull/20683: the largest checkpoint, blip2 + flan-t5-xxl, currently fails to run in int8 because we forgot to add the patch introduced in the aforementioned PR.
This PR adds `_keep_in_fp32_modules` support for `Blip2` so that users can run this model in fp16 and int8 with no performance degradation, as the original model was trained in bf16.
cc @sgugger @NielsRogge
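For reference, a hedged usage sketch of what this enables (the checkpoint name comes from the PR title; `bitsandbytes` and `accelerate` need to be installed, and the exact kwargs are illustrative):
```
import torch
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-flan-t5-xxl",
    load_in_8bit=True,          # int8 weights via bitsandbytes
    torch_dtype=torch.float16,  # modules listed in _keep_in_fp32_modules are still kept in fp32
    device_map="auto",
)
```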
| 02-10-2023 21:10:59 | 02-10-2023 21:10:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,573 | closed | [deepspeed] performance docs | This PR extends the deepspeed docs to include:
1. performance docs
2. network communication docs | 02-10-2023 19:33:09 | 02-10-2023 19:33:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,572 | closed | Use PyAV instead of Decord in examples | # What does this PR do?
Replaces decord with PyAV as the library used to decode video files in the examples. This resolves an issue where CUDA crashes if `decord` has been imported.
The doc tests have been run to validate they pass cc @ydshieh
Note: decord is still being used in the video pipelines. cc @nateraw
Fixes #21085
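For illustration, a minimal PyAV-based decoding helper along these lines (a sketch only; the helper name and sampling logic are not taken from the updated examples):
```
import av
import numpy as np

def read_video_pyav(file_path, indices):
    # Decode only the frames at `indices` and return them as a (num_frames, height, width, 3) uint8 array.
    wanted = set(indices)
    frames = []
    with av.open(file_path) as container:
        for i, frame in enumerate(container.decode(video=0)):
            if i in wanted:
                frames.append(frame.to_ndarray(format="rgb24"))
            if i >= max(wanted):
                break
    return np.stack(frames)
```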
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 02-10-2023 19:06:49 | 02-10-2023 19:06:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the review @nateraw! I've updated the docstrings with your suggestions :) |
transformers | 21,571 | closed | Generate: TF supports multiple eos tokens | # What does this PR do?
TF generation test addition PR 4 (out of 4) ๐ All PT generation integration tests for features that also exist in TF are now framework-agnostic ๐
To complete this last step, I've added support for multiple EOS tokens in TF.
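A hedged sketch of the resulting usage (the model and the extra stop-token id are arbitrary choices for illustration):
```
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="tf")
extra_stop_id = tokenizer.encode(".")[0]  # an illustrative second stop token
# eos_token_id can now be a list: generation stops on whichever id is produced first
outputs = model.generate(**inputs, max_new_tokens=20, eos_token_id=[tokenizer.eos_token_id, extra_stop_id])
print(tokenizer.decode(outputs[0]))
```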
Because this touches the core of the TF generation code, slow tests were run for:
- [x] TF GPT2
- [x] TF T5 | 02-10-2023 18:03:38 | 02-10-2023 18:03:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The two failing tests are known failures -- merging |