repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 23,388 | closed | Wrong RoBERTa configuration | https://github.com/huggingface/transformers/blob/c2393cad085e3875ee2206d917d46d15e50602a3/src/transformers/models/roberta/configuration_roberta.py#L108
This must match the tokenizer vocabulary size, i.e., `50265`.
Others have also mentioned this: https://discuss.pytorch.org/t/hugging-faces-roberta-config-and-tokenizer-do-not-have-matching-vocabulary/134868 | 05-16-2023 09:15:03 | 05-16-2023 09:15:03 | cc @ArthurZucker <|||||>Thanks for pointing this out and opening a PR! <|||||>Hey @ArthurZucker @amyeroberts, since I see that #23389 is still not merged, can I fix this and create a new PR? <|||||>Sure! It would be great if you could check out his branch to include the work he has done. <|||||>fix #23863 |
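For anyone wanting to reproduce the reported mismatch locally, a quick check (a sketch for illustration only, not part of the linked fix) is to compare the pretrained tokenizer with a freshly instantiated default config:

```python
from transformers import RobertaConfig, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
config = RobertaConfig()  # default values from configuration_roberta.py

print(len(tokenizer))     # 50265 for the pretrained roberta-base tokenizer
print(config.vocab_size)  # should also be 50265; the mismatch is what this issue reports
```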
transformers | 23,387 | closed | Run doctest (in PRs) only when some doc example(s) are modified | # What does this PR do?
Run doctest (in PRs) only when some doc example(s) are modified.
This is a fix for #23327 (which was reverted in #23371 due to incorrect logic).
This PR implements the correct logic for
> for now the tests are launched on a file if we modify it, but I would only launch it if docstrings are modified (e.g. check the modifications are correct) to go faster.
where I go one step further to make it
> only launch it if some _**doc examples**_ are modified
(instead of any docstring) | 05-16-2023 07:00:54 | 05-16-2023 07:00:54 | _The documentation is not available anymore as the PR was closed or merged._ |
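For illustration only, the check this PR describes boils down to something like the sketch below; the real implementation lives in the repository's doctest/test-fetcher utilities and is more involved, so the helper name here is hypothetical:

```python
import subprocess

def modifies_doc_example(filename: str, base_ref: str = "main") -> bool:
    """Return True if the diff of `filename` against `base_ref` touches a doctest line."""
    diff = subprocess.check_output(
        ["git", "diff", "-U0", base_ref, "--", filename], text=True
    )
    changed_lines = [
        line[1:].lstrip()
        for line in diff.splitlines()
        if (line.startswith("+") or line.startswith("-"))
        and not line.startswith(("+++", "---"))
    ]
    # Doc examples are the `>>>` / `...` blocks inside docstrings.
    return any(line.startswith((">>>", "...")) for line in changed_lines)
```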
transformers | 23,386 | closed | FSDP cuda out of memory during checkpoint saving | ### System Info
- `transformers` version: 4.30.0.dev0
- Platform: Linux-4.14.81.bm.22-amd64-x86_64-with-glibc2.24
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@stas00 @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm running https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py on 32GB GPUs to finetune LLaMA-7B with FSDP turned on. The training process goes well, but I get a CUDA OOM when saving the final checkpoint: the entire fp32 state dict cannot fit in my GPU memory. A possible solution is to offload the state dict to CPU, as mentioned in https://github.com/pytorch/pytorch/issues/98823. Is there a better way to handle this?
```
/data00/home/lijiahao.plus/miniconda3/envs/mlir/lib/python3.9/site-packages/torch/distributed/fsdp/_state_dict_utils.py:312: UserWarning: Failed to clone() tensor with name _fsdp_wrapped_module.model.layers.31.mlp.gate_proj.weight on rank 2. This may mean that this state_dict entry could point to invalid memory regions after returning from state_dict() call if this parameter is managed by FSDP. Please check clone implementation of _fsdp_wrapped_module.model.layers.31.mlp.gate_proj.weight. Error: CUDA out of memory. Tried to allocate 172.00 MiB (GPU 2; 31.75 GiB total capacity; 29.54 GiB already allocated; 39.75 MiB free; 30.38 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
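For reference, a minimal sketch of the CPU-offload approach mentioned above, assuming direct use of the PyTorch FSDP API (the exact place to hook this into the `Trainer` checkpointing path would differ):

```python
# Sketch only: gather the full state dict on CPU, materialized on rank 0,
# instead of assembling the unsharded fp32 weights on the GPU.
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import StateDictType, FullStateDictConfig

def save_fsdp_checkpoint(model: FSDP, path: str) -> None:
    # offload_to_cpu avoids the GPU allocation that triggers the OOM above;
    # rank0_only keeps a single full copy in CPU memory.
    cfg = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
    with FSDP.state_dict_type(model, StateDictType.FULL_STATE_DICT, cfg):
        state_dict = model.state_dict()
    if dist.get_rank() == 0:
        torch.save(state_dict, path)
```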
### Expected behavior
No OOM when saving checkpoints. | 05-16-2023 06:40:02 | 05-16-2023 06:40:02 | Not sure why you tagged me or Sourab - we have worked on the Deepspeed integration, and you're welcome to ask questions if you use that. I personally don't know anything about FSDP - Deepspeed works perfectly well, and FSDP implements the same ZeRO protocol that Deepspeed innovated. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,385 | closed | NLLB-MoE 54B multi-GPU inference throws "Expected all tensors to be on the same device" error | ### System Info
- `transformers` version: 4.28.1
- Platform: Linux-4.18.0-305.25.1.el8_4.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.13.3
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0a0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: 4 x A100 40GB
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
*Note: there is a workaround/fix with manual device mapping attached below but I'm wondering if there could be an official fix for the bug.*
#### Code sample
infer.py (Mostly from the [HF Hub sample](https://huggingface.co/facebook/nllb-moe-54b) with some modifications to load with multi-GPU and quantization)
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
def main():
model_name = "facebook/nllb-moe-54b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=True,
)
batched_input = [
'We now have 4-month-old mice that are non-diabetic that used to be diabetic," he added.',
"Dr. Ehud Ur, professor of medicine at Dalhousie University in Halifax, Nova Scotia and chair of the clinical and scientific division of the Canadian Diabetes Association cautioned that the research is still in its early days."
"Like some other experts, he is skeptical about whether diabetes can be cured, noting that these findings have no relevance to people who already have Type 1 diabetes."
"On Monday, Sara Danius, permanent secretary of the Nobel Committee for Literature at the Swedish Academy, publicly announced during a radio program on Sveriges Radio in Sweden the committee, unable to reach Bob Dylan directly about winning the 2016 Nobel Prize in Literature, had abandoned its efforts to reach him.",
'Danius said, "Right now we are doing nothing. I have called and sent emails to his closest collaborator and received very friendly replies. For now, that is certainly enough."',
"Previously, Ring's CEO, Jamie Siminoff, remarked the company started when his doorbell wasn't audible from his shop in his garage.",
]
inputs = tokenizer(batched_input, return_tensors="pt", padding=True)
translated_tokens = model.generate(
**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"]
)
outputs = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)
print(outputs)
if __name__ == "__main__":
main()
```
Steps:
1. Run `CUDA_VISIBLE_DEVICES=0,1,2,3 python infer.py`
2. See error
```
Traceback (most recent call last)
  <path>/code/nscc_working/engr/multi_node/nllb_inference/error_infer.py:38 in <module>
    main()
  <path>/code/nscc_working/engr/multi_node/nllb_inference/error_infer.py:30 in main
    translated_tokens = model.generate(
  <path>/.conda/envs/megatron/lib/python3.8/site-packages/torch/utils/_contextlib.py:115 in decorate_context
    return func(*args, **kwargs)
  <path>/.conda/envs/megatron/lib/python3.8/site-packages/transformers/generation/utils.py:1286 in generate
    model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
  <path>/.conda/envs/megatron/lib/python3.8/site-packages/transformers/generation/utils.py:638 in _prepare_encoder_decoder_kwargs_for_generation
    model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
  <path>/.conda/envs/megatron/lib/python3.8/site-packages/torch/nn/modules/module.py:1501 in _call_impl
    return forward_call(*args, **kwargs)
  <path>/.conda/envs/megatron/lib/python3.8/site-packages/transformers/models/nllb_moe/modeling_nllb_moe.py:1165 in forward
    layer_outputs = encoder_layer(
  <path>/.conda/envs/megatron/lib/python3.8/site-packages/torch/nn/modules/module.py:1501 in _call_impl
    return forward_call(*args, **kwargs)
  <path>/.conda/envs/megatron/lib/python3.8/site-packages/transformers/models/nllb_moe/modeling_nllb_moe.py:701 in forward
    hidden_states, router_states = self.ffn(hidden_states, attention_mask)
  <path>/.conda/envs/megatron/lib/python3.8/site-packages/torch/nn/modules/module.py:1501 in _call_impl
    return forward_call(*args, **kwargs)
  <path>/.conda/envs/megatron/lib/python3.8/site-packages/transformers/models/nllb_moe/modeling_nllb_moe.py:474 in forward
    masked_hidden_states = torch.einsum("bm,be->ebm", hidden_states, router_mask)
  <path>/.conda/envs/megatron/lib/python3.8/site-packages/torch/functional.py:378 in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and
cuda:0!
```
### Expected behavior
A list of translated text.
The following code contains a workaround to prevent certain module splits and moves certain modules to the same device as the input in order to run the inference without errors.
#### Code
```python
import torch
from accelerate.big_modeling import infer_auto_device_map, init_empty_weights
from transformers import AutoConfig, AutoModelForSeq2SeqLM, AutoTokenizer
def main():
model_name = "facebook/nllb-moe-54b"
config = AutoConfig.from_pretrained(model_name)
with init_empty_weights():
model = AutoModelForSeq2SeqLM.from_config(config)
model.tie_weights()
device_map = infer_auto_device_map(
model,
# Force splits model.encoder into separate layers and devices
max_memory={0: "6GIB", 1: "30GIB", 2: "30GIB", 3: "30GIB"},
no_split_module_classes=model._no_split_modules
+ ["NllbMoeEncoderLayer", "NllbMoeDecoderLayer"],
dtype="int8",
)
# Demonstrate that only "model.encoder.layer_norm" and "model.encoder.embed_tokens"
# needs to be on the same device as the input
for module, device in device_map.items():
if module in {"model.encoder.layer_norm", "model.encoder.embed_tokens"}:
if device != 0:
device_map[module] = 0
else:
if device == 0:
device_map[module] = 1
tokenizer = AutoTokenizer.from_pretrained(model_name, cache_dir=cache_dir)
model = AutoModelForSeq2SeqLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map=device_map, # Use the custom device map
load_in_8bit=True,
)
batched_input = [
'We now have 4-month-old mice that are non-diabetic that used to be diabetic," he added.',
"Dr. Ehud Ur, professor of medicine at Dalhousie University in Halifax, Nova Scotia and chair of the clinical and scientific division of the Canadian Diabetes Association cautioned that the research is still in its early days."
"Like some other experts, he is skeptical about whether diabetes can be cured, noting that these findings have no relevance to people who already have Type 1 diabetes."
"On Monday, Sara Danius, permanent secretary of the Nobel Committee for Literature at the Swedish Academy, publicly announced during a radio program on Sveriges Radio in Sweden the committee, unable to reach Bob Dylan directly about winning the 2016 Nobel Prize in Literature, had abandoned its efforts to reach him.",
'Danius said, "Right now we are doing nothing. I have called and sent emails to his closest collaborator and received very friendly replies. For now, that is certainly enough."',
"Previously, Ring's CEO, Jamie Siminoff, remarked the company started when his doorbell wasn't audible from his shop in his garage.",
]
inputs = tokenizer(batched_input, return_tensors="pt", padding=True)
for i in inputs:
if torch.is_tensor(inputs[i]):
inputs[i] = inputs[i].to("cuda:0")
translated_tokens = model.generate(
**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"]
)
outputs = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)
print(outputs)
if __name__ == "__main__":
main()
```
Output:
```
['Nous avons maintenant des souris de 4 mois qui ne sont pas diabétiques mais qui l\'étaient", a-t-il ajouté.', "Le Dr Ehud Ur, professeur de médecine à l'Université Dalhousie à Halifax, en Nouvelle-Écosse, et président de la division clinique et scientifique de l'Association canadienne du diabète, a averti que la recherche en était encore à ses débuts. Comme d'autres experts, il est sceptique quant à la possibilité de guérir le diabète, notant que ces résultats n'ont aucune pertinence pour les personnes atteintes de diabète de type 1.", 'Danius a déclaré: "Pour le moment, nous ne faisons rien. J\'ai appelé et envoyé des courriels à son plus proche collaborateur et j\'ai reçu des réponses très amicales. Pour l\'instant, c\'est certainement suffisant".', "Auparavant, le PDG de Ring, Jamie Siminoff, a déclaré que la société avait commencé lorsque sa sonnette n'était pas audible depuis son magasin dans son garage."]
```
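When debugging placement errors like this, it also helps to dump where `accelerate` actually put things; below is a small sketch, assuming the model was loaded with `device_map="auto"` as in the scripts above so that `hf_device_map` is populated:

```python
# Inspect the module-to-device assignment chosen by accelerate.
from collections import Counter

print(Counter(model.hf_device_map.values()))                  # how many modules ended up on each device
print(model.hf_device_map.get("lm_head"))                     # the device the inputs usually need to match
print(model.hf_device_map.get("model.encoder.embed_tokens"))  # where the encoder embeddings landed
```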
| 05-16-2023 06:07:12 | 05-16-2023 06:07:12 | cc @younesbelkada <|||||>Hi @liyier90
Thanks! It sounds like `_no_split_modules` was not set properly; I think the fix should be to replace the current `_no_split_modules` with the ones you have defined.
Is this block:
```python
# Demonstrate that only "model.encoder.layer_norm" and "model.encoder.embed_tokens"
# needs to be on the same device as the input
for module, device in device_map.items():
if module in {"model.encoder.layer_norm", "model.encoder.embed_tokens"}:
if device != 0:
device_map[module] = 0
else:
if device == 0:
device_map[module] = 1
```
necessary? I think `accelerate` automatically takes care of setting the input to the correct device through hooks.
What happens if you remove it in your case and just use the correct `_no_split_modules`?<|||||>If I comment out that block, I get the following error:
```
Traceback (most recent call last)
  <path>/code/nscc_working/engr/multi_node/nllb_inference/correct_infer.py:66 in <module>
    main()
  <path>/code/nscc_working/engr/multi_node/nllb_inference/correct_infer.py:58 in main
    translated_tokens = model.generate(
  <path>/.conda/envs/megatron/lib/python3.8/site-packages/torch/utils/_contextlib.py:115 in decorate_context
    return func(*args, **kwargs)
  <path>/.conda/envs/megatron/lib/python3.8/site-packages/transformers/generation/utils.py:1437 in generate
    return self.greedy_search(
  <path>/.conda/envs/megatron/lib/python3.8/site-packages/transformers/generation/utils.py:2288 in greedy_search
    next_tokens = next_tokens * unfinished_sequences + pad_token_id * (1
RuntimeError: Expected all tensors to be on the same device, but found at least two devices,
cuda:1 and cuda:0!
```
Because `model.encoder.layer_norm` got put on device 1:
```
{'lm_head': 0,
'model.decoder.embed_positions': 1,
'model.decoder.embed_tokens': 1,
'model.decoder.layer_norm': 2,
'model.decoder.layers.0': 1,
'model.decoder.layers.1': 1,
'model.decoder.layers.10': 2,
'model.decoder.layers.11': 2,
'model.decoder.layers.12': 2,
'model.decoder.layers.13': 2,
'model.decoder.layers.14': 2,
'model.decoder.layers.15': 2,
'model.decoder.layers.16': 2,
'model.decoder.layers.17': 2,
'model.decoder.layers.18': 2,
'model.decoder.layers.19': 2,
'model.decoder.layers.2': 1,
'model.decoder.layers.20': 2,
'model.decoder.layers.21': 2,
'model.decoder.layers.22': 2,
'model.decoder.layers.23': 2,
'model.decoder.layers.3': 1,
'model.decoder.layers.4': 1,
'model.decoder.layers.5': 1,
'model.decoder.layers.6': 1,
'model.decoder.layers.7': 2,
'model.decoder.layers.8': 2,
'model.decoder.layers.9': 2,
'model.encoder.embed_positions': 0,
'model.encoder.embed_tokens': 0,
'model.encoder.layer_norm': 1,
'model.encoder.layers.0': 0,
'model.encoder.layers.1': 0,
'model.encoder.layers.10': 1,
'model.encoder.layers.11': 1,
'model.encoder.layers.12': 1,
'model.encoder.layers.13': 1,
'model.encoder.layers.14': 1,
'model.encoder.layers.15': 1,
'model.encoder.layers.16': 1,
'model.encoder.layers.17': 1,
'model.encoder.layers.18': 1,
'model.encoder.layers.19': 1,
'model.encoder.layers.2': 0,
'model.encoder.layers.20': 1,
'model.encoder.layers.21': 1,
'model.encoder.layers.22': 1,
'model.encoder.layers.23': 1,
'model.encoder.layers.3': 1,
'model.encoder.layers.4': 1,
'model.encoder.layers.5': 1,
'model.encoder.layers.6': 1,
'model.encoder.layers.7': 1,
'model.encoder.layers.8': 1,
'model.encoder.layers.9': 1,
'model.shared': 0}
```
It could be because I'm moving all inputs to device 0, but if I were to remove the
```
for i in inputs:
if torch.is_tensor(inputs[i]):
inputs[i] = inputs[i].to("cuda:0")
```
block. I get
```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices,
cuda:1 and cpu!
```
<|||||>~Hey, thanks for reporting! From the look of it, it seems like this is an `accelerate` issue rather than a Transformers issue (accelerate should be moving the layers to the correct device on its own, and `_no_split_modules` does not support forcing individual sub-layers onto the same device). Could you open an issue over there?~
edit: I got confused by the only 2 layers that had to be put on another device; @younesbelkada explained offline what he thinks should fix it! <|||||>I don't see where the error in Accelerate lies. No layer that is not supposed to be split has been split. So the issue is definitely a Transformers one.<|||||>Yeah, I think it is definitely something that has to do with `_no_split_modules` not being set correctly. Having a look now<|||||>@liyier90
I made https://github.com/huggingface/transformers/pull/23758, which should fix your issue.
Also make sure to put the input IDs on the same device as your LM head; otherwise you will get device mismatch issues in `generate`.
The snippet I used is the one below, run on 2x NVIDIA A100 80GB:
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model_name = "facebook/nllb-moe-54b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=True,
)
batched_input = [
'We now have 4-month-old mice that are non-diabetic that used to be diabetic," he added.',
"Dr. Ehud Ur, professor of medicine at Dalhousie University in Halifax, Nova Scotia and chair of the clinical and scientific division of the Canadian Diabetes Association cautioned that the research is still in its early days."
"Like some other experts, he is skeptical about whether diabetes can be cured, noting that these findings have no relevance to people who already have Type 1 diabetes."
"On Monday, Sara Danius, permanent secretary of the Nobel Committee for Literature at the Swedish Academy, publicly announced during a radio program on Sveriges Radio in Sweden the committee, unable to reach Bob Dylan directly about winning the 2016 Nobel Prize in Literature, had abandoned its efforts to reach him.",
'Danius said, "Right now we are doing nothing. I have called and sent emails to his closest collaborator and received very friendly replies. For now, that is certainly enough."',
"Previously, Ring's CEO, Jamie Siminoff, remarked the company started when his doorbell wasn't audible from his shop in his garage.",
]
inputs = tokenizer(batched_input, return_tensors="pt", padding=True).to(1)
translated_tokens = model.generate(
**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"]
)
outputs = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)
print(outputs)
```
I had to assign the input to device 1 because in my case the LM head was on device 1, but you can retrieve it with:
```python
lm_head_device = model.hf_device_map["lm_head"]
```
And the result I get is:
```bash
['Nous avons maintenant des souris de 4 mois qui ne sont pas diabétiques mais qui l\'étaient", a-t-il ajouté.', "Le Dr Ehud Ur, professeur de médecine à l'Université Dalhousie à Halifax, en Nouvelle-Écosse, et président de la division clinique et scientifique de l'Association canadienne du diabète, a averti que la recherche en était encore à ses débuts. Comme d'autres experts, il est sceptique quant à la possibilité de guérir le diabète, notant que ces résultats n'ont aucune pertinence pour les personnes atteintes de diabète de type 1.", 'Danius a déclaré: "Pour le moment, nous ne faisons rien. J\'ai appelé et envoyé des courriels à son plus proche collaborateur et j\'ai reçu des réponses très amicales. Pour l\'instant, c\'est certainement suffisant".', "Auparavant, le PDG de Ring, Jamie Siminoff, a déclaré que la société avait commencé lorsque sa sonnette n'était pas audible depuis son magasin dans son garage."]
```
<|||||>@younesbelkada
Unfortunately, I don't think the changes in the PR were sufficient to resolve the error.
I updated `transformers` to include the fix using
```
pip install git+https://github.com/huggingface/transformers
```
The latest commit on the `main` branch was https://github.com/huggingface/transformers/commit/f67dac97bdc63874f2288546b3fa87e69d2ea1c8.
I ran the code snippet you provided, but on 4 x A100 40GB, as I do not have access to 80GB cards. I made the modification to move the input to the same device as `lm_head`, based on your advice.
```python
import os
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model_name = "facebook/nllb-moe-54b"
cache_dir = <path>
tokenizer = AutoTokenizer.from_pretrained(model_name, cache_dir=cache_dir)
model = AutoModelForSeq2SeqLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=True,
cache_dir=cache_dir,
)
batched_input = [
'We now have 4-month-old mice that are non-diabetic that used to be diabetic," he added.',
"Dr. Ehud Ur, professor of medicine at Dalhousie University in Halifax, Nova Scotia and chair of the clinical and scientific division of the Canadian Diabetes Association cautioned that the research is still in its early days."
"Like some other experts, he is skeptical about whether diabetes can be cured, noting that these findings have no relevance to people who already have Type 1 diabetes."
"On Monday, Sara Danius, permanent secretary of the Nobel Committee for Literature at the Swedish Academy, publicly announced during a radio program on Sveriges Radio in Sweden the committee, unable to reach Bob Dylan directly about winning the 2016 Nobel Prize in Literature, had abandoned its efforts to reach him.",
'Danius said, "Right now we are doing nothing. I have called and sent emails to his closest collaborator and received very friendly replies. For now, that is certainly enough."',
"Previously, Ring's CEO, Jamie Siminoff, remarked the company started when his doorbell wasn't audible from his shop in his garage.",
]
inputs = tokenizer(batched_input, return_tensors="pt", padding=True).to(
model.hf_device_map["lm_head"]
)
translated_tokens = model.generate(
**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"]
)
outputs = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)
print(outputs)
```
But I am still getting an "Expected all tensors to be on the same device" error.
```
Traceback (most recent call last)
  /home/users/nus/yier/code/nscc_working/engr/multi_node/nllb_inference/sample_infer.py:31 in <module>
    translated_tokens = model.generate(
  /home/users/nus/yier/.conda/envs/megatron/lib/python3.8/site-packages/torch/utils/_contextlib.py:115 in decorate_context
    return func(*args, **kwargs)
  /home/users/nus/yier/.conda/envs/megatron/lib/python3.8/site-packages/transformers/generation/utils.py:1518 in generate
    return self.greedy_search(
  /home/users/nus/yier/.conda/envs/megatron/lib/python3.8/site-packages/transformers/generation/utils.py:2375 in greedy_search
    next_tokens = next_tokens * unfinished_sequences + pad_token_id * (1 - u
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:2 and cuda:0!
```
I noticed that one of the layers I moved in my earlier snippets (`model.encoder.layer_norm`) was put on `cuda:2`.
```
{'lm_head': 0,
'model.decoder.embed_positions': 2,
'model.decoder.embed_tokens': 2,
'model.decoder.layer_norm': 3,
'model.decoder.layers.0': 2,
'model.decoder.layers.1': 2,
'model.decoder.layers.10': 3,
'model.decoder.layers.11': 3,
'model.decoder.layers.12': 3,
'model.decoder.layers.13': 3,
'model.decoder.layers.14': 3,
'model.decoder.layers.15': 3,
'model.decoder.layers.16': 3,
'model.decoder.layers.17': 3,
'model.decoder.layers.18': 3,
'model.decoder.layers.19': 3,
'model.decoder.layers.2': 2,
'model.decoder.layers.20': 3,
'model.decoder.layers.21': 3,
'model.decoder.layers.22': 3,
'model.decoder.layers.23': 3,
'model.decoder.layers.3': 2,
'model.decoder.layers.4': 2,
'model.decoder.layers.5': 2,
'model.decoder.layers.6': 2,
'model.decoder.layers.7': 3,
'model.decoder.layers.8': 3,
'model.decoder.layers.9': 3,
'model.encoder.embed_positions': 0,
'model.encoder.embed_tokens': 0,
'model.encoder.layer_norm': 2,
'model.encoder.layers.0': 0,
'model.encoder.layers.1': 0,
'model.encoder.layers.10': 1,
'model.encoder.layers.11': 1,
'model.encoder.layers.12': 1,
'model.encoder.layers.13': 1,
'model.encoder.layers.14': 1,
'model.encoder.layers.15': 1,
'model.encoder.layers.16': 1,
'model.encoder.layers.17': 1,
'model.encoder.layers.18': 1,
'model.encoder.layers.19': 2,
'model.encoder.layers.2': 0,
'model.encoder.layers.20': 2,
'model.encoder.layers.21': 2,
'model.encoder.layers.22': 2,
'model.encoder.layers.23': 2,
'model.encoder.layers.3': 0,
'model.encoder.layers.4': 0,
'model.encoder.layers.5': 0,
'model.encoder.layers.6': 0,
'model.encoder.layers.7': 1,
'model.encoder.layers.8': 1,
'model.encoder.layers.9': 1,
'model.shared': 0}
```
The code ran successfully after I moved `model.encoder.layer_norm` to `cuda:0` while keeping the other device mapping untouched.
Please let me know if I made any mistakes in trying out your solution, or if I should raise this in the Accelerate repo instead. Thanks!<|||||>I am having the same issue. I installed transformers after the fix and I get ```RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!```
Unfortunately, I only have 3 A100 40GB GPUs that I can use.
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch
model_name = "nllb_image/nllb-moe-54b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name,
torch_dtype=torch.float16,
device_map = 'auto',
load_in_8bit=True,)
inputs = tokenizer("test", return_tensors="pt").to(model.hf_device_map["lm_head"])
translated_tokens = model.generate(
**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fr_Latn"], max_length=512
)
decoded_sentence = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
return decoded_sentence
```
expected result: translated "test" (french)
actual result: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!
Am I doing anything wrong?
```
{
"model.shared":0,
"lm_head":0,
"model.encoder.embed_tokens":0,
"model.encoder.embed_positions":0,
"model.encoder.layers.0":0,
"model.encoder.layers.1":0,
"model.encoder.layers.2":0,
"model.encoder.layers.3":0,
"model.encoder.layers.4":0,
"model.encoder.layers.5":0,
"model.encoder.layers.6":0,
"model.encoder.layers.7":0,
"model.encoder.layers.8":0,
"model.encoder.layers.9":0,
"model.encoder.layers.10":0,
"model.encoder.layers.11":0,
"model.encoder.layers.12":0,
"model.encoder.layers.13":0,
"model.encoder.layers.14":0,
"model.encoder.layers.15.self_attn":0,
"model.encoder.layers.15.attn_dropout":0,
"model.encoder.layers.15.self_attn_layer_norm":0,
"model.encoder.layers.15.ffn.router":0,
"model.encoder.layers.15.ffn.token_dropout":0,
"model.encoder.layers.15.ffn.experts.expert_0":0,
"model.encoder.layers.15.ffn.experts.expert_1":0,
"model.encoder.layers.15.ffn.experts.expert_2":0,
"model.encoder.layers.15.ffn.experts.expert_3":0,
"model.encoder.layers.15.ffn.experts.expert_4":0,
"model.encoder.layers.15.ffn.experts.expert_5":0,
"model.encoder.layers.15.ffn.experts.expert_6":0,
"model.encoder.layers.15.ffn.experts.expert_7":0,
"model.encoder.layers.15.ffn.experts.expert_8":0,
"model.encoder.layers.15.ffn.experts.expert_9":0,
"model.encoder.layers.15.ffn.experts.expert_10":0,
"model.encoder.layers.15.ffn.experts.expert_11":0,
"model.encoder.layers.15.ffn.experts.expert_12":0,
"model.encoder.layers.15.ffn.experts.expert_13":0,
"model.encoder.layers.15.ffn.experts.expert_14":0,
"model.encoder.layers.15.ffn.experts.expert_15":0,
"model.encoder.layers.15.ffn.experts.expert_16":0,
"model.encoder.layers.15.ffn.experts.expert_17":0,
"model.encoder.layers.15.ffn.experts.expert_18":0,
"model.encoder.layers.15.ffn.experts.expert_19":0,
"model.encoder.layers.15.ffn.experts.expert_20":0,
"model.encoder.layers.15.ffn.experts.expert_21":0,
"model.encoder.layers.15.ffn.experts.expert_22":0,
"model.encoder.layers.15.ffn.experts.expert_23":0,
"model.encoder.layers.15.ffn.experts.expert_24":0,
"model.encoder.layers.15.ffn.experts.expert_25":0,
"model.encoder.layers.15.ffn.experts.expert_26":0,
"model.encoder.layers.15.ffn.experts.expert_27":0,
"model.encoder.layers.15.ffn.experts.expert_28":0,
"model.encoder.layers.15.ffn.experts.expert_29":0,
"model.encoder.layers.15.ffn.experts.expert_30":0,
"model.encoder.layers.15.ffn.experts.expert_31":0,
"model.encoder.layers.15.ffn.experts.expert_32":0,
"model.encoder.layers.15.ffn.experts.expert_33":0,
"model.encoder.layers.15.ffn.experts.expert_34":0,
"model.encoder.layers.15.ffn.experts.expert_35":0,
"model.encoder.layers.15.ffn.experts.expert_36":0,
"model.encoder.layers.15.ffn.experts.expert_37":0,
"model.encoder.layers.15.ffn.experts.expert_38":0,
"model.encoder.layers.15.ffn.experts.expert_39":0,
"model.encoder.layers.15.ffn.experts.expert_40":0,
"model.encoder.layers.15.ffn.experts.expert_41":0,
"model.encoder.layers.15.ffn.experts.expert_42":0,
"model.encoder.layers.15.ffn.experts.expert_43":0,
"model.encoder.layers.15.ffn.experts.expert_44":0,
"model.encoder.layers.15.ffn.experts.expert_45":0,
"model.encoder.layers.15.ffn.experts.expert_46":0,
"model.encoder.layers.15.ffn.experts.expert_47":0,
"model.encoder.layers.15.ffn.experts.expert_48":0,
"model.encoder.layers.15.ffn.experts.expert_49":0,
"model.encoder.layers.15.ffn.experts.expert_50":0,
"model.encoder.layers.15.ffn.experts.expert_51":0,
"model.encoder.layers.15.ffn.experts.expert_52":0,
"model.encoder.layers.15.ffn.experts.expert_53":0,
"model.encoder.layers.15.ffn.experts.expert_54":0,
"model.encoder.layers.15.ffn.experts.expert_55":0,
"model.encoder.layers.15.ffn.experts.expert_56":0,
"model.encoder.layers.15.ffn.experts.expert_57":0,
"model.encoder.layers.15.ffn.experts.expert_58":0,
"model.encoder.layers.15.ffn.experts.expert_59":0,
"model.encoder.layers.15.ffn.experts.expert_60":0,
"model.encoder.layers.15.ffn.experts.expert_61":0,
"model.encoder.layers.15.ffn.experts.expert_62":0,
"model.encoder.layers.15.ffn.experts.expert_63":0,
"model.encoder.layers.15.ffn.experts.expert_64":0,
"model.encoder.layers.15.ffn.experts.expert_65":0,
"model.encoder.layers.15.ffn.experts.expert_66":0,
"model.encoder.layers.15.ffn.experts.expert_67":0,
"model.encoder.layers.15.ffn.experts.expert_68":0,
"model.encoder.layers.15.ffn.experts.expert_69":0,
"model.encoder.layers.15.ffn.experts.expert_70":0,
"model.encoder.layers.15.ffn.experts.expert_71":0,
"model.encoder.layers.15.ffn.experts.expert_72":0,
"model.encoder.layers.15.ffn.experts.expert_73":0,
"model.encoder.layers.15.ffn.experts.expert_74":0,
"model.encoder.layers.15.ffn.experts.expert_75":0,
"model.encoder.layers.15.ffn.experts.expert_76":0,
"model.encoder.layers.15.ffn.experts.expert_77":0,
"model.encoder.layers.15.ffn.experts.expert_78":0,
"model.encoder.layers.15.ffn.experts.expert_79":0,
"model.encoder.layers.15.ffn.experts.expert_80":0,
"model.encoder.layers.15.ffn.experts.expert_81":0,
"model.encoder.layers.15.ffn.experts.expert_82":0,
"model.encoder.layers.15.ffn.experts.expert_83":0,
"model.encoder.layers.15.ffn.experts.expert_84":0,
"model.encoder.layers.15.ffn.experts.expert_85":0,
"model.encoder.layers.15.ffn.experts.expert_86":0,
"model.encoder.layers.15.ffn.experts.expert_87":0,
"model.encoder.layers.15.ffn.experts.expert_88":0,
"model.encoder.layers.15.ffn.experts.expert_89":0,
"model.encoder.layers.15.ffn.experts.expert_90":0,
"model.encoder.layers.15.ffn.experts.expert_91":0,
"model.encoder.layers.15.ffn.experts.expert_92":0,
"model.encoder.layers.15.ffn.experts.expert_93":0,
"model.encoder.layers.15.ffn.experts.expert_94":0,
"model.encoder.layers.15.ffn.experts.expert_95":0,
"model.encoder.layers.15.ffn.experts.expert_96":0,
"model.encoder.layers.15.ffn.experts.expert_97":0,
"model.encoder.layers.15.ffn.experts.expert_98":0,
"model.encoder.layers.15.ffn.experts.expert_99":0,
"model.encoder.layers.15.ffn.experts.expert_100":0,
"model.encoder.layers.15.ffn.experts.expert_102":1,
"model.encoder.layers.15.ffn.experts.expert_103":1,
"model.encoder.layers.15.ffn.experts.expert_104":1,
"model.encoder.layers.15.ffn.experts.expert_105":1,
"model.encoder.layers.15.ffn.experts.expert_106":1,
"model.encoder.layers.15.ffn.experts.expert_107":1,
"model.encoder.layers.15.ffn.experts.expert_108":1,
"model.encoder.layers.15.ffn.experts.expert_109":1,
"model.encoder.layers.15.ffn.experts.expert_110":1,
"model.encoder.layers.15.ffn.experts.expert_111":1,
"model.encoder.layers.15.ffn.experts.expert_112":1,
"model.encoder.layers.15.ffn.experts.expert_113":1,
"model.encoder.layers.15.ffn.experts.expert_114":1,
"model.encoder.layers.15.ffn.experts.expert_115":1,
"model.encoder.layers.15.ffn.experts.expert_116":1,
"model.encoder.layers.15.ffn.experts.expert_117":1,
"model.encoder.layers.15.ffn.experts.expert_118":1,
"model.encoder.layers.15.ffn.experts.expert_119":1,
"model.encoder.layers.15.ffn.experts.expert_120":1,
"model.encoder.layers.15.ffn.experts.expert_121":1,
"model.encoder.layers.15.ffn.experts.expert_122":1,
"model.encoder.layers.15.ffn.experts.expert_123":1,
"model.encoder.layers.15.ffn.experts.expert_124":1,
"model.encoder.layers.15.ffn.experts.expert_125":1,
"model.encoder.layers.15.ffn.experts.expert_126":1,
"model.encoder.layers.15.ffn.experts.expert_127":1,
"model.encoder.layers.15.ff_layer_norm":1,
"model.encoder.layers.15.ff_dropout":1,
"model.encoder.layers.16":1,
"model.encoder.layers.17":1,
"model.encoder.layers.18":1,
"model.encoder.layers.19":1,
"model.encoder.layers.20":1,
"model.encoder.layers.21":1,
"model.encoder.layers.22":1,
"model.encoder.layers.23":1,
"model.encoder.layer_norm":1,
"model.decoder.embed_tokens":1,
"model.decoder.embed_positions":1,
"model.decoder.layers.0":1,
"model.decoder.layers.1":1,
"model.decoder.layers.2":1,
"model.decoder.layers.3":1,
"model.decoder.layers.4":1,
"model.decoder.layers.5":1,
"model.decoder.layers.6":1,
"model.decoder.layers.7.self_attn":1,
"model.decoder.layers.7.activation_fn":1,
"model.decoder.layers.7.attn_dropout":1,
"model.decoder.layers.7.self_attn_layer_norm":1,
"model.decoder.layers.7.cross_attention":1,
"model.decoder.layers.7.cross_attention_layer_norm":1,
"model.decoder.layers.7.ffn.router":1,
"model.decoder.layers.7.ffn.token_dropout":1,
"model.decoder.layers.7.ffn.experts.expert_0":1,
"model.decoder.layers.7.ffn.experts.expert_1":1,
"model.decoder.layers.7.ffn.experts.expert_2":1,
"model.decoder.layers.7.ffn.experts.expert_3":1,
"model.decoder.layers.7.ffn.experts.expert_4":1,
"model.decoder.layers.7.ffn.experts.expert_5":1,
"model.decoder.layers.7.ffn.experts.expert_6":1,
"model.decoder.layers.7.ffn.experts.expert_7":1,
"model.decoder.layers.7.ffn.experts.expert_8":1,
"model.decoder.layers.7.ffn.experts.expert_9":1,
"model.decoder.layers.7.ffn.experts.expert_10":1,
"model.decoder.layers.7.ffn.experts.expert_11":1,
"model.decoder.layers.7.ffn.experts.expert_12":1,
"model.decoder.layers.7.ffn.experts.expert_13":1,
"model.decoder.layers.7.ffn.experts.expert_14":1,
"model.decoder.layers.7.ffn.experts.expert_15":1,
"model.decoder.layers.7.ffn.experts.expert_16":1,
"model.decoder.layers.7.ffn.experts.expert_17":1,
"model.decoder.layers.7.ffn.experts.expert_18":1,
"model.decoder.layers.7.ffn.experts.expert_19":1,
"model.decoder.layers.7.ffn.experts.expert_20":1,
"model.decoder.layers.7.ffn.experts.expert_21":1,
"model.decoder.layers.7.ffn.experts.expert_22":1,
"model.decoder.layers.7.ffn.experts.expert_23":1,
"model.decoder.layers.7.ffn.experts.expert_24":1,
"model.decoder.layers.7.ffn.experts.expert_25":1,
"model.decoder.layers.7.ffn.experts.expert_26":1,
"model.decoder.layers.7.ffn.experts.expert_27":1,
"model.decoder.layers.7.ffn.experts.expert_28":1,
"model.decoder.layers.7.ffn.experts.expert_29":1,
"model.decoder.layers.7.ffn.experts.expert_30":1,
"model.decoder.layers.7.ffn.experts.expert_31":1,
"model.decoder.layers.7.ffn.experts.expert_32":1,
"model.decoder.layers.7.ffn.experts.expert_33":1,
"model.decoder.layers.7.ffn.experts.expert_34":1,
"model.decoder.layers.7.ffn.experts.expert_35":1,
"model.decoder.layers.7.ffn.experts.expert_36":1,
"model.decoder.layers.7.ffn.experts.expert_37":1,
"model.decoder.layers.7.ffn.experts.expert_38":1,
"model.decoder.layers.7.ffn.experts.expert_39":1,
"model.decoder.layers.7.ffn.experts.expert_40":1,
"model.decoder.layers.7.ffn.experts.expert_41":1,
"model.decoder.layers.7.ffn.experts.expert_42":1,
"model.decoder.layers.7.ffn.experts.expert_43":1,
"model.decoder.layers.7.ffn.experts.expert_44":1,
"model.decoder.layers.7.ffn.experts.expert_45":1,
"model.decoder.layers.7.ffn.experts.expert_46":1,
"model.decoder.layers.7.ffn.experts.expert_47":1,
"model.decoder.layers.7.ffn.experts.expert_48":1,
"model.decoder.layers.7.ffn.experts.expert_49":1,
"model.decoder.layers.7.ffn.experts.expert_50":1,
"model.decoder.layers.7.ffn.experts.expert_51":1,
"model.decoder.layers.7.ffn.experts.expert_52":1,
"model.decoder.layers.7.ffn.experts.expert_53":1,
"model.decoder.layers.7.ffn.experts.expert_54":1,
"model.decoder.layers.7.ffn.experts.expert_55":1,
"model.decoder.layers.7.ffn.experts.expert_56":1,
"model.decoder.layers.7.ffn.experts.expert_57":1,
"model.decoder.layers.7.ffn.experts.expert_58":1,
"model.decoder.layers.7.ffn.experts.expert_59":1,
"model.decoder.layers.7.ffn.experts.expert_60":1,
"model.decoder.layers.7.ffn.experts.expert_61":1,
"model.decoder.layers.7.ffn.experts.expert_62":1,
"model.decoder.layers.7.ffn.experts.expert_63":1,
"model.decoder.layers.7.ffn.experts.expert_64":1,
"model.decoder.layers.7.ffn.experts.expert_65":1,
"model.decoder.layers.7.ffn.experts.expert_66":1,
"model.decoder.layers.7.ffn.experts.expert_67":1,
"model.decoder.layers.7.ffn.experts.expert_68":1,
"model.decoder.layers.7.ffn.experts.expert_69":1,
"model.decoder.layers.7.ffn.experts.expert_70":1,
"model.decoder.layers.7.ffn.experts.expert_71":1,
"model.decoder.layers.7.ffn.experts.expert_72":1,
"model.decoder.layers.7.ffn.experts.expert_73":1,
"model.decoder.layers.7.ffn.experts.expert_74":1,
"model.decoder.layers.7.ffn.experts.expert_75":1,
"model.decoder.layers.7.ffn.experts.expert_76":1,
"model.decoder.layers.7.ffn.experts.expert_77":1,
"model.decoder.layers.7.ffn.experts.expert_78":1,
"model.decoder.layers.7.ffn.experts.expert_79":1,
"model.decoder.layers.7.ffn.experts.expert_80":1,
"model.decoder.layers.7.ffn.experts.expert_81":1,
"model.decoder.layers.7.ffn.experts.expert_82":1,
"model.decoder.layers.7.ffn.experts.expert_83":1,
"model.decoder.layers.7.ffn.experts.expert_84":1,
"model.decoder.layers.7.ffn.experts.expert_85":1,
"model.decoder.layers.7.ffn.experts.expert_86":1,
"model.decoder.layers.7.ffn.experts.expert_87":1,
"model.decoder.layers.7.ffn.experts.expert_88":1,
"model.decoder.layers.7.ffn.experts.expert_89":1,
"model.decoder.layers.7.ffn.experts.expert_90":1,
"model.decoder.layers.7.ffn.experts.expert_91":1,
"model.decoder.layers.7.ffn.experts.expert_92":1,
"model.decoder.layers.7.ffn.experts.expert_93":1,
"model.decoder.layers.7.ffn.experts.expert_94":1,
"model.decoder.layers.7.ffn.experts.expert_95":1,
"model.decoder.layers.7.ffn.experts.expert_96":1,
"model.decoder.layers.7.ffn.experts.expert_97":1,
"model.decoder.layers.7.ffn.experts.expert_98":1,
"model.decoder.layers.7.ffn.experts.expert_99":1,
"model.decoder.layers.7.ffn.experts.expert_100":1,
"model.decoder.layers.7.ffn.experts.expert_101":1,
"model.decoder.layers.7.ffn.experts.expert_102":1,
"model.decoder.layers.7.ffn.experts.expert_103":1,
"model.decoder.layers.7.ffn.experts.expert_104":1,
"model.decoder.layers.7.ffn.experts.expert_105":1,
"model.decoder.layers.7.ffn.experts.expert_106":1,
"model.decoder.layers.7.ffn.experts.expert_107":1,
"model.decoder.layers.7.ffn.experts.expert_108":1,
"model.decoder.layers.7.ffn.experts.expert_109":1,
"model.decoder.layers.7.ffn.experts.expert_110":1,
"model.decoder.layers.7.ffn.experts.expert_111":1,
"model.decoder.layers.7.ffn.experts.expert_112":1,
"model.decoder.layers.7.ffn.experts.expert_113":1,
"model.decoder.layers.7.ffn.experts.expert_114":1,
"model.decoder.layers.7.ffn.experts.expert_115":1,
"model.decoder.layers.7.ffn.experts.expert_116":1,
"model.decoder.layers.7.ffn.experts.expert_118":2,
"model.decoder.layers.7.ffn.experts.expert_119":2,
"model.decoder.layers.7.ffn.experts.expert_120":2,
"model.decoder.layers.7.ffn.experts.expert_121":2,
"model.decoder.layers.7.ffn.experts.expert_122":2,
"model.decoder.layers.7.ffn.experts.expert_123":2,
"model.decoder.layers.7.ffn.experts.expert_124":2,
"model.decoder.layers.7.ffn.experts.expert_125":2,
"model.decoder.layers.7.ffn.experts.expert_126":2,
"model.decoder.layers.7.ffn.experts.expert_127":2,
"model.decoder.layers.7.ff_layer_norm":2,
"model.decoder.layers.7.ff_dropout":2,
"model.decoder.layers.8":2,
"model.decoder.layers.9":2,
"model.decoder.layers.10":2,
"model.decoder.layers.11":2,
"model.decoder.layers.12":2,
"model.decoder.layers.13":2,
"model.decoder.layers.14":2,
"model.decoder.layers.15":2,
"model.decoder.layers.16":2,
"model.decoder.layers.17":2,
"model.decoder.layers.18":2,
"model.decoder.layers.19":2,
"model.decoder.layers.20":2,
"model.decoder.layers.21":2,
"model.decoder.layers.22":2,
"model.decoder.layers.23":2,
"model.decoder.layer_norm":2,
"model.encoder.layers.15.ffn.experts.expert_101":1,
"model.decoder.layers.7.ffn.experts.expert_117":2
}
``` |
transformers | 23,384 | closed | Fixed FLAVA tensor masking | # What does this PR do?
Fixes #23378
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-16-2023 06:00:39 | 05-16-2023 06:00:39 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23384). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @amariucaitheodor, thanks for reporting the issue and for opening this PR to resolve it!
Going through the code, I think we can simplify the original logic a bit, which would remove the need for these additional checks: we can simply remove the `sequence_for_text = sequence_for_text[pos_mask]` and `sequence_for_image = sequence_for_image[pos_mask]` blocks.
As either:
* `pos_mask` is not `None` - then `multimodal_masked_embeddings` will have been masked and `sequence_for_image = multimodal_masked_embeddings` or
* `pos_mask` is `None` - then `multimodal_masked_embeddings` won't have been masked and `sequence_for_image = multimodal_masked_embeddings`
I noticed two additional related pieces which would be great to add to this PR too:
* `bool_masked_pos` isn't masked in the `ITM Loss` loss block, and should be after `mim_labels`
* We don't need the `if multimodal_masked_embeddings is not None:` check on L1949 - `multimodal_masked_embeddings` is never `None` in this ITM loss block. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @amariucaitheodor, are you still working on this PR? We'll want to add these changes, and merging in this branch means you'll get the contribution :) <|||||>Hello @amyeroberts, thank you for the additions and reminder! I can push my changes to GitHub around the 22nd of June. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,383 | closed | Appending mapped dataset to list changes previous elements of a list | ### System Info
Hi,
I've been trying to tokenize a dataset with different tokenizers and store the results, but in doing so I am running into a bug. The general idea is that appending to a list of datasets seems to modify previous elements.
A code notebook is here: https://colab.research.google.com/drive/1ljMwBqzCe1fHffBcPP2py9IJMrocoNIU
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1ljMwBqzCe1fHffBcPP2py9IJMrocoNIU
### Expected behavior
Appending to the list of datasets shouldn't modify previous elements of that list. | 05-16-2023 03:04:06 | 05-16-2023 03:04:06 | Update: the error seems to be in the use of the lambda function, no idea why.<|||||>@surya-narayanan This isn't an issue specific to `datasets` - it's a Python behaviour due to the binding of the `x` var when the `lambda` function is first defined:
https://docs.python.org/3/faq/programming.html#why-do-lambdas-defined-in-a-loop-with-different-values-all-return-the-same-result
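A minimal, self-contained illustration of that late-binding behaviour and of the usual fix (binding the value as a default argument); in the notebook, the tokenizer plays the role of `factor` in the lambda passed to `Dataset.map`:

```python
# Every lambda closes over the *variable* `factor`, not its value at definition time.
funcs = [lambda x: x * factor for factor in (1, 2, 3)]
print([f(10) for f in funcs])  # [30, 30, 30] - all lambdas see the last value of `factor`

# Binding the current value as a default argument captures it when the lambda is defined.
funcs = [lambda x, factor=factor: x * factor for factor in (1, 2, 3)]
print([f(10) for f in funcs])  # [10, 20, 30]
```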
For future issues experienced when using `datasets`, could you make sure to open them under the [datasets repo](https://github.com/huggingface/datasets)? <|||||>Great, thanks :) |
transformers | 23,382 | closed | Debug example code for MegaForCausalLM | Set `ignore_mismatched_sizes=True` in the model loading code for `MegaForCausalLM` so that the example code runs without errors.
# What does this PR do?
Fixes #22974
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/huggingface/transformers/issues/22974
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts | 05-16-2023 00:32:23 | 05-16-2023 00:32:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts I ran "make style" in the main Transformers directory and then pushed the changes, and every test now fails, including the ones that passed previously. What am I doing wrong?<|||||>@Tylersuard Hmmmm..... OK, I'm not exactly sure what happened - I suspect there might be a mismatch in installed package versions. Here's how I would try to resolve:
First undo the most recent changes added to the unrelated files:
* Undo the last two commits: `git reset --hard HEAD~2`
* Push these changes to the PR: `git push -f`
Then get your branch in sync with `main`:
* Get the latest version of main: `git checkout main && git fetch upstream main && git rebase upstream/main`
* Install latest formatting settings `pip install -e ".[quality]"`
* Rebase main onto this branch `git checkout patch-1 && git rebase upstream/main`
* Push these changes to the PR (you'll have to force): `git push --force`
* Make any style changes `make style`
* Commit changes made (should just be to the modeling_mega.py file): `git add src/transformers/models/mega/modeling_mega.py && git commit -m "Fix up" && git push`<|||||>@amyeroberts Very clear instructions, thank you!<|||||>@amyeroberts Ok, all done! I do not have write access, so I can't merge the PR |
transformers | 23,381 | closed | IndexError: index out of range in self | My batch shape looked like below:
- past_values: torch.Size([17, 35])
- past_time_features: torch.Size([17, 35, 9])
- past_observed_mask: torch.Size([17, 35])
- static_categorical_features: torch.Size([17, 4])
- static_real_features: torch.Size([17, 2])
- future_values: torch.Size([17, 7])
- future_time_features: torch.Size([17, 7, 9])
I run
```
model = TimeSeriesTransformerModel.from_pretrained("huggingface/time-series-transformer-tourism-monthly")
# during training, one provides both past and future values
# as well as possible additional features
outputs = model(
past_values=batchTrain["past_values"],
past_time_features=batchTrain["past_time_features"],
past_observed_mask=batchTrain["past_observed_mask"],
static_categorical_features=batchTrain["static_categorical_features"],
static_real_features=batchTrain["static_real_features"],
future_values=batchTrain["future_values"],
future_time_features=batchTrain["future_time_features"],
)
last_hidden_state = outputs.last_hidden_state
```
Below is the error message.
```
Some weights of the model checkpoint at huggingface/time-series-transformer-tourism-monthly were not used when initializing TimeSeriesTransformerModel: ['parameter_projection.proj.2.weight', 'parameter_projection.proj.2.bias', 'parameter_projection.proj.1.weight', 'parameter_projection.proj.0.bias', 'parameter_projection.proj.0.weight', 'parameter_projection.proj.1.bias']
- This IS expected if you are initializing TimeSeriesTransformerModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TimeSeriesTransformerModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
Cell In[34], line 53
48 model = TimeSeriesTransformerModel.from_pretrained("huggingface/time-series-transformer-tourism-monthly")
50 # during training, one provides both past and future values
51 # as well as possible additional features
---> 53 outputs = model(
54 past_values=batchTrain["past_values"],
55 past_time_features=batchTrain["past_time_features"],
56 past_observed_mask=batchTrain["past_observed_mask"],
57 static_categorical_features=batchTrain["static_categorical_features"],
58 static_real_features=batchTrain["static_real_features"],
59 future_values=batchTrain["future_values"],
60 future_time_features=batchTrain["future_time_features"],
61 )
63 last_hidden_state = outputs.last_hidden_state
File ~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/anaconda3/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:1417, in TimeSeriesTransformerModel.forward(self, past_values, past_time_features, past_observed_mask, static_categorical_features, static_real_features, future_values, future_time_features, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, output_hidden_states, output_attentions, use_cache, return_dict)
1414 use_cache = use_cache if use_cache is not None else self.config.use_cache
1415 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-> 1417 transformer_inputs, loc, scale, static_feat = self.create_network_inputs(
1418 past_values=past_values,
1419 past_time_features=past_time_features,
1420 past_observed_mask=past_observed_mask,
1421 static_categorical_features=static_categorical_features,
1422 static_real_features=static_real_features,
1423 future_values=future_values,
1424 future_time_features=future_time_features,
1425 )
1427 if encoder_outputs is None:
1428 enc_input = transformer_inputs[:, : self.config.context_length, ...]
File ~/opt/anaconda3/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:1324, in TimeSeriesTransformerModel.create_network_inputs(self, past_values, past_time_features, static_categorical_features, static_real_features, past_observed_mask, future_values, future_time_features)
1322 static_feat = torch.cat((static_real_features, static_feat), dim=1)
1323 if static_categorical_features is not None:
-> 1324 embedded_cat = self.embedder(static_categorical_features)
1325 static_feat = torch.cat((embedded_cat, static_feat), dim=1)
1326 expanded_static_feat = static_feat.unsqueeze(1).expand(-1, time_feat.shape[1], -1)
File ~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/anaconda3/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:76, in TimeSeriesFeatureEmbedder.forward(self, features)
72 else:
73 cat_feature_slices = [features]
75 return torch.cat(
---> 76 [
77 embed(cat_feature_slice.squeeze(-1))
78 for embed, cat_feature_slice in zip(self.embedders, cat_feature_slices)
79 ],
80 dim=-1,
81 )
File ~/opt/anaconda3/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:77, in <listcomp>(.0)
72 else:
73 cat_feature_slices = [features]
75 return torch.cat(
76 [
---> 77 embed(cat_feature_slice.squeeze(-1))
78 for embed, cat_feature_slice in zip(self.embedders, cat_feature_slices)
79 ],
80 dim=-1,
81 )
File ~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/sparse.py:162, in Embedding.forward(self, input)
161 def forward(self, input: Tensor) -> Tensor:
--> 162 return F.embedding(
163 input, self.weight, self.padding_idx, self.max_norm,
...
2208 # remove once script supports set_grad_enabled
2209 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2210 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self
``` | 05-15-2023 22:28:19 | 05-15-2023 22:28:19 | cc @kashif <|||||>@Bateoriginal thank you for the report. Since you are using the model weights for the tourism dataset the cardinality of the embedding layers is fixed to that from this dataset. Thus it seems you are passing it some integer id which is too large with respect to what the model expects.
May I ask if you are training with the tourism dataset?<|||||>
Thank you for your response.
I'm currently working with retail transaction data.
It's reassuring to understand that the number of elements (cardinality) in the embedding layers remains constant.
Could you elaborate on how the cardinality of these embedding layers is determined based on the original data used for training? Is it possible to predict the cardinality just by examining the dimensions of the batch?
When you refer to an 'integer ID', does this pertain to static categorical/real features or something else?
<|||||>@Bateoriginal yes, the embedding layer will output a vector for a specific number of ids, typically from 0 to cardinality-1, and if given an id outside this range it will error out, since internally it is a mapping from these ids to vectors.
As mentioned, this cardinality is set for your specific problem and dataset and corresponds to the static covariates. It is not something that can be inferred from a batch, since a batch is just a random collection of time series from your dataset; you need to specify it when initializing your model.
So the cardinality is chosen at the start and has to remain fixed for the duration of the model's life cycle. This is both good and bad... it's good because, for example, the model can be given information about the id of each time series in a dataset, but it is bad because it constrains your model to only being able to do predictions on time series with a known id...
In any case, I encourage you to initialize a model with the configuration of your dataset rather than loading a model trained on the tourism dataset. Have a look at the blog post https://huggingface.co/blog/time-series-transformers and try to replicate it for your dataset.
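(For a concrete starting point, a rough sketch of initializing a fresh model with your own cardinalities — every number below is a placeholder for your retail dataset, not a recommendation:)
```python
from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerForPrediction

# Placeholder values - replace with the statistics of your own dataset.
config = TimeSeriesTransformerConfig(
    prediction_length=7,
    context_length=28,
    num_static_categorical_features=4,
    cardinality=[320, 12, 5, 60],       # number of distinct ids per categorical feature
    embedding_dimension=[16, 4, 2, 8],  # embedding size per categorical feature
    num_static_real_features=2,
    num_time_features=9,
)
model = TimeSeriesTransformerForPrediction(config)
```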
Hopefully, that helps!<|||||>Thank you for your time! <|||||>So what I meant to say is that you can choose to train your model without using the categorical covariates, and sometimes such a model performs (paradoxically) better than with categorical covariates.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,380 | closed | TextToVideo tool raising name 'init_empty_weights' is not defined error | ### System Info
2023-05-15 21:13:50.400043: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/transformers/commands/env.py:63: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.29.1
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (cpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import Tool, OpenAiAgent, HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
output = agent.run("Please make a video of `prompt`", prompt="a man eating spaghetti")
```
```bash
/usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:101 in โ
โ evaluate_ast โ
โ โ
โ 98 โ โ return evaluate_assign(expression, state, tools) โ
โ 99 โ elif isinstance(expression, ast.Call): โ
โ 100 โ โ # Function call -> we return the value of the function call โ
โ โฑ 101 โ โ return evaluate_call(expression, state, tools) โ
โ 102 โ elif isinstance(expression, ast.Constant): โ
โ 103 โ โ # Constant -> just return the value โ
โ 104 โ โ return expression.value โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:167 in โ
โ evaluate_call โ
โ โ
โ 164 โ # Todo deal with args โ
โ 165 โ args = [evaluate_ast(arg, state, tools) for arg in call.args] โ
โ 166 โ kwargs = {keyword.arg: evaluate_ast(keyword.value, state, tools) for keyword in call โ
โ โฑ 167 โ return func(*args, **kwargs) โ
โ 168 โ
โ 169 โ
โ 170 def evaluate_subscript(subscript, state, tools): โ
โ โ
โ /root/.cache/huggingface/modules/transformers_modules/huggingface-tools/text-to-video/15f8f33935 โ
โ f9653aa806382d1536f8a48a0c6cc0/text_to_video.py:45 in __call__ โ
โ โ
โ 42 โ โ
โ 43 โ def __call__(self, prompt, seconds=2): โ
โ 44 โ โ if not self.is_initialized: โ
โ โฑ 45 โ โ โ self.setup() โ
โ 46 โ โ โ
โ 47 โ โ return self.pipeline(prompt, num_frames=8 * seconds).frames โ
โ 48 โ
โ โ
โ /root/.cache/huggingface/modules/transformers_modules/huggingface-tools/text-to-video/15f8f33935 โ
โ f9653aa806382d1536f8a48a0c6cc0/text_to_video.py:36 in setup โ
โ โ
โ 33 โ โ if self.device is None: โ
โ 34 โ โ โ self.device = get_default_device() โ
โ 35 โ โ โ
โ โฑ 36 โ โ self.pipeline = DiffusionPipeline.from_pretrained( โ
โ 37 โ โ โ self.default_checkpoint, variant="fp16" โ
โ 38 โ โ ) โ
โ 39 โ โ self.pipeline.to(self.device) โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/diffusers/pipelines/pipeline_utils.py:1039 in โ
โ from_pretrained โ
โ โ
โ 1036 โ โ โ โ loaded_sub_model = passed_class_obj[name] โ
โ 1037 โ โ โ else: โ
โ 1038 โ โ โ โ # load sub model โ
โ โฑ 1039 โ โ โ โ loaded_sub_model = load_sub_model( โ
โ 1040 โ โ โ โ โ library_name=library_name, โ
โ 1041 โ โ โ โ โ class_name=class_name, โ
โ 1042 โ โ โ โ โ importable_classes=importable_classes, โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/diffusers/pipelines/pipeline_utils.py:445 in โ
โ load_sub_model โ
โ โ
โ 442 โ โ
โ 443 โ # check if the module is in a subdirectory โ
โ 444 โ if os.path.isdir(os.path.join(cached_folder, name)): โ
โ โฑ 445 โ โ loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwar โ
โ 446 โ else: โ
โ 447 โ โ # else load from the root directory โ
โ 448 โ โ loaded_sub_model = load_method(cached_folder, **loading_kwargs) โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py:2608 in from_pretrained โ
โ โ
โ 2605 โ โ โ logger.info("Detected DeepSpeed ZeRO-3: activating zero.init() for this mode โ
โ 2606 โ โ โ init_contexts = [deepspeed.zero.Init(config_dict_or_path=deepspeed_config()) โ
โ 2607 โ โ elif load_in_8bit or low_cpu_mem_usage: โ
โ โฑ 2608 โ โ โ init_contexts.append(init_empty_weights()) โ
โ 2609 โ โ โ
โ 2610 โ โ with ContextManagers(init_contexts): โ
โ 2611 โ โ โ model = cls(config, *model_args, **model_kwargs) โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
NameError: name 'init_empty_weights' is not defined
```
Same behavior calling the tool directly,
```python
from transformers.tools import load_tool
tool = load_tool("huggingface-tools/text-to-video")
tool(prompt="a man eating spaghetti")
```
### Expected behavior
The tool does not error | 05-15-2023 21:15:03 | 05-15-2023 21:15:03 | Thanks for reporting @freddyaboulton! Do you have `accelerate` installed? If so, which version?
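(For anyone hitting the same `NameError`, a quick way to check whether `accelerate` — which provides `init_empty_weights` — is importable and which version you have; plain Python, nothing tool-specific:)
```python
# Quick environment check; a missing accelerate install may explain the NameError above.
try:
    import accelerate
    print("accelerate version:", accelerate.__version__)
except ImportError:
    print("accelerate is not installed; `pip install accelerate` may resolve it.")
```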
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,379 | closed | [AutoModel] fix `torch_dtype=auto` in `from_pretrained` | This PR:
1. fixes the case of `torch_dtype=auto` in `AutoModel.from_pretrained` which got unintentionally stripped in https://github.com/huggingface/transformers/pull/21524 - now `torch_dtype=auto` is always passed on to the `from_pretrained` method of the resolved class.
2. adds a test
Fixes: https://github.com/huggingface/transformers/issues/23357 | 05-15-2023 19:16:20 | 05-15-2023 19:16:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The only change is that `torch_dtype="auto"` remains in `kwargs` and it wasn't before.
I have just flipped around the `_copy` vs `_orig` as it looked simpler to read that way. There is no functional change in that part of the code.
Probably could just set a flag of `is_torch_dtype_auto = True` instead of copying `kwargs` - I just thought that perhaps down the road other entries might need a special handling. Let me know if you prefer that I recode to use the flag instead. It surely would be cleaner I think.<|||||>No no, that works as is. |
transformers | 23,378 | closed | FLAVA tensors are masked twice, forward pass fails | https://github.com/huggingface/transformers/blob/cf11493dce0a1d22446efe0d6c4ade02fd928e50/src/transformers/models/flava/modeling_flava.py#L1965
The line above is the second time this tensor is masked if the previous ITM logic happens (line 1950), resulting e.g. `IndexError: The shape of the mask [14] at index 0 does not match the shape of the indexed tensor [13, 196, 768] at index 0`
The fix could be something like `if pos_mask is not None and sequence_for_image.size(0) == pos_mask.size(0)` on line 1964. | 05-15-2023 19:02:18 | 05-15-2023 19:02:18 | A similar thing happens at line 1969 because of line 1956 (`mim_labels` changes shape):
```python
mim_labels[bool_masked_pos.ne(True)] = self.ce_ignore_index
IndexError: The shape of the mask [14, 196] at index 0 does not match the shape of the indexed tensor [13, 196] at index 0
```
The fix could be adding `bool_masked_pos = bool_masked_pos[pos_mask]` between lines 1969 and 1968.<|||||>Same for line 1988, fix could be `if pos_mask is not None and sequence_for_text.size(0) == pos_mask.size(0):`<|||||>Tried the fixes and FLAVA runs. I wonder how no one else noticed ๐ค<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
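A compact way to express the guards suggested in this FLAVA issue (a sketch against the shapes described above, not the actual `modeling_flava.py`):
```python
from typing import Optional
import torch

def mask_once(tensor: torch.Tensor, pos_mask: Optional[torch.Tensor]) -> torch.Tensor:
    """Apply pos_mask only if the tensor still has the unmasked batch size."""
    if pos_mask is not None and tensor.size(0) == pos_mask.size(0):
        return tensor[pos_mask]
    return tensor

# e.g. sequence_for_image = mask_once(sequence_for_image, pos_mask)
#      bool_masked_pos    = mask_once(bool_masked_pos, pos_mask)
#      sequence_for_text  = mask_once(sequence_for_text, pos_mask)
```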
transformers | 23,377 | closed | Default Models of the pipeline function | I made an issue on the NLP Course (https://github.com/huggingface/course/issues/561):
> In the video:
> Chapter 1 Live Session with Sylvain
> https://youtu.be/aV4wfnIakSQ?t=928
>
> There a question of what are the default models in the pipeline library.
> His answer is to look at:
> https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/__init__.py
>
> I think that this information should be in the chapter 1, also on the documentation
Then I was asked to make an issue here too.
After looking into it, the Transformers Agents also have this problem:

For instance, speech-to-text uses Whisper, but it doesn't say which exact version of Whisper.
There are more than 3,000 versions of Whisper in the Models directory:
https://huggingface.co/models?search=whisper | 05-15-2023 17:40:17 | 05-15-2023 17:40:17 | Hi @DiogenesBR, thanks for raising this issue!
Similar to the answer in the linked video, at the moment this info is best found by looking at the source code. For example, for the [speech-to-text tool](https://github.com/huggingface/transformers/blob/918a06e25dfd6f79a20b6f07f63598c71e440161/src/transformers/tools/speech_to_text.py#L22), the checkpoint used is [openai/whisper-base](https://huggingface.co/openai/whisper-base).
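To illustrate the "look at the source" route programmatically — this pokes at an internal, unexported dict whose layout can change between versions, so treat it as a sketch:
```python
# Sketch: inspect the default checkpoints the pipeline() factory would pick.
# SUPPORTED_TASKS is internal and its layout may change between releases.
from transformers.pipelines import SUPPORTED_TASKS

for task, spec in SUPPORTED_TASKS.items():
    print(task, spec.get("default", {}))
```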
Would you be interested in opening a PR to add this information?
cc @MKhalusova @stevhliu <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,376 | closed | [`SAM`] fix sam slow test | # What does this PR do?
This PR fixes the slow tests that were failing due to https://github.com/huggingface/transformers/pull/23295
In fact, in the slow test that we have designed, we forgot to use the correct format for the input bounding boxes
Will open a PR on `notebooks` to reflect the changes in the example notebook.
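For reference, the expected box format is a nested list — one list of boxes per image, each box as `[x0, y0, x1, y1]` — roughly like this (the coordinates and image path are made up):
```python
from PIL import Image
from transformers import SamProcessor

processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
raw_image = Image.open("car.png")  # placeholder path - any RGB image works
# One image -> one list of boxes -> each box is [x0, y0, x1, y1].
inputs = processor(raw_image, input_boxes=[[[650, 900, 1000, 1255]]], return_tensors="pt")
```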
cc @ydshieh @amyeroberts | 05-15-2023 16:30:31 | 05-15-2023 16:30:31 | Fix for the notebook: https://github.com/huggingface/notebooks/pull/371<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Indeed the tests were passing before because the processor was force unsqueezeing the boxes here: https://github.com/huggingface/transformers/blob/d765717c76026281f2fb27ddc44fa3636306bb48/src/transformers/models/sam/processing_sam.py#L141
> you can't have floating integers :)
Hahah yes, thanks for noticing! Copilot does some bad job sometimes ... will update that as well<|||||>Thanks a lot @amyeroberts !<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23376). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,375 | closed | Add bark | ## What does this PR do?
Fixes #
## TODO
- [x] Add autoregressive text model
- [x] Add autoregressive coarse model
- [x] Add non-autoregressive fine model
- [x] Check text weights
- [ ] Check coarse weights
- [ ] Check fine weights
- [ ] Add Bark model / config -> what design for concatenating the three models?
- [ ] Generation code
- [ ] Update with transformers Encodec checkpoint
- [ ] Docs
| 05-15-2023 15:36:49 | 05-15-2023 15:36:49 | Superseded by #24086 |
transformers | 23,374 | closed | Skip failing `AlignModelTest::test_multi_gpu_data_parallel_forward` | # What does this PR do?
`tests/models/align/test_modeling_align.py::AlignModelTest::test_multi_gpu_data_parallel_forward` starts to fail after we switch to `torch+cu118`. If I install back with `torch+cu117`, it passes again.
This test uses `torch.nn.DataParallel`, which is not recommended (though not deprecated yet). The error is a pure CUDA issue that I don't have the knowledge to debug. Combining all the above facts with the usage of this model, let's just skip this particular test for `AlignModelTest`.
(This failing test causes the other 18 tests to fail because CUDA is left in a bad state.)
```bash
E RuntimeError: Caught RuntimeError in replica 0 on device 0.
E Original Traceback (most recent call last):
E File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/parallel_apply.py", line 64, in _worker
E output = module(*input, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
E return forward_call(*args, **kwargs)
E File "/transformers/src/transformers/models/align/modeling_align.py", line 1596, in forward
E vision_outputs = self.vision_model(
E File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
E return forward_call(*args, **kwargs)
E File "/transformers/src/transformers/models/align/modeling_align.py", line 1395, in forward
E embedding_output = self.embeddings(pixel_values)
E File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
E return forward_call(*args, **kwargs)
E File "/transformers/src/transformers/models/align/modeling_align.py", line 345, in forward
E features = self.convolution(features)
E File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
E return forward_call(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 463, in forward
E return self._conv_forward(input, self.weight, self.bias)
E File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
E return F.conv2d(input, weight, bias, self.stride,
E RuntimeError: GET was unable to find an engine to execute this computation
``` | 05-15-2023 14:33:57 | 05-15-2023 14:33:57 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23374). All of your documentation changes will be reflected on that endpoint. |
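For reference, the skip this PR describes amounts to a standard `unittest` decorator on the test method; a simplified sketch (not the real test class hierarchy):
```python
import unittest

class AlignModelTest(unittest.TestCase):  # simplified; the real class mixes in ModelTesterMixin
    @unittest.skip(reason="nn.DataParallel forward fails with torch+cu118 (see traceback above)")
    def test_multi_gpu_data_parallel_forward(self):
        pass
```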
transformers | 23,373 | closed | Update error message when Accelerate isn't installed | # What does this PR do?
This PR provides a bit more verbose error when `accelerate` isn't found on an install of `transformers`, as the `Trainer` (on PyTorch) requires Accelerate to be installed.
The error message was changed from:
```python
ImportError: Using the Trainer with PyTorch requires accelerate: Run pip install --upgrade accelerate
```
To be:
```python
Using the `Trainer` with `PyTorch` requires `accelerate>=0.19.0`: Please run `pip install transformers[torch]` or `pip install accelerate -U`
```
Fixes # (issue)
- https://github.com/huggingface/transformers/issues/23323
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
(@sgugger when you are back)
| 05-15-2023 14:09:11 | 05-15-2023 14:09:11 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23373). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,372 | closed | Use `mkstemp` to replace deprecated `mktemp` | The `tempfile.mktemp` function is [deprecated](https://docs.python.org/3/library/tempfile.html#tempfile.mktemp) due to [security issues](https://cwe.mitre.org/data/definitions/377.html).
# What does this PR do?
Fixes Tempfile issue disclosed in [huntr](https://www.huntr.dev/bounties/a3867b4e-6701-4418-8c20-3c6e7084a44a/).
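The substitution itself is straightforward; a minimal sketch of the difference, using only the standard library:
```python
import os
import tempfile

# Deprecated and racy: mktemp() only returns a name, the file is created later by the caller.
# path = tempfile.mktemp()

# Safe replacement: the file is created atomically and a descriptor is returned.
fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "w") as f:
        f.write("temporary contents")
finally:
    os.remove(path)
```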
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger Can you please review these changes and approve this fix? Thanks.
Is there an estimate on the time to release?<|||||>You can install HF from the commit ID with the fix this way:
```bash
$ pip install --no-cache-dir git+https://github.com/huggingface/transformers.git@80ca924
```
and you should have:
```
Collecting git+https://github.com/huggingface/transformers.git@80ca924
Cloning https://github.com/huggingface/transformers.git (to revision 80ca924) to /tmp/pip-req-build-f13han_v
Running command git clone --filter=blob:none --quiet https://github.com/huggingface/transformers.git /tmp/pip-req-build-f13han_v
WARNING: Did not find branch or tag '80ca924', assuming revision or ref.
Running command git checkout -q 80ca924
Resolved https://github.com/huggingface/transformers.git to commit 80ca924
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting filelock (from transformers==4.30.0.dev0)
Downloading filelock-3.12.0-py3-none-any.whl (10 kB)
Collecting huggingface-hub<1.0,>=0.14.1 (from transformers==4.30.0.dev0)
Downloading huggingface_hub-0.15.1-py3-none-any.whl (236 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 236.8/236.8 kB 48.8 MB/s eta 0:00:00
Collecting numpy>=1.17 (from transformers==4.30.0.dev0)
Downloading numpy-1.24.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 17.3/17.3 MB 132.3 MB/s eta 0:00:00
Collecting packaging>=20.0 (from transformers==4.30.0.dev0)
Downloading packaging-23.1-py3-none-any.whl (48 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 48.9/48.9 kB 249.5 MB/s eta 0:00:00
Collecting pyyaml>=5.1 (from transformers==4.30.0.dev0)
Downloading PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (661 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 661.8/661.8 kB 253.8 MB/s eta 0:00:00
Collecting regex!=2019.12.17 (from transformers==4.30.0.dev0)
Downloading regex-2023.6.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (769 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 769.9/769.9 kB 311.6 MB/s eta 0:00:00
Collecting requests (from transformers==4.30.0.dev0)
Downloading requests-2.31.0-py3-none-any.whl (62 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 62.6/62.6 kB 269.3 MB/s eta 0:00:00
Collecting tokenizers!=0.11.3,<0.14,>=0.11.1 (from transformers==4.30.0.dev0)
Downloading tokenizers-0.13.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.8 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 7.8/7.8 MB 160.6 MB/s eta 0:00:00
Collecting tqdm>=4.27 (from transformers==4.30.0.dev0)
Downloading tqdm-4.65.0-py3-none-any.whl (77 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 77.1/77.1 kB 277.1 MB/s eta 0:00:00
Collecting fsspec (from huggingface-hub<1.0,>=0.14.1->transformers==4.30.0.dev0)
Downloading fsspec-2023.5.0-py3-none-any.whl (160 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 160.1/160.1 kB 304.7 MB/s eta 0:00:00
Collecting typing-extensions>=3.7.4.3 (from huggingface-hub<1.0,>=0.14.1->transformers==4.30.0.dev0)
Downloading typing_extensions-4.6.3-py3-none-any.whl (31 kB)
Collecting charset-normalizer<4,>=2 (from requests->transformers==4.30.0.dev0)
Downloading charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (199 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 199.2/199.2 kB 312.4 MB/s eta 0:00:00
Collecting idna<4,>=2.5 (from requests->transformers==4.30.0.dev0)
Downloading idna-3.4-py3-none-any.whl (61 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 61.5/61.5 kB 269.7 MB/s eta 0:00:00
Collecting urllib3<3,>=1.21.1 (from requests->transformers==4.30.0.dev0)
Downloading urllib3-2.0.2-py3-none-any.whl (123 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 123.2/123.2 kB 182.9 MB/s eta 0:00:00
Collecting certifi>=2017.4.17 (from requests->transformers==4.30.0.dev0)
Downloading certifi-2023.5.7-py3-none-any.whl (156 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 157.0/157.0 kB 308.0 MB/s eta 0:00:00
Building wheels for collected packages: transformers
Building wheel for transformers (pyproject.toml) ... done
Created wheel for transformers: filename=transformers-4.30.0.dev0-py3-none-any.whl size=7079671 sha256=6be8d9585811de7b3573d50a1e9577a90a36b77b73af16c3b1a0e5dabd679f7b
Stored in directory: /tmp/pip-ephem-wheel-cache-rdzrdy92/wheels/45/b0/e3/2eeba5f2822725123eba400b020e96ec93e60e14fa21699a10
Successfully built transformers
Installing collected packages: tokenizers, urllib3, typing-extensions, tqdm, regex, pyyaml, packaging, numpy, idna, fsspec, filelock, charset-normalizer, certifi, requests, huggingface-hub, transformers
Successfully installed certifi-2023.5.7 charset-normalizer-3.1.0 filelock-3.12.0 fsspec-2023.5.0 huggingface-hub-0.15.1 idna-3.4 numpy-1.24.3 packaging-23.1 pyyaml-6.0 regex-2023.6.3 requests-2.31.0 tokenizers-0.13.3 tqdm-4.65.0 transformers-4.30.0.dev0 typing-extensions-4.6.3 urllib3-2.0.2
```<|||||>Do we have any ETA on when this security fix will be released? <|||||>As indicated on the page, v4.30.0 (released last week) contains the fix. |
transformers | 23,371 | closed | Revert "Only add files with modification outside doc blocks" | Reverts huggingface/transformers#23327.
I apologize, but I read Sylvain's message too quickly and got it completely wrong.
> for now the tests are launched on a file if we modify it, but I would only launch it if docstrings are modified (e.g. check the modifications are correct) to go faster.
That merged PR did the converse instead: it adds a test file if only docstrings (instead of only code) are modified.
I will need to create something like `diff_is_code_only`.
| 05-15-2023 12:05:25 | 05-15-2023 12:05:25 | > Thanks for fixing!
>
> Apologies for not catching in the review either. In the nightly CI, have we run all of the doctests - or do we expect there to be any untested pieces of code between the merge of #23327 and this PR?
On daily doctest CI, everything is tested :-) - there is no filtration, it just checks all files in `utils/documentation_tests.txt`.<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers | 23,370 | closed | Fix `OwlViTForObjectDetection.image_guided_detection` doc example | # What does this PR do?
Need to update expected values after #23157 | 05-15-2023 11:55:00 | 05-15-2023 11:55:00 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts I know you actually want to give the approval but forgot doing it ๐
.
As I am a serious man, I would try not to merge without a format approval ๐ |
transformers | 23,369 | closed | Fix `BigBirdForMaskedLM` doctest | # What does this PR do?
Need to update some expected values in the doc example after #23056 (that PR also updated some values in the test file) | 05-15-2023 11:37:53 | 05-15-2023 11:37:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,368 | closed | RWKV split CPU & GPU results in high perplexity | ### System Info
Using https://github.com/huggingface/transformers/pull/22797#event-9203076880 PR, I tried to evaluate perplexity on wikitext2 using HuggingFace RWKV but found a weird behavior (gist to reproduce the bug: https://gist.github.com/3outeille/e74ec833ec2800a94325f8dad8e0da3d).
- When model is fully loaded on CPU or GPU, perlexity is fine
- When some block of RWKV are loaded in CPU and GPU, perplexity is high
Any idea ?
### Who can help?
@sgugger, @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://gist.github.com/3outeille/e74ec833ec2800a94325f8dad8e0da3d
### Expected behavior
- Full CPU โ๏ธ :
- `nlls: tensor([2.0129, 2.3220, 2.3500])`
- `Perplexity: 9.284077644348145`
- Full GPU โ๏ธ :
- `nlls: tensor([2.0137, 2.3223, 2.3496], device='cuda:0', dtype=torch.float16)`
- `Perplexity: 9.2890625`
- Split ๐ด :
- `nlls: tensor([15.6641, 15.9141, 16.5469], device='cuda:0', dtype=torch.float16)`
- `Perplexity: 9312564.0` | 05-15-2023 11:15:10 | 05-15-2023 11:15:10 | @younesbelkada Any update ?<|||||>Hi @3outeille
Sadly I didn't have time to check that out; are you still facing the issue with the latest main branch of transformers & accelerate?<|||||>Hi @younesbelkada, I updated transformers & accelerate to the latest release versions as shown here: https://github.com/3outeille/hf_rwkv_bug/blob/master/requirements.txt and the bug is still there<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
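(For readers without the gist handy, a rough sketch of the kind of CPU/GPU split placement described in this RWKV issue — the checkpoint name and memory limits below are assumptions, not taken from the gist:)
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative only: capping GPU memory via max_memory forces some blocks onto CPU,
# reproducing the "split" placement discussed above.
model_id = "RWKV/rwkv-4-169m-pile"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    max_memory={0: "300MiB", "cpu": "8GiB"},  # forces part of the model onto CPU
)
```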
transformers | 23,367 | closed | [Bugfix] `OPTDecoderLayer` does not return attentions when `gradient_checkpointing` and `training` is enabled. | # What does this PR do?
Reorder argument of OPTDecoderLayer.forward
Fixes #23366
| 05-15-2023 10:28:06 | 05-15-2023 10:28:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @younesbelkada @ArthurZucker |
transformers | 23,366 | closed | `OPTDecoderLayer` does not return attentions when `gradient_checkpointing` and `training` is enabled. | # Bug Description
In `modeling_opt.py#704:710` [code](https://github.com/huggingface/transformers/blob/cf11493dce0a1d22446efe0d6c4ade02fd928e50/src/transformers/models/opt/modeling_opt.py#L704), `OPTDecoder` calls `OPTDecoderLayer.forward` with following argument order.
```py
if self.gradient_checkpointing and self.training:
def create_custom_forward(module):
def custom_forward(*inputs):
# None for past_key_value
return module(*inputs, output_attentions, None)
return custom_forward
layer_outputs = torch.utils.checkpoint.checkpoint(
create_custom_forward(decoder_layer),
hidden_states,
causal_attention_mask,
head_mask[idx] if head_mask is not None else None,
None,
)
else:
layer_outputs = decoder_layer(
hidden_states,
attention_mask=causal_attention_mask,
layer_head_mask=(head_mask[idx] if head_mask is not None else None),
past_key_value=past_key_value,
output_attentions=output_attentions,
use_cache=use_cache,
)
```
However, in `OPTDecoderLayer.forward` [code](https://github.com/huggingface/transformers/blob/cf11493dce0a1d22446efe0d6c4ade02fd928e50/src/transformers/models/opt/modeling_opt.py#L297), the order of the arguments is different from the function call argument order shown above.
```py
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
layer_head_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = False, # **need to be reorder**
use_cache: Optional[bool] = False, # **need to be reorder**
past_key_value: Optional[Tuple[torch.Tensor]] = None, # **need to be reorder**
) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
```
Therefore, `output_attentions` in `OPTDecoderLayer.forward` always ends up being `None`, because the 4th argument in the function call is always `None` [code](https://github.com/huggingface/transformers/blob/cf11493dce0a1d22446efe0d6c4ade02fd928e50/src/transformers/models/opt/modeling_opt.py#LL701C26-L701C26)
# Solution
Just change the argument order in the declaration of `OPTDecoderLayer.forward` as follows:
```py
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
layer_head_mask: Optional[torch.Tensor] = None,
past_key_value: Optional[Tuple[torch.Tensor]] = None,
output_attentions: Optional[bool] = False,
use_cache: Optional[bool] = False,
) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
```
### System Information
- `transformers` version: 4.29.1
- Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.2.7
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes and No. Bug happens in both places.
- Using distributed or parallel set-up in script?: None
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
import transformers
from transformers.models.opt.modeling_opt import OPTDecoder
import torch
model = transformers.OPTForCausalLM.from_pretrained('facebook/opt-125m')
model.train()
for m in model.modules():
if isinstance(m, OPTDecoder):
m.gradient_checkpointing = True
m.config.use_cache = False
output = model(torch.zeros((1, 4), dtype=torch.int64), output_attentions=True)
assert type(output.attentions) == tuple
assert type(output.attentions[0]) == torch.Tensor, type(output.attentions[0])
```
The above test code should finish without error. However, the result is the following.
```
(torch) ainl@ainl-main-ubuntu:~/library/bug$ python -m opt_bug
Traceback (most recent call last):
File "/home/ainl/anaconda3/envs/torch/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/ainl/anaconda3/envs/torch/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/ainl/library/bug/opt_bug.py", line 13, in <module>
assert type(output.attentions[0]) == torch.Tensor, type(output.attentions[0])
AssertionError: <class 'tuple'>
```
Following is my environment setting.
```
(torch) ainl@ainl-main-ubuntu:~/library/bug$ pip show torch transformers
Name: torch
Version: 2.0.1
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: [email protected]
License: BSD-3
Location: /home/ainl/anaconda3/envs/torch/lib/python3.9/site-packages
Requires: filelock, jinja2, networkx, nvidia-cublas-cu11, nvidia-cuda-cupti-cu11, nvidia-cuda-nvrtc-cu11, nvidia-cuda-runtime-cu11, nvidia-cudnn-cu11, nvidia-cufft-cu11, nvidia-curand-cu11, nvidia-cusolver-cu11, nvidia-cusparse-cu11, nvidia-nccl-cu11, nvidia-nvtx-cu11, sympy, triton, typing-extensions
Required-by: axial-positional-embedding, basicsr, deepspeed, facexlib, gfpgan, invisible-watermark, local-attention, onnx2torch, open-clip-torch, performer-pytorch, product-key-memory, pytorch-tabnet, realesrgan, sinkhorn-transformer, thop, timm, torch-tensorrt, torchaudio, torchdata, torchtext, torchvision, triton
---
Name: transformers
Version: 4.29.1
Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
Home-page: https://github.com/huggingface/transformers
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)
Author-email: [email protected]
License: Apache 2.0 License
Location: /home/ainl/anaconda3/envs/torch/lib/python3.9/site-packages
Requires: filelock, huggingface-hub, numpy, packaging, pyyaml, regex, requests, tokenizers, tqdm
Required-by:
```
### Expected behavior
Finish the above test code without any errors.
# Call for Moderator (Text-models)
@ArthurZucker and @younesbelkada | 05-15-2023 10:20:52 | 05-15-2023 10:20:52 | |
transformers | 23,365 | closed | Fix some `is_xxx_available` | # What does this PR do?
FYI, after #23163, `is_bs4_available()` and `is_faiss_available()` gives `False` even if they are actually available. This causes some CI errors, in particularly, `MarkupLM`.
This PR fixes the issue in a quick way. It's better to discuss if we want to enhance the function `_is_package_available` to be able to handle such edge cases.
(cc. @apbard FYI) | 05-15-2023 10:08:35 | 05-15-2023 10:08:35 | @ydshieh thanks for fixing this and sorry for having introduced these bugs in first place.<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,364 | closed | Minor fixes in transformers-tools | Really just a few things as I dig a bit into the implementation of transformers-tools:
- `upload_folder` instead of `os.listdir` + `create_commit` (more robust against recursion)
- some typing
- use `metadata_update` with correct `repo_id` when pushing to Hub
- use `build_hf_headers` instead of `HfFolder` for token retrieval
- use `super().__init__()` and `super().setup()` in `PipelineTool` (otherwise the pipeline is setup again at each run) | 05-15-2023 09:11:06 | 05-15-2023 09:11:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,363 | closed | Fix issue introduced in PR #23163 | # What does this PR do?
Fix issue introduced in PR #23163.
The previous `torch_version` is removed (which is not a string but a `version` type), and `get_torch_version()` is introduced and used (which is a string). In some places, it is compared against `self.torch_onnx_minimum_version`, which is a string, and we now get on CI `TypeError: '<' not supported between instances of 'str' and 'Version'`.
This PR fixes this problem and avoids the > 1000 test failures.
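The failure mode is easy to reproduce in isolation — mixing a plain `str` and a `packaging.version.Version` in a comparison raises, so both sides have to be parsed before comparing:
```python
from packaging import version

torch_version = "2.0.1"          # a plain string, as described for get_torch_version()
minimum = version.parse("1.4")   # a Version object

# torch_version < minimum        # TypeError: '<' not supported between 'str' and 'Version'
print(version.parse(torch_version) < minimum)  # False - compare Version to Version
```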
(cc. @apbard FYI) | 05-15-2023 08:22:55 | 05-15-2023 08:22:55 | _The documentation is not available anymore as the PR was closed or merged._<|||||>FYI, after #23163, `is_bs4_available()` and `is_faiss_available()` gives `False` even if they are actually available. This causes some CI errors, in particularly, `MarkupLM`.
I will fix this in a separate PR. |
transformers | 23,362 | closed | [image-to-text pipeline] Add conditional text support + GIT | # What does this PR do?
The `ImageToText` pipeline can generate text given an image, but oftentimes one wants to make the model continue text given a prompt (like "a photo of"). This PR adds support for conditional text generation given an image.
It also adds support for GIT.
This PR fixes a part of #21110 and is based on #22423.
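The intended usage looks roughly like this — a hedged sketch where the checkpoint and image path are illustrative, and the `prompt` argument is the new part added here:
```python
from transformers import pipeline

# Sketch of the conditional-generation usage this PR adds; checkpoint and image are placeholders.
captioner = pipeline("image-to-text", model="microsoft/git-base-coco")
print(captioner("cats.png", prompt="a photo of"))
```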
To do:
- [x] add support for Pix2Struct once design is approved
cc @younesbelkada | 05-15-2023 07:54:46 | 05-15-2023 07:54:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@Narsil thanks for your review, feel free to approve.<|||||>Apologies, will take this into account. |
transformers | 23,361 | closed | [wip test doc-build] | testing https://github.com/huggingface/doc-builder/pull/372
| 05-15-2023 07:37:48 | 05-15-2023 07:37:48 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,360 | closed | Typo suggestion | Typo corrected in docs: "preprocessign" --> "preprocessing"
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-15-2023 03:45:49 | 05-15-2023 03:45:49 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,359 | closed | Replace appends with list comprehension. | It's more idiomatic and significantly more efficient because 1) it avoids the repeated `append` call that Python has to resolve on each iteration and 2) it can preallocate the size of the final list, avoiding resizing.
# What does this PR do?
This PR uses list comprehensions instead of repeated list appends.
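For illustration, the pattern being replaced versus the replacement:
```python
# Before: build the list with repeated .append() calls.
squares = []
for n in range(10):
    squares.append(n * n)

# After: equivalent list comprehension.
squares = [n * n for n in range(10)]
```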
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-15-2023 03:21:46 | 05-15-2023 03:21:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,358 | closed | Phidas | More artificial intelligence! | 05-14-2023 21:40:57 | 05-14-2023 21:40:57 | |
transformers | 23,357 | closed | Model isn't loaded with the right type with AutoModel with torch_dtype="auto" | ### System Info
- `transformers` version: 4.30.0.dev0
### Who can help?
@stas00
(as the relevant change was made in https://github.com/huggingface/transformers/pull/21524)
### Reproduction
```python
from transformers import AutoModelForSeq2SeqLM, T5ForConditionalGeneration
auto_model = AutoModelForSeq2SeqLM.from_pretrained("ybelkada/flan-t5-xl-sharded-bf16", torch_dtype="auto")
print(auto_model.dtype) # torch.float32
t5_model = T5ForConditionalGeneration.from_pretrained("ybelkada/flan-t5-xl-sharded-bf16", torch_dtype="auto")
print(t5_model.dtype) # torch.bfloat16
```
### Expected behavior
`AutoModelForSeq2SeqLM` should also load the model in `torch.bfloat16`. | 05-14-2023 15:54:02 | 05-14-2023 15:54:02 | cc @younesbelkada <|||||>Hi @eladsegal ๐

- The model was saved in `bfloat16` with `"T5ForConditionalGeneration"` architecture so the model was loaded in `bfloat16`
- But in the `AutoModel.from_pretrained` method, `torch_dtype` is set to `auto`, and you can read in the doc (image I have uploaded) that the dtype picked by `auto` is generally `float32`
- Hope it helps. If I misunderstood your question, please 👍 give feedback
**You can check this --> [doc](https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/configuration#transformers.PretrainedConfig.torch_dtype)**<|||||>Thank you for the report, @eladsegal - that's indeed a bug that I introduced while trying to fix another issue.
Please try this fix: https://github.com/huggingface/transformers/pull/23379<|||||>Thank you @stas00 for the quick fix! Works just as expected. <|||||>Thank you for confirming that, @eladsegal! |
transformers | 23,356 | closed | Replace NumPy Operations with JAX NumPy Equivalents for JIT Compilation Compatibility | # What does this PR do?
This PR modifies the Transformers library to replace NumPy operations with their JAX NumPy equivalents. The main change is the use of JAX's immutable update methods as substitutes for in-place assignments.
Using NumPy methods instead of JAX NumPy results in an error during JIT compilation (`jax.jit`).
```
transformers/models/mbart/modeling_flax_mbart.py", line 226, in shift_tokens_right
prev_output_tokens = np.array(input_ids).copy()
jax._src.errors.TracerArrayConversionError: The numpy.ndarray conversion method __array__() was called on the JAX Tracer object Traced<ShapedArray(int32[4,260])>with<DynamicJaxprTrace(level=0/1)>
```
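For context, a minimal standalone sketch of the same failure mode (hypothetical shapes, not the library code):
```python
import jax
import jax.numpy as jnp
import numpy as np

@jax.jit
def shift_with_numpy(x):
    return np.array(x)  # x is a tracer under jit, so this raises TracerArrayConversionError

@jax.jit
def shift_with_jax(x):
    return x.at[:, 0].set(0)  # the functional .at[...].set(...) update traces fine

shift_with_jax(jnp.ones((4, 260), dtype=jnp.int32))  # OK
# shift_with_numpy(jnp.ones((4, 260), dtype=jnp.int32))  # raises the error above
```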
Here's a brief summary of the changes:
previous:
```python
prev_output_tokens = np.array(input_ids).copy()
if pad_token_id is None:
raise ValueError("self.model.config.pad_token_id has to be defined.")
# replace possible -100 values in labels by `pad_token_id`
prev_output_tokens = np.where(prev_output_tokens == -100, pad_token_id, input_ids)
index_of_eos = (np.where(prev_output_tokens != pad_token_id, 1, 0).sum(axis=-1) - 1).reshape(-1, 1)
decoder_start_tokens = np.array(
[prev_output_tokens[i, eos_idx] for i, eos_idx in enumerate(index_of_eos)], dtype=np.int32
).squeeze()
prev_output_tokens[:, 1:] = prev_output_tokens[:, :-1].copy()
prev_output_tokens[:, 0] = decoder_start_tokens
```
modified:
```python
prev_output_tokens = jnp.array(input_ids).copy()
if pad_token_id is None:
raise ValueError("self.model.config.pad_token_id has to be defined.")
# replace possible -100 values in labels by `pad_token_id`
prev_output_tokens = jnp.where(prev_output_tokens == -100, pad_token_id, input_ids)
index_of_eos = (jnp.where(prev_output_tokens != pad_token_id, 1, 0).sum(axis=-1) - 1).reshape(-1, 1)
decoder_start_tokens = jnp.array(
[prev_output_tokens[i, eos_idx] for i, eos_idx in enumerate(index_of_eos)], dtype=jnp.int32
).squeeze()
prev_output_tokens = prev_output_tokens.at[:, 1:].set(prev_output_tokens[:, :-1])
prev_output_tokens = prev_output_tokens.at[:, 0].set(decoder_start_tokens)
```
- @sanchit-gandhi
| 05-14-2023 13:37:05 | 05-14-2023 13:37:05 | _The documentation is not available anymore as the PR was closed or merged._<|||||>doneโ
https://github.com/huggingface/transformers/pull/23356/commits/175ab5dd0b8276f70b69ab21ddb4356aa353d611<|||||>@gojiteji To resolve the failing quality tests, you'll need to run `make fix-copies` and `make style` and push the changes. It seems the MT5 also doesn't have `jnp` defined in the modeling file. |
transformers | 23,355 | closed | Added support for AzureOpenAiAgent in tools | # What does this PR do?
Implements a new `AzureOpenAiAgent` class, derived from `Agent`, in Transformers agents.
Fixes #23324
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 05-14-2023 11:09:00 | 05-14-2023 11:09:00 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23355). All of your documentation changes will be reflected on that endpoint.<|||||>Awesome that you are contributing.
In my opinion, rather than having a separate AzureOpenAiAgent, adding the options and API calls in the OpenAiAgent would be preferable, so that switching to Azure is only a matter of configuring parameters. Not my call, but my preference. <|||||>cc @sgugger <|||||>Ah actually I see two arguments are renamed. Maybe have this be a subclass of `OpenAiAgent` to avoid rewriting every method and just rewrite `_completion_generate` and `_chat_generate`?<|||||>DeploymentId is named arbitrarily and does not let you directly derive the model type from it unless you do additional requests to look it up.
The underlying Python OpenAI SDK has a way of differentiating between Azure OpenAI and OpenAI's own deployment. In my opinion it would make sense to align the API style and expose what the API exposes in a similar fashion. <|||||>> The underlying Python OpenAI SDK has a way of differentiating between Azure OpenAI and OpenAI's own deployment. In my opinion it would make sense to align the API style and expose what the API exposes in a similar fashion.
If it's easily doable, then yes let's aim for that!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,354 | open | Make it easy to get separate "prints" for individual runs/users when using Transformers Agent | ### Feature request
I have started exploring the new Transformers Agent. And I would like to build a UI to help me speed up the process.
I might be running multiple runs in parallel or have multiple users using my application. I would like to be able to stream the information from the run as it arrives. I would like to store the information in a database containing all the runs I've done.
Currently all the valuable information about the run is printed, i.e. you are using `print` to inform me, like below:
```bash
==Explanation from the agent==
I will use the following tool: `image_generator` to generate an image.
==Code generated by the agent==
image = image_generator(prompt="rivers and lakes")
==Result==
<PIL.PngImagePlugin.PngImageFile image mode=RGB size=512x512 at 0x7F8DDC11C4C0>
```
This is for example done in `agents.py`

Using `print` makes it hard for me to distinguish between multiple runs/ users. Especially if run in parallel.
Please provide a simple-to-use method to stream each run individually. It could be as simple as adding a `print` (or `write`) argument to the `Agent.run`, `HFAgent.run` and `OpenAI.run` methods.
Alternatively, a `run_id` argument could be provided and printed as well. Then I can split the incoming stream by `run_id`. This is less preferred, though, as it also adds some complexity.
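As a stopgap, the printed trace can be captured per call by redirecting stdout (a sketch using only the standard library; note this is process-wide, so it does not cleanly isolate truly parallel runs):
```python
import contextlib
import io

def run_with_capture(agent, prompt, **kwargs):
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):  # capture the ==Explanation==/==Code==/==Result== prints
        result = agent.run(prompt, **kwargs)
    return result, buffer.getvalue()  # per-run log, ready to stream or store per user
```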
### Motivation
This will make it much, much easier to create interesting AI apps.
### Your contribution
I might do it ๐ . But I hope someone with knowledge of the code base would do it.
### Additional Context
An async `.run_async` function would also be much appreciated as my UI is built on top of Tornado. This will help me keep the app responsive. | 05-14-2023 03:31:49 | 05-14-2023 03:31:49 | The solution for me is probably to inspect the `run` function and then compose the pieces in a way that works better for my app.
<|||||>cc @sgugger @LysandreJik <|||||>Would the PR mentioned above fix your problem? |
transformers | 23,353 | closed | Add support for SciBART by UCLANLP | # What does this PR do?
Add the support for the SciBART model (https://arxiv.org/abs/2212.10233). This is a BART model trained from scratch on the S2ORC corpus. Its tokenizer is a sentencepiece tokenizer trained from scratch. This PR supports using the newly trained tokenizer. The model checkpoints are already uploaded to https://huggingface.co/uclanlp/scibart-base and https://huggingface.co/uclanlp/scibart-large.
Implementation-wise, this PR refers to #1839.
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker and @younesbelkada
| 05-13-2023 22:29:43 | 05-13-2023 22:29:43 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23353). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @younesbelkada, this pr is ready for review.<|||||>Thank you for the comments! I have fixed the mentioned issues and they are ready for review. @amyeroberts @younesbelkada <|||||>> Could you explain the main difference that are added in this tokenizer? I am not sure we have to add a new class for it, that's why I am asking
Yes. We are adding a new model pre-trained from scratch on the science corpus. This model has exactly the same architecture as BART but with a different vocabulary. The new tokenizer class is needed because our tokenizer is trained with sentencepiece, while the facebook BART model does not use sentencepiece. Does this solve your concerns? @ArthurZucker <|||||>Sure, but we have bunch of already implemented tokenizers that rely on `spm`, with slow to fast converters. Look at the `BarthezTokenizer` for example, code seems kinda duplicate. `XGLM` looks also very similar, same for `XLM_Roberta`. <|||||>> Sure, but we have bunch of already implemented tokenizers that rely on `spm`, with slow to fast converters. Look at the `BarthezTokenizer` for example, code seems kinda duplicate. `XGLM` looks also very similar, same for `XLM_Roberta`.
@ArthurZucker Thank you for the reply. `BarthezTokenizer` is indeed similar (so is `XGLMTokenizer`). However, I believe it has some different assumptions and cannot be directly used by SciBART. For example, the default tokens 0, 1, 2, 3 are different (https://github.com/huggingface/transformers/blob/b7b729b38d12309185bcc9fdf8b55418a1ad2421/src/transformers/models/barthez/tokenization_barthez.py#L160). Letting SciBartTokenizer inherit from it also does not seem to make sense because it breaks the modularity.
What are the actionable items here? Our priority is to enable users to use the SciBART model.<|||||>Sorry for being noisy ๐
If you are going to make the model available on the hub, and the only differences are the default tokens, I would simply recommend you to have the tokenizer on the hub. We can't accept a new model which only has this one line that differs. An other solution is to open a PR to allow this default `fairseq_token_ids` to be an argument of the init, which would allow you to store it in the tokenizer config to easily use the BarthezTokenizer!
Another way is to hold your code on the hub! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,352 | closed | Transformers can not load dependency of tensorflow - No module named 'keras.engine' | ### System Info
osX silicon M1
- python 3.8.16 (also tested with newer versions e.g. 3.9)
- tensorflow 2.11.0 eigen_py39h384437f_0
(also tested with tensorflow 2.13 rc0)
tried conda and venv.
- transformers 4.28.1
also tested 4.29.1
```python
#sample code which causes the error below
from transformers import pipeline
summarizer = pipeline("summarization")
```
```
No model was supplied, defaulted to t5-small and revision d769bba (https://huggingface.co/t5-small).
Using a pipeline without specifying a model name and revision in production is not recommended.
Traceback (most recent call last):
File "/opt/homebrew/Caskroom/miniconda/base/envs/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1146, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/opt/homebrew/Caskroom/miniconda/base/envs/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/opt/homebrew/Caskroom/miniconda/base/envs/lib/python3.9/site-packages/transformers/models/t5/modeling_tf_t5.py", line 35, in <module>
from ...modeling_tf_utils import (
File "/opt/homebrew/Caskroom/miniconda/base/envs/lib/python3.9/site-packages/transformers/modeling_tf_utils.py", line 69, in <module>
from keras.engine import data_adapter
ModuleNotFoundError: No module named 'keras.engine'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "sumary_service.py", line 3, in <module>
summarizer = pipeline("summarization")
File "/opt/homebrew/Caskroom/miniconda/base/envs/lib/python3.9/site-packages/transformers/pipelines/__init__.py", line 779, in pipeline
framework, model = infer_framework_load_model(
File "/opt/homebrew/Caskroom/miniconda/base/envs/lib/python3.9/site-packages/transformers/pipelines/base.py", line 238, in infer_framework_load_model
_class = getattr(transformers_module, f"TF{architecture}", None)
File "/opt/homebrew/Caskroom/miniconda/base/envs/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1137, in __getattr__
value = getattr(module, name)
File "/opt/homebrew/Caskroom/miniconda/base/envs/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1136, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/opt/homebrew/Caskroom/miniconda/base/envs/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1148, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.t5.modeling_tf_t5 because of the following error (look up to see its traceback):
No module named 'keras.engine'
```
### Who can help?
@gante and @Rocketknight1
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Reproduction
- follow install example from official docs (https://huggingface.co/docs/transformers/installation)
- run sample code from model page https://huggingface.co/facebook/bart-large-cnn
`**error does not occur when using pytorch**`
### Expected behavior
transformer library does not raise exception | 05-13-2023 19:02:12 | 05-13-2023 19:02:12 | Hi @dcdieci, this issue is the result of some namespace moves inside TensorFlow which occurred because Keras was partly decoupled from TensorFlow and moved to its own repository. If you look at [our codebase](https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_tf_utils.py#L68-L77), you can see that we import these functions from `keras` for TF versions >= 2.11 and from `tensorflow.python.keras` below this. It seems like in your case you're using a newer version of TensorFlow, but you're missing a modern version of the standalone Keras library (which should be installed as a dependency of TensorFlow). Can you try `pip install --upgrade keras` to see if that resolves the issue?<|||||>@Rocketknight1 many thanks for getting back to me that fast (considering the amount of issues, unbelievable).
I changed to pytorch and everything is working fine. I am very new to the AI space and I was wondering if it does make sense to use tensorflow then at all, since pytorch is running?
<|||||>Both PyTorch and TensorFlow do basically the same thing (they're a framework for linear algebra + acceleration on GPU and TPU) - we support both, but you only need one! If PyTorch is easier to get working on your system then it's totally fine to just use it instead.<|||||>I ran into this too. I can't speak to the specific issue, but it's related to the latest pre-release of tensorflow (https://github.com/tensorflow/tensorflow/releases/tag/v2.13.0-rc0), which is installed when you do a `pip install tensorflow` (odd that a pre-release get's installed, but that's another story). There is some keras related breaking changes in that release. In any case, I was able to get around this by building+installing tensorflow 2.12.0 from source.<|||||>Thanks for the heads-up - we'll do some testing with TF 2.13 before it releases!<|||||>There are upcoming breaking changes to keras. Please see
https://github.com/keras-team/keras/issues/18141
Also see the release notes here https://github.com/tensorflow/tensorflow/releases/tag/v2.13.0-rc0 particularly the part in 'Breaking Changes' that talks about restricting access so that only public symbols are accessible.
This will need updates to transformers to resolve I think.<|||||>I've opened a PR at #23663 that should cover this issue as well as future-proof against other changes. My limited testing with 2.13rc0 on my local machine looked good, but if you get the chance please try it out with `pip install --upgrade git+https://github.com/huggingface/transformers.git@tf_future_proofing`
cc @dcdieci @elfringham @sanderpick<|||||>This has now been merged - if anyone else is having compatibility issues with `transformers` and TensorFlow 2.13 and finds this issue, please install transformers from `main` with `pip install --upgrade git+https://github.com/huggingface/transformers.git`. Once we release 4.30 (probably end of May / early June) you can go back to just `pip install --upgrade transformers`.
If anyone is still encountering this problem after installing the latest version, please reply or reopen this issue and let us know!<|||||>Hello! I think I'm still running into this issue.
* python 3.8.16
* tensorflow 2.13.0rc1
* transformers from main
I'm on an M2 rather than an M1, and maybe I should try downgrading TF to 2.11? _Edit: After looking into this, I don't think I can actually downgrade, so I'm stuck on the current tensorflow. I was able to get code running with pytorch though!_<|||||>Hi! This regression was caused by a PR we merged yesterday and should be fixed as of about an hour ago. Please install the latest version from `main` and try again. Thanks again to @frostming for spotting that one so quickly!<|||||>We will also be making a proper release of version 4.30 later this week that should correctly support TF 2.13, so hopefully after that everyone can just `pip install --upgrade transformers` and stop installing from `main`.<|||||>Thank you so much! <|||||>After re-running the installation (from commit 12298cb65c7e9d615b749dde935a0b4966f4ae49) it still fails on my end, but github also seems to be having problems, so maybe that commit is behind.<|||||>@coolhannes Can you paste the error message you're getting?<|||||>Yep! Sorry. (Also you may see some references to pytorch, I switched to that in the meantime but this is the error I get from TF).
Running
```
from transformers import TFAutoModelForSequenceClassification, AutoTokenizer
model_name = "distilbert-base-uncased-finetuned-sst-2-english"
model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
and getting:
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
File ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/utils/import_utils.py:1084, in _LazyModule._get_module(self, module_name)
1083 try:
-> 1084 return importlib.import_module("." + module_name, self.__name__)
1085 except Exception as e:
File ~/.pyenv/versions/3.8.16/lib/python3.8/importlib/__init__.py:127, in import_module(name, package)
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1014, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:991, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:975, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:671, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:843, in exec_module(self, module)
File <frozen importlib._bootstrap>:219, in _call_with_frames_removed(f, *args, **kwds)
File ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/models/distilbert/modeling_tf_distilbert.py:37
29 from ...modeling_tf_outputs import (
30 TFBaseModelOutput,
31 TFMaskedLMOutput,
(...)
35 TFTokenClassifierOutput,
36 )
---> 37 from ...modeling_tf_utils import (
38 TFMaskedLanguageModelingLoss,
39 TFModelInputType,
40 TFMultipleChoiceLoss,
41 TFPreTrainedModel,
42 TFQuestionAnsweringLoss,
43 TFSequenceClassificationLoss,
44 TFTokenClassificationLoss,
45 get_initializer,
46 keras_serializable,
47 unpack_inputs,
48 )
49 from ...tf_utils import check_embeddings_within_bounds, shape_list, stable_softmax
File ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/modeling_tf_utils.py:77
76 from keras.__internal__ import KerasTensor
---> 77 from keras.engine.base_layer_utils import call_context
78 elif parse(tf.__version__).minor >= 11:
ModuleNotFoundError: No module named 'keras.engine'
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
Cell In[4], line 7
4 from torch.nn.parallel import DataParallel
6 model_name = "distilbert-base-uncased-finetuned-sst-2-english"
----> 7 model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
8 tokenizer = AutoTokenizer.from_pretrained(model_name)
File ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py:483, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
479 return model_class.from_pretrained(
480 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
481 )
482 elif type(config) in cls._model_mapping.keys():
--> 483 model_class = _get_model_class(config, cls._model_mapping)
484 return model_class.from_pretrained(
485 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
486 )
487 raise ValueError(
488 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
489 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
490 )
File ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py:375, in _get_model_class(config, model_mapping)
374 def _get_model_class(config, model_mapping):
--> 375 supported_models = model_mapping[type(config)]
376 if not isinstance(supported_models, (list, tuple)):
377 return supported_models
File ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py:657, in _LazyAutoMapping.__getitem__(self, key)
655 if model_type in self._model_mapping:
656 model_name = self._model_mapping[model_type]
--> 657 return self._load_attr_from_module(model_type, model_name)
659 # Maybe there was several model types associated with this config.
660 model_types = [k for k, v in self._config_mapping.items() if v == key.__name__]
File ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py:671, in _LazyAutoMapping._load_attr_from_module(self, model_type, attr)
669 if module_name not in self._modules:
670 self._modules[module_name] = importlib.import_module(f".{module_name}", "transformers.models")
--> 671 return getattribute_from_module(self._modules[module_name], attr)
File ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py:616, in getattribute_from_module(module, attr)
614 if isinstance(attr, tuple):
615 return tuple(getattribute_from_module(module, a) for a in attr)
--> 616 if hasattr(module, attr):
617 return getattr(module, attr)
618 # Some of the mappings have entries model_type -> object of another model type. In that case we try to grab the
619 # object at the top level.
File ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/utils/import_utils.py:1074, in _LazyModule.__getattr__(self, name)
1072 value = self._get_module(name)
1073 elif name in self._class_to_module.keys():
-> 1074 module = self._get_module(self._class_to_module[name])
1075 value = getattr(module, name)
1076 else:
File ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/utils/import_utils.py:1086, in _LazyModule._get_module(self, module_name)
1084 return importlib.import_module("." + module_name, self.__name__)
1085 except Exception as e:
-> 1086 raise RuntimeError(
1087 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1088 f" traceback):\n{e}"
1089 ) from e
RuntimeError: Failed to import transformers.models.distilbert.modeling_tf_distilbert because of the following error (look up to see its traceback):
No module named 'keras.engine'
```<|||||>Hi @coolhannes, that error refers to code from an older commit - it looks like you might not have upgraded to the most recent version on `main`! Try `pip install --upgrade git+https://github.com/huggingface/transformers.git` and the error should go away.<|||||>Oh my god, I think I was installing this instead of uninstalling/upgrading -- but this is finally working, thank you @Rocketknight1! |
transformers | 23,351 | closed | Document what layerdrop does | ### System Info
- `transformers` version: 4.28.1
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.11.2
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
In `transformers/models/wav2vec2/configuration_wav2vec2.py` there is a parameter `layerdrop` in `__init__` which is not documented. This parameter is set (and overrides the default `0.1`) in the examples at `examples/pytorch/speech-recognition/README.md`, so it seems to be important.
### Expected behavior
Document `layerdrop`. | 05-13-2023 17:02:04 | 05-13-2023 17:02:04 | cc @sanchit-gandhi <|||||>Hey @RobertBaruch - good catch! It's indeed missing from the Wav2Vec2 config docstring. Would you like to open a PR to add this info? Easiest would be to copy the details from one of the existing configs where the info is present, e.g. OPT:
https://github.com/huggingface/transformers/blob/130e15429116689c9d747be2cdd8c4be7bb7e2bd/src/transformers/models/opt/configuration_opt.py#L70<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closed via https://github.com/huggingface/transformers/pull/23691 |
transformers | 23,350 | closed | Encoder-Decoder: OPT as a decoder | ### System Info
- `transformers` version: 4.29.1
- Platform: Linux-5.14.0-162.6.1.el9_1.0.1.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
### Who can help?
@ArthurZucker and @younesbelkada, and maybe also @gante for generation in enc/dec scenarios
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import EncoderDecoderModel, OPTConfig, MT5Config, MT5Model, OPTForCausalLM, AutoTokenizer
def init_enc_dec(enc_model_name: str = "google/mt5-small", dec_model_name: str = "facebook/opt-350m"):
config_encoder = MT5Config.from_pretrained(enc_model_name)
config_encoder.is_encoder_decoder = False
config_encoder.add_cross_attention = False
config_encoder.is_decoder = False
config_encoder.num_decoder_layers = 0
config_decoder = OPTConfig.from_pretrained(dec_model_name)
config_decoder.add_cross_attention = True
config_decoder.is_decoder = True
encoder = MT5Model.from_pretrained(enc_model_name, config=config_encoder).get_encoder()
decoder = OPTForCausalLM.from_pretrained(dec_model_name, config=config_decoder)
model = EncoderDecoderModel(encoder=encoder, decoder=decoder)
return model
def main():
model = init_enc_dec()
model.eval()
enc_tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
dec_tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
with torch.no_grad():
inputs = enc_tokenizer("I like bananas", return_tensors="pt")
outputs = model.generate(**inputs)
print(dec_tokenizer.batch_decode(**outputs))
if __name__ == '__main__':
main()
```
This leads to
```
Traceback (most recent call last):
File "/home/local/vanroy/llm-generation/enc_dec.py", line 38, in <module>
main()
File "/home/local/vanroy/llm-generation/enc_dec.py", line 33, in main
outputs = model.generate(**inputs)
File "/home/local/vanroy/llm-generation/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/local/vanroy/llm-generation/.venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 1515, in generate
return self.greedy_search(
File "/home/local/vanroy/llm-generation/.venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 2332, in greedy_search
outputs = self(
File "/home/local/vanroy/llm-generation/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/local/vanroy/llm-generation/.venv/lib/python3.10/site-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py", line 617, in forward
decoder_outputs = self.decoder(
File "/home/local/vanroy/llm-generation/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
TypeError: OPTForCausalLM.forward() got an unexpected keyword argument 'encoder_hidden_states'
```
### Expected behavior
I am trying to use the encoder-decoder functionality but I am not sure whether I am doing something wrong, or whether OPT is simply not compatible with this architecture.
| 05-13-2023 16:47:09 | 05-13-2023 16:47:09 | Hey @BramVanroy ๐
I believe most, if not all, recent decoder-only models are not compatible with `EncoderDecoderModel`, as they are missing a [block like this one in GPT2](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_gpt2.py#L404) (plus related changes, like making `encoder_hidden_states` an argument).<|||||>Thanks for the reply @gante! I indeed had found this difference in the code. Do you know whether there are any plans to make more decoders compatible? <|||||>@BramVanroy not on our end, since Decoder-only models have been stealing the spotlight!
We'd be happy to merge the appropriate changes, though<|||||>Okay, that makes sense. Different priorities! Thanks for the reply Joรฃo. |
transformers | 23,349 | open | Add mPLUG-Owl | # What does this PR do?
This PR adds the mPLUG-Owl model from [X-PLUG/mPLUG-Owl](https://github.com/X-PLUG/mPLUG-Owl), a multi-modal large language model that outperforms LLaVA and MiniGPT-4.
Here is some code showing how to play with it:
```Python
from transformers import MplugOwlForConditionalGeneration, MplugOwlProcessor
from PIL import Image
import requests
import torch
model = MplugOwlForConditionalGeneration.from_pretrained("MAGAer13/mplug-owl-llama-7b")
processor = MplugOwlProcessor.from_pretrained("MAGAer13/mplug-owl-llama-7b")
prompts = [
'''The following is a conversation between a curious human and AI assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: <image>
Human: how many cats are there?
AI: ''']
image_list = ['http://images.cocodataset.org/val2017/000000039769.jpg']
images = [Image.open(requests.get(_, stream=True).raw) for _ in image_list]
inputs = processor(prompts, images, return_tensors='pt')
inputs = inputs.to('cuda')
model = model.to('cuda').half()
res = model.generate(**inputs, max_length=512, num_beams=1)
print(processor.decode(True,token_ids=res.tolist()[0]))
```
<!-- Remove if not applicable -->
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
| 05-13-2023 13:30:33 | 05-13-2023 13:30:33 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23349). All of your documentation changes will be reflected on that endpoint.<|||||>> Thank you very much for this PR and adding this great model! Let us know when this is ready for review, I can see that a lot of CI tests are failing, you should resolve them by removing the `from .xxxx import *` statements in the init file of `mplug_owl` folder
I added some commits, but I noticed that there are still some failed tests. Among them, tests_torch reports an error "worker 'gw1' crashed while running 'tests/models/mplug_owl/test_modeling_mplug_owl.py::MplugOwlModelTest::test_forward_signature'", but I can pass this test in my local environment. I would like to seek some advice.
In addition, for other tests such as tests_flax, is it necessary for me to pass them?
<|||||>> I added some commits, but I noticed that there are still some failed tests. Among them, tests_torch reports an error "worker 'gw1' crashed while running 'tests/models/mplug_owl/test_modeling_mplug_owl.py::MplugOwlModelTest::test_forward_signature'", but I can pass this test in my local environment. I would like to seek some advice.
The job ran out of the available RAM as seen in the picture below.
<img width="1052" alt="Screenshot 2023-05-19 064910" src="https://github.com/huggingface/transformers/assets/2521628/04049d9b-84d0-4cfd-bba4-6f82a6081628">
Note that the CI launched 3 pytest process instead of a single one, so it will use more memory. Also, our CI runner has only 16GB RAM, which might be different from you hardware.
The crash here means you might use (some) large values in the test file to create the models that are used for testing.
> In addition, for other tests such as tests_flax, is it necessary for me to pass them?
Depending on what kinds of failure. If it is something like import error, yes, we expect the contributor to fix and pass the CI :-). If it is something like Hub Connection error, it's fine, we can leave it.
<|||||>> > I added some commits, but I noticed that there are still some failed tests. Among them, tests_torch reports an error "worker 'gw1' crashed while running 'tests/models/mplug_owl/test_modeling_mplug_owl.py::MplugOwlModelTest::test_forward_signature'", but I can pass this test in my local environment. I would like to seek some advice.
>
> The job ran out of the available RAM as seen in the picture below. <img alt="Screenshot 2023-05-19 064910" width="1052" src="https://user-images.githubusercontent.com/2521628/239440445-04049d9b-84d0-4cfd-bba4-6f82a6081628.png"> Note that the CI launched 3 pytest process instead of a single one, so it will use more memory. Also, our CI runner has only 16GB RAM, which might be different from you hardware.
>
> The crash here means you might use (some) large values in the test file to create the models that are used for testing.
>
> > In addition, for other tests such as tests_flax, is it necessary for me to pass them?
>
> Depending on what kinds of failure. If it is something like import error, yes, we expect the contributor to fix and pass the CI :-). If it is something like Hub Connection error, it's fine, we can leave it.
Thank you for your help, now all the checks have passed.<|||||>Awesome @LukeForeverYoung !
Is the PR ready for a first review?<|||||>> Awesome @LukeForeverYoung !
> Is the PR ready for a first review?
Yes<|||||>> Hi @LukeForeverYoung Let us know if you need any help finishing up the PR and if you have more questions!
Sorry for not being able to reply recently. Our team has been swamped with work, which means the process of integrating mPLUG-Owl into transformers may be paused for a long time. Thank you very much for your review. It would be great if anyone is willing to take over. |
transformers | 23,348 | closed | Add support for GIT model in VQA pipelines | # What does this PR do?
This PR implements support for generative models in the VQA pipeline (more precisely, the GIT model).
Fixes part of #21110
This is my first contribution here, I am uncertain if my approach is correct. Please advise me if any modifications are necessary ๐
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. => #21110
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge | 05-13-2023 12:08:48 | 05-13-2023 12:08:48 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23348). All of your documentation changes will be reflected on that endpoint.<|||||>I have two remarks here :
* The CI is red due to this change : https://github.com/huggingface/transformers/pull/23348/files#diff-b452cc4f093e4b991e92054cf7d504edab44be07c4957969df85a5562238313cR48 the model now used in test is `hf-internal-testing/tiny-random-ViltForQuestionAnswering` instead of `hf-internal-testing/tiny-vilt-random-vqa` (and it seems like the new model image processor can't process images, I am not sure about how to fix it)
* The test for GIT in VQA pipeline doesn't run because `hf-internal-testing/tiny-random-GitForVisualQuestionAnswering` doesn't exist, I need help about this point as well <|||||>Thanks for the review, I updated my changes following your comments. However, I have several doubt on my approach.
### Beam search for scores
I use beam scores to provide a score to follow the "signature" of the pipeline described [here](https://github.com/huggingface/transformers/blob/c3c9b03d55f2d8094a2ac058db566d469baa8bbd/src/transformers/pipelines/visual_question_answering.py#L106-L110). Is it a correct ?
The beam search is so slow that it makes the pipeline test [timeout](https://app.circleci.com/pipelines/github/huggingface/transformers/65019/workflows/be9d4fde-5cbb-4db0-ba85-0110d8988953/jobs/806494), it runs locally but in more than 120s, which make me think I'm not in the right way here
### Tokenizer padding
Also when I use the pipeline with `microsoft/git-base-textvqa` in batch mode, I have this warning :
```
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.
```
This warning is legitimate, but I don't know how to fix it as `padding_side` can only be set at tokenizer init.
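(A possible workaround, sketched below but not verified on my side, is to load the tokenizer with left padding up front and pass it to the pipeline explicitly:)
```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("microsoft/git-base-textvqa", padding_side="left")
vqa = pipeline("visual-question-answering", model="microsoft/git-base-textvqa", tokenizer=tokenizer)
```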
### Broken `Vilt` tests
Due to [this change](https://github.com/huggingface/transformers/pull/23348/files#diff-b452cc4f093e4b991e92054cf7d504edab44be07c4957969df85a5562238313cR48) , the model used in unit test for Vilt model is now `hf-internal-testing/tiny-random-ViltForQuestionAnswering` instead of `hf-internal-testing/tiny-vilt-random-vqa`.
It seems like the new model image processor can't process images, I am not sure about how to fix it
Should I fix the model directly in the hub ? <|||||>Gently ping here @NielsRogge :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,347 | closed | Still cannot import cached_path | ### System Info
- `transformers` version: 4.29.1
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.36
- Python version: 3.11.3
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.10 (cpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: ?
### Who can help?
@sanchit-gandhi I guess...
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import cached_path
be it as a single instruction in ipython or when using the ConvAIModel from simpletransformers
Tried all versions from before 4.22.0
### Expected behavior
Import the method | 05-13-2023 07:00:57 | 05-13-2023 07:00:57 | I don't think that that exists. Did you mean `default_cache_path` or maybe `TRANSFORMERS_CACHE`? Those are in file_utils (and actually utils/hub)
```python
from transformers.file_utils import default_cache_path, TRANSFORMERS_CACHE
```
https://github.com/huggingface/transformers/blob/cf11493dce0a1d22446efe0d6c4ade02fd928e50/src/transformers/utils/hub.py#L80-L102<|||||>Bram,
Thanks so much for answering!
This is what I get from running the model:

Which refers to this part of the >>>miniconda3/envs/ST2/lib/python3.11/site-packages/simpletransformers/conv_ai/conv_ai_utils.py:

And that's where all stops since this function is used across the script and creates objects that are used by the model.py.
Of course this is simpletransformers code, but it requests an import from transformers that apparently does not exist...
Thanks!
<|||||>Unfortunately that is not a problem of `transformers` but of `simpletransformers`. In their requirements, they specify `"transformers>=4.6.0"` so it is possible that they do not test against/support the most recent versions.
https://github.com/ThilinaRajapakse/simpletransformers/blob/365b27feb27e8337a7f4f0244eff8683c5763ef8/setup.py#L29
I suggest that you try 4.6.0 and if that does not work, you should ask them on their repository because there is not much that can be done on this end.
<|||||>Will do. Appreciate it.<|||||>Thanks @BramVanroy! And best of luck @Fshrink with getting your `simpletransformers` script working @Fshrink! |
transformers | 23,346 | closed | Asynchronous CUDA Execution Issue with Hugging Face Transformers | ### System Info
I am reaching out to you regarding an issue I've been experiencing with the Hugging Face Transformers library in a PyTorch environment. I'm encountering unexpected CUDA synchronizations while executing my code, which seems to be impairing the performance of my model. I am hopeful that you can provide some guidance on this matter.
As a bit of background, I am utilizing the DeepSpeed integration to run inference with the pre-trained Switch Transformers (8-expert) model from the Hugging Face library for a project. This involves processing large amounts of data and hence necessitates efficient GPU utilization for timely results. To maximize the GPU's potential, I have been aiming to leverage CUDA's asynchronous execution feature.
In CUDA, as you know, the CPU queues kernels for execution on the GPU. Ideally, while the CPU is busy queueing up kernels, the GPU should be asynchronously running the kernels that have already been queued. This is the behavior I have previously observed when using the Megatron-LM library.
However, when using the Hugging Face Transformers library, I am finding that there seems to be a CUDA synchronize call after every kernel, which effectively serializes the CPU and GPU operations. This has led to a significant decrease in processing speed and efficiency, as the GPU is left idle while the CPU prepares the next kernel.
I am unsure whether this is due to an issue with my code or if it's an inherent characteristic of the Hugging Face Transformers library. I was wondering if you might have any insights into this issue or any suggestions for further troubleshooting steps I could take. Does the implementation of Switch Transformer Model have implicit synchronization that I might not be aware of?
### Who can help?
@ArthurZucker @younesbelkada @stas00
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Wrap the pretrained switch model with deepspeed initialize
2. Take the first Switch Layer
3. Initialize a random tensor to run forward on
4. Profile using Pytorch profiler
5. Behavior of profile seems as if CPU waits for the CUDA kernel to finish

Below is the code I ran :
```
# (imports filled in so the snippet is self-contained)
import json

import deepspeed
import torch
from torch.profiler import profile, ProfilerActivity

from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
from transformers.deepspeed import HfDeepSpeedConfig
# ds_config_file = "ds_zero_stage_0_config.json"
# ds_config_file = "ds_zero_stage_infinity-cpu.json"
ds_config_file = "ds_zero_stage_2_config.json"
with open(ds_config_file) as fin:
ds_config = json.load(fin)
tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-8", device_map="auto")
dschf = HfDeepSpeedConfig(ds_config)
model, optimizer, _, _ = deepspeed.initialize(
config = ds_config,
model = model,
model_parameters = [{
"params": [p for n,p in list(model.named_parameters())],
"name" : "base",
"weight_decay" : 0.01
}]
)
model.eval()
# Get a switch layer
switch_layer = model.encoder.block[1].layer[1].mlp
batch_size = 1000
seq_len = 1000
d_model = model.module.config.d_model
d_ff = model.module.config.d_ff
din = torch.rand(size=(batch_size, seq_len, d_model),
dtype=torch.float32,
device="cuda:0"
)
#hidden_states, (router_logits, expert_index)
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
record_shapes=True, profile_memory=True) as prof:
for i in range(10):
dout = switch_layer(din)
prof.export_chrome_trace("trace_check.json")
```
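One way to pin down where the synchronizations come from (a sketch; `torch.cuda.set_sync_debug_mode` is available in recent PyTorch releases):
```
import torch

torch.cuda.set_sync_debug_mode("warn")  # warn on every call that forces an implicit CUDA sync
for i in range(10):
    dout = switch_layer(din)  # hidden synchronizations in the forward pass are now reported
```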
### Expected behavior
I would like to know if there are implicit synchronizations in the switch model implementation, and why the profiled results show that the CPU synchronizes with the CUDA runtime? | 05-13-2023 05:48:28 | 05-13-2023 05:48:28 | Hey! Thanks for opening an issue. For other people who might wonder, this is because of the inner workings of accelerate, which are activated by the call to `device_map="auto"`, meaning that the model will be dispatched between both CPU and GPU. |
transformers | 23,345 | closed | Prompt tuning for Dolly-v2-7b model for Question and Answer not supported? | I am following this page for `Prompt tuning for Dolly-v2-7b model for Question and Answer`: https://huggingface.co/docs/peft/task_guides/clm-prompt-tuning
Instead of doing the training the old `pytorch` way, I am doing the training using the `Trainer` API. Also, in this link
https://huggingface.co/stevhliu/bloomz-560m_PROMPT_TUNING_CAUSAL_LM/tree/main , I see 2 files `adapter_config.json` and `adapter_model.bin`.
But when I save the model using Trainer api I do not see any `config file`. Also model size is bigger than what is shown in above link.
Is this the correct way to **train**, **save** and **load** a model for Prompt Tuning?
**Inference takes a lot of time to generate and gives some gibberish output.**
### Who can help?
@stevhliu @sgugger @lvwerra
Here is my code:
The use-case is:
I have a `Context` which has a lot of paragraphs, and then a `Question`; the model has to `answer` the `Question` based on the `Context` in a professional manner. Also, can it classify the `Question` as **relevant** if the answer is present in the `Context` and **irrelevant** if the `answer` is not in the `Context`?
The code that I have written is:
```
import torch
from torch.utils.data import Dataset

from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

peft_config = PromptTuningConfig(
task_type=TaskType.CAUSAL_LM,
prompt_tuning_init=PromptTuningInit.TEXT,
num_virtual_tokens=30,
prompt_tuning_init_text="Answer the question as truthfully as possible using and only using the provided context and if the answer is not contained within the context/text, say Irrelevant",
tokenizer_name_or_path="dolly-v2-7b"
)
```
```
tokenizer = AutoTokenizer.from_pretrained("dolly-v2-7b")
model = AutoModelForCausalLM.from_pretrained("dolly-v2-7b",load_in_8bit=True,device_map='auto') #,load_in_8bit=True
```
`model = get_peft_model(model, peft_config)`
```
train_data = [
{
"Context": "How to Link Credit Card to ICICI Bank Account Step 1: Login to ICICIBank.com using your existing internet banking credentials. Step 2: Go to the 'Service Request' section. Step 3: Visit the 'Customer Service' option. Step 4: Select the Link Accounts/ Policy option to link your credit card to the existing user ID.",
"Question": "How to add card?",
"Answer": "Relevant. To add your card you can follow these steps: Step 1: Login to ICICIBank.com using your existing internet banking credentials. Step 2: Go to the 'Service Request' section. Step 3: Visit the 'Customer Service' option. Step 4: Select the Link Accounts/ Policy option to link your credit card to the existing user ID."
},
{
"Context": "The python programming language is used in many different fields including web development, data analysis, artificial intelligence and scientific computing. It is a high-level language that is easy to learn and has a large community of users who can provide support and advice. ",
"Question": "What is Python used for?",
"Answer": "Relevant. Python is used in many different fields including web development, data analysis, artificial intelligence and scientific computing."
}
]
```
# Define a function to map examples to inputs and targets
```
def preprocess_function(examples):
tokenized_examples = tokenizer(
examples["Question"][0],
examples["Context"][0],
truncation=True,
max_length=1024,
padding="max_length"
)
tokenized_examples['labels']=tokenizer(
examples["Answer"],
truncation=True,
max_length=1024,
padding="max_length",
return_tensors="pt")['input_ids'][0]
return tokenized_examples
```
`tokenized_train_data = [preprocess_function(example) for example in train_data]`
```
class DemoDataset(Dataset):
def __init__(self, data):
self.data = data
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
sample = self.data[idx]
item = {k: torch.tensor(v) for k, v in sample.items()}
return item
```
`dataset = DemoDataset(tokenized_train_data)`
```
training_args = TrainingArguments(
output_dir="results",
learning_rate=1e-5,
per_device_train_batch_size=1,
num_train_epochs=10,
weight_decay=0.01,
logging_steps=1,
save_steps=1,
logging_dir="logs"
)
```
```
trainer = Trainer(
model=model,
args=training_args,
train_dataset=dataset,
# data_collator=data_collator,
tokenizer=tokenizer
)
trainer.train()
```
**Is this the correct way to save?**
`trainer.save_model("dolly3b_demo_model")`
**Inference**
**Is this the correct way to do inference?**
```
from peft import PeftModel, PeftConfig
tokenizer = AutoTokenizer.from_pretrained("dolly-v2-3b")
model = AutoModelForCausalLM.from_pretrained("dolly3b_demo_model")
model = get_peft_model(model, peft_config)
```
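For comparison, my understanding of the usual PEFT pattern from the docs (a sketch, untested on my side; the adapter directory name is a placeholder) is to save only the adapter and reload it on top of the base model:
```
# saving: writes adapter_config.json + adapter_model.bin
model.save_pretrained("dolly_prompt_tuning_adapter")

# loading for inference
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = AutoModelForCausalLM.from_pretrained("dolly-v2-7b")
model = PeftModel.from_pretrained(base_model, "dolly_prompt_tuning_adapter")
tokenizer = AutoTokenizer.from_pretrained("dolly-v2-7b")
```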
```
# Define example
context = "How to Link Credit Card to ICICI Bank Account Step 1: Login to ICICIBank.com using your existing internet banking credentials. Step 2: Go to the 'Service Request' section. Step 3: Visit the 'Customer Service' option. Step 4: Select the Link Accounts/ Policy option to link your credit card to the existing user ID."
question = "How to add card?"
# Encode inputs with prompt and tokenize
inputs = [f"{context} {question}"]
inputs_encoded = tokenizer(inputs, padding=True, truncation=True, max_length=1024, return_tensors="pt")
```
```
outputs = model.generate(input_ids=inputs_encoded["input_ids"], attention_mask=inputs_encoded["attention_mask"], max_new_tokens=200,)
print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))
``` | 05-13-2023 03:51:15 | 05-13-2023 03:51:15 | Hi @pratikchhapolika, thanks for raising an issue!
This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.<|||||>> Hi @pratikchhapolika, thanks for raising an issue!
>
> This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.
Hi @amyeroberts, since I did not get any response in the forums, I thought to ask here. <|||||>@pratikchhapolika I understand, however the github issues are still reserved for feature requests and bugs as it's not sustainable for everyone to ask here if there isn't a response on the forum.
Another place to ask for help on questions such as these are on the [discord forum](https://t.co/1n75wi976V?amp=1). Specifically, there's an `ask-for-help` channel which is very active. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,344 | closed | Whisper processor no longer saves mel_filters with `.save_pretrained()` | ### System Info
- `transformers` version: 4.29.1
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0+cu118 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (cpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@hollance @sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running this code:
```python
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained('openai/whisper-tiny')
processor.save_pretrained('test')
```
will output the following `preprocessor_config.json`:
```json
{
"chunk_length": 30,
"feature_extractor_type": "WhisperFeatureExtractor",
"feature_size": 80,
"hop_length": 160,
"n_fft": 400,
"n_samples": 480000,
"nb_max_frames": 3000,
"padding_side": "right",
"padding_value": 0.0,
"processor_class": "WhisperProcessor",
"return_attention_mask": false,
"sampling_rate": 16000
}
```
which does not include `mel_filters`. This is different to the official models saved on the hub, which do include this: https://huggingface.co/openai/whisper-tiny/blob/main/preprocessor_config.json
This is due to the following recent update: https://github.com/huggingface/transformers/commit/7f9195090160d508c7afb2e444e34f181872dd10
Linked issue: https://github.com/xenova/transformers.js/issues/107
### Expected behavior
Saving the processor should also save the `mel_filters`. | 05-13-2023 01:00:02 | 05-13-2023 01:00:02 | That change was made before the update you mentioned, in https://github.com/huggingface/transformers/pull/21267
It is not necessary to save the mel filters in the JSON file since they are completely defined by the other properties from that JSON file. Plus it makes the JSON file huge and unreadable.
As far as I'm aware, the PyTorch implementation of Whisper will load JSON config files with or without the mel filters just fine. If this breaks in Transformers.js, then the issue would seem to be there.
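For what it's worth, a minimal sketch of that point: re-instantiating the feature extractor from the slimmed-down config rebuilds the filters from `n_fft`, `feature_size` and `sampling_rate` (the `test` directory here is the one written by `save_pretrained` in the reproduction above).
```python
from transformers import WhisperFeatureExtractor

# The saved preprocessor_config.json has no "mel_filters" key, but the filters
# are reconstructed from the other properties when the extractor is initialized.
fe = WhisperFeatureExtractor.from_pretrained("test")
print(fe.mel_filters.shape)  # shape is fully determined by n_fft and feature_size
```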
<|||||>Okay thanks! I agree, it does make it quite unreadable; I just thought I would mention it since it causes a mismatch with some of the official whisper models on the hub. |
transformers | 23,343 | closed | Removing one of the twice defined position_embeddings in LongFormer | # What does this PR do?
The self.position_embeddings in LongFormerEmbeddings is defined twice. Removing the first without padding_idx
l. 451/452
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline]
## Who can review?
- text models: @ArthurZucker and @younesbelkada
| 05-12-2023 22:29:08 | 05-12-2023 22:29:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,342 | open | [WIP] Add tf swiftformer | # What does this PR do?
Adds the TensorFlow version of the "SwiftFormer".
Fixes #22771
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/huggingface/transformers/issues/22771
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts @D-Roberts | 05-12-2023 20:02:13 | 05-12-2023 20:02:13 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23342). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @joaocmd,
Rapid work on opening the TF port! Let me or @Rocketknight1 know when the PR is ready for review or you experience any issues when porting. <|||||>Hi @Rocketknight1, could I get some pointers as to why I get errors like in most of the tests:
```
E ValueError: Exception encountered when calling layer 'tf_swift_former_model_18' (type TFSwiftFormerModel).
E
E The following keyword arguments are not supported by this model: ['input_ids'].
E
E Call arguments received by layer 'tf_swift_former_model_18' (type TFSwiftFormerModel):
E โข pixel_values={'pixel_values': 'tf.Tensor(shape=(13, 224, 224, 3), dtype=float32)'}
E โข output_hidden_states=None
E โข return_dict=None
E โข training=False
src/transformers/modeling_tf_utils.py:500: ValueError
```
The PyTorch model has this following docstring but I don't see where the input_ids part is being taken care of.
```py
"""
Here we also overwrite some of the tests of test_modeling_common.py, as SwiftFormer does not use input_ids, inputs_embeds,
attention_mask and seq_length.
"""
```
Thanks!<|||||>It seems like it is entering in the `else` statement at line 581 of `src/transformers/modeling_tf_utils.py`:
```python
if "args" in output:
if output["args"] is not None and is_tf_symbolic_tensor(output["args"]):
tensor_name = output["args"].name.split(":")[0]
output[tensor_name] = output["args"]
else:
# `args` in this case is always the first parameter, then `input_ids`
output["input_ids"] = output["args"]
del output["args"]
```
Thus it is injecting the `input_ids` argument into the dictionary.
@amyeroberts @Rocketknight1 How should I get around this? It must be some misconfiguration in my tests or models.
<|||||>@joaocmd Just looking at the error and the CI runs, I think the issue might be a missing `@unpack_inputs` decorator on the call method for the MainLayer class<|||||>> @joaocmd Just looking at the error and the CI runs, I think the issue might be a missing `@unpack_inputs` decorator on the call method for the MainLayer class
Thank you @amyeroberts! It seems like that wasn't causing any issue (yet), but thanks to your comment I found out that I had a duplicate `@unpack_inputs` in one of the models.<|||||>Hi @amyeroberts and @Rocketknight1, can I get some help with the tests that are still failing? I'm getting `ValueError: cannot reshape array of size 10368 into shape (3,3,3,24)` for these two tests:
* `tests/models/swiftformer/test_modeling_tf_swiftformer.py::TFSwiftFormerModelTest::test_compile_tf_model`
* `tests/models/swiftformer/test_modeling_tf_swiftformer.py::TFSwiftFormerModelTest::test_save_load`
But I don't understand exactly what is being reshaped into the wrong shape. Could I get some insight as to what these tests are doing and why it might be failing? Thanks!<|||||>Hi @joaocmd, there's been some large updates to our TF models regarding how they're built - @Rocketknight1 can give you more details :)
Are these errors happening if you rebase on `main`? <|||||>> Hi @joaocmd, there's been some large updates to our TF models regarding how they're built - @Rocketknight1 can give you more details :)
>
> Are these errors happening if you rebase on `main`?
Hi @amyeroberts, just rebased the branch. I think it's failing on the same tests but the error on these two tests changed to:
```
NotImplementedError: Could not infer input image shape from config, please override input_signature to specify input shapes.
```
Looking at the stack trace it seems like the image size should have been specified:
```python
if hasattr(vision_config, "image_size"):
pixel_values_shape[2] = pixel_values_shape[3] = vision_config.image_size
elif hasattr(vision_config, "input_size"):
pixel_values_shape[2] = pixel_values_shape[3] = vision_config.input_size
else:
raise NotImplementedError( # <------ this error here
"Could not infer input image shape from config, please override input_signature to specify input shapes."
)
```
Shouldn't this also affect the original model?
<|||||>@joaocmd Regarding the error, no, it shouldn't affect the original model. `image_size` is a parameter we add in the configs, even if it's not always used by the model as it's often important for parameterizing other things or understanding. We allow [this here](https://github.com/huggingface/transformers/blob/468aed39afffafe417819a309a4e6d45d2a9e8f4/utils/check_config_attributes.py#L187). It should have been added, and we can add in this PR, but the PT model can do without.
You'll notice that the error is being raise in `modeling_tf_utils.py`. This is because when constructing a TF model, we have to pass in dummy inputs to build it. In PyTorch this isn't necessary, because we explicitly set the input and output dimensions when creating each layer, so the weight matrices can be created immediately. `image_size` is needed to know the shape of the inputs to pass in.
As a side note, did you force push after rebasing? From the PR history, it looks like you might not have. As rebasing is a form of "rewriting history" it's necessary to force push.
<|||||>Thanks @amyeroberts, understood. As for the rebase, I had not done one in quite some time and it seems like I did mess it up. I think that is now fixed.
Since I started this PR I have had a fundamental question about huggingface's approach to tensorflow models. The default in TensorFlow is NHWC while in PyTorch it is NCHW, how should I approach this difference in my PR? Based on `modeling_tf_vit.py` I suppose the correct approach is to assume that images are given in PyTorch format and transpose them in the first block, is that so? How does that affect the following blocks?
Also, if we were implementing a model for semantic segmentation, which would return an image with the same size as the original one, would that be returned in the PyTorch format or the default TensorFlow format?
Thank you!<|||||>@joaocmd The pattern we use for the TF vision models is to transpose the NCHW format in the first MainLayer class e.g. [here](https://github.com/huggingface/transformers/blob/868363abb9e72a638b4710d1f5ef1199839b3eec/src/transformers/models/resnet/modeling_tf_resnet.py#L337C18-L337C18) and then transpose back, if pixel values are returned e.g. [here](https://github.com/huggingface/transformers/blob/868363abb9e72a638b4710d1f5ef1199839b3eec/src/transformers/models/resnet/modeling_tf_resnet.py#L348). For some of the older models e.g. ViT this pattern may not have been applied, as these were the first models to be added.
This pattern means the model is written in the TF compatible NHWC format throughout, but all of our vision models accept and return images in NCHW. <|||||>Thank you @amyeroberts, that makes sense. I've already updated it to match the pattern.
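(For reference, a minimal sketch of that transpose pattern; the layer name and the single Conv2D standing in for the encoder are placeholders, not the real SwiftFormer code.)
```python
import tensorflow as tf

class ToyMainLayer(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.conv = tf.keras.layers.Conv2D(8, kernel_size=3, padding="same")  # stands in for the encoder

    def call(self, pixel_values, training=False):
        # Inputs arrive in PyTorch-style NCHW; convert to TF-native NHWC.
        pixel_values = tf.transpose(pixel_values, perm=(0, 2, 3, 1))
        hidden_states = self.conv(pixel_values, training=training)
        # Convert pixel-shaped outputs back to NCHW before returning.
        return tf.transpose(hidden_states, perm=(0, 3, 1, 2))

layer = ToyMainLayer()
print(layer(tf.random.uniform((1, 3, 224, 224))).shape)  # NCHW in -> (1, 8, 224, 224) NCHW out
```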
I'm still having some trouble with the `test_compile_tf_model`. Initially it was failing because it was passing a shape `(None, 56, 56, 48)` to a `reshape` (https://github.com/huggingface/transformers/pull/23342/commits/204e216e6047d83775cfb5f0d928b378b73d2e84#diff-7f093399e807b53ca4b63460f610dcc550c2937cb18cd513d71dc49ce6e1b699R385).
I changed the line to use `[-1, width * height, channels]` as shape, which seems like it fixed that case. However, now it is failing because a shape `(None, None, None, 48)` is being passed to that reshape call. Is this expected of this test? According to the stack trace it seems like it's being triggered by a `tf.keras.Model.save()` (https://github.com/joaocmd/transformers/blob/add_tf_swiftformer/tests/test_modeling_tf_common.py#L711).
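(One way to make that reshape robust to unknown static dimensions, as a sketch; this is not necessarily how the PR ends up resolving it.)
```python
import tensorflow as tf

def flatten_spatial(hidden_states: tf.Tensor) -> tf.Tensor:
    # Use the dynamic shape so this also works while tracing tf.keras.Model.save(),
    # where the static batch/height/width can all be None.
    shape = tf.shape(hidden_states)
    batch, height, width = shape[0], shape[1], shape[2]
    channels = hidden_states.shape[-1]  # channels is usually known statically
    return tf.reshape(hidden_states, (batch, height * width, channels))
```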
I've also noticed that there was an overhaul to the serving and dummy_inputs interface (https://github.com/huggingface/transformers/commit/814de8fac7456bd2ce50d1847505da829761bfdc). But maybe @Rocketknight1 can better explain the consequences of this change to mine (and other) PRs.<|||||>@joaocmd Yes, there was a big refactor of the `serving_output` logic. For most models, there's no need to have `serving_output`, `dummy_inputs` or `serving` implemented. You should be able to remove these and have the `test_prepare_serving_output` test pass.
Looking at the CI run, I don't see `test_compile_tf_model` failing. Were you able to resolve? Or perhaps are you refering to `test_save_load`? |
transformers | 23,341 | closed | getting AssertionError when using Trainer with `fsdp` and `torch_compile` | ### System Info
While trying to train a `GPT2` model using the `Trainer` with `torch_compile` and `fsdp` flags I get the following error:
```bash
AssertionError: Dynamo only supports FSDP with use_orig_params=True
```
I'm using `python==3.10.9`, and initially used `transformers==4.27.X` but after stumbling upon #22279 I updated to `transformers==4.28.1` but the problem persisted.
### Who can help?
I'm guessing this is @sgugger territory and maybe @ani300 could help too.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
My training file looks like:
```python
def train(cfg: DictConfig):
# Setup data and dataloader
data_module = prepare_data_module(**cfg.data)
train_dataloader = data_module.train_dataloader()
val_dataloader = data_module.val_dataloader()
# Extract tokenizer from datamodule
tokenizer = data_module.tokenizer
# Setup model and optimizer
model = GPT(**cfg.model)
# Setup data collator
data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
# Setup and running training
train_args = TrainingArguments(**cfg.train_args)
trainer = Trainer(
model=model,
tokenizer=tokenizer,
train_args=args,
data_collator=data_collator,
train_dataset=train_dataloader,
eval_dataset=val_dataloader,
)
trainer.train()
```
Here's my full `TrainingArguments` configuration in `.yaml` format:
```yaml
per_device_train_batch_size: 8
per_device_eval_batch_size: 8
evaluation_strategy: "steps"
eval_steps: 2000
logging_steps: 5000
gradient_accumulation_steps: 8
num_train_epochs: 300
weight_decay: 0.1
warmup_steps: 1_000
lr_scheduler_type: "cosine"
learning_rate: 5e-4
save_steps: 25000
bf16: True
torch_compile: True
tf32: True
fsdp: "full_shard auto_wrap"
fsdp_transformer_layer_cls_to_wrap: 'GPT2Block'
```
And I'm running the training using `torchrun --nproc_per_node=8 train.py` with 8 NVIDIA A40.
### Expected behavior
It should run the training process without problems. | 05-12-2023 17:16:33 | 05-12-2023 17:16:33 | meet same issue....<|||||>cc @pacman100 <|||||>The FSDP wrapper inside `trainer.py` needs to be initialized with `use_orig_params=True` for FSDP + compile to work well together. As of now, that is not the case and there is no flag in the Trainer to make it do so. I can probably find some time later in the week to make a PR, but in any case, that's the issue.<|||||>Thanks @ani300. I will attempt doing a PR. :blush: <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
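For reference, a minimal sketch of the constraint described above (plain PyTorch, outside the Trainer; it assumes torch.distributed is already initialized, e.g. via torchrun):
```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def wrap_and_compile(model: torch.nn.Module):
    # Dynamo/torch.compile only accepts FSDP modules built with use_orig_params=True.
    fsdp_model = FSDP(model, use_orig_params=True)
    return torch.compile(fsdp_model)
```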
transformers | 23,340 | open | Failed to import transformers.pipelines because of the following error (look up to see its traceback): cannot import name 'PartialState' from 'accelerate' | ### System Info
I am trying to import the Segment Anything Model (SAM) using the transformers pipeline, but this gives the following error:
```
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
cannot import name 'PartialState' from 'accelerate' (/opt/conda/lib/python3.10/site-packages/accelerate/__init__.py)
```
What I am trying to do:
```python
from transformers import pipeline
generator = pipeline("mask-generation", model="facebook/sam-vit-huge", device=0)
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
import this line:
```python
from transformers import pipeline
generator = pipeline("mask-generation", model="facebook/sam-vit-huge", device=0)
```
### Expected behavior
The model should import as per this notebook in official tutorials:
https://github.com/huggingface/notebooks/blob/main/examples/automatic_mask_generation.ipynb | 05-12-2023 16:30:49 | 05-12-2023 16:30:49 | Hi @Abhranta,
So that we can best try and help you, could you provide information about the running environment: run `transformers-cli env` in the terminal and copy-paste the output? <|||||>Hello I am also having this issue hopefully we are having the same root issue. I am new to python and ML. Here is the output from my `transformers-cli env`:
```
PS C:\projects\poc-chatbot> transformers-cli env
Traceback (most recent call last):
File "C:\Users\Mitch\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\Mitch\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\Mitch\AppData\Local\Programs\Python\Python310\Scripts\transformers-cli.exe\__main__.py", line 4, in <module>
File "C:\Users\Mitch\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\commands\transformers_cli.py", line 25, in <module>
from .run import RunCommand
File "C:\Users\Mitch\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\commands\run.py", line 17, in <module>
from ..pipelines import Pipeline, PipelineDataFormat, get_supported_tasks, pipeline
File "C:\Users\Mitch\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\pipelines\__init__.py", line 44, in <module>
from .audio_classification import AudioClassificationPipeline
File "C:\Users\Mitch\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\pipelines\audio_classification.py", line 21, in <module>
from .base import PIPELINE_INIT_ARGS, Pipeline
File "C:\Users\Mitch\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\pipelines\base.py", line 36, in <module>
from ..modelcard import ModelCard
File "C:\Users\Mitch\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modelcard.py", line 48, in <module>
from .training_args import ParallelMode
File "C:\Users\Mitch\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\training_args.py", line 67, in <module>
from accelerate import PartialState
ImportError: cannot import name 'PartialState' from 'accelerate' (C:\Users\Mitch\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\__init__.py)
```<|||||>Hi @MitchellMonaghan, @Abhranta,
Could you try upgrading the installed version of accelerate in your env: `pip install -U accelerate`? <|||||>Thanks this resolved this error thanks. It upgraded the accelerate package from.
```
Installing collected packages: accelerate
Attempting uninstall: accelerate
Found existing installation: accelerate 0.15.0.dev0
Uninstalling accelerate-0.15.0.dev0:
Successfully uninstalled accelerate-0.15.0.dev0
Successfully installed accelerate-0.19.0
``` <|||||>This error suddenly pops up in Kaggle. Any idea why?!
I already tried installing accelerate, transformers and datasets as the first lines executed in each notebook.
ImportError Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1172, in _LazyModule._get_module(self, module_name)
1171 try:
-> 1172 return importlib.import_module("." + module_name, self.__name__)
1173 except Exception as e:
File /opt/conda/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1006, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:688, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:883, in exec_module(self, module)
File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)
File /opt/conda/lib/python3.10/site-packages/transformers/pipelines/__init__.py:44
35 from ..utils import (
36 HUGGINGFACE_CO_RESOLVE_ENDPOINT,
37 is_kenlm_available,
(...)
42 logging,
43 )
---> 44 from .audio_classification import AudioClassificationPipeline
45 from .automatic_speech_recognition import AutomaticSpeechRecognitionPipeline
File /opt/conda/lib/python3.10/site-packages/transformers/pipelines/audio_classification.py:21
20 from ..utils import add_end_docstrings, is_torch_available, logging
---> 21 from .base import PIPELINE_INIT_ARGS, Pipeline
24 if is_torch_available():
File /opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py:36
35 from ..image_processing_utils import BaseImageProcessor
---> 36 from ..modelcard import ModelCard
37 from ..models.auto.configuration_auto import AutoConfig
File /opt/conda/lib/python3.10/site-packages/transformers/modelcard.py:48
32 from .models.auto.modeling_auto import (
33 MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES,
34 MODEL_FOR_CAUSAL_LM_MAPPING_NAMES,
(...)
46 MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES,
47 )
---> 48 from .training_args import ParallelMode
49 from .utils import (
50 MODEL_CARD_NAME,
51 cached_file,
(...)
57 logging,
58 )
File /opt/conda/lib/python3.10/site-packages/transformers/training_args.py:67
66 if is_accelerate_available():
---> 67 from accelerate import PartialState
68 from accelerate.utils import DistributedType
ImportError: cannot import name 'PartialState' from 'accelerate' (/opt/conda/lib/python3.10/site-packages/accelerate/__init__.py)
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
File <timed exec>:2
File <frozen importlib._bootstrap>:1075, in _handle_fromlist(module, fromlist, import_, recursive)
File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1162, in _LazyModule.__getattr__(self, name)
1160 value = self._get_module(name)
1161 elif name in self._class_to_module.keys():
-> 1162 module = self._get_module(self._class_to_module[name])
1163 value = getattr(module, name)
1164 else:
File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1174, in _LazyModule._get_module(self, module_name)
1172 return importlib.import_module("." + module_name, self.__name__)
1173 except Exception as e:
-> 1174 raise RuntimeError(
1175 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1176 f" traceback):\n{e}"
1177 ) from e
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
cannot import name 'PartialState' from 'accelerate' (/opt/conda/lib/python3.10/site-packages/accelerate/__init__.py)<|||||>Hi @RayGone,
Could you share the versions of accelerate, transformers and datasets installed and the steps taken / code being run? Have you tried restarting the running notebook session then running the installs?
<|||||>A week ago I ran this code and it worked fine; now it fails when I import Trainer and TrainingArguments:
`from transformers import Trainer, TrainingArguments`
I get the following error :

any help please
by the way i am using kaggle notebook.<|||||>> Hi @RayGone,
>
> Could you share the versions of accelerate, transformers and datasets installed and the steps taken / code being run? Have you tried restarting the running notebook session then running the installs?
This is the [kaggle notebook](https://www.kaggle.com/code/reganmaharjan/bert-2-albert-transfer-learning-nepsa) that i am running.
This is the output of installing transformers. (Previously I didn't have to install transformers and datasets; they were already installed)
Requirement already satisfied: transformers[accelerate] in /opt/conda/lib/python3.10/site-packages (4.29.2)
P.S. Have made notebook public now.
@amyeroberts <|||||>> Hi
transformers version : 4.29.2
accelerate version : 0.12.0
I don't know actually what is the issue<|||||>You need a more recent version of Accelerate @AzzedineAftiss: `pip install --upgrade accelerate`.<|||||>> You need a more recent version of Accelerate @AzzedineAftiss: `pip install --upgrade accelerate`.
@sgugger @amyeroberts
Thanks guys for the help
That fixes the issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Thank you ๐ค so much for the amazing work, making it so easy to try and learn about the best models in the world. ๐
---
Ref: https://huggingface.co/blog/falcon
Got the same error as this issue while running the Falcon tutorial by HuggingFace on Kaggle. Came across this thread via a Google search (not from an LLM yet ๐) and had to make the following changes to get the Falcon tutorial to work on Kaggle notebooks:
```bash
pip install -q --upgrade accelerate einops xformers
```
- The `accelerate` needs to be upgraded as mentioned in this thread.
- Additional packages in `einops` and `xformers` needs to be installed as well.
My Notebook on Kaggle: https://www.kaggle.com/bkowshik/llms-models-falcon
_NOTE: Had to rerun a couple of times given memory issues on Kaggle, so one needs to keep at it._
```
Write a poem about India
A land of mystic and ancient lore,
where sacred rivers flow and mountains soar.
In India, the sun is in a brilliant glow,
cascading the hues that paint the sky like a magical show.
From Kanyakumari to Kashmir,
the beauty of India never fails to garner.
Its rich cultural heritage with its myriad hues,
and a kaleidoscope of colors, India is blessed.
Tigers roam in the dense forests,
cascading sound of the Ganges, and its gentle whispers.
The intricate handloom woven sarees in red,
a symphony of colors in India's head.
The holy pilgrimage of the sacred mountains,
the golden glow of Diwali, a festival of lights.
India is the land of the brave and true,
a melting pot of religions, cultures and hues!
```
---
@sgugger @amyeroberts Should we close this issue then?
<details><summary>Complete logs with warning messages printed as part of the output for reference.</summary>
<p>
```
Downloading (โฆ)okenizer_config.json: 100%
220/220 [00:00<00:00, 14.6kB/s]
Downloading (โฆ)/main/tokenizer.json:
2.73M/? [00:00<00:00, 6.12MB/s]
Downloading (โฆ)cial_tokens_map.json: 100%
281/281 [00:00<00:00, 22.8kB/s]
/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/__init__.py:98: UserWarning: unable to load libtensorflow_io_plugins.so: unable to open file: libtensorflow_io_plugins.so, from paths: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so']
caused by: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so: undefined symbol: _ZN3tsl6StatusC1EN10tensorflow5error4CodeESt17basic_string_viewIcSt11char_traitsIcEENS_14SourceLocationE']
warnings.warn(f"unable to load libtensorflow_io_plugins.so: {e}")
/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/__init__.py:104: UserWarning: file system plugins are not loaded: unable to open file: libtensorflow_io.so, from paths: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io.so']
caused by: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io.so: undefined symbol: _ZTVN10tensorflow13GcsFileSystemE']
warnings.warn(f"file system plugins are not loaded: {e}")
Downloading (โฆ)lve/main/config.json: 100%
667/667 [00:00<00:00, 33.3kB/s]
Downloading (โฆ)/configuration_RW.py:
2.61k/? [00:00<00:00, 165kB/s]
A new version of the following files was downloaded from https://huggingface.co/tiiuae/falcon-7b-instruct:
- configuration_RW.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
Downloading (โฆ)main/modelling_RW.py:
47.5k/? [00:00<00:00, 2.70MB/s]
A new version of the following files was downloaded from https://huggingface.co/tiiuae/falcon-7b-instruct:
- modelling_RW.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
Downloading (โฆ)model.bin.index.json:
16.9k/? [00:00<00:00, 850kB/s]
Downloading shards: 100%
2/2 [01:13<00:00, 34.78s/it]
Downloading (โฆ)l-00001-of-00002.bin: 100%
9.95G/9.95G [00:49<00:00, 278MB/s]
Downloading (โฆ)l-00002-of-00002.bin: 100%
4.48G/4.48G [00:24<00:00, 169MB/s]
Loading checkpoint shards: 100%
2/2 [01:15<00:00, 35.25s/it]
Downloading (โฆ)neration_config.json: 100%
111/111 [00:00<00:00, 5.67kB/s]
The model 'RWForCausalLM' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'CodeGenForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'LlamaForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MvpForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM'].
```
</p>
</details>
<|||||>hi @amyeroberts, can you help me with this?
i am trying to import pipline from transformers in Kaggle Notebook. but i get the following error:


<|||||>Thanks @bkowshik it worked!<|||||>@sgugger @amyeroberts as annoying as it is, but pipeline in kaggle is not working as seen in [screenshot](https://github.com/huggingface/transformers/issues/23340#issuecomment-1609261696) above.
It didn't work even when i did this:
`!pip install transformers tokenizers datasets huggingface_hub --upgrade -q`
`!pip install accelerator --upgrade -q`<|||||>@RayGone Have posted details of the fix here: https://github.com/huggingface/transformers/issues/23340#issuecomment-1606719159<|||||>> @RayGone Have posted details of the fix here: https://github.com/huggingface/transformers/issues/23340#issuecomment-1606719159
Thanks, will try that.
Didn't try that at first because I wasn't using xformers directly, but I guess it's used by some other dependency. <|||||>I know this is not the exact place for this issue, but could somebody help me or direct me to the correct place?
I'm getting this error:
`RuntimeError: Unrecognized array dtype object.
Nested types and image/audio types are not supported yet.`
This happens when i call `model.prepare_tf_dataset`.
The whole code is basically what is given in the text-classification section of NLP course.
@bkowshik @sgugger @amyeroberts <|||||>cc @Rocketknight1 <|||||>If you're using Kaggle, make sure that the environment variable is not pinned, this fixed it for me:

<|||||>> Hi @MitchellMonaghan, @Abhranta,
>
> Could you try upgrading the installed version of accelerate in your env: `pip install -U accelerate`?
Worked for me. |
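For anyone landing here, a quick sanity check after upgrading (the import below is exactly what transformers attempts, per the tracebacks above):
```python
import accelerate
from accelerate import PartialState  # fails on older accelerate releases

print(accelerate.__version__)
```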
transformers | 23,339 | closed | Use cu118 with cudnn >= 8.6 in docker file | # What does this PR do?
We use TF 2.12 after #22759 and #23293. But TF 2.12 requires CUDA 11.8 and CUDNN 8.6 (or up) to work.
Currently, our CI have errors with
```bash
Loaded runtime CuDNN library: 8.5.0 but source was compiled with: 8.6.0. CuDNN library needs to have matching major version and equal or higher minor version. If using a binary install, upgrade your CuDNN library. If building from sources, make sure the library loaded at runtime is compatible with the version specified during compile configuration.
`UNIMPLEMENTED: DNN library is not found.`.
```
This PR uses new base image for some docker files. We also have to use `cu118` for the torch installation with this new base image.
Other docker files (those with deepspeed stuff) are not changed in this PR - better to see what happens with this change and apply to other files.
Running some previous failing tests and they pass now. Still need to watch if the whole suite (doctest) pass on Monday. | 05-12-2023 16:13:18 | 05-12-2023 16:13:18 | _The documentation is not available anymore as the PR was closed or merged._<|||||>We have the so called past CI which runs previous torch and tensorflow versions together the environment we set for them.
In this particular case, since torch version is not changed but using cu118 file, we don't really have any extra CI for torch 2.0 with cu117 after this PR. So no real promise. But this is already the case for our CI, we always fix a cuda and cudnn environment until we really have to change ๐ |
transformers | 23,338 | closed | AutoTokenizer registration not working as expected | ### System Info
- `transformers` version: 4.29.1
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.2
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker this seems to be an issue with `AutoTokenizer` registration, so maybe you're the right person to take a look?
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
- Create a directory containing the `config.json`, `pytorch_model.bin`, and `tokenizer.json` from [gpt2](https://huggingface.co/gpt2/tree/main)
- Set `"model_type": "example"` in the `config.json`
- Run the following code excerpt, replacing the dummy path with the path to the prepared directory
```python
from transformers import (
GPT2LMHeadModel,
GPT2Config,
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
)
from transformers.tokenization_utils_fast import PreTrainedTokenizerFast
from transformers.models.auto.tokenization_auto import (
CONFIG_MAPPING_NAMES, TOKENIZER_MAPPING
)
from tokenizers import Tokenizer
class ExampleConfig(GPT2Config):
model_type = "example"
class ExampleModel(GPT2LMHeadModel):
config_class = ExampleConfig
ExampleTokenizer = PreTrainedTokenizerFast
AutoConfig.register("example", ExampleConfig)
AutoModelForCausalLM.register(ExampleConfig, ExampleModel)
AutoTokenizer.register(ExampleConfig, fast_tokenizer_class=ExampleTokenizer)
print(", ".join(c.__name__ for c in TOKENIZER_MAPPING))
print(CONFIG_MAPPING_NAMES)
pretrain_path = "/path/to/downloaded/artifacts"
config = AutoConfig.from_pretrained(pretrain_path) # This works just fine
model = AutoModelForCausalLM.from_pretrained(pretrain_path) # This works just fine
tokenizer = AutoTokenizer.from_pretrained(pretrain_path) # This throws an exception
```
A few things to note about the behavior of this script. First, as noted in the comments, loading the `AutoTokenizer` throws an error. The error message I see is
```
Traceback (most recent call last):
File "[base_path]/autotokenizer_reproducer.py", line 36, in <module>
tokenizer = AutoTokenizer.from_pretrained(pretrain_path) # This throws an exception
File "[path_to_conda_env]/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 721, in from_pretrained
raise ValueError(
ValueError: Unrecognized configuration class <class '__main__.ExampleConfig'> to build an AutoTokenizer.
Model type should be one of AlbertConfig, AlignConfig, BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BlipConfig, Blip2Config, BloomConfig, BridgeTowerConfig, CamembertConfig, CanineConfig, ChineseCLIPConfig, ClapConfig, CLIPConfig, CLIPSegConfig, CodeGenConfig, ConvBertConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, DebertaConfig, DebertaV2Config, DistilBertConfig, DPRConfig, ElectraConfig, ErnieConfig, ErnieMConfig, EsmConfig, FlaubertConfig, FNetConfig, FSMTConfig, FunnelConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, GPTSanJapaneseConfig, GroupViTConfig, HubertConfig, IBertConfig, JukeboxConfig, LayoutLMConfig, LayoutLMv2Config, LayoutLMv3Config, LEDConfig, LiltConfig, LlamaConfig, LongformerConfig, LongT5Config, LukeConfig, LxmertConfig, M2M100Config, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MgpstrConfig, MobileBertConfig, MPNetConfig, MT5Config, MvpConfig, NezhaConfig, NllbMoeConfig, NystromformerConfig, OneFormerConfig, OpenAIGPTConfig, OPTConfig, OwlViTConfig, PegasusConfig, PegasusXConfig, PerceiverConfig, Pix2StructConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, RagConfig, RealmConfig, ReformerConfig, RemBertConfig, RetriBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2TextConfig, Speech2Text2Config, SpeechT5Config, SplinterConfig, SqueezeBertConfig, SwitchTransformersConfig, T5Config, TapasConfig, TransfoXLConfig, ViltConfig, VisualBertConfig, Wav2Vec2Config, Wav2Vec2ConformerConfig, WhisperConfig, XCLIPConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig, YosoConfig, ExampleConfig.
```
When I print out `TOKENIZER_MAPPING`, I see `ExampleConfig` in it (as in the error message), meaning that the tokenizer registration indeed occurred. However, when I print out `CONFIG_MAPPING_NAMES`, which is the check that fails generating the error, `ExampleConfig` is not in that dictionary.
I have played around with a few different ways of creating the tokenizer class (e.g. actually subclassing `PreTrainedTokenizerFast` instead of just aliasing it), but none of these modifications worked, and the issue seems to be with the registration part rather than the choice of tokenizer.
### Expected behavior
I would expect the `AutoTokenizer.from_pretrained(...)` line from above to run without error | 05-12-2023 15:41:41 | 05-12-2023 15:41:41 | I was able to reproduce the error. I will take a look at this problem.<|||||>Hi @william-cerebras @ArthurZucker!
I have done some quick debugging and the problem is really simple. In `configuration_auto.py` we have `OrderedDict` `CONFIG_MAPPING_NAMES` as well as `CONFIG_MAPPING = _LazyConfigMapping(CONFIG_MAPPING_NAMES)`. When we register new config we are calling `AutoConfig.register` which underneath updates `CONFIG_MAPPING` by calling `CONFIG_MAPPING.register`. `CONFIG_MAPPING` is an instance of `_LazyConfigMapping` class which has two variables `self._mapping` which is just `CONFIG_MAPPING_NAMES` as well as `self._extra_content`. When `CONFIG_MAPPING.register` is being called it is updating `self._extra_content` dict and not `self._mapping` itself. But later in the code when we call `AutoTokenizer.from_pretrained` we search for config class name to convert it into corresponding model type. It is done inside `config_class_to_model_type` function. Now to the clue, inside this function we iterate over `CONFIG_MAPPING_NAMES` which as I pointed out earlier is not being updated while registering new config. So in order to make it work we can do a simple fix:
Change this:
```python
def config_class_to_model_type(config):
"""Converts a config class name to the corresponding model type"""
for key, cls in CONFIG_MAPPING_NAMES.items():
if cls == config:
return key
return None
```
to this:
```python
def config_class_to_model_type(config):
"""Converts a config class name to the corresponding model type"""
for key, cls in CONFIG_MAPPING.items():
if cls.__name__ == config:
return key
return None
```
To make it clear why this will work is because when we call `CONFIG_MAPPING.items()` it underneath merges `self._mapping` and `self._extra_content` and as a result includes newly registered config:
```python
def items(self):
return [(k, self[k]) for k in self._mapping.keys()] + list(self._extra_content.items())
```<|||||>cc @sgugger <|||||>Thanks for the ping @Bearnardd. The change you suggest is a bit too strong as it will actually go import everything config to build `CONFIG_MAPPING.items()`. I think we can keep the check on the names as it is, then add a check looping `CONFIG_MAPPING_extra_content` while keeping the spirit of your fix (if that makes sense).
I can work on that or you can open a PR if you prefer @Bearnardd <|||||>Hi @sgugger! I will fix that<|||||>Thanks for the fix @Bearnardd @sgugger! |
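For reference, a rough sketch of that suggestion (keep the fast name-based lookup, then also check configs registered at runtime); this is illustrative and not necessarily the exact merged fix:
```python
from transformers.models.auto.configuration_auto import CONFIG_MAPPING, CONFIG_MAPPING_NAMES

def config_class_to_model_type(config):
    """Converts a config class name to the corresponding model type."""
    for key, cls in CONFIG_MAPPING_NAMES.items():
        if cls == config:
            return key
    # Fall back to dynamically registered configs without importing every config class.
    for key, cls in CONFIG_MAPPING._extra_content.items():
        if cls.__name__ == config:
            return key
    return None
```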
transformers | 23,337 | closed | EncoderDecoderModel forward decoder_attention_mask can't execute the default behavior mentioned in the document | @ArthurZucker @younesbelkada
EncoderDecoderModel.forward's decoder_attention_mask does not follow the default behavior described in the documentation.
For example,
EncoderDecoderModel (BertModel, BertLMHeadModel)
In EncoderDecoderModel forward:
decoder_attention_mask will be directly passed in self.decoder as attention_mask. [code link](https://github.com/huggingface/transformers/blob/a3975f94f3a090a54ed4ec78ab736ce6aaee6742/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#LL616C9-L629C10)
~~~python
# Decode
decoder_outputs = self.decoder(
input_ids=decoder_input_ids,
attention_mask=decoder_attention_mask,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=attention_mask,
inputs_embeds=decoder_inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
use_cache=use_cache,
past_key_values=past_key_values,
return_dict=return_dict,
**kwargs_decoder,
)
~~~
In BertLMHeadModel forward:
attention_mask will be directly passed in self.bert as attention_mask. [code link](https://github.com/huggingface/transformers/blob/7f8b909189547944617741d8d3c6c84504701693/src/transformers/models/bert/modeling_bert.py#L1234)
~~~python
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
past_key_values=past_key_values,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
~~~
In BertModel forward:
attention_mask will be filled by number one if it is None. [code link](https://github.com/huggingface/transformers/blob/7f8b909189547944617741d8d3c6c84504701693/src/transformers/models/bert/modeling_bert.py#LL980C9-L981C108)
~~~python
if attention_mask is None:
attention_mask = torch.ones(((batch_size, seq_length + past_key_values_length)), device=device)
~~~
So if decoder_attention_mask is None in EncoderDecoderModel.forward, it is simply filled with ones by the decoder (pad tokens are not ignored),
but in [code link](https://github.com/huggingface/transformers/blob/a3975f94f3a090a54ed4ec78ab736ce6aaee6742/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#LL106C9-L108C32)
~~~
decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also
be used by default.
~~~ | 05-12-2023 14:55:33 | 05-12-2023 14:55:33 | Hi @efsotr
Thanks for the issue, indeed there seems to be a typo, one could replace the docstring with the correct behavior (the default mask will be created by the decoder)
Do you want to open a Pull Request to address these changes?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
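For reference, a small sketch of the practical implication (assuming a bert2bert EncoderDecoderModel): since the decoder's default mask is all ones rather than pad-aware, pass the tokenizer's attention mask explicitly when decoder_input_ids contain padding.
```python
from transformers import BertTokenizerFast, EncoderDecoderModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")

enc = tokenizer(["a longer source sentence", "short"], padding=True, return_tensors="pt")
dec = tokenizer(["a target", "another target sentence"], padding=True, return_tensors="pt")

out = model(
    input_ids=enc.input_ids,
    attention_mask=enc.attention_mask,
    decoder_input_ids=dec.input_ids,
    decoder_attention_mask=dec.attention_mask,  # explicit, since the default would be all ones
)
```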
transformers | 23,336 | closed | to is not supported for `8-bit` models | ### System Info
Hi,
I am using a Llama model and wanted to add it to a pipeline, but it throws an error when building the pipeline.
Does anyone have a solution to this?
thank you!
@Narsil
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
> ## Model
> model = AutoModelForCausalLM.from_pretrained(
> model_name,
> device_map='auto',
> load_in_8bit=True,
> max_memory=max_memory)
>
> ## llm class
>
> class CustomLLM(LLM):
>
> pipeline = pipeline("text-generation",tokenizer = tokenizer, model=model, device="cuda:0")
>
> def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
> prompt_length = len(prompt)
> response = self.pipeline(prompt, max_new_tokens=num_output)[0]["generated_text"]
>
> # only return newly generated tokens
> return response[prompt_length:]
>
> @property
> def _identifying_params(self) -> Mapping[str, Any]:
> return {"name_of_model": self.model_name}
>
> @property
> def _llm_type(self) -> str:
> return "custom"
>
" model has already been set to the correct devices and casted to the correct `dtype`."
### Expected behavior
```
1879         # Checks if the model has been loaded in 8-bit
1880         if getattr(self, "is_loaded_in_8bit", False):
-> 1881          raise ValueError(
1882                 ".to is not supported for 8-bit models. Please use the model as it is, since the"
1883                 " model has already been set to the correct devices and casted to the correct dtype."
```
 | 05-12-2023 14:26:30 | 05-12-2023 14:26:30 | cc @younesbelkada <|||||>hi @lborcard
Indeed the `to` operation is not supported for 8bit models as users will most likely encounter unexpected behaviour.
What version of `transformers` are you using?
A fix has been introduced in https://github.com/huggingface/transformers/pull/21479 and has been documented [here](https://huggingface.co/docs/transformers/pipeline_tutorial#using-pipeline-on-large-models-with-accelerate) - everything should work fine if you use `transformers>=4.29.0`<|||||>Hi @younesbelkada ,
Thank you for your answer, I was using version 4.29 but I will try a newer version.
have a good day<|||||>@lborcard
can you try:
```python
pipeline = pipeline("text-generation",tokenizer = tokenizer, model=model, device=0)
```<|||||>I'm still getting this error on the latest version of the transformer. Any work around?<|||||>@mrhimanshu can you share an handy reproducible snippet?<|||||>I'm also still getting this error when using transformers==4.3.0.0 version. Anyone figured any work-around?<|||||>hi everyone,
thanks for raising up the issue, I would greatly appreciate if you could share a reproducible snippet as I can't do anything without it<|||||>Thanks @younesbelkada , the error is in the file
/python3.8/site-packages/transformers/modeling_utils.py
def half(self, *args):
# Checks if the model has been loaded in 8-bit
if getattr(self, "is_quantized", False):
raise ValueError(
"`.half()` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the"
" model has already been casted to the correct `dtype`."
)
else:
return super().half(*args)
File "/python3.8/site-packages/transformers/modeling_utils.py", line 1907, in half
raise ValueError(
ValueError: `.half()` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been casted to the correct `dtype`.
Please check it out. <|||||>Hi @22Mukesh22
Thank you for your message, I think you are somehow calling `.half` in your script , can you share a handy small snippet to reproduce?
<|||||>This `.half` call only comes into the picture when passing load_in_8bit=True; if we remove that from the script, it gives a memory error instead.
Cuda out of memory , as I have P40 24 GB 4 GPU. <|||||>I am usign the same script "https://github.com/Xirider/finetune-gpt2xl" to finetune the starcoder model.
<|||||>Looking at the repo you shared I think that you are trying to use DeepSpeed + bitsandbytes and purely fine tune the entire model in 8bit or 4bit. This is not supported.
You should look into PEFT library if you want to fine-tune the model in 8bit or 4bit to fine-tune adapters on top of the base model (which leads to the same results), some examples here: https://github.com/huggingface/peft/tree/main/examples/int8_training
And the documentation is here: https://huggingface.co/docs/peft/index<|||||>Thanks a lot , I will try and update <|||||>ValueError: You can't train a model that has been loaded in 8-bit precision on multiple devices in any distributed mode. In order to use 8-bit models that have been loaded across multiple GPUs the solution is to use Naive Pipeline Parallelism. Therefore you should not specify that you are under any distributed regime in your accelerate config.<|||||>@22Mukesh22 can you please update your `accelerate` version?
```bash
pip install --upgrade accelerate
```
Related: https://github.com/huggingface/accelerate/pull/1523<|||||>Okay Sure @younesbelkada , i will re run and update if issue still comes.<|||||>@younesbelkada still the issues remaisn same . I have upgraded accelerate . It doesn't works <|||||>Hi @22Mukesh22
How do you run your script? can you share the accelerate config you are using? Also let's open a different ticket on accelerate for the issue you are facing and ping me there
Thanks! <|||||>for everyone stumbling into this error, my solution was to use accelerate 0.20.3 and transformers 4.30.2 (not necceserally needed). With those versions the training started correctly.<|||||>> for everyone stumbling into this error, my solution was to use accelerate 0.20.3 and transformers 4.30.2 (not necceserally needed). With those versions the training started correctly.
Thanks working!<|||||>> for everyone stumbling into this error, my solution was to use accelerate 0.20.3 and transformers 4.30.2 (not necceserally needed). With those versions the training started correctly.
Thank you! This worked for me. I'll try and investigate what went wrong. For reference, this was the traceback:
```
File "finetune.py", line 552, in main
model = AutoModelForCausalLM.from_pretrained(
File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/auto_factory.py", line 484, in from_pretrained
return model_class.from_pretrained(
File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 2937, in from_pretrained
dispatch_model(model, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/accelerate/big_modeling.py", line 391, in dispatch_model
model.to(device)
File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 1897, in to
raise ValueError(
ValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
```<|||||>still suffering this issue with accelerate 0.20.3 and transformers 4.30.2, getting "
ValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
"<|||||>add that i'm using the bnb_4bit, as follows
```
quant_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
```
<|||||>> add that I'm using the bnb_4bit, as follows
>
> ```
> quant_config = BitsAndBytesConfig(
> load_in_4bit=True,
> bnb_4bit_use_double_quant=True,
> bnb_4bit_quant_type="nf4",
> bnb_4bit_compute_dtype=torch.bfloat16
> )
> ```
I also run into this problem when I set CUDA_VISIBLE_DEVICES="0" in the .sh file.
However, when I delete this command or set CUDA_VISIBLE_DEVICES="0,1" or "0,1,2,3", it works.
(But I want to save GPU memory, and the QLoRA paper says it can work on one GPU.) <|||||>> > add that I'm using the bnb_4bit, as follows
> > ```
> > quant_config = BitsAndBytesConfig(
> > load_in_4bit=True,
> > bnb_4bit_use_double_quant=True,
> > bnb_4bit_quant_type="nf4",
> > bnb_4bit_compute_dtype=torch.bfloat16
> > )
> > ```
>
> I also meet the problem when I set CUDA_VISIBLE_DEVICES="0" in ,sh file. However , when I delete this command or I set CUDA_VISIBLE_DEVICES="0,1" or "0,1,2,3" . It can work. (But I want to save GPU memory and qlora paper say it can work on one GPU
> for everyone stumbling into this error, my solution was to use accelerate 0.20.3 and transformers 4.30.2 (not necceserally needed). With those versions the training started correctly.
accelerate 0.20.3 works on one GPU and mult GPU(<=4)<|||||>I encountered similar issue. I tried CUDA_VISIBLE_DEVICES=1,2,3. But 8-bit llama is automatically loaded to cuda:0. and I cannot apply ".to('cuda:1') " which gives me ths error, 'to is not supported ....'
<|||||>Even if I use non 8-bit model, the model is still automatically loaded to cuda:0 when i sepcify CUDA_VISIBLE_DEVICES=1 sh run.sh.
model = LLaMAForCausalLM.from_pretrained(
"decapoda-research/llama-7b-hf",
load_in_8bit=False,
torch_dtype=torch.float16,
device_map="auto",
)
print('model_cuda_device {}'.format(model.device))
//output:
model_cuda_device cuda:0
<|||||>@andotalao24
To check the devices with a model that has been loaded with `device_map=xxx` you need to call `set(model.hf_device_map.values())`<|||||>I have been having the same issue, but I don't know if this is related to hardware. because I got the Error in an 8xA100 with cuda 11.8 but work perfectly in an 8xA100SMX cuda 11.7 (RunPod machines)<|||||>Hi @younesbelkada, i was going through the same issue, ISSUE - when we use already loaded quantized model, the pipeline seems to send it device. so I just updated at line 780 in base.py
From
`if self.framework == "pt" and device is not None and not (isinstance(device, int) and device < 0):`
to
`if self.framework == "pt" and device is not None and not (isinstance(device, int) and device < 0) and not (getattr(self.model, "is_quantized")):`<|||||>hi @deepaksen1996
Thanks! Can you please share a reproducible snippet? <|||||>Hi @younesbelkada, apologies for this late reply
transformers==4.31.0
accelerate==0.21.0
here is the reproducible snippet
```from transformers import pipeline as hf_pipeline
from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer
import torch
model_path="tiiuae/falcon-7b"
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, load_in_8bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
params = {
"max_length":1024,
"pad_token_id": 0,
"device_map":"auto",
"load_in_8bit": True,
# "torch_dtype":"auto"
}
pipeline = hf_pipeline(
task="text-generation",
model=model,
tokenizer=tokenizer,
device=0,
model_kwargs=params,
)
```<|||||>Hi @deepaksen1996
Thanks for the reproducer. I managed to reproduce it.
Note that `device_map="auto"` will automatically dispatch the model onto the correct device(s), hence there is no need to also pass a `device` argument to the pipeline. The script below works fine for me:
```python
from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer, pipeline
import torch
model_path="facebook/opt-350m"
config = AutoConfig.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_8bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_path)
params = {
"max_length":1024,
"pad_token_id": 0,
"device_map":"auto",
"load_in_8bit": True,
}
pipe = pipeline(
task="text-generation",
model=model,
tokenizer=tokenizer,
model_kwargs=params,
)
pipe("Hello")
``` |
transformers | 23,335 | closed | Fix chat prompt in HFAgent | # What does this PR do?
There was a bug in formatting prompts in the chat mode. Actually, user-provided custom prompts were never used.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger, please review this PR | 05-12-2023 14:20:56 | 05-12-2023 14:20:56 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,334 | closed | gpt2-large and gpt2-xl behave strangely with pad tokens | ### System Info
- `transformers` version: 4.18.0
- Platform: Linux-4.18.0-425.13.1.el8_7.x86_64-x86_64-with-centos-8.7-Green_Obsidian
- Python version: 3.6.8
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.10.2+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoTokenizer, GPT2LMHeadModel

device = 'cuda'
pretrained = 'gpt2-large'
model = GPT2LMHeadModel.from_pretrained(pretrained).to(device)
tokenizer = AutoTokenizer.from_pretrained(pretrained, padding_side='left')
tokenizer.add_special_tokens({'pad_token': '<|endoftext|>'})
torch.cuda.manual_seed_all(2266)
input0 = '<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>Austin knew Quinn intimately and they slept together many times. Why did Austin do this? (0) Hated Quinn, (1) Found QUinn attractive, (2) Ask Quinn on a date\nANSWER:'
input_ids = tokenizer(input0, return_tensors='pt').input_ids.to(device)
with torch.no_grad():
output0 = model.generate(input_ids, max_new_tokens=16, top_k=20, pad_token_id=50256, eos_token_id=50256, do_sample=True, temperature=0.01)
input1 = 'Austin knew Quinn intimately and they slept together many times. Why did Austin do this? (0) Hated Quinn, (1) Found QUinn attractive, (2) Ask Quinn on a date\nANSWER:'
input_ids = tokenizer(input1, return_tensors='pt').input_ids.to(device)
with torch.no_grad():
output1 = model.generate(input_ids, max_new_tokens=8, top_k=20, pad_token_id=50256, eos_token_id=50256, do_sample=True, temperature=0.01)
```
### Expected behavior
```
output0 = '\nThe The The The The'
output1 = ' Austin was jealous of Quinn's relationship'
```
pad_token works just fine with other gpt models such as gpt2, and gpt2-small. | 05-12-2023 13:20:34 | 05-12-2023 13:20:34 | Hi @boblus
In order for your script to work, you need to properly set the attention mask by masking out the padding tokens and call `generate` with that attention mask.<|||||>Do you mean `tokenizer.add_special_tokens({'pad_token': pad_token})`?
I have done this. I forgot to put that in my initial comment. Just added it.<|||||>I think attention masks are only created automatically if you pass multiple sentences to the tokenizer; in your case I think you may need to create it manually, unless I am wrong and there is a simpler solution.
The below script worked for me:
```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
pretrained = 'gpt2-large'
device = 'cuda'
model = GPT2LMHeadModel.from_pretrained(pretrained).to('cuda')
tokenizer = GPT2Tokenizer.from_pretrained(pretrained)
torch.cuda.manual_seed_all(2266)
input0 = '<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>Austin knew Quinn intimately and they slept together many times. Why did Austin do this? (0) Hated Quinn, (1) Found QUinn attractive, (2) Ask Quinn on a date\nANSWER:'
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
inputs = tokenizer(input0, return_tensors='pt').to(device)
inputs["attention_mask"] = inputs["input_ids"].ne(tokenizer.pad_token_id).long().to(device)
with torch.no_grad():
output0 = model.generate(**inputs, max_new_tokens=16, top_k=20, pad_token_id=50256, eos_token_id=50256, do_sample=True, temperature=0.01)
print(tokenizer.decode(output0[0]))
input1 = 'Austin knew Quinn intimately and they slept together many times. Why did Austin do this? (0) Hated Quinn, (1) Found QUinn attractive, (2) Ask Quinn on a date\nANSWER:'
input_ids = tokenizer(input1, return_tensors='pt').input_ids.to(device)
with torch.no_grad():
output1 = model.generate(input_ids, max_new_tokens=8, top_k=20, pad_token_id=50256, eos_token_id=50256, do_sample=True, temperature=0.01)
print(tokenizer.decode(output1[0]))
```<|||||>Thanks @younesbelkada, it works now. I thought setting `pad_token_id` in `model.generate()` was equivalent to setting the attention mask. And my code worked well with other models, which is why it confused me.<|||||>linked to #22155 and #21080. GPT2 is an old model and does not necessarily create everything by default like our recent models |
transformers | 23,333 | closed | ConvNextImageProcessor / ViTImageProcessor produce inf when do_rescale = False | ### System Info
- `transformers` version: 4.29.1
- Platform: Linux-5.15.0-1022-aws-x86_64-with-glibc2.31
- Python version: 3.9.4
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@amyeroberts
### Information
- [x] The official example scripts
- [X] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import numpy as np
from transformers import AutoImageProcessor
>>> processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
>>> inputs = np.random.randint(0, 256, size=(224,224,3)).astype("uint8")
>>> processor(inputs, return_tensors="np", do_rescale=False)["pixel_values"]
array([[[[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf],
...,
[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf]],
[[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf],
...,
[inf, inf, inf, ..., nan, inf, inf],
[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf]],
[[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf],
...,
[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf]]]])
```
### Expected behavior
AutoImageProcessor (whichever - I've tried a ConvNext and a ViT) should convert to the [0 - 1] range explicitly (specifically [here](https://github.com/huggingface/transformers/blob/v4.29.1/src/transformers/models/convnext/image_processing_convnext.py#L288), if you're counting) before `do_normalize`. It's currently done implicitly, requiring `do_rescale=True`, or otherwise you have undefined behaviour and only a warning. | 05-12-2023 12:53:49 | 05-12-2023 12:53:49 | Hi @guillermojp, thanks for reporting this issue.
The image processor does convert the pixel values to `[0, 1]` explicitly - it happens in its own independent rescaling step (not within other logic) and has its own flag to control this behaviour. If you want the image to have its values set between 0 and 1, then `do_rescale` should be set to `True`.
The reason for `inf` is because of [this line](https://github.com/huggingface/transformers/blob/7f8b909189547944617741d8d3c6c84504701693/src/transformers/image_transforms.py#LL372C6-L372C6): the mean and std used to normalize the image are cast to the input image dtype. In this case, the image_std `[0.229, 0.224, 0.225]` when converted to `uint8` becomes `[0, 0, 0]`. Arguably this isn't obvious and perhaps we should think about possible warnings here when normalizing e.g. if the input is of an integer type. <|||||>I think I misunderstood the documentation of the `ConvNextImageProcessor` to this regard, as the text description `do_rescale (bool, optional, defaults to True) โ Whether to rescale the image by the specified scale rescale_factor. Can be overriden by do_rescale in the preprocess method.` wasn't clear to me. Maybe I'd recommend generating a more comprehensive description in the documentation.
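For reference, a small standalone check (NumPy only) of the dtype cast described a couple of comments above:
```python
import numpy as np

# ImageNet std values, cast to the input image dtype (uint8) the way normalize() does internally
image_std = np.array([0.229, 0.224, 0.225])
print(image_std.astype(np.uint8))                      # -> [0 0 0], the std collapses to zero

# dividing by a zero std is what produces the inf values in the normalized output
pixel_values = np.array([128.0, 64.0, 32.0])
with np.errstate(divide="ignore"):
    print(pixel_values / image_std.astype(np.uint8))   # -> [inf inf inf]
```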
Marking as closed |
transformers | 23,332 | closed | Compute the mask in-place, with less memory reads, and on CUDA on `XLNetLMHeadModel` | When working on TorchInductor, I realised that there was a part of `XLNetLMHeadModel` that was being compiled to CPU code.
This PR should allow fusing this operation with other CUDA operations in `torch.compile`. It should also be faster in eager mode, as this implementation has a lower memory footprint.
If in-place operations are not allowed even in a non-grad context, I still believe that doing ones + tril rather than ones + tril + zeros + cat should be faster, simply due to the number of memory reads/writes.
I tested that this code produces the same results for `0 <= qlen,mlen < 10` and `same_length in (True, False)`.
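For illustration, a minimal sketch of the two construction patterns discussed above (this is not the exact XLNet code, which also handles the `same_length` branch):
```python
import torch

def make_mask_concat(qlen: int, mlen: int) -> torch.Tensor:
    # original pattern: build pieces, then concatenate (extra allocations and memory writes)
    attn_mask = torch.ones(qlen, qlen)
    mask_up = torch.triu(attn_mask, diagonal=1)   # 1 above the diagonal for the query block
    attn_mask_pad = torch.zeros(qlen, mlen)       # memory tokens are always visible
    return torch.cat([attn_mask_pad, mask_up], dim=1)

def make_mask_inplace(qlen: int, mlen: int) -> torch.Tensor:
    # fused pattern: allocate the final buffer once and mask it in place
    mask = torch.ones(qlen, qlen + mlen)
    return mask.triu_(diagonal=mlen + 1)          # keep only entries with j > i + mlen

for qlen in range(1, 10):
    for mlen in range(0, 10):
        assert torch.equal(make_mask_concat(qlen, mlen), make_mask_inplace(qlen, mlen))
```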
@ArthurZucker @younesbelkada | 05-12-2023 12:53:36 | 05-12-2023 12:53:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,331 | open | RuntimeError: The size of tensor a (16) must match the size of tensor b (16000) at non-singleton dimension 2 | ### System Info
- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (cpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run this notebook: https://colab.research.google.com/drive/1TFI84P9W4VPhNLgEngxPN57RwzS0C4bG?usp=sharing
### Expected behavior
Expected the model to train successfully. Instead it gives a tensor mismatch error. | 05-12-2023 12:32:54 | 05-12-2023 12:32:54 | Hi @Tylersuard, thanks for reporting this issue.
So that we can best try and help you, could you update the notebook so that it contains the minimal logic to replicate the error and can be run out-of-the-box? As it stands, there are many blocks with comments, references to loading / processing data we don't have access to, and it doesn't currently show the reported error but does have many other errors. <|||||>Sorry @amyeroberts, here is the updated version: https://colab.research.google.com/drive/1TFI84P9W4VPhNLgEngxPN57RwzS0C4bG?usp=sharing<|||||>I think you're splitting your input sequence into chunks of length 16: https://github.com/huggingface/transformers/blob/v4.29.1/src/transformers/models/mega/modeling_mega.py#L1063<|||||>@OllieBroadhurst That is correct. As per the documentation (https://huggingface.co/docs/transformers/main/model_doc/mega), I set the chunk_size equal to 16 and use_chunking to true, and the context length is a multiple of the chunk size. My problem is not solved.<|||||>What I mean is: have you tried turning chunking off?<|||||>@OllieBroadhurst Thank you for your suggestion. I would likely run into out-of-memory errors, but I will try it.<|||||>Ok, I tried it without chunking and I got out-of-memory errors.<|||||>This should still be addressed! Mega's forward pass might need some debugging. I can't do this fast, but keeping an eye on it! |
transformers | 23,329 | closed | Add ffmpeg install for doctests | # What does this PR do?
`tests_pr_documentation_tests` fails if run on some docs e.g. `docs/source/en/task_summary.mdx` as ffmpeg is not installed.
Install ffmpeg in `pr_documentation_tests` CircleCI job.
Despite trying to add changes to the code and docstrings - I was unable to trigger the tests in the CI suite for the doc tests :(
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 05-12-2023 11:42:25 | 05-12-2023 11:42:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,328 | closed | Problem with Huggingface Agent | ### System Info
2023-05-12 10:53:56.623476: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/transformers/commands/env.py:63: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2023-05-12 10:54:01.265612: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:47] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.29.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have an image in webp format and wanted to segment the character in the foreground, but I get an error.
Let me know how I can send it to you ... this type of file is not supported.
Here is the code:
```python
from PIL import Image

def leggi_immagine(percorso):
    try:
        image = Image.open(percorso)
        return image
    except IOError:
        print("Impossibile aprire l'immagine. Controlla il percorso del file.")

image = leggi_immagine("dart.webp")
image2 = agent.run("Draw a red line all around dart vader body in `image`", image=image)
```
and here is the error:
```
in <cell line: 1>:1
/usr/local/lib/python3.10/dist-packages/transformers/tools/agents.py:323 in run
    return evaluate(code, self.cached_tools, state=kwargs.copy())
/usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:61 in evaluate
    line_result = evaluate_ast(node, state, tools)
/usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:98 in evaluate_ast
    return evaluate_assign(expression, state, tools)
/usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:139 in evaluate_assign
    result = evaluate_ast(assign.value, state, tools)
/usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:101 in evaluate_ast
    return evaluate_call(expression, state, tools)
/usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:167 in evaluate_call
    return func(*args, **kwargs)
/usr/local/lib/python3.10/dist-packages/transformers/tools/base.py:536 in __call__
    outputs = self.forward(encoded_inputs)
/usr/local/lib/python3.10/dist-packages/transformers/tools/image_segmentation.py:52 in forward
    logits = self.model(**inputs).logits
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl
    return forward_call(*args, **kwargs)
/usr/local/lib/python3.10/dist-packages/transformers/models/clipseg/modeling_clipseg.py:1426 in forward
    vision_outputs = self.clip.vision_model(
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl
    return forward_call(*args, **kwargs)
/usr/local/lib/python3.10/dist-packages/transformers/models/clipseg/modeling_clipseg.py:867 in forward
    hidden_states = self.embeddings(pixel_values)
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl
    return forward_call(*args, **kwargs)
/usr/local/lib/python3.10/dist-packages/transformers/models/clipseg/modeling_clipseg.py:215 in forward
    embeddings = embeddings + self.interpolate_position_embeddings((new_shape, n
RuntimeError: The size of tensor a (3151) must match the size of tensor b (3137) at non-singleton dimension 1
```
### Expected behavior
To draw a red line around the Darth Vader figure. | 05-12-2023 11:04:15 | 05-12-2023 11:04:15 | Hello @piust
The error message you are seeing suggests that there is a size mismatch between two tensors, 'a' and 'b', at dimension 1. This typically occurs when the dimensions of the tensors involved in a calculation are not compatible.
But the error message doesn't seem to be specific enough to determine the exact cause of the error. It is possible that there is a problem with the code that was passed to the agent, or there could be a problem with the image that was passed to it. One possible solution would be to try running the code on a different image to see if the error persists.<|||||>Hi @piust,
For the image, are you able to reproduce the error if you read in the PNG equivalent? i.e. is it possible to save out the image, read it back and run the agent script again e.g.:
```python
from PIL import Image
image_path = "dart.webp"
image = Image.open(image_path)
# Save out the image as a PNG
image.save("dart.png")
# Read in the image from PNG format
image = Image.open("dart.png")
# Then pass the image to the agent
from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
image2 = agent.run(
"Draw a red line all around dart vader body in `image`",
image=image
)
```
PNG images are sharable within issues, and so if it also triggers an error we could debug from that.<|||||>Yes, it triggers an error:
==Explanation from the agent==
I will use the following tools: `image_segmenter` to create a segmentation mask of the dart vader body, then `image_transformer` to draw a red line around it.
==Code generated by the agent==
mask = image_segmenter(image=image, label="Dart Vader")
image = image_transformer(image=image, prompt="Red line around dart vader body")
==Result==
Downloading (โฆ)ge_transformation.py: 100%
2.05k/2.05k [00:00<00:00, 155kB/s]
A new version of the following files was downloaded from https://huggingface.co/space/huggingface-tools/image-transformation:
- image_transformation.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
Downloading (โฆ)rocessor_config.json: 100%
380/380 [00:00<00:00, 29.2kB/s]
Downloading (โฆ)okenizer_config.json: 100%
974/974 [00:00<00:00, 73.6kB/s]
Downloading (โฆ)olve/main/vocab.json: 100%
1.06M/1.06M [00:00<00:00, 3.26MB/s]
Downloading (โฆ)olve/main/merges.txt: 100%
525k/525k [00:00<00:00, 6.42MB/s]
Downloading (โฆ)cial_tokens_map.json: 100%
472/472 [00:00<00:00, 38.5kB/s]
Downloading (โฆ)lve/main/config.json: 100%
4.73k/4.73k [00:00<00:00, 266kB/s]
Downloading pytorch_model.bin: 100%
603M/603M [00:01<00:00, 315MB/s]
```
Traceback (most recent call last):
in <cell line: 15>:15
/usr/local/lib/python3.10/dist-packages/transformers/tools/agents.py:323 in run
    return evaluate(code, self.cached_tools, state=kwargs.copy())
/usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:61 in evaluate
    line_result = evaluate_ast(node, state, tools)
/usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:98 in evaluate_ast
    return evaluate_assign(expression, state, tools)
/usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:139 in evaluate_assign
    result = evaluate_ast(assign.value, state, tools)
/usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:101 in evaluate_ast
    return evaluate_call(expression, state, tools)
/usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:167 in evaluate_call
    return func(*args, **kwargs)
/usr/local/lib/python3.10/dist-packages/transformers/tools/base.py:536 in __call__
    outputs = self.forward(encoded_inputs)
/usr/local/lib/python3.10/dist-packages/transformers/tools/image_segmentation.py:52 in forward
    logits = self.model(**inputs).logits
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl
    return forward_call(*args, **kwargs)
/usr/local/lib/python3.10/dist-packages/transformers/models/clipseg/modeling_clipseg.py:1426 in forward
    vision_outputs = self.clip.vision_model(
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl
    return forward_call(*args, **kwargs)
/usr/local/lib/python3.10/dist-packages/transformers/models/clipseg/modeling_clipseg.py:867 in forward
    hidden_states = self.embeddings(pixel_values)
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl
    return forward_call(*args, **kwargs)
/usr/local/lib/python3.10/dist-packages/transformers/models/clipseg/modeling_clipseg.py:215 in forward
    embeddings = embeddings + self.interpolate_position_embeddings((new_shape, n
RuntimeError: The size of tensor a (3151) must match the size of tensor b (3137) at non-singleton dimension 1
```
<|||||>@piust OK, thanks for trying. Could you share the PNG image and the agent being used so that we can reproduce to try and debug? <|||||>Hi all,
I confirm that on my set of images, the image segmenter tool shows the same behavior, with the same tensor-dimension mismatch. The images are PNGs of different sizes. The problem is the same whatever the image dimensions.
[image sample](https://drive.google.com/file/d/1n95JCjltE1WYBjrktgrbyy4uBB1ckBQD/view?usp=share_link)<|||||>@jeromemassot Thanks for sharing the image. Using it I was able to track down the issue to [this line](https://github.com/huggingface/transformers/blob/00f6ba0e7ebd5d19bb7d834a709d74dbb8a5a3d9/src/transformers/tools/image_segmentation.py#L47) in the image segmentation tool, where the `size` parameter for the image processor is overridden with the input image dimensions. I've opened an PR to resolve, but will need to check that this isn't removing any assumptions elsewhere with the tools. |
transformers | 23,327 | closed | Only add files with modification outside doc blocks | # What does this PR do?
Add files for doctesting only when they have modifications outside docstrings.
(offline message from Sylvain)
> One small improvement I see is for the docstrings: for now the tests are launched on a file if we modify it, but I would only launch it if docstrings are modified (e.g. check the modifications are correct) to go faster. If changes in the code of a model file (for instance) trigger a doctest failure we will see it after. | 05-12-2023 10:57:25 | 05-12-2023 10:57:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,326 | closed | Remove `LanguageIdentificationTool` in `__init__.py` as we don't have it yet | # What does this PR do?
We need to implement it before we can import it :-) | 05-12-2023 09:47:11 | 05-12-2023 09:47:11 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,325 | closed | resume_from_checkpoint is not used in TrainingArguments | ### System Info
N/A
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Just pass a `resume_from_checkpoint` to `TrainingArguments`.
### Expected behavior
I have read the code and I see `resume_from_checkpoint` is used for `Trainer.train(resume_from_checkpoint=...)`, and this setting in `TrainingArguments` is not used. Any reasons? | 05-12-2023 09:44:34 | 05-12-2023 09:44:34 | @SingL3 In the [docs for TrainingArguments](https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/trainer#transformers.TrainingArguments.resume_from_checkpoint), it notes that `resume_from_checkpoint` is intended to be used in your own training/evaluation scripts. In the original PR, [this was discussed and decided upon](https://github.com/huggingface/transformers/pull/11492#discussion_r622339347) in order to remove ambiguity. <|||||>I see but it is somehow confusing. Closing this issue. |
transformers | 23,324 | closed | Support Azure OpenAI in transformer agents | ### Feature request
Support it by setting the openai properties.
``` python
openai.api_type = "azure"
openai.api_version = "2023-03-15-preview"
openai.api_base = "https://THE_NAME.openai.azure.com/"
openai.api_key = "AZURE_OPENAI_API_KEY"
# response = openai.Completion.create(engine="xxxx", ...)
```
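For context, a hedged sketch of what a full Azure-flavoured call looks like with the current `openai` SDK (the deployment name below is only a placeholder for whatever is configured in the Azure portal):
```python
import openai

openai.api_type = "azure"
openai.api_version = "2023-03-15-preview"
openai.api_base = "https://THE_NAME.openai.azure.com/"
openai.api_key = "AZURE_OPENAI_API_KEY"

# Azure requires an engine/deployment_id instead of the plain model name
response = openai.Completion.create(
    deployment_id="my-davinci-deployment",   # placeholder deployment name
    prompt="Hello",
    max_tokens=16,
)
print(response["choices"][0]["text"])
```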
### Motivation
Azure OpenAI is also used by a lot of people.
It would be great to support it in the transformer agents.
### Your contribution
I may contribute to test it. | 05-12-2023 09:41:51 | 05-12-2023 09:41:51 | @huajianmao thanks for opening this issue,
As a user can enable azure open AI support by adding these lines to their own code, there isn't a requirement to add support in the library. <|||||>> @huajianmao thanks for opening this issue,
>
> As a user can enable azure open AI support by adding these lines to their own code, there isn't a requirement to add support in the library.
That's not entirely correct. You'll get an InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class
'openai.api_resources.completion.Completion'>
Because you would have to support some way of specifying the Azure-deployment_id in your code.<|||||>https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/agent#transformers.OpenAiAgent
`agent = OpenAiAgent(model="text-davinci-003", api_key=pswd)`
ideally would be changed. The Azure API is a different type and different enough to allow the relevant parameters to be passed in.
It doesn't sound "clean" to me to access the openai API parameters directly outside the OpenAiAgent.
The Azure based version is rapidly gaining in importance as people can use their corporate or personal Azure credits vs having to pay OpenAI separately.
Also the call needs to be changed to something like that currently:
```python
result = openai.Completion.create(
    # model=self.model,
    deployment_id=self.model,
    prompt=prompts,
    temperature=0,
    stop=stop,
    max_tokens=200,
)
```
btw. something easy to stumble over: the version is api_version="2022-12-01", even though the portal shows "1". The best approach is to go to the playground and have it generate example code.<|||||>for GPT4 you need to use
openai.api_version = "2023-03-15-preview" # used for GPT-4 - see https://learn.microsoft.com/en-gb/azure/cognitive-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions
Be aware that the chat message pattern changed, and so did the JSON of the response, which is the new mode for all new models after GPT-4 as well.<|||||>Sure, but we were not talking about GPT-4. The examples only covered DaVinci-003.<|||||>Anyway the API has changed - no matter what model you use - you need to adapt to the new prompt style. |
transformers | 23,323 | closed | no dependency package `accelerate` installed when we install transformers v4.29.1 | ### System Info
transformers v4.29.1
torch 2.0.1
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```bash
cd examples/pytorch/text-classification
export TASK_NAME=mrpc
python run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME
```
the log is:
```
Traceback (most recent call last):
  File "/home/penghuic/transformers/examples/pytorch/text-classification/run_glue.py", line 623, in <module>
    main()
  File "/home/penghuic/transformers/examples/pytorch/text-classification/run_glue.py", line 217, in main
    model_args, data_args, training_args = parser.parse_args_into_dataclasses()
  File "/home/penghuic/transformers/src/transformers/hf_argparser.py", line 346, in parse_args_into_dataclasses
    obj = dtype(**inputs)
  File "<string>", line 111, in __init__
  File "/home/penghuic/transformers/src/transformers/training_args.py", line 1333, in __post_init__
    and (self.device.type != "cuda")
  File "/home/penghuic/transformers/src/transformers/training_args.py", line 1697, in device
    return self._setup_devices
  File "/home/penghuic/transformers/src/transformers/utils/generic.py", line 54, in __get__
    cached = self.fget(obj)
  File "/home/penghuic/transformers/src/transformers/training_args.py", line 1613, in _setup_devices
    raise ImportError(
ImportError: Using the `Trainer` with `PyTorch` requires `accelerate`: Run `pip install --upgrade accelerate`
```
### Expected behavior
We expect the dependency package `accelerate` to be installed when we install transformers. | 05-12-2023 09:39:13 | 05-12-2023 09:39:13 | Hi @PenghuiCheng,
All of the examples have their own unique requirements, which are listed in [their own `requirements.txt` file](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/requirements.txt). The examples are demonstrative of how to perform certain tasks using the transformers library, but are not dependencies.<|||||>@PenghuiCheng you need to do `pip install transformers[torch]` to ensure you're building/installing the right version<|||||>Same error! Any solution?<|||||>@mohamedoh you need to do `pip install accelerate`, or `pip install transformers[torch]`<|||||>I face the same issue even after the installations<|||||>@flckv if you're in a notebook or similar you'll need to restart the session. Does `pip show accelerate` show anything? (This is a sign)<|||||>>
`pip show accelerate`
It shows version 0.19.0, but I'm still getting the error
ImportError: Using the `Trainer` with `PyTorch` requires `accelerate`: Run `pip install --upgrade accelerate`
on both Colab and Jupyter.<|||||>@Krish1375 you may need to restart the notebook session to use the new/installed lib<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I have the same issue but apparently, there is no solution for it.
|
transformers | 23,322 | closed | TokenClassification Pipeline not aggregating entities correctly | ### System Info
I am running transformers==4.27.3 but I believe the issue persists in the latest version as the issue at hand is specific to the `gather_pre_entities` function https://github.com/huggingface/transformers/blob/v4.27.3/src/transformers/pipelines/token_classification.py#L281.
- `transformers` version: 4.27.3
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.10
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 2.0.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes, MPS on M1 Mac
- Using distributed or parallel set-up in script?: No
### Who can help?
Tagging contributors who have committed to the TokenClassification Pipeline lately: @luccailliau @Narsil @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The code below shows the main details of the issue. I want to use the aggregation strategies of the TokenClassification Pipeline, and due to using the LayoutLM model and tokenizer, the aggregation of subwords falls back to the heuristic implemented for the `gather_pre_entities` function of `TokenClassificationPipeline`. This should be fine, however I am experiencing cases where tokens are not properly merged, as shown in the example output below. In the original sentence string, I have a bunch of words, where the following snippet is of interest: `"... I alt DKK inkl. moms 5.975,74 Betalingsbetingelser: KONTANT ..."`. The model correctly predicts the entity, `TOTAL`, but is missing the last digit, 4, which gets grouped to its own `TOTAL`-entity prediction.
```python
# Omitted a bunch of boilerplate code including model definition, setting up dataset, etc.
pipe = TokenClassificationPipeline(model=model, tokenizer=tokenizer)
sample_output = model.forward(
input_ids=sample_input["input_ids"].type(torch.long),
bbox=sample_input["bbox"].type(torch.int32),
image=torch.stack(sample_input["image"]),
attention_mask=sample_input["attention_mask"],
)
sample_scores = sample_output["logits"][0].cpu().detach().numpy()
pre_entities = pipe.gather_pre_entities(
sentence = " ".join(dataset["test"][0]["words"]),
input_ids=sample_input["input_ids"][0],
scores = sample_scores,
offset_mapping=sample_input["offset_mapping"][0],
special_tokens_mask=sample_input["special_tokens_mask"][0].cpu().detach().numpy(),
aggregation_strategy="simple"
) # throws UserWarning: "Tokenizer does not support real words, using fallback heuristic"
grouped_entities = pipe.aggregate(pre_entities, aggregation_strategy="first")
for ent in grouped_entities:
print(ent)
>>> {'entity_group': 'O', 'score': 11.709729, 'word': [... long sentence ...], 'start': 0, 'end': 11}
>>> [... some other predicted entities ...]
>>> {'entity_group': 'TOTAL', 'score': 8.98903, 'word': '5.975,7', 'start': 0, 'end': 7}
>>> {'entity_group': 'TOTAL', 'score': 6.8310637, 'word': '4', 'start': 7, 'end': 8}
>>> {'entity_group': 'O', 'score': 11.587039, 'word': [... long sentence ...], 'start': 0, 'end': 23}
```
### Expected behavior
Diving into the `gather_pre_entities` function, I see that the heuristic uses the `is_subword` boolean to determine how subwords should be aggregated to a combined word, with a corresponding, merged entity. Specifically, the heuristic uses the following rule `is_subword = start_ind > 0 and " " not in sentence[start_ind - 1 : start_ind + 1]`, where if I comment out the second part of the conditional, results in the entity being correctly merged, i.e. `{'entity_group': 'TOTAL', 'score': 8.98903, 'word': '5.975,74', 'start': 0, 'end': 8}`.
```python
else:
# This is a fallback heuristic. This will fail most likely on any kind of text + punctuation mixtures that will be considered "words". Non word aware models cannot do better than this unfortunately.
if aggregation_strategy in {
AggregationStrategy.FIRST,
AggregationStrategy.AVERAGE,
AggregationStrategy.MAX,
}:
warnings.warn("Tokenizer does not support real words, using fallback heuristic", UserWarning)
is_subword = start_ind > 0 and " " not in sentence[start_ind - 1 : start_ind + 1]
```
Since the `start_ind` of the subword is relative to the entire, original word that the subword is part of composing, why does the heuristic then depend on indexing into the entire `sentence` string? These indices, coming from the `offset_mapping` will always be relative to the word and most often range from 0-10 and so forth, depending on the word length. Without understanding the full reason behind why " " would constitute a subword, I am certain that this must be a bug. Even if the start and end indices from `offset_mapping` were relative to the entire sentence, how could you then determine when a new word is starting? | 05-12-2023 09:08:23 | 05-12-2023 09:08:23 | Hello @neilkimn,
I haven't looked at the code in detail, but I think the error is not from the pipeline itself but from a wrong prediction of the model. I guess you are using the IOB format for your labels, and maybe the last digit was predicted as `B-TOTAL` rather than `I-TOTAL`, which ends up creating a new entity for only one digit.
This behavior is common, which is why there are different aggregation strategies. Changing your aggregation strategy from `simple` to `first` to calculate `pre_entities` should solve your problem.<|||||>Hi @luccailliau, thanks for the swift reply. You're right that the prediction for `TOTAL` isn't in the correct IOB format. Here's the output when using no aggregation strategy:
```python
{'entity': 'B-TOTAL', 'score': 0.99910754, 'index': 280, 'word': '▁5.', 'start': 0, 'end': 2}
{'entity': 'B-TOTAL', 'score': 0.9981998, 'index': 281, 'word': '97', 'start': 2, 'end': 4}
{'entity': 'B-TOTAL', 'score': 0.9978011, 'index': 282, 'word': '5,7', 'start': 4, 'end': 7}
{'entity': 'B-TOTAL', 'score': 0.9913623, 'index': 283, 'word': '4', 'start': 7, 'end': 8}
```
Applying aggregation strategies yields:
```python
# simple
{'entity_group': 'TOTAL', 'score': 0.99910754, 'word': '5.', 'start': 0, 'end': 2}
{'entity_group': 'TOTAL', 'score': 0.9981998, 'word': '97', 'start': 2, 'end': 4}
{'entity_group': 'TOTAL', 'score': 0.9978011, 'word': '5,7', 'start': 4, 'end': 7}
{'entity_group': 'TOTAL', 'score': 0.9913623, 'word': '4', 'start': 7, 'end': 8}
# first
{'entity_group': 'TOTAL', 'score': 0.99910754, 'word': '5.975,7', 'start': 0, 'end': 7}
{'entity_group': 'TOTAL', 'score': 0.9913623, 'word': '4', 'start': 7, 'end': 8}
# average
{'entity_group': 'TOTAL', 'score': 0.99836946, 'word': '5.975,7', 'start': 0, 'end': 7}
{'entity_group': 'TOTAL', 'score': 0.9913623, 'word': '4', 'start': 7, 'end': 8}
# max
{'entity_group': 'TOTAL', 'score': 0.99910754, 'word': '5.975,7', 'start': 0, 'end': 7}
{'entity_group': 'TOTAL', 'score': 0.9913623, 'word': '4', 'start': 7, 'end': 8}
```
And I am confident the issue is due to the heuristic using the `start_ind` of the subword `offset_mapping` and subsequently indexing into `sentence`. Backtracking through the callstack, I could verify that the `sentence` variable contained the **full** input sentence, and it is only coincidental that `" " not in sentence[start_ind - 1 : start_ind + 1]` yields False, ultimately setting `is_subword = False` for the word '4', even though it is a subword.<|||||>@neilkimn,
You're using `sentence = " ".join(dataset["test"][0]["words"])` to generate a sentence from a list of words (or subwords). This is not a problem but the original `offset_mapping` with `offset_mapping=sample_input["offset_mapping"][0]` won't match with the sentence created with `" ".join()`. I am pretty sure that something like this is happening:

I think the easiest solution for your problem is a loop that merges entities if `entities[i]["end"] == entities[i+1]["start"]` or (not a beautiful solution) tokenize the initial sentence to generate tokens, then create a new sentence with `" ".join(tokens)` and finally tokenize this new sentence to have `offset_mapping` aligned with `sentence`.
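A rough sketch of that merge loop (illustrative only; it additionally assumes the two entries share the same `entity_group`, and score handling here is a plain average):
```python
def merge_adjacent_entities(entities):
    merged = []
    for entity in entities:
        if (
            merged
            and merged[-1]["entity_group"] == entity["entity_group"]
            and merged[-1]["end"] == entity["start"]
        ):
            previous = merged[-1]
            previous["word"] += entity["word"]
            previous["end"] = entity["end"]
            previous["score"] = (previous["score"] + entity["score"]) / 2
        else:
            merged.append(dict(entity))
    return merged
```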
<|||||> Thanks for clarifying @luccailliau, that explains why the `offset_mapping` is different for my example. I guess the issue is propagated from how the `LayoutXLMProcessor` calls `LayoutXLMTokenizerFast` which I am using. Supplying the processor with both the tokens joined together as well as their original split representation resolves it. |
transformers | 23,321 | closed | Fix docker image | # What does this PR do?
Due to the requirements from other packages, we end up getting `tensorflow-text==2.11`, which causes CI to fail from the start when we have `tensorflow==2.12`.
| 05-12-2023 08:47:28 | 05-12-2023 08:47:28 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,320 | closed | Why crash the whole run when HFHub gives a 50x error? | Logging an error and continuing is probably following the principle of least surprise.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-12-2023 04:48:37 | 05-12-2023 04:48:37 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @sgugger <|||||>> Thanks for opening a PR. Can you move this inside `_push_from_checkpoint` in the `try`/`finally` block?
Done<|||||>> Thanks! Can you just run a quick `make style` on your branch to fix the quality issue?
Done |
transformers | 23,319 | closed | [Reland] search model buffers for dtype as the last resort | # What does this PR do?
PR #23159 was reverted due to broken tests. However, I still feel the need to check buffers for dtype when the module was frozen by changing parameters to buffers. But now we search buffers as the last resort. At this point, the old code would raise an exception because it tries to dereference None as a tuple, so we can safely insert more checks without breaking the current behavior. The old PR failed because the buffer dtype was returned before checking module.\_\_dict\_\_, which breaks backward compatibility.
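A simplified sketch of the intended fallback order (the real helper also inspects `module.__dict__` and handles more cases):
```python
import torch

def get_module_dtype(module: torch.nn.Module) -> torch.dtype:
    # parameters first, buffers only as the last resort
    # (e.g. for modules frozen by turning parameters into buffers)
    for param in module.parameters():
        return param.dtype
    for buffer in module.buffers():
        return buffer.dtype
    raise ValueError("module has neither parameters nor buffers")
```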
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-12-2023 04:33:57 | 05-12-2023 04:33:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,318 | closed | Add Multimodal heading and Document question answering in task_summary.mdx | # What does this PR do?
From #18926
This PR creates a new Multimodal heading in task_summary and add Document question answering task example inside of it.
# Who can review?
@stevhliu | 05-12-2023 04:15:31 | 05-12-2023 04:15:31 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@y3sar - The current CI tests are failing because this PR triggers tests checking code snippets in the docs, and the doctest CI environment doesn't have ffmpeg installed. I've opened a PR #23329 to resolve this. Once I've confirmed everything works, we can re-run for this PR and if all green then merge :) <|||||>@y3sar Sorry for the delay in this. There's been a lot of changes in how doctests are retrieved and run. Could you rebase onto main to get the latest updates? <|||||>@amyeroberts sure thing I'll rebase and commit again<|||||>@y3sar Did you force push after rebasing the branch? The current commit history looks like what I get if I push but don't force. <|||||>@amyeroberts no I did not force push. I rebased and pull pushed. What should I do to solve this problem?<|||||>@y3sar To rebase onto main you need to force push, as rebasing is a form of history rewrite. The steps are - running on this branch:
* Get the most recent version of main: `git fetch upstream main`
* Rebase: `git rebase upstream/main`
* Push changes: `git push -f`
<|||||>@amyeroberts ffmpeg bug still remains. What can I do to solve this?
<|||||>@y3sar You'll want to modify [this line](https://github.com/huggingface/transformers/blob/860d11ff7c4235e0baaeee50d96cf1686781bdd3/.circleci/create_circleci_config.py#LL455C14-L455C15) to:
```python
"sudo apt-get -y update && sudo apt-get install -y libsndfile1-dev espeak-ng time ffmpeg",
```<|||||>@amyeroberts should I commit in this branch or should I create another pull request?<|||||>@y3sar Commit on this branch :) <|||||>@y3sar Apologies for the delay with this PR. Could you rebase again and push. We've been having issues with timeouts on the CI which should now be resolved. Additionally, could you update the extension for the `.mdx` to `.md` please? <|||||>@amyeroberts thank you for remembering this pull request ๐๐๐๐
Yes ma'am.<|||||>@amyeroberts looks like the timeout issue remains. Should I change the code example?<|||||>@y3sar Let's draft in help from the king of tests @ydshieh :)
I did a test run of this example on a CPU and it took 40s, so not fats but not super slow either. I think we could try:
* Another checkpoint. Are there any other smaller DocQA checkpoints we could use?
* Forcing this example to be skipped in the tests <|||||>Let me see what's happening on the CI runner.<|||||>Hi, I checked. That `task_summary.md` is considered as a single test by `pytest` , but it has multiple examples (and the checkpoints are not always small).
I can change the environment variable to avoid this situation. Once that change is merged, you can rebase and the CI should be green.<|||||>The PR is opened
https://github.com/huggingface/transformers/pull/24753<|||||>That PR is merged into `main`. If you pull the latest `main` and rebase on it, we should be good to merge this PR.<|||||>Thank you the king of tests and @amyeroberts. Should I check for a smaller checkpoint?<|||||>> Should I check for a smaller checkpoint?
It would be always great to use a small(er) checkpoint for testing (if there is any) ๐ Thank you @y3sar
<|||||>@ydshieh @amyeroberts I have found some small(er) models. The model that is being used currently is 803 mb. I have found a [checkpoint](https://huggingface.co/magorshunov/layoutlm-invoices) that is 500 mb. Which is still very big but smaller. And also I have found an even smaller [checkpoint](hf-tiny-model-private/tiny-random-LayoutLMForQuestionAnswering) that is used for testing by the huggingface internals. But the results are not that reliable.<|||||>@y3sar Thanks for looking into this! The tiny models are just for internal testing and probably not something we want to have in the docs (we want to be able to adapt to our needs without considering breaking changes). I'd say go for the 500MB one :) <|||||>@y3sar Thanks again for iterating and adding this! |
transformers | 23,317 | closed | fix gptj could not jit.trace in GPU | # What does this PR do?
Fix the jit.trace failure for GPT-J.
Fixes # (issue)
jit.trace for GPT-J fails on an RTX 8000 with an error like:
ERROR: Tensor-valued Constant nodes differed in value across invocations. This often indicates that the tracer has encountered untraceable code.
Node:
%192 : Tensor = prim::Constant[value=<Tensor>](), scope: __module.transformer/__module.transformer.h.0/__module.transformer.h.0.attn # /skyrex01/wangyi/transformers/src/transformers/models/gptj/modeling_gptj.py:190:0
Source Location:
/skyrex01/wangyi/transformers/src/transformers/models/gptj/modeling_gptj.py(190): _get_embed_positions
/skyrex01/wangyi/transformers/src/transformers/models/gptj/modeling_gptj.py(220): forward
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/nn/modules/module.py(1488): _slow_forward
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/nn/modules/module.py(1501): _call_impl
/skyrex01/wangyi/transformers/src/transformers/models/gptj/modeling_gptj.py(309): forward
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/nn/modules/module.py(1488): _slow_forward
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/nn/modules/module.py(1501): _call_impl
/skyrex01/wangyi/transformers/src/transformers/models/gptj/modeling_gptj.py(688): forward
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/nn/modules/module.py(1488): _slow_forward
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/nn/modules/module.py(1501): _call_impl
/skyrex01/wangyi/transformers/src/transformers/models/gptj/modeling_gptj.py(853): forward
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/nn/modules/module.py(1488): _slow_forward
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/nn/modules/module.py(1501): _call_impl
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/jit/_trace.py(1056): trace_module
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/jit/_trace.py(794): trace
/skyrex01/wangyi/transformers/examples/pytorch/text-generation/run_generation.py(412): main
/skyrex01/wangyi/transformers/examples/pytorch/text-generation/run_generation.py(458): <module>
Comparison exception: The values for attribute 'device' do not match: cpu != cuda:0.
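For reproduction, a minimal sketch assuming a CUDA device; the checkpoint, prompt, and single-input tracing call are illustrative rather than the exact run_generation code path:
```python
import torch
from transformers import AutoTokenizer, GPTJForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torchscript=True).eval().to("cuda")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt").to("cuda")
# Before this fix, tracing on GPU fails with the device-mismatch error shown above.
traced_model = torch.jit.trace(model, (inputs["input_ids"],))
```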
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
-->
| 05-12-2023 03:31:43 | 05-12-2023 03:31:43 | @yao-matrix<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@michaelbenayoun Could you have a second look please?<|||||>LGTM!
|
transformers | 23,316 | closed | Using trainer with deepspeed, the program hang on | ### System Info
transformers: 4.26.1
deepspeed: 0.9.1
python: 3.8.0
platform: Ubuntu 18.04
pytorch: 1.12.0
tensorflow: 2.3.1
CUDA Version: 11.4
Driver Version: 470.82.01
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Finetune Wav2Vec2, just like [this](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py)
the only difference is using `DeepSpeed`
the TrainingArguments:
```python
training_args = TrainingArguments(
output_dir=output_dir,
group_by_length=True,
per_device_train_batch_size=4,
evaluation_strategy='epoch',
save_strategy='epoch',
num_train_epochs=1,
fp16=True,
do_eval=False,
do_train=True,
gradient_checkpointing=True,
gradient_accumulation_steps=16,
logging_steps=50,
learning_rate=1e-4,
weight_decay=0.005,
warmup_steps=1000,
save_total_limit=2,
seed=seed,
remove_unused_columns=False,
local_rank=-1,
deepspeed='./ds_config_zero2.json'
)
```
the ds_config_zero2.json is copied from transformers, it looks like this:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 100,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
Everything is OK before training, but when it reaches the training step, the program hangs for several hours. Can anyone help me? Thanks a lot.

### Expected behavior
I think the correct behavior is to start training, not hang. | 05-12-2023 03:20:21 | 05-12-2023 03:20:21 | I know why this happened: I set `group_by_length=True`, and since my dataset is very large, the length-grouping step is very slow. |
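A possible mitigation for the slow `group_by_length` start described in the comment above, as a hedged sketch. It assumes an existing `datasets.Dataset` named `train_dataset` with an `"input_values"` column:
```python
# Precompute a "length" column once so the length-grouped sampler does not have to
# measure every example when training starts.
def add_length(batch):
    batch["length"] = [len(x) for x in batch["input_values"]]
    return batch

train_dataset = train_dataset.map(add_length, batched=True, num_proc=8)
# Then keep group_by_length=True and set length_column_name="length" in TrainingArguments.
```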
transformers | 23,315 | closed | Remote text-to-image tool is down | ### System Info
- `transformers` version: 4.29.0
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
img3 = agent.run("Please generate an image of a rabbit wearing a space suit", remote=True)
```
```
==Explanation from the agent==
I will use the following tool: `image_generator` to generate an image.
==Code generated by the agent==
prompt = "rabbit wearing a space suit"
image = image_generator(prompt)
==Result==
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ in <module>:2 โ
โ โ
โ 1 agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") โ
โ โฑ 2 img3 = agent.run("Please generate an image of a rabbit wearing a space suit", remote=Tru โ
โ 3 โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/agents โ
โ .py:323 in run โ
โ โ
โ 320 โ โ if not return_code: โ
โ 321 โ โ โ print("\n\n==Result==") โ
โ 322 โ โ โ self.cached_tools = resolve_tools(code, self.toolbox, remote=remote, cached_ โ
โ โฑ 323 โ โ โ return evaluate(code, self.cached_tools, state=kwargs.copy()) โ
โ 324 โ โ else: โ
โ 325 โ โ โ tool_code = get_tool_creation_code(code, self.toolbox, remote=remote) โ
โ 326 โ โ โ return f"{tool_code}\n{code}" โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:61 in evaluate โ
โ โ
โ 58 โ result = None โ
โ 59 โ for idx, node in enumerate(expression.body): โ
โ 60 โ โ try: โ
โ โฑ 61 โ โ โ line_result = evaluate_ast(node, state, tools) โ
โ 62 โ โ except InterpretorError as e: โ
โ 63 โ โ โ msg = f"Evaluation of the code stopped at line {idx} before the end because โ
โ 64 โ โ โ if chat_mode: โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:98 in evaluate_ast โ
โ โ
โ 95 โ if isinstance(expression, ast.Assign): โ
โ 96 โ โ # Assignement -> we evaluate the assignement which should update the state โ
โ 97 โ โ # We return the variable assigned as it may be used to determine the final resul โ
โ โฑ 98 โ โ return evaluate_assign(expression, state, tools) โ
โ 99 โ elif isinstance(expression, ast.Call): โ
โ 100 โ โ # Function call -> we return the value of the function call โ
โ 101 โ โ return evaluate_call(expression, state, tools) โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:139 in evaluate_assign โ
โ โ
โ 136 โ
โ 137 def evaluate_assign(assign, state, tools): โ
โ 138 โ var_names = assign.targets โ
โ โฑ 139 โ result = evaluate_ast(assign.value, state, tools) โ
โ 140 โ โ
โ 141 โ if len(var_names) == 1: โ
โ 142 โ โ state[var_names[0].id] = result โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:101 in evaluate_ast โ
โ โ
โ 98 โ โ return evaluate_assign(expression, state, tools) โ
โ 99 โ elif isinstance(expression, ast.Call): โ
โ 100 โ โ # Function call -> we return the value of the function call โ
โ โฑ 101 โ โ return evaluate_call(expression, state, tools) โ
โ 102 โ elif isinstance(expression, ast.Constant): โ
โ 103 โ โ # Constant -> just return the value โ
โ 104 โ โ return expression.value โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:167 in evaluate_call โ
โ โ
โ 164 โ # Todo deal with args โ
โ 165 โ args = [evaluate_ast(arg, state, tools) for arg in call.args] โ
โ 166 โ kwargs = {keyword.arg: evaluate_ast(keyword.value, state, tools) for keyword in call โ
โ โฑ 167 โ return func(*args, **kwargs) โ
โ 168 โ
โ 169 โ
โ 170 def evaluate_subscript(subscript, state, tools): โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/base.p โ
โ y:399 in __call__ โ
โ โ
โ 396 โ โ output_image = self.tool_class is not None and self.tool_class.outputs == ["imag โ
โ 397 โ โ inputs = self.prepare_inputs(*args, **kwargs) โ
โ 398 โ โ if isinstance(inputs, dict): โ
โ โฑ 399 โ โ โ outputs = self.client(**inputs, output_image=output_image) โ
โ 400 โ โ else: โ
โ 401 โ โ โ outputs = self.client(inputs, output_image=output_image) โ
โ 402 โ โ if isinstance(outputs, list) and len(outputs) == 1 and isinstance(outputs[0], li โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/base.p โ
โ y:721 in __call__ โ
โ โ
โ 718 โ โ โ
โ 719 โ โ # By default, parse the response for the user. โ
โ 720 โ โ if output_image: โ
โ โฑ 721 โ โ โ return self.decode_image(response.content) โ
โ 722 โ โ else: โ
โ 723 โ โ โ return response.json() โ
โ 724 โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/base.p โ
โ y:698 in decode_image โ
โ โ
โ 695 โ โ โ
โ 696 โ โ from PIL import Image โ
โ 697 โ โ โ
โ โฑ 698 โ โ b64 = base64.b64decode(raw_image) โ
โ 699 โ โ _bytes = io.BytesIO(b64) โ
โ 700 โ โ return Image.open(_bytes) โ
โ 701 โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/base64.py:87 in b64decode โ
โ โ
โ 84 โ โ s = s.translate(bytes.maketrans(altchars, b'+/')) โ
โ 85 โ if validate and not re.fullmatch(b'[A-Za-z0-9+/]*={0,2}', s): โ
โ 86 โ โ raise binascii.Error('Non-base64 digit found') โ
โ โฑ 87 โ return binascii.a2b_base64(s) โ
โ 88 โ
โ 89 โ
โ 90 def standard_b64encode(s): โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
Error: Incorrect padding
```
### Expected behavior
The image is decoded successfully | 05-11-2023 21:30:24 | 05-11-2023 21:30:24 | I still encouter the same Error.<|||||>Thanks a lot for the reports! It should be fixed now.<|||||>Thanks for the fix @LysandreJik ! |
transformers | 23,313 | closed | [docs] Fix Agents and Tools docstring | Fixes the `kwarg` argument in the docstring to include what to expect, otherwise the `kwarg` gets mixed into the argument above it (see [here](https://huggingface.co/docs/transformers/main_classes/agent#transformers.Agent.chat.remote) for example). | 05-11-2023 20:37:16 | 05-11-2023 20:37:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,312 | closed | Fixed slow tokenizer behavior to make it remove special tokens when asked | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #23250
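For illustration, a small sketch of the behaviour targeted by #23250; the checkpoint and token name are arbitrary:
```python
from transformers import AutoTokenizer

slow = AutoTokenizer.from_pretrained("gpt2", use_fast=False)
slow.add_special_tokens({"additional_special_tokens": ["<extra>"]})

ids = slow.encode("hello <extra> world")
# Expected once fixed: the added special token is dropped, as the fast tokenizer already does.
print(slow.decode(ids, skip_special_tokens=True))
```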
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-11-2023 20:32:00 | 05-11-2023 20:32:00 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23312). All of your documentation changes will be reflected on that endpoint.<|||||>cc @ArthurZucker <|||||>I still need to do some more tests, but the idea is just add the special in the special tokens list. I will work more on it in this week :)<|||||>Hey! Thanks for working on this and good luck haha! Ping me if you need any help on fixing the tests.
I think that the core bug is gonna be bit tricky to get right, but it should be fixed
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,311 | closed | TypeError: add got incompatible shapes for broadcasting: (512, 50, 1024), (1, 145, 1024). | ### System Info
transformers version: 4.27.4
Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
Python version: 3.8.10
Huggingface_hub version: 0.13.4
PyTorch version (GPU?): 1.9.0+cpu (False)
Tensorflow version (GPU?): 2.9.1 (True)
Flax version (CPU?/GPU?/TPU?): 0.6.8 (cpu)
Jax version: 0.4.8
JaxLib version: 0.4.7
Using GPU in script?:
Using distributed or parallel set-up in script?:
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am running the Imagenet evaluation script [here](https://github.com/clip-italian/clip-italian/blob/master/evaluation/CLIP_italian_ImageNet_Zero_Shot_Evaluation_.ipynb) to evaluate a version of clip that uses google/vit-large-patch32-384 provided [here](https://huggingface.co/LinaAlhuri/clip-vit-large-patch32). However, around the prediction cell bellow
```
top_ns = [1, 5, 10, 100]
acc_counters = [0. for _ in top_ns]
n = 0.
for i, (images, target) in enumerate(tqdm(loader)):
images = images
target = target.numpy()
# predict
image_features = image_model(images)
image_features = image_features / np.linalg.norm(image_features, axis=-1, keepdims=True)
logits = 100. * image_features @ zeroshot_weights
# measure accuracy
accs = accuracy(logits, target, topk=top_ns)
for j in range(len(top_ns)):
acc_counters[j] += accs[j]
n += images.shape[0]
tops = {f'top{top_ns[i]}': acc_counters[i] / n * 100 for i in range(len(top_ns))}
print(tops)
```
I am getting the below error
```
---------------------------------------------------------------------------
UnfilteredStackTrace Traceback (most recent call last)
in
8 # predict
----> 9 image_features = image_model(images)
10 image_features = image_features / np.linalg.norm(image_features, axis=-1, keepdims=True)
in (images)
24 language_model = lambda queries: np.asarray(model.get_text_features(*tokenize(queries)))
---> 25 image_model = lambda images: np.asarray(model.get_image_features(images.permute(0, 2, 3, 1).numpy(),))
[~/.local/lib/python3.8/site-packages/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/96654/Downloads/~/.local/lib/python3.8/site-packages/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py) in get_image_features(self, pixel_values, params, dropout_rng, train)
405
--> 406 return self.module.apply(
407 {"params": params or self.params},
[~/.local/lib/python3.8/site-packages/jax/_src/traceback_util.py](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/96654/Downloads/~/.local/lib/python3.8/site-packages/jax/_src/traceback_util.py) in reraise_with_filtered_traceback(*args, **kwargs)
165 try:
--> 166 return fun(*args, **kwargs)
167 except Exception as e:
[~/.local/lib/python3.8/site-packages/flax/linen/module.py](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/96654/Downloads/~/.local/lib/python3.8/site-packages/flax/linen/module.py) in apply(self, variables, rngs, method, mutable, capture_intermediates, *args, **kwargs)
1484 method = _get_unbound_fn(method)
-> 1485 return apply(
1486 method, self,
...
---> 96 return lax_fn(x1, x2) if x1.dtype != np.bool_ else bool_lax_fn(x1, x2)
97 fn.__qualname__ = f"jax.numpy.{numpy_fn.__name__}"
98 fn = jit(fn, inline=True)
TypeError: add got incompatible shapes for broadcasting: (512, 50, 1024), (1, 145, 1024).
```
### Expected behavior
to run smoothly and provide accuracy results | 05-11-2023 19:27:07 | 05-11-2023 19:27:07 | Hey @alhuri - as detailed in #22673 and #22780, the Italian CLIP repository is not maintained or affiliated with Hugging Face transformers. It is a standalone repository offering its own fine-tuning / evaluation scripts. As such, you're more likely to receive support regarding this issue by directly asking in the Italian CLIP repository: https://github.com/clip-italian/clip-italian/issues/new |
transformers | 23,310 | closed | Fix test typos - audio feature extractors | # What does this PR do?
Fixes typos I discovered when I was writing [#23309](https://github.com/huggingface/transformers/pull/23309)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sanchit-gandhi | 05-11-2023 19:17:23 | 05-11-2023 19:17:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,309 | closed | is_batched fix for remaining 2-D numpy arrays | # What does this PR do?
Fix `is_batched` logic for 2-D numpy arrays, as described in https://github.com/huggingface/transformers/pull/23223#pullrequestreview-1423033751
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sanchit-gandhi
| 05-11-2023 19:14:12 | 05-11-2023 19:14:12 | Hmm, this is odd-- I just noticed manually (locally) running `pytest` on the tests fixed here just shows they're all automatically skipped. Looking into configuration now...
EDIT: oh, due to https://github.com/huggingface/transformers/issues/18355#issuecomment-1543277694 I hadn't had torch installed, torchaudio, a whole bunch of libraries. should work now<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the quick changes @LWprogramming! Let's wait until #23223 is finalised before getting this merged so that we can update this PR with any amendments from there<|||||>> Thanks for the quick changes @LWprogramming! Let's wait until #23223 is finalised before getting this merged so that we can update this PR with any amendments from there
Ok, updated the code with comments from that PR, and ran tests + linters |
transformers | 23,308 | closed | Revert "search buffers for dtype" | Reverts huggingface/transformers#23159
This breaks the FDSP integration for some reason, so reverting for now as we investigate things further. The revert will be included in the patch 4.29.1
(Test that breaks:
```
RUN_SLOW=yes pytest -s -v tests/fsdp -k test_checkpointing
```
in Accelerate) | 05-11-2023 19:08:29 | 05-11-2023 19:08:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,307 | closed | compute_loss takes a lot of extra memory after saving checkpoint and causes OOM | ### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.15.0-1033-aws-x86_64-with-glibc2.27
- Python version: 3.10.9
- Huggingface_hub version: 0.14.0
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?:
<img width="639" alt="image" src="https://github.com/huggingface/transformers/assets/1331543/1ade210a-8dd9-4d6a-a70b-b3d2982e1914">
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Every time the trainer.py:_save() saves a checkpoint https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/trainer.py#L2884
After saving the checkpoint and then resume training entering the training_step() function and executing compute_loss() https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/trainer.py#L2731
There is a memory usage spike that causes OOM.
Without executing ` trainer.py:_save()` (a.k.a all previous normal training steps) however, the compute_loss() function does not allocate extra memory (except for the very first time of the forward backward pass happens which is expected). No memory increase is observed during `trainer.py:_save()` either. I have changed the save_steps to different numbers, the forward pass OOM is always triggered at the step right after saving checkpoint.
Minimal training script:
```
base_model_name="EleutherAI/pythia-6.9b"
model = transformers.AutoModelForCausalLM.from_pretrained(
base_model_name,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map={'': 0}
)
tokenizer = transformers.AutoTokenizer.from_pretrained(
base_model_name,
device_map={'':0}
)
model = peft.prepare_model_for_int8_training(model)
model = peft.get_peft_model(model, peft.LoraConfig(
r=lora_r,
lora_alpha=lora_alpha,
target_modules=["query_key_value", "xxx"],
lora_dropout=lora_dropout,
bias="none",
task_type="CAUSAL_LM",
))
training_args = transformers.TrainingArguments(
per_device_train_batch_size=8,
gradient_accumulation_steps=gradient_accumulation_steps,
num_train_epochs=epochs,
learning_rate=learning_rate,
fp16=True,
logging_steps=20,
output_dir=output_dir,
save_steps=5, #for debugging purpose
)
trainer = transformers.Trainer(
model=model,
train_dataset=data,
args=training_args,
data_collator=transformers.DataCollatorForLanguageModeling(
tokenizer,
mlm=False,
),
)
model.config.use_cache = False
result = trainer.train(resume_from_checkpoint=False)
model.save_pretrained(output_dir)
```
### Expected behavior
I would expect saving action to not change the behavior of forward pass. I am wondering why there is the memory spike and whether it can be solved. | 05-11-2023 18:44:48 | 05-11-2023 18:44:48 | There is a memory spike due to the model being in 8 bits probably, cc @younesbelkada <|||||>Thanks for the prompt response. Do you have any insights why would it only happen after saving checkpoint?<|||||>Hmm, I just figured that since this is using lora, there is no need to save checkpoints anyways?
> There is a memory spike due to the model being in 8 bits probably, cc @younesbelkada
<|||||>Hi @HuiyingLi
Maybe the default saving mechanism is the culprit, to be on the safe zone I suggest to save the adapters only, for that you should use a custom callback to properly save the adapter weights
Please have a look at the suggested solution here: https://discuss.huggingface.co/t/peft-lora-gpt-neox-loraconfig/35790 and let us know how it goes<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,306 | closed | Fix image segmentation tool test | # What does this PR do?
There were some `prompt` left from before the rename. | 05-11-2023 18:32:03 | 05-11-2023 18:32:03 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23306). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,305 | closed | Fix typo in gradio-tools docs | # What does this PR do?
Fixes a typo in the gradio-tools guide, `tool` vs `tools`, that prevents the code snippet from running.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-11-2023 18:23:03 | 05-11-2023 18:23:03 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23305). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,304 | closed | Cannot decode image from remote image segmentation tool | ### System Info
- `transformers` version: 4.29.0
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger @LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following script. The problem goes away when running the local tool.
```python
from transformers import HfAgent
from diffusers.utils import load_image
bunny_img = load_image("https://gradio-builds.s3.amazonaws.com/sample-images/SpaceBunny.png")
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
segmented_img = agent.run("Please locate the bunny in the image", image=bunny_img, remote=True)
```
```
in <module>:8 โ
โ โ
โ 5 โ
โ 6 agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") โ
โ 7 โ
โ โฑ 8 segmented_img = agent.run("Please locate the bunny in the image", image=bunny_img, remot โ
โ 9 โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/agents โ
โ .py:323 in run โ
โ โ
โ 320 โ โ if not return_code: โ
โ 321 โ โ โ print("\n\n==Result==") โ
โ 322 โ โ โ self.cached_tools = resolve_tools(code, self.toolbox, remote=remote, cached_ โ
โ โฑ 323 โ โ โ return evaluate(code, self.cached_tools, state=kwargs.copy()) โ
โ 324 โ โ else: โ
โ 325 โ โ โ tool_code = get_tool_creation_code(code, self.toolbox, remote=remote) โ
โ 326 โ โ โ return f"{tool_code}\n{code}" โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:61 in evaluate โ
โ โ
โ 58 โ result = None โ
โ 59 โ for idx, node in enumerate(expression.body): โ
โ 60 โ โ try: โ
โ โฑ 61 โ โ โ line_result = evaluate_ast(node, state, tools) โ
โ 62 โ โ except InterpretorError as e: โ
โ 63 โ โ โ msg = f"Evaluation of the code stopped at line {idx} before the end because โ
โ 64 โ โ โ if chat_mode: โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:98 in evaluate_ast โ
โ โ
โ 95 โ if isinstance(expression, ast.Assign): โ
โ 96 โ โ # Assignement -> we evaluate the assignement which should update the state โ
โ 97 โ โ # We return the variable assigned as it may be used to determine the final resul โ
โ โฑ 98 โ โ return evaluate_assign(expression, state, tools) โ
โ 99 โ elif isinstance(expression, ast.Call): โ
โ 100 โ โ # Function call -> we return the value of the function call โ
โ 101 โ โ return evaluate_call(expression, state, tools) โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:139 in evaluate_assign โ
โ โ
โ 136 โ
โ 137 def evaluate_assign(assign, state, tools): โ
โ 138 โ var_names = assign.targets โ
โ โฑ 139 โ result = evaluate_ast(assign.value, state, tools) โ
โ 140 โ โ
โ 141 โ if len(var_names) == 1: โ
โ 142 โ โ state[var_names[0].id] = result โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:101 in evaluate_ast โ
โ โ
โ 98 โ โ return evaluate_assign(expression, state, tools) โ
โ 99 โ elif isinstance(expression, ast.Call): โ
โ 100 โ โ # Function call -> we return the value of the function call โ
โ โฑ 101 โ โ return evaluate_call(expression, state, tools) โ
โ 102 โ elif isinstance(expression, ast.Constant): โ
โ 103 โ โ # Constant -> just return the value โ
โ 104 โ โ return expression.value โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:167 in evaluate_call โ
โ โ
โ 164 โ # Todo deal with args โ
โ 165 โ args = [evaluate_ast(arg, state, tools) for arg in call.args] โ
โ 166 โ kwargs = {keyword.arg: evaluate_ast(keyword.value, state, tools) for keyword in call โ
โ โฑ 167 โ return func(*args, **kwargs) โ
โ 168 โ
โ 169 โ
โ 170 def evaluate_subscript(subscript, state, tools): โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/base.p โ
โ y:399 in __call__ โ
โ โ
โ 396 โ โ output_image = self.tool_class is not None and self.tool_class.outputs == ["imag โ
โ 397 โ โ inputs = self.prepare_inputs(*args, **kwargs) โ
โ 398 โ โ if isinstance(inputs, dict): โ
โ โฑ 399 โ โ โ outputs = self.client(**inputs, output_image=output_image) โ
โ 400 โ โ else: โ
โ 401 โ โ โ outputs = self.client(inputs, output_image=output_image) โ
โ 402 โ โ if isinstance(outputs, list) and len(outputs) == 1 and isinstance(outputs[0], li โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/base.p โ
โ y:721 in __call__ โ
โ โ
โ 718 โ โ โ
โ 719 โ โ # By default, parse the response for the user. โ
โ 720 โ โ if output_image: โ
โ โฑ 721 โ โ โ return self.decode_image(response.content) โ
โ 722 โ โ else: โ
โ 723 โ โ โ return response.json() โ
โ 724 โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/base.p โ
โ y:698 in decode_image โ
โ โ
โ 695 โ โ โ
โ 696 โ โ from PIL import Image โ
โ 697 โ โ โ
โ โฑ 698 โ โ b64 = base64.b64decode(raw_image) โ
โ 699 โ โ _bytes = io.BytesIO(b64) โ
โ 700 โ โ return Image.open(_bytes) โ
โ 701 โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/base64.py:87 in b64decode โ
โ โ
โ 84 โ โ s = s.translate(bytes.maketrans(altchars, b'+/')) โ
โ 85 โ if validate and not re.fullmatch(b'[A-Za-z0-9+/]*={0,2}', s): โ
โ 86 โ โ raise binascii.Error('Non-base64 digit found') โ
โ โฑ 87 โ return binascii.a2b_base64(s) โ
โ 88 โ
โ 89 โ
โ 90 def standard_b64encode(s): โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
Error: Incorrect padding
```
### Expected behavior
The image is able to be correctly decoded | 05-11-2023 18:17:06 | 05-11-2023 18:17:06 | The endpoint seems to have trouble indeed. It might need a restart @LysandreJik. Form what I see it complains about the inputs being named `image` and `label` and expects a `prompt`.<|||||>Thank you, nice catch @freddyaboulton!
It should be working now.<|||||>Thank you for the very speedy fix @LysandreJik ! |
transformers | 23,303 | closed | Cannot import Tool if old version of huggingface_hub is installed | ### System Info
- `transformers` version: 4.29.0
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger @LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Have huggingface_hub 0.13.4 installed
2. Import Tool class
```python
from transformers import Tool
```
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
File ~/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/utils/import_utils.py:1172, in _LazyModule._get_module(self, module_name)
1171 try:
-> 1172 return importlib.import_module("." + module_name, self.__name__)
1173 except Exception as e:
File ~/miniconda3/envs/gradio-tools/lib/python3.9/importlib/__init__.py:127, in import_module(name, package)
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1030, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1007, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:986, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:680, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:850, in exec_module(self, module)
File <frozen importlib._bootstrap>:228, in _call_with_frames_removed(f, *args, **kwds)
File ~/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/base.py:27
26 from huggingface_hub import CommitOperationAdd, HfFolder, create_commit, create_repo, hf_hub_download, metadata_update
---> 27 from huggingface_hub.utils import RepositoryNotFoundError, get_session
29 from ..dynamic_module_utils import custom_object_save, get_class_from_dynamic_module, get_imports
ImportError: cannot import name 'get_session' from 'huggingface_hub.utils' (/Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/huggingface_hub/utils/__init__.py)
```
### Expected behavior
Importing tool class does not raise an error.
Upgrading to hugginggface_hub version 0.14.1 fixes the issue! | 05-11-2023 16:54:06 | 05-11-2023 16:54:06 | Thanks for reporting! This will be addressed in #23301 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,302 | closed | skip `test_run_squad_no_trainer` for now | # What does this PR do?
Skip `test_run_squad_no_trainer` for now, as it is failing on `main`. | 05-11-2023 16:03:17 | 05-11-2023 16:03:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,301 | closed | Agents extras | Adds an extras for `agents`.
Fix https://github.com/huggingface/transformers/issues/23298 | 05-11-2023 15:27:25 | 05-11-2023 15:27:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Approved as seen offline |
transformers | 23,300 | closed | Add gradient_checkpointing parameter to FlaxWhisperEncoder | Ref https://github.com/huggingface/transformers/pull/23173#discussion_r1188815621
@sanchit-gandhi | 05-11-2023 15:16:26 | 05-11-2023 15:16:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Merging as the failing tests are also failing on main and not related to this PR |
transformers | 23,299 | closed | Add gradient_checkpointing parameter to FlaxWhisperEncoder. | Ref https://github.com/huggingface/transformers/pull/23173#discussion_r1188815621
@sanchit-gandhi | 05-11-2023 15:09:35 | 05-11-2023 15:09:35 | |
transformers | 23,298 | closed | ImportError: Datasets needs to be installed if not passing speaker embeddings. | ### System Info
- `transformers` version: 4.29.0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help?
@sanchit-gandhi ?
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Maybe not a bug but lacking documentation:
Following the simple exampe https://huggingface.co/docs/transformers/transformers_agents
```
from transformers import OpenAiAgent
from transformers import HfAgent
...
text="A beaver is swimming in the water"
audio=agent.run("Read the following text out loud", text=text)
```
```
==Explanation from the agent==
I will use the following tool: `text_reader` to read the text out loud.
==Code generated by the agent==
audio = text_reader(text)
==Result==
โ /Users/me/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/tools/text_to_speech.py:52 in encode
โ 50 โ โ if speaker_embeddings is None: โ
โ 51 โ โ โ if not is_datasets_available(): โ
โ โฑ 52 โ โ โ โ raise ImportError("Datasets needs to be installed if not passing speaker โ
```
@sanchit-gandhi ?
### Expected behavior
The example should work out of the box, or add information on how to download the required dataset in the documentation and error message. | 05-11-2023 15:07:59 | 05-11-2023 15:07:59 | Thanks @pannous! Indeed, we should show what are the requirements. The error you're getting is due to `datasets` not being installed.<|||||>This PR should fix it: https://github.com/huggingface/transformers/pull/23301 |
transformers | 23,297 | closed | Fix broken links in the agent docs | # What does this PR do?
This PR fixes a bung of broken links in the documentation. | 05-11-2023 15:02:46 | 05-11-2023 15:02:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,296 | closed | Seq2Seq Trainer Handling for MLFlow Exception | ### System Info
Transformers 4.28.1, torch 1.13.1
When using the Seq2SeqTrainer with the MLFlow integration enabled, if I lose my connection to mlflow after the training has begun (if the server crashes or if there is a network error), MLFlow throws the exception:
```
raise MlflowException(f"API request to {url} failed with exception {e}")
```
I don't know if this is a bug, or if you have a recommended way to continue: I would like the option to have the seq2seqtrainer log the mlflow connection error but then continue training with the mlflow integration disabled.
I imagine that this behavior would apply to any integration, not just mlflow.
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Start training with Seq2Seq Trainer with a connection to MLflow
2. During training, stop the mlflow server
3. Seq2Seq Trainer raises an MLFlow Exception
### Expected behavior
I would expect that there would be an option to allow for mlflow exception to disable the integration but continue training. | 05-11-2023 12:42:26 | 05-11-2023 12:42:26 | We don't maintain the MlFlow integration ourselves, so I can't really help. Maybe try to tag the persons who added it?<|||||>Thanks for the quick reply! Although my question involves MLFlow, I think the question more broadly is:
if an integration callback throws an error, how can we disable that integration and continue with training?
I think that might be a question for the huggingface team and not for the MLFlow integrators?<|||||>Oh I dind't realize you were asking for that. This is what the [`report_to`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.report_to) argument is for :-)<|||||>Ah. Perfect. Thanks so much! |
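For reference, a minimal sketch of the `report_to` option mentioned in the reply above, which controls which integrations are enabled up front:
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="out",
    report_to=[],  # disable all integration callbacks; or e.g. ["tensorboard"] to keep only some
)
```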
transformers | 23,295 | closed | Update conditional logic and -> or in SAM postprocessing | should be ors here | 05-11-2023 12:33:27 | 05-11-2023 12:33:27 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@younesbelkada exactly but with the current implementation it could also be a list of arrays and the exception would not be thrown<|||||>FYI, this PR triggered 3 test failures on the CI.
(The third one is GPU OOM, which is likely caused by the other 2 failures).
cc @younesbelkada
```bash
tests/models/sam/test_modeling_sam.py::SamModelIntegrationTest::test_inference_mask_generation_one_point_one_bb
(line 235) ValueError: Input boxes must be a list of list of list of floating integers.
tests/models/sam/test_modeling_sam.py::SamModelIntegrationTest::test_inference_mask_generation_one_point_one_bb_zero
(line 235) ValueError: Input boxes must be a list of list of list of floating integers.
tests/models/sam/test_modeling_sam.py::SamModelIntegrationTest::test_inference_mask_generation_two_points_batched
(line 808) torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 14.76 GiB total capacity; 11.65 GiB already allocated; 792.75 MiB free; 12.74 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```<|||||> ValueError: Input boxes must be a list of list of list of floating integers. I am still getting this error, even when I run the example notebooks. What do I do? I just have a flattened list of bounding box coordinates.<|||||>Hi @karthikdatta98
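The processor expects the boxes nested per image and per box rather than flattened; a minimal sketch of the reshaping, with made-up box values and the base SAM checkpoint as an assumption:
```python
from PIL import Image
from transformers import SamProcessor

processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
image = Image.open("example.jpg")  # placeholder image path

flat_box = [650.0, 900.0, 1000.0, 1250.0]   # your flattened x1, y1, x2, y2 values
input_boxes = [[flat_box]]                  # batch -> boxes per image -> 4 coordinates

inputs = processor(image, input_boxes=input_boxes, return_tensors="pt")
```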
Thanks for reporting; in https://github.com/huggingface/notebooks/pull/409 I modified the notebook accordingly to show how to correctly pass bounding boxes. |
transformers | 23,294 | closed | Getting ValueError: model.shared.weight doesn't have any device set in running a M2M100's-12B model on colab while using with accelerate | ### System Info
I am getting the following error while using accelerate for M2M100 on Google Colab Pro. Following is the code snippet:
import torch
device=torch.device('cuda' if torch.cuda.is_available() else 'cpu')
from transformers import AutoConfig, M2M100ForConditionalGeneration, M2M100Tokenizer, AutoModel
from accelerate import infer_auto_device_map, init_empty_weights
from transformers import AutoModel, M2M100Config
config = M2M100Config.from_pretrained("facebook/m2m100-12B-last-ckpt")
with init_empty_weights():
    model = AutoModel.from_config(config)
device_map = infer_auto_device_map(model, no_split_module_classes=["M2M100Attention"])
checkpoint = "facebook/m2m100-12B-last-ckpt"
device_map["shared"] = "cpu"
device_map["encoder"] = "cpu"
device_map["decoder.embed_tokens"] = "cpu"
device_map["decoder.embed_positions"] = "cpu"
device_map["decoder.layers.0"] = "cpu"
device_map["decoder.layers.1"] = "cpu"
device_map["decoder.layers.2"] = "cpu"
device_map["decoder.layers.3"] = "cpu"
model = M2M100ForConditionalGeneration.from_pretrained(checkpoint, device_map=device_map, offload_folder="offload", offload_state_dict = True)
Following are the env specs:
Model Link: https://huggingface.co/facebook/m2m100-12B-last-ckpt
Python Version: 3.10
GPU: A100
GPU memory: 40 GB
RAM: 83.5 GB
CUDA version: 12.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
import torch
device=torch.device('cuda' if torch.cuda.is_available() else 'cpu')
from transformers import AutoConfig, M2M100ForConditionalGeneration, M2M100Tokenizer, AutoModel
from accelerate import infer_auto_device_map, init_empty_weights
from transformers import AutoModel, M2M100Config
config = M2M100Config.from_pretrained("facebook/m2m100-12B-last-ckpt")
with init_empty_weights():
    model = AutoModel.from_config(config)
device_map = infer_auto_device_map(model, no_split_module_classes=["M2M100Attention"])
checkpoint = "facebook/m2m100-12B-last-ckpt"
device_map["shared"] = "cpu"
device_map["encoder"] = "cpu"
device_map["decoder.embed_tokens"] = "cpu"
device_map["decoder.embed_positions"] = "cpu"
device_map["decoder.layers.0"] = "cpu"
device_map["decoder.layers.1"] = "cpu"
device_map["decoder.layers.2"] = "cpu"
device_map["decoder.layers.3"] = "cpu"
model = M2M100ForConditionalGeneration.from_pretrained(checkpoint, device_map=device_map, offload_folder="offload", offload_state_dict = True)
### Expected behavior
Expecting the model to load properly so that the following code can then be used for translation:
hi_text='''La vie est comme une boîte de chocolat.'''
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100-12B-last-ckpt")
encoded_hi = tokenizer(hi_text, return_tensors="pt").to('cuda')
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]) | 05-11-2023 12:18:53 | 05-11-2023 12:18:53 | Hi @abhishektcs1, thanks for reporting this issue!
Could you provide information about the running environment: run `transformers-cli env` in the terminal and copy-paste the output? <|||||>> Hi @abhishektcs1, thanks for reporting this issue!
>
> Could you provide information about the running environment: run `transformers-cli env` in the terminal and copy-paste the output?
Hi @amyeroberts, I am also facing the same error. Please find below the output of `transformers-cli env`:
------------------------------------------------------------------------------------------------------------------------
2023-05-13 05:36:18.558293: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/transformers/commands/env.py:63: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2023-05-13 05:36:22.918424: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:47] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.29.1
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
Please find attached the nvidia-smi output of the Google Colab Pro instance I am using.
<img width="932" alt="nvidia-smi" src="https://github.com/huggingface/transformers/assets/17768401/5c469e3d-1c60-4d9a-8481-19845504a3f6">
<|||||>@abhishektcs1 @sanyoggupta Could either of you also share a full traceback of the error encountered (the entire error message, from the first lines), preferably as a copy-paste of the text rather than a screenshot please?<|||||>> @abhishektcs1 @sanyoggupta Could either of you also share a full traceback of the error encountered (the entire error message, from the first lines), preferably as a copy-paste of the text rather than a screenshot please?
Hey, I am getting a similar error when I try out my code
This is the Traceback:
```
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
File "/home/ksuresh6/DataChat_Project/model.py", line 20, in <module>
model = load_checkpoint_and_dispatch(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/hulab/ksuresh6/anaconda3/envs/datachat_env/lib/python3.11/site-packages/accelerate/big_modeling.py", line 479, in load_checkpoint_and_dispatch
load_checkpoint_in_model(
File "/data/hulab/ksuresh6/anaconda3/envs/datachat_env/lib/python3.11/site-packages/accelerate/utils/modeling.py", line 982, in load_checkpoint_in_model
raise ValueError(f"{param_name} doesn't have any device set.")
ValueError: decoder.transformer.h.7.attn.causal_mask doesn't have any device set.
(datachat_env) ksuresh6@AMD4RTX3090GPU14:~/DataChat_Project$ python3 model.py
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
File "/home/ksuresh6/DataChat_Project/model.py", line 20, in <module>
model = load_checkpoint_and_dispatch(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/hulab/ksuresh6/anaconda3/envs/datachat_env/lib/python3.11/site-packages/accelerate/big_modeling.py", line 479, in load_checkpoint_and_dispatch
load_checkpoint_in_model(
File "/data/hulab/ksuresh6/anaconda3/envs/datachat_env/lib/python3.11/site-packages/accelerate/utils/modeling.py", line 982, in load_checkpoint_in_model
raise ValueError(f"{param_name} doesn't have any device set.")
ValueError: decoder.transformer.h.7.attn.causal_mask doesn't have any device set.
```
This is the code I am trying out:
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch
from transformers import AutoConfig
from accelerate import init_empty_weights
from accelerate import load_checkpoint_and_dispatch
checkpoint = "Salesforce/instructcodet5p-16b"
device = "cuda" # for GPU usage or "cpu" for CPU usage
model_path ='/home/ksuresh6/.cache/huggingface/hub/models--Salesforce--instructcodet5p-16b/snapshots/b5aaae8f54e8f13897e395fbc4c22567df0399ef'
tokenizer = AutoTokenizer.from_pretrained(model_path)
config = AutoConfig.from_pretrained(checkpoint,torch_dtype=torch.float16,low_cpu_mem_usage=True,trust_remote_code=True)
with init_empty_weights():
    model = AutoModelForSeq2SeqLM.from_config(config, trust_remote_code=True, torch_dtype=torch.float16)
model.tie_weights()
model = load_checkpoint_and_dispatch(
model, model_path, device_map="auto"
)
inputs = tokenizer.encode("def print_hello():", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_length=12)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
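For completeness, the same dispatch call with whole decoder blocks kept on a single device would look roughly like this; this is an untested sketch based on the `no_split_module_classes` argument documented for `load_checkpoint_and_dispatch`, and the module path is taken from the traceback above:
```python
# Sketch only: derive the decoder block class name from the model itself
# (decoder.transformer.h is the module path seen in the traceback) and ask
# accelerate not to split those blocks across devices.
block_cls = type(model.decoder.transformer.h[0]).__name__
model = load_checkpoint_and_dispatch(
    model, model_path, device_map="auto", no_split_module_classes=[block_cls]
)
```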
This is the output of `transformers-cli env`:
```
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.26.1
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
```
Any help is appreciated! Thanks in advance!<|||||>> @abhishektcs1 @sanyoggupta Could either of you also share a full traceback of the error encountered (the entire error message, from the first lines), preferably as a copy-paste of the text rather than a screenshot please?
Hi,
I am facing the same issue.
This is what I get after executing `!transformers-cli env`:

Please help me out with this problem.
Thank You!<|||||>@younesbelkada could this be the same bug you fixed on NLLB here? I see the no_split_module_class is also the attention layer.<|||||>Hmm this sounds more like you are using the infer auto device map in an inappropriate way indeed. You should put `"M2M100EncoderLayer"` and `"M2M100DecoderLayer"` inside `_no_split_modules`. Could you try again with these new values? Also can you share us a handy reproducible snippet? 🙏 <|||||>Thank you, I got it.
@sgugger you have posted great documentation on Hugging Face about "how to run these large models on our device".
https://huggingface.co/blog/accelerate-large-models<|||||>> Hmm this sounds more like you are using the infer auto device map in an inappropriate way indeed. You should put `"M2M100EncoderLayer"` and `"M2M100DecoderLayer"` inside `_no_split_modules`. Could you try again with these new values? Also can you share us a handy reproducible snippet? 🙏

Please help me out: what values should I pass in `no_split_modules`?
Thank You!<|||||>
these are the model layers.
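(A minimal sketch of how such a layer listing can be printed, assuming the instantiated model object from the notebook above is called `model`:)
```python
# Sketch: list the top-level submodules and their class names, which is
# what is needed in order to pick a value for no_split_module_classes.
for name, module in model.named_children():
    print(name, type(module).__name__)
```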
<|||||>Hi @anujsahani01
Can you try to put `GPTBigCodeBlock` in no split modules?<|||||>> Hi @anujsahani01 Can you try to put `GPTBigCodeBlock` in no split modules?
Yes it worked.
Thank You!
<|||||>> Hi @anujsahani01 Can you try to put `GPTBigCodeBlock` in no split modules?
Hey,
I have one more doubt; please help me with this.
I am fine-tuning the Hugging Face `HuggingFaceH4/starchat-alpha` model to build a data-science text-to-code generation bot.
This is the format of my dataset:
DatasetDict({
    train: Dataset({
        features: ['input_ids', 'labels'],
        num_rows: 5012
    })
    test: Dataset({
        features: ['input_ids', 'labels'],
        num_rows: 1325
    })
})
and each example in the dataset looks somewhat like this, following the format explained in the StarCoder documentation:
<|system|>
Below is a dialogue between a human and an ANUJ_AI
<|end|>
<|user|>
Minimum count of ind… so on
<|end|>
<|assistant|>
def possible ( x , S , N ) : …so on
<|end|>
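Each row is then just this template filled in and tokenized; a minimal sketch of what I mean, with placeholder question/answer text and the tokenizer assumed to come from the same checkpoint:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/starchat-alpha")

question = "Minimum count of ..."       # placeholder user prompt
answer = "def possible(x, S, N): ..."   # placeholder assistant answer

prompt = (
    "<|system|>\nBelow is a dialogue between a human and an ANUJ_AI\n<|end|>\n"
    "<|user|>\n" + question + "\n<|end|>\n"
    "<|assistant|>\n" + answer + "\n<|end|>"
)
tokens = tokenizer(prompt, truncation=True)
example = {"input_ids": tokens["input_ids"], "labels": list(tokens["input_ids"])}
```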
I am loading the model on my Colab in 8-bit format using the 🤗 Transformers BitsAndBytesConfig to save memory, then loading the model with a device map built using 🤗 Transformers AutoConfig and accelerate, which divided my model among the GPU, CPU RAM and my disk.
Once the model and its checkpoints were downloaded successfully, I used transformers.Trainer to train the model on my custom dataset,
using the code below:

but I am always getting this error:
Cannot copy out of meta tensor; no data!

Your inputs will be highly appreciated.
Thank You!<|||||>Hi @anujshani01
Thanks! Could you explain in a bit more detail how you train the 8-bit model? Are you sure you are using adapters leveraging the PEFT library?
Maybe if you can share the full snippet I can help you more on that! 💪 <|||||>> Hi @anujshani01 Thanks! Could you explain in a bit more detail how you train the 8-bit model? Are you sure you are using adapters leveraging the PEFT library? Maybe if you can share the full snippet I can help you more on that! 💪
I have updated the Colab notebook.
https://drive.google.com/file/d/1-ccrx1Q5tkLUYtZBGi5lNZGjPMyr_X9U/view?usp=sharing
I am not using the 8-bit model now.
I am using the 🤗 tool `accelerate` to initialize the model, and then using `load_checkpoint_and_dispatch` to load the model weights.
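Concretely, the loading step looks roughly like this; a sketch reconstructed from the description above, with the checkpoint path and offload folder as placeholders:
```python
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from transformers import AutoConfig, AutoModelForCausalLM

# Sketch: instantiate the model with empty weights, then dispatch the real
# weights across devices. "HuggingFaceH4/starchat-alpha" is the checkpoint
# named above; "offload" is a placeholder folder for CPU/disk offloading.
config = AutoConfig.from_pretrained("HuggingFaceH4/starchat-alpha")
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config)
model = load_checkpoint_and_dispatch(
    model,
    "path/to/downloaded/checkpoint",   # placeholder: local snapshot directory
    device_map="auto",
    no_split_module_classes=["GPTBigCodeBlock"],
    offload_folder="offload",
)
```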
But it is giving me this error:
ValueError: offload is not a folder containing a .index.json file.
I am not able to understand what exactly the error is.
Please have a look at the screenshot, which shows the offload folder and the error.

Please help me out with this error; it would be a great help.
Your inputs will be highly appreciated.
Thank You!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,293 | closed | unpin tf prob | # What does this PR do?
unpin tf prob | 05-11-2023 11:38:21 | 05-11-2023 11:38:21 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,292 | closed | Update custom_tools.mdx: fix link | Wrong parentheses
| 05-11-2023 10:46:51 | 05-11-2023 10:46:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,291 | closed | add GPTJ/bloom/llama/opt into model list and enhance the jit support | # What does this PR do?
extend text generation to more models
- generate: @gante
| 05-11-2023 10:23:06 | 05-11-2023 10:23:06 | @amyeroberts please help review<|||||>@jiqing-feng @yao-matrix<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts for context, this PR builds upon https://github.com/huggingface/transformers/pull/22265 -- an example of how to JIT trace text generation<|||||>> Thanks for iterating!
Thanks @amyeroberts, could you help to merge the PR? I checked the failed case in CI; it has nothing to do with my code. <|||||>@sywangyi Looking at the CI output, it seems that the examples test run failed before the generation tests were run. Could you rebase from main to include any recent updates? I believe the accelerate errors should now be resolved. |
transformers | 23,290 | closed | Problem with Transformers Agents: audio generation | ### System Info
Hello. I'm exploring Transformers Agents capabilities and wanted to generate an image and ask the agent to say what it contains.
I created the image with the command:
room = agent.run("Generate an image of a 19th century ballroom")
that works fine, but when I ask to describe the image with:
audio = agent.run("Read out loud the contents of the image image", image=room)
play_audio(audio)
it answers with the attached error.
[message.txt](https://github.com/huggingface/transformers/files/11450975/message.txt)
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
room = agent.run("Generate an image of a 19th century ballroom")
audio = agent.run("Read out loud the contents of the image image", image=room)
play_audio(audio)
### Expected behavior
It should create an audio file with the description of the image. | 05-11-2023 10:20:50 | 05-11-2023 10:20:50 | How does it go if you decompose the process in several steps ?
for example :
```python
room = agent.run("Generate an image of a 19th century ballroom")
description = agent.run("describe the contents of the `image`", image=room)
audio = agent.run("Read out load the content of `description`", description=description)
play_audio(audio)
```<|||||>It seems to work now, even if the model said "A red carpet".
The image was:

and the output:
==Explanation from the agent==
I will use the following tool: `image_generator` to generate an image according to the prompt.
==Code generated by the agent==
image = image_generator(prompt="19th century ballroom")
==Result==
Downloading (…)ain/text_to_image.py: 100% 1.76k/1.76k [00:00<00:00, 149kB/s]
A new version of the following files was downloaded from https://huggingface.co/space/huggingface-tools/text-to-image:
- text_to_image.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
Downloading (…)ain/model_index.json: 100% 541/541 [00:00<00:00, 38.8kB/s]
Fetching 15 files: 100% 15/15 [00:22<00:00, 1.59s/it]
Downloading (…)_checker/config.json: 100% 4.72k/4.72k [00:00<00:00, 79.0kB/s]
Downloading (…)rocessor_config.json: 100% 342/342 [00:00<00:00, 3.52kB/s]
Downloading (…)cheduler_config.json: 100% 308/308 [00:00<00:00, 3.79kB/s]
Downloading (…)cial_tokens_map.json: 100% 472/472 [00:00<00:00, 4.20kB/s]
Downloading (…)tokenizer/merges.txt: 100% 525k/525k [00:00<00:00, 3.01MB/s]
Downloading (…)_encoder/config.json: 100% 617/617 [00:00<00:00, 4.73kB/s]
Downloading pytorch_model.bin: 100% 1.22G/1.22G [00:12<00:00, 126MB/s]
Downloading pytorch_model.bin: 100% 492M/492M [00:06<00:00, 80.2MB/s]
Downloading (…)okenizer_config.json: 100% 806/806 [00:00<00:00, 9.97kB/s]
Downloading (…)e6a/unet/config.json: 100% 743/743 [00:00<00:00, 9.54kB/s]
Downloading (…)8e6a/vae/config.json: 100% 547/547 [00:00<00:00, 6.66kB/s]
Downloading (…)tokenizer/vocab.json: 100% 1.06M/1.06M [00:00<00:00, 8.59MB/s]
Downloading (…)on_pytorch_model.bin: 100% 3.44G/3.44G [00:21<00:00, 231MB/s]
Downloading (…)on_pytorch_model.bin: 100% 335M/335M [00:03<00:00, 55.6MB/s]
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
100% 25/25 [00:01<00:00, 18.37it/s]
==Explanation from the agent==
I will use the following tool: `image_captioner` to generate a description of the image.
==Code generated by the agent==
description = image_captioner(image)
==Result==
Downloading (…)rocessor_config.json: 100% 287/287 [00:00<00:00, 25.8kB/s]
Downloading (…)okenizer_config.json: 100% 438/438 [00:00<00:00, 34.5kB/s]
Downloading (…)solve/main/vocab.txt: 100% 232k/232k [00:00<00:00, 5.23MB/s]
Downloading (…)/main/tokenizer.json: 100% 711k/711k [00:00<00:00, 14.4MB/s]
Downloading (…)cial_tokens_map.json: 100% 125/125 [00:00<00:00, 10.3kB/s]
Downloading (…)lve/main/config.json: 100% 4.56k/4.56k [00:00<00:00, 324kB/s]
Downloading pytorch_model.bin: 100% 990M/990M [00:04<00:00, 233MB/s]
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1346: UserWarning: Using `max_length`'s default (20) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
warnings.warn(
==Explanation from the agent==
I will use the following tool: `text_reader` to read out loud the content of the variable `description`.
==Code generated by the agent==
audio_description = text_reader(description)
==Result==
Downloading (…)rocessor_config.json: 100% 433/433 [00:00<00:00, 35.6kB/s]
Downloading spm_char.model: 100% 238k/238k [00:00<00:00, 18.8MB/s]
Downloading (…)in/added_tokens.json: 100% 40.0/40.0 [00:00<00:00, 2.59kB/s]
Downloading (…)cial_tokens_map.json: 100% 234/234 [00:00<00:00, 21.5kB/s]
Downloading (…)okenizer_config.json: 100% 232/232 [00:00<00:00, 17.4kB/s]
Downloading (…)lve/main/config.json: 100% 2.06k/2.06k [00:00<00:00, 164kB/s]
Downloading pytorch_model.bin: 100% 585M/585M [00:05<00:00, 103MB/s]
Downloading (…)lve/main/config.json: 100% 636/636 [00:00<00:00, 52.3kB/s]
Downloading pytorch_model.bin: 100% 50.7M/50.7M [00:00<00:00, 96.5MB/s]
Downloading builder script: 100% 1.36k/1.36k [00:00<00:00, 94.6kB/s]
Downloading readme: 100% 1.01k/1.01k [00:00<00:00, 68.2kB/s]
Downloading and preparing dataset cmu-arctic-xvectors/default to /root/.cache/huggingface/datasets/Matthijs___cmu-arctic-xvectors/default/0.0.1/a62fea1f9415e240301ea0042ffad2a3aadf4d1caa7f9a8d9512d631723e781f...
Downloading data: 100% 17.9M/17.9M [00:00<00:00, 86.0MB/s]
Dataset cmu-arctic-xvectors downloaded and prepared to /root/.cache/huggingface/datasets/Matthijs___cmu-arctic-xvectors/default/0.0.1/a62fea1f9415e240301ea0042ffad2a3aadf4d1caa7f9a8d9512d631723e781f. Subsequent calls will reuse this data.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,289 | closed | Update transformers_agents.mdx | Make `huggingface-tools` to [`huggingface-tools`](https://huggingface.co/huggingface-tools) | 05-11-2023 10:04:26 | 05-11-2023 10:04:26 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,288 | closed | Temporarily increase tol for PT-FLAX whisper tests | # What does this PR do?
Flax whisper equivalence tests have also started to fail in a flaky manner with small increase in the difference between the PT and FLAX model e.g. for [this run](https://app.circleci.com/pipelines/github/huggingface/transformers/64230/workflows/e2d42ca4-f367-4a85-9054-a0ea99e49849/jobs/794534).
Flax equivalent of: #23257
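At its core the flaky check is just an elementwise comparison with a tolerance, so the change amounts to loosening that bound; an illustrative sketch, not the actual test helper:
```python
import numpy as np

# Illustrative only: compare PyTorch and Flax outputs with a larger
# absolute tolerance so tiny numerical drift no longer fails the test.
def assert_close(pt_output, flax_output, tol=4e-2):  # tol value is an assumption
    np.testing.assert_allclose(np.array(pt_output), np.array(flax_output), atol=tol)
```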
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 05-11-2023 09:29:53 | 05-11-2023 09:29:53 | Updated issue #23258 to reference this PR too<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,287 | closed | Added missing " in CHAT_PROMPT_TEMPLATE | # What does this PR do?
It adds a missing `"` in `CHAT_PROMPT_TEMPLATE` | 05-11-2023 09:05:29 | 05-11-2023 09:05:29 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Merging as failing test is independent of this PR and has been resolved on main. |