repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 22,378 | closed | [performance] ensure `causal_mask` is created directly on device | # What does this PR do?
@tjruwase and @tohtana discovered that causal_mask is currently being created on CPU then moved to GPU during the forward pass of OPT (and we think other models). This appears to be causing a significant performance degradation on multi-gpu environments due to parallel host to device copies going on. It's not 100% clear to us why this is so bad but here is what we observe before and after this patch:
Before this patch w. OPT-125m on x8 A100s:
<img width="649" alt="image" src="https://user-images.githubusercontent.com/645595/227668447-bf6840dd-bbc4-4520-8a9f-33f046eeb4c2.png">
After the patch:
<img width="628" alt="image" src="https://user-images.githubusercontent.com/645595/227668475-6ed2f1ca-d18a-4776-862d-4be499f62f39.png">
These numbers were gathered from a modified version of https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py but turning on `wall_clock_breakdown: true` in our deepspeed config.
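For intuition, the change itself boils down to avoiding an implicit host-to-device copy on every forward pass. A minimal sketch of the before/after pattern (illustrative only; the real diff for one of the affected models appears further down in this thread, and `device` stands in for whatever device handle the caller already has):
```python
import torch

tgt_len = 16
dtype = torch.float16
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# before: the mask is materialized on CPU, then copied to the GPU inside forward()
mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min)
mask = mask.to(device)

# after: the mask is created directly on the target device, so no host-to-device copy
mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min, device=device)
```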
One major complication we see in accepting this PR is that the two functions being modified are copied across lots of different models and the `make fix-copies` script doesn't seem to address all of them correctly across both `_make_causal_mask` and `_prepare_decoder_attention_mask`
## Who can review?
Tagging @sgugger and @stas00 to help triage to the right people
| 03-25-2023 00:43:11 | 03-25-2023 00:43:11 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @thomasw21 @NouamaneTazi since both of you are experts on this kind of thing - to see if you have any general opinion and/or if you would like to review this PR too.<|||||>@jeffra Would it be possible for you (and/or @tjruwase and @tohtana) to provide your script that finds/measures/profiles the running time for this issue 🙏 . It would be super helpful for us to dive into internally too.<|||||>> LGTM, thanks a lot for the fix! Note that the same modification needs to be applied to BART (since OPT copies from BART) in order for all quality checks to pass.
FYI (@sgugger) : @stas00 mentioned on Slack
> I tried to support Jeff by telling him how to make copies, but he found that many copies are either not tagged properly or the copied functions were completely renamed, and thus it's very difficult to make an automated transformers-wide fix
and in this PR description, the author(s) wrote:
> One major complication we see in accepting this PR is that the two functions being modified are copied across lots of different models and the make fix-copies script doesn't seem to address all of them correctly across both _make_causal_mask and _prepare_decoder_attention_mask
It's likely that they expect us to help on this part. I can help (I was waiting for the approval for the fix in `OPT` which is done now.)<|||||>I think just copying the same fix to BART and then applying `make fix-copies` is simple enough for this PR. Dealing with functions that are not copies or are named differently can indeed be done in followup PRs.<|||||>Ok, I've updated the BART implementation and attempted to get `make fix-copies` to work for me but I think I might be doing something wrong. Some of the original issues I saw are now fixed on other models (e.g., https://github.com/huggingface/transformers/pull/22382 adds a `# Copied from` tag for llama). However, I am still seeing issues, I think coming from the fix-up scripts getting confused with the function signature change of `_make_causal_mask`. Also, I added the `# Copied from` tag into opt for `_make_causal_mask`, which I think was part of my previous issue.
Can someone try `make fix-copies` on their side with this? You should be able to push to my branch.
For example, here's the diff of `src/transformers/models/xglm/modeling_xglm.py` after applying `make fix-copies` in this branch, it does not add `device` as an argument to `_make_causal_mask`:
```diff
diff --git a/src/transformers/models/xglm/modeling_xglm.py b/src/transformers/models/xglm/modeling_xglm.py
index 8a1955793..59851bd85 100755
--- a/src/transformers/models/xglm/modeling_xglm.py
+++ b/src/transformers/models/xglm/modeling_xglm.py
@@ -119,13 +119,13 @@ def _make_causal_mask(input_ids_shape: torch.Size, dtype: torch.dtype, past_key_
Make causal mask used for bi-directional self-attention.
"""
bsz, tgt_len = input_ids_shape
- mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min))
- mask_cond = torch.arange(mask.size(-1))
+ mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device)
+ mask_cond = torch.arange(mask.size(-1), device=device)
mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
mask = mask.to(dtype)
if past_key_values_length > 0:
- mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype), mask], dim=-1)
+ mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1)
return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
```
It modifies all of these models, so ideally I don't want to edit these manually :)
```
modified: src/transformers/models/bigbird_pegasus/modeling_bigbird_pegasus.py
modified: src/transformers/models/biogpt/modeling_biogpt.py
modified: src/transformers/models/blenderbot/modeling_blenderbot.py
modified: src/transformers/models/blenderbot_small/modeling_blenderbot_small.py
modified: src/transformers/models/informer/modeling_informer.py
modified: src/transformers/models/llama/modeling_llama.py
modified: src/transformers/models/m2m_100/modeling_m2m_100.py
modified: src/transformers/models/marian/modeling_marian.py
modified: src/transformers/models/mbart/modeling_mbart.py
modified: src/transformers/models/mvp/modeling_mvp.py
modified: src/transformers/models/nllb_moe/modeling_nllb_moe.py
modified: src/transformers/models/pegasus/modeling_pegasus.py
modified: src/transformers/models/pegasus_x/modeling_pegasus_x.py
modified: src/transformers/models/plbart/modeling_plbart.py
modified: src/transformers/models/speech_to_text/modeling_speech_to_text.py
modified: src/transformers/models/speech_to_text_2/modeling_speech_to_text_2.py
modified: src/transformers/models/speecht5/modeling_speecht5.py
modified: src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
modified: src/transformers/models/trocr/modeling_trocr.py
modified: src/transformers/models/whisper/modeling_whisper.py
modified: src/transformers/models/xglm/modeling_xglm.py
```<|||||>Ah yes, `make fix-copies` does not change the signature of the function so that is indeed something to edit manually. If it's too much work I can try to push this to your branch tomorrow.<|||||>> Ah yes, `make fix-copies` does not change the signature of the function so that is indeed something to edit manually. If it's too much work I can try to push this to your branch tomorrow.
Sounds good, I might have some time this afternoon for this. Otherwise feel free to do it :) Just wasn't sure if this was an expected issue with the copy scripts or not.<|||||>Okay all the models should be fixed now, `make fixup` is clear on my local tests. |
transformers | 22,377 | closed | load_in_8bit now respects 'balanced' device maps in multi-gpu environments | # What does this PR do?
Fixes `max_memory` generation for the 'auto', 'balanced' and 'balanced_low_0' `device_map`s for models being loaded in 8-bit
Fixes # (N/A)
No linked issue was found, but one user made a [comment](https://github.com/TimDettmers/bitsandbytes/issues/177#issuecomment-1481609654) about it in the bitsandbytes issues, and it caused confusion and workarounds elsewhere.
The problem was easily worked around by manually passing a device map or max memory config.
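For reference, the manual workaround looks something like this (a sketch; the model id and per-GPU limits are placeholders to adapt to your setup):
```python
from transformers import AutoModelForCausalLM

checkpoint = "facebook/opt-1.3b"  # placeholder model id

model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    load_in_8bit=True,
    device_map="auto",
    max_memory={0: "10GiB", 1: "10GiB"},  # explicit per-GPU budget instead of the auto-computed one
)
```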
Before this change, the following code would attempt to load the whole model on the first GPU in a two gpu setup, potentially causing OOM errors. After the change, it loads it evenly across GPUs, as intended.
```python
model = AutoModelForCausalLM.from_pretrained(
checkpoint,
load_in_8bit=True,
device_map="auto",
)
```
Additionally, I removed 'pipeline' from my earlier comment in a previous PR, as true Pipeline Parallelism would require some more non-trivial changes to the model.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 03-24-2023 23:27:39 | 03-24-2023 23:27:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I'm not sure if this is caused by your work but I met this problem:
I'm training LLaMA-13B with PEFT, using lora + modules_to_save=['model.embed_tokens', 'lm_head']
And it can run training normally
But the final model file doesn't contain the lm_head part, only lora + embed_tokens.
And when I use DDP or a single card, it can save everything normally.<|||||>(I'm using your fork + your modified version of the alpaca-lora code)
I think I was able to reproduce what you were talking about on [your repo](https://github.com/KohakuBlueleaf/guanaco-lora) though. Do you mean that when you run export_hf_checkpoint.py, `head_changed` shows `False`?
That's what I am seeing. What's strange is that the weights *are* in the lora, they're just named "base_model.model.lm_head.0.weight" instead of "base_model.model.lm_head.weight"
If you add `adapters_weights["base_model.model.lm_head.weight"] = adapters_weights["base_model.model.lm_head.0.weight"]` to peft_model.py right after the lora is loaded, but before it is merged with the base_model, then you can get `head_changed: True`
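Spelled out, the export-side workaround is a one-line key remap before merging (a sketch; `adapter_path` is a placeholder for wherever the LoRA checkpoint lives):
```python
import torch

adapter_path = "adapter_model.bin"  # placeholder path
adapters_weights = torch.load(adapter_path, map_location="cpu")

old_key = "base_model.model.lm_head.0.weight"
if old_key in adapters_weights:
    # remap the module-list-style key back to the name the merge code expects
    adapters_weights["base_model.model.lm_head.weight"] = adapters_weights.pop(old_key)
```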
This might be from my change... as the lm_head is a special case when loading in 8bit, but I'm not sure. I see the same result for *both* single or multi-gpu. I've run out of time today to investigate though, so I'll have to dig in more later, possibly tonight.<|||||>> @KohakuBlueleaf that's a bit disingenuous, as you've changed quite a few other things 😉
>
> I think I was able to reproduce what you were talking about on [your repo](https://github.com/KohakuBlueleaf/guanaco-lora) though. Do you mean that when you run export_hf_checkpoint.py, `head_changed` shows `False`?
>
> That's what I am seeing. What's strange, is that the weights _are_ in the lora, they're just named "base_model.model.lm_head.0.weight" instead of "base_model.model.lm_head.weight"
>
> If you add `adapters_weights["base_model.model.lm_head.weight"] = adapters_weights["base_model.model.lm_head.0.weight"]` to peft_model.py right after the lora is loaded, but before it is merged with the base_model, then you can get `head_changed: True`
>
> This might be from my change... as the lm_head is a special case when loading in 8bit, but I'm not sure. I see the same result for _both_ single or multi-gpu. I've run out of time today to investigate though, so I'll have to dig in more later, possibly tonight.
Yeah I also figured it out
But thx for your reply! |
transformers | 22,376 | closed | AttributeError: 'Tensor' object has no attribute 'tile' | ### System Info
- `transformers` version: 4.27.3
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.7 (gpu)
- Jax version: 0.4.6
- JaxLib version: 0.4.6
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Try to run the code below:
```python
from transformers import GPTNeoForCausalLM, GPT2Tokenizer

model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")

prompt = (
    "In a shocking finding, scientists discovered a herd of unicorns living in a remote, "
    "previously unexplored valley, in the Andes Mountains. Even more surprising to the "
    "researchers was the fact that the unicorns spoke perfect English."
)

input_ids = tokenizer(prompt, return_tensors="pt").input_ids

gen_tokens = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.9,
    max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
```
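For what it's worth, `Tensor.tile` was only added in PyTorch 1.8, which lines up with the 1.7.1 install reported above; a quick check (illustrative):
```python
import torch

print(torch.__version__)
print(hasattr(torch.Tensor, "tile"))  # False on 1.7.x, True on 1.8+
```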
### Expected behavior
I would expect model to generate the predicted next sentence/text. | 03-24-2023 23:00:03 | 03-24-2023 23:00:03 | PyTorch 1.7 is not supported anymore, we only ensure support for PyTorch >= 1.9 Could you try updating your Pytorch install and see if it fixes the issue?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,375 | closed | Pytorch 2 generation/utils.py , 'torch.distributed' has no attribute 'world_size' | ### System Info
transformers 4.28.0.dev0
pytorch 2
cuda 117
File "/usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py", line 1196, in generate
if is_deepspeed_zero3_enabled() and dist.world_size() > 1:
AttributeError: module 'torch.distributed' has no attribute 'world_size'
https://github.com/huggingface/transformers-bloom-inference/blob/7bea3526d8270b4aeeefecc57d7d7d638e2bbe0e/bloom-inference-scripts/bloom-ds-zero-inference.py
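(For reference, the supported accessor in `torch.distributed` is `get_world_size()`; a sketch of the distinction, not the actual merged patch:)
```python
import torch.distributed as dist

# fails on PyTorch 2.x: torch.distributed exposes no `world_size` attribute
# world_size = dist.world_size()

# supported API, presumably what the follow-up PR switches to
if dist.is_available() and dist.is_initialized():
    world_size = dist.get_world_size()
```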
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Pytorch 2 and generate with deepspeed stage 3.
https://github.com/huggingface/transformers-bloom-inference/blob/7bea3526d8270b4aeeefecc57d7d7d638e2bbe0e/bloom-inference-scripts/bloom-ds-zero-inference.py
### Expected behavior
No error | 03-24-2023 22:44:12 | 03-24-2023 22:44:12 | Looks like get_world_size() is supposed to be used in pytorch.distributed. [I will make a PR.](https://github.com/huggingface/transformers/pull/22381)<|||||>Should be fixed now that the PR above has been merged. |
transformers | 22,374 | closed | Report safetensors version in transformers-cli env | # What does this PR do?
This PR adds `safetensors` to the info reported by `transformers-cli env` and in particular puts a note when safetensors is ignored because of a too-old PyTorch (see #22370) | 03-24-2023 20:49:45 | 03-24-2023 20:49:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,373 | closed | llama model cannot run with accelerate setting | ### System Info
transformer version 4.28.0.dev0
Error
```
loading file tokenizer_config.json
loading weights file ./llama1/pytorch_model.bin
Generate config GenerationConfig {
"_from_model_config": true,
"bos_token_id": 0,
"eos_token_id": 1,
"pad_token_id": 1,
"transformers_version": "4.28.0.dev0"
}
[15:28:22] WARNING Sending process 854275 closing signal SIGTERM api.py:699
WARNING Sending process 854276 closing signal SIGTERM api.py:699
WARNING Sending process 854277 closing signal SIGTERM api.py:699
WARNING Sending process 854279 closing signal SIGTERM api.py:699
WARNING Sending process 854280 closing signal SIGTERM api.py:699
WARNING Sending process 854281 closing signal SIGTERM api.py:699
WARNING Sending process 854282 closing signal SIGTERM api.py:699
[15:28:25] ERROR failed (exitcode: -9) local_rank: 3 (pid: 854278) of binary: /usr/bin/python3
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
We tried to train on the Pile with an 8-GPU accelerate setting
### Expected behavior
I would expect it to load successfully | 03-24-2023 20:34:38 | 03-24-2023 20:34:38 | Please follow the template of the issues as there is nothing anyone can do to help with so little information.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,372 | open | Add Restormer | ### Model description
**Restormer: Efficient Transformer for High-Resolution Image Restoration** was published in CVPR 2022, which introduced a new Vision Transformer based architecture for Image Restoration tasks like Deraining, Motion Deblurring, Defocus Deblurring and Denoising. It reduced the time complexity of Self Attention in Vision Transformers from O(n<sup>2</sup>) to O(n) by introducing **Multi-Dconv Head Transposed Attention**. It also introduced **Gated-Dconv Feed-Forward Network**.
@manyana72 and I would like to add this model to Huggingface.
cc: @NielsRogge
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
[Paper](https://arxiv.org/pdf/2111.09881.pdf), [Code Implementation](https://github.com/swz30/Restormer) and [pretrained model weights](https://github.com/swz30/Restormer/releases/tag/v1.0) | 03-24-2023 20:25:27 | 03-24-2023 20:25:27 | Hi @tushdon2, please let me know if and how can I contribute to this model. |
transformers | 22,371 | closed | Conv1D doesn't output token-wise results consistently. | ### System Info
Hi, I recently observed from huggingface's GPT2 that
(1) the output (logits y1, ..., yN) from using a sequence with N tokens (say x1, ..., xN)
(2) the output (logits z1, ..., zM) from using the earlier part of the above sequence (say x1, ..., xM)
are not perfectly matched (y1!=z1,..., yM!=zM) during inference (so when causal mask is applied). I tried to figure out why this happened and realized that this is related to how `Conv1D`'s `forward` module is implemented: https://github.com/huggingface/transformers/blob/main/src/transformers/pytorch_utils.py#L100-L104
Thing is, we internally use `addmm` (say b + [x1, ..., xN]*W), which doesn't give you consistent row-wise outputs (say b + [x1, ..., xM]*W) although they should be the same theoretically.
I generated an example and proposed a way to resolve the issue below:
```python
import torch
torch.manual_seed(0)
torch.cuda.manual_seed(0)
input_dim = 786
feature_dim = 2304
x1 = torch.randn((1, 38, input_dim), device='cuda') # (B, N, Fi) where N is the number of tokens in a sequence.
x2 = x1[:, :10] # (B, M, Fi) where M=10 is to gather the early M tokens from the sequence.
b = torch.randn((feature_dim,), device='cuda') # biases
w = torch.randn((input_dim, feature_dim), device='cuda') # weights
def addmm(x, b, w):
x = x.view(-1, x.size(-1))
return torch.addmm(b, x, w)
def addbmm(x, b, w): # x: (B, N, Fi), b: (Fh,), w: (Fi, Fh)
batch_size, seq_len = x.size(0), x.size(1) # B, N
x = x.view(batch_size * seq_len, 1, x.size(-1)) # (B * N, 1, Fi)
# (1, Fi, Fh).expand ( (B * N, Fi, Fh) ) --> (B * N, Fi, Fh)
w = w.unsqueeze(0).expand((batch_size * seq_len,) + w.size())
return torch.matmul(x, w).add(b).view(batch_size * seq_len, -1) # (B * N, -1)
print("result (addmm):\n", addmm(x1, b, w)[:10] == addmm(x2, b, w))
print("result (addbmm):\n", addbmm(x1, b, w)[:10] == addbmm(x2, b, w))
```
The 1st function `addmm` is the one from huggingface's `Conv1D`, and the 2nd function `addbmm` is what I implemented to avoid numerical error. For the printend outputs, we ideally have to get `True` values always, but this is not the case of `addmm`.
```bash
result (addmm):
tensor([[False, False, False, ..., False, True, True],
[ True, True, False, ..., False, False, True],
[False, False, False, ..., False, False, False],
...,
[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False],
[False, False, True, ..., False, False, False]], device='cuda:0')
result (addbmm):
tensor([[True, True, True, ..., True, True, True],
[True, True, True, ..., True, True, True],
[True, True, True, ..., True, True, True],
...,
[True, True, True, ..., True, True, True],
[True, True, True, ..., True, True, True],
[True, True, True, ..., True, True, True]], device='cuda:0')
```
Intuitively, I enforced batched matmul computation by explicitly creating a batch dimension for tensors, which leads to explicit row-wise computations and ends up with ideal results.
Thus, I think `forward()` part of `Conv1D` (https://github.com/huggingface/transformers/blob/main/src/transformers/pytorch_utils.py#L100-L104) should be updated as
```python
def forward(self, x):
size_out = x.size()[:-1] + (self.nf,)
x = x.view(x.size()[:-1].numel(), 1, x.size(-1))
weight = self.weight.unsqueeze(0).expand((x.size()[:-1].numel(),) + self.weight.size())
x = torch.matmul(x, weight).add(self.bias)
x = x.view(size_out)
return x
```
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I provided an example above.
### Expected behavior
After fixing the bug, the earlier partial logit outputs shouldn't be affected by the future tokens. | 03-24-2023 20:17:01 | 03-24-2023 20:17:01 | Hey! Wow that's interesting.
Two parts of answer:
1. Very cool. We can use `torch.testing.assert_allclose` to checkout the max differences, and indeed I have the following outputs:
```python
In [73]: torch.testing.assert_allclose(addmm(x1, b, w)[:10], addbmm(x2, b, w))
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
Cell In[73], line 1
----> 1 torch.testing.assert_allclose(addmm(x1, b, w)[:10], addbmm(x2, b, w))
File /opt/conda/envs/py39/lib/python3.9/site-packages/torch/testing/_deprecated.py:32, in warn_deprecated.<locals>.outer_wrapper.<locals>.inner_wrapper(*args, **kwargs)
30 @functools.wraps(fn)
31 def inner_wrapper(*args: Any, **kwargs: Any) -> Any:
---> 32 return_value = fn(*args, **kwargs)
33 tail = instructions(name, args, kwargs, return_value) if callable(instructions) else instructions
34 msg = (head + tail).strip()
File /opt/conda/envs/py39/lib/python3.9/site-packages/torch/testing/_deprecated.py:80, in assert_allclose(actual, expected, rtol, atol, equal_nan, msg)
77 if rtol is None and atol is None:
78 rtol, atol = _get_default_rtol_and_atol(actual, expected)
---> 80 torch.testing.assert_close(
81 actual,
82 expected,
83 rtol=rtol,
84 atol=atol,
85 equal_nan=equal_nan,
86 check_device=True,
87 check_dtype=False,
88 check_stride=False,
89 msg=msg or None,
90 )
[... skipping hidden 1 frame]
File /opt/conda/envs/py39/lib/python3.9/site-packages/torch/testing/_comparison.py:1093, in assert_equal(actual, expected, pair_types, sequence_types, mapping_types, msg, **options)
1090 return
1092 # TODO: compose all metas into one AssertionError
-> 1093 raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 9 / 23040 (0.0%)
Greatest absolute difference: 4.00543212890625e-05 at index (2, 952) (up to 1e-05 allowed)
Greatest relative difference: 0.0080592538321523 at index (8, 1875) (up to 0.0001 allowed)
```
So the outputs match up to 1e-2, which is not that great. Your fix is indeed good in terms of precision as `torch.testing.assert_allclose(addbmm(x1, b, w)[:10], addbmm(x2, b, w))` is True.
2. My concern is: is this faster or slower in terms of computation? Is `torch.addmm` more optimised (and requires fewer calls to different views), and thus faster? Would the fix break ONNX tracing? And most importantly, is this backward compatible?
If it is indeed a fix, meaning that this will bring our logits closer to what they were from the original logits, we might consider this as a potential good change, but the other concerns are still there!
The problem is that GPT2 is an old model, it's very hard to change it (especially something as fundamental as the Conv).
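One way to answer the speed question empirically is a quick micro-benchmark (a sketch with illustrative GPT-2-like shapes, not a rigorous measurement; it assumes a CUDA device, as in the issue):
```python
import torch
from torch.utils import benchmark

x = torch.randn(8 * 512, 768, device="cuda")   # (B*N, Fi)
w = torch.randn(768, 2304, device="cuda")      # (Fi, Fh)
b = torch.randn(2304, device="cuda")           # (Fh,)

t_addmm = benchmark.Timer(
    stmt="torch.addmm(b, x, w)",
    globals={"torch": torch, "x": x, "w": w, "b": b},
)
t_batched = benchmark.Timer(
    stmt="torch.matmul(x.unsqueeze(1), w.unsqueeze(0).expand(x.size(0), *w.shape)).add(b)",
    globals={"torch": torch, "x": x, "w": w, "b": b},
)
print(t_addmm.timeit(100))
print(t_batched.timeit(100))
```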
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,370 | closed | [safetensors] don't use in `torch<1.10` | `safetensors` only seems to work with pt>=1.10. This PR fixes this breakage:
```
python -c 'import sys; from transformers import AutoModel; AutoModel.from_pretrained(sys.argv[1])' "bigscience/bigscience-small-testing"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 471, in from_pretrained
return model_class.from_pretrained(
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/modeling_utils.py", line 2424, in from_pretrained
state_dict = load_state_dict(resolved_archive_file)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/modeling_utils.py", line 413, in load_state_dict
return safe_load_file(checkpoint_file)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/safetensors/torch.py", line 101, in load_file
result[k] = f.get_tensor(k)
AttributeError: module 'torch' has no attribute 'frombuffer'
``` | 03-24-2023 19:40:01 | 03-24-2023 19:40:01 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I need a PR merged to test my fix for the torch and tf run which is broken on main (but the fix itself does not trigger any tests). This one seems a good candidate so merging now and will keep an eye on the test :-) <|||||>Shouldn't those be fixed in safetensors instead? i.e. here https://github.com/huggingface/safetensors/blob/5c1d366813e46c6f9f2c71aa8b89e0c916a92b2f/bindings/python/setup.py#L23 ?<|||||>Can be both :)<|||||>Also, I left torch version out on purpose at the time, as I wasn't sure about the support policy and whether or not supporting older versions was worth the effort (since they require a lot more handling from safetensors itself).
transformers | 22,369 | closed | Make inheritance consistent for classes having a `generate` method | ### Feature request
Hi, I'm creating this issue to see whether this may be blocking for other people or not.
`GenerationMixin` does not inherit from `nn.Module`, while `WhisperForConditionalGeneration` does. Both now have a `generate` method.
This issue is to highlight that PRs such as https://github.com/huggingface/transformers/pull/21252 may well break the workflow of users that expect `generate` to be defined in `GenerationMixin`, not inheriting from `nn.Module`. Such changes can either result in silent errors, or errors due to the unexpected inheritance.
For example, https://github.com/huggingface/transformers/pull/21252 makes it close to impossible to have a `TensorRTModelForSpeechSeq2Seq(GenerationMixin)` class that does not inherit from nn.Module, uses transformers' `generate`, and is able to handle several architectures, which was something possible before.
I don't have a better solution to propose right now, as I understand that different models needing different `generate` implementations will be a more common need in the future.
### Motivation
There is no reason to inherit from `nn.Module` to use `generate`.
### Your contribution
/ | 03-24-2023 19:26:30 | 03-24-2023 19:26:30 | cc @gante @ArthurZucker <|||||>As seen in Slack, let's see if there is interest from others before acting on it, especially as the `generate` method for other modalities than text is prone to evolve to support other use-cases.
If anyone stumbles upon this issue as they're blocked by the above, please comment below to let us know. Thanks!<|||||>Thank you for raising the issue @fxmarty!
I think this is an example of where improving the modularity on `.generate()` could benefit non-standard use cases. In particular for this issue, some models rewrite `.generate()` in the model class itself (`Whisper`, `BLIP`, `RAG`, ...) -- it could be avoided if we had some option to add pre- and post-processing steps to `.generate()`. I have an idea in the back of my mind, but I haven't put it into words.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,368 | closed | [Trainer] add disclaimer that full_determinism is slow | Flag to users that `--full_determinism` shouldn't be used in production as it's likely to worsen the performance. | 03-24-2023 18:52:23 | 03-24-2023 18:52:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,367 | closed | Test fetch v2 | # What does this PR do?
This PR rewrites the test fetcher util to be more accurate in the tests collection, and also comes with a restriction on the tests run when a large amount of tests are picked when modifying a core file (like modeling_utils).
The code that extracts the dependencies of a given module now inspects the inits to pinpoint the exact location of imported objects. So for instance if a test file has an import `from transformers import BertModel`, this new version will detect a dependency on `transformers/models/bert/modeling_bert.py`. As a comparison, the previous version stopped at `transformers/__init__.py`. This removes the need for all the complex logic that tried to match a given file with its corresponding tests, we now just look at the dependencies of the test file.
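To make that concrete, resolving `from transformers import BertModel` down to its defining submodule amounts to walking the package inits; a simplified illustration of the idea (not the actual code in `utils/tests_fetcher.py`, which also has to deal with the lazy-import structure of the real inits):
```python
import ast
from pathlib import Path
from typing import Optional

def find_defining_submodule(package_dir: Path, name: str) -> Optional[str]:
    """Return the submodule this package's __init__.py imports `name` from (simplified)."""
    tree = ast.parse((package_dir / "__init__.py").read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.ImportFrom) and any(alias.name == name for alias in node.names):
            return node.module  # e.g. "models.bert.modeling_bert" for BertModel
    return None
```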
The second change is that when a given file is seen to trigger too many model tests (current trigger is set at half the models, it can evolve), it will only keep the tests relative to a given list of important models. If a PR changes many modeling files, all the tests for those models will still run, but if a PR only changes modeling_utils (for instance), this will trigger the core model tests only. The list of important models is built using:
- the most downloaded models in the last 30 days
- making sure each pipeline has a model in that list
To bypass this rule, one can add a special command in a commit message (circleCI does not have access to labels, so I can't rely on that):
- Including [skip ci] or [ci skip] or [circleci skip] or [skip circleci] or any variants with - or _ instead of a space will skip all tests
- Including [test all models] or any variant with the words in another order and/or with - or _ instead of a space will run all tests found without filtering on important models.
- Including [test all] or [all test] or any variants with - or _ instead of a space will run all tests.
A couple of adjustments to Transformers should be done (in follow-up PRs) to have the test fetcher be more accurate and more efficient:
- make sure all inits don't define any objects. Most of our inits only import all the stuff, and the test fetcher assumes they are all like that. Some inits (like `pipeline/__init__.py`) define real objects, it would be best to move them to a submodule.
- make sure test files test one thing: for instance `test_modeling_common.py` contains both the common tests and the test of the modeling_utils module. It would be best to split those in two files.
Lastly, this PR adds lots of tests to make sure future work doesn't break the test fetcher :-)
To see how the test fetcher behaves on some examples:
- for a modification in modeling_opt.py: only test_modeling_opt is run [[fetch summary](https://app.circleci.com/pipelines/github/huggingface/transformers/60906/workflows/79cfaf18-d0da-4a5d-8bc6-fd8599b63468/jobs/747209/artifacts)] [[job page](https://app.circleci.com/pipelines/github/huggingface/transformers/60710/workflows/f43d1235-3337-484f-b79a-9260e24e3664)]
- for a modification in modeling_bert.py (which is imported in all the tests basically) all tests using BERT are run, but filtered to the list of important models [[fetch summary](https://app.circleci.com/pipelines/github/huggingface/transformers/60906/workflows/79cfaf18-d0da-4a5d-8bc6-fd8599b63468/jobs/747209/artifacts)] [[job page](https://app.circleci.com/pipelines/github/huggingface/transformers/60906/workflows/f85bd255-a01a-49af-a852-b7a00e13aad3)]
- for a modification in a pipeline file: all model tests are run, filtered to the list of important models [[fetch summary](https://app.circleci.com/pipelines/github/huggingface/transformers/60915/workflows/ab1dc699-5ef7-4b33-bcab-62f76688e9f4/jobs/747362/artifacts)] [[job page](https://app.circleci.com/pipelines/github/huggingface/transformers/60910/workflows/3f9df353-e3b0-490c-9724-f6ef59df5599)]
- for a modification in the main `__init__.py` all tests are run, but filtered to the list of important models [[fetch summary](https://app.circleci.com/pipelines/github/huggingface/transformers/60910/workflows/1f8039a1-af57-477d-bd20-43324d574549/jobs/747284/artifacts)] [[job page](https://app.circleci.com/pipelines/github/huggingface/transformers/60915/workflows/9615653e-b1e5-4c74-9668-ca7983bc3b68)]
- for a modification in the `setup.py` all tests are run [[fetch summary](https://app.circleci.com/pipelines/github/huggingface/transformers/60913/workflows/fd3a907c-98a3-40b0-8423-d1346b23498c/jobs/747326/artifacts)] [[job page](https://app.circleci.com/pipelines/github/huggingface/transformers/60913/workflows/809e8564-7df0-4321-8d7c-4e68c1d123a7)] | 03-24-2023 18:49:59 | 03-24-2023 18:49:59 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I guess it's ready for a review?<|||||>No I haven't finished this PR yet.<|||||>Hi @sgugger . Thank you a lot for working on this important task! I feel it's better for me to look this work in depth, and I tried to play with the test fetcher (on `main` and on this PR) to understand it better.
However, the first thing I tried (by following some sentences you mentioned) makes me a somehow confused. Here is what I saw:
- On the two branches `main` (or a new branch from it) and `test_fetch_v2`, do the following steps:
- change the test file `tests/models/bert/test_modeling_bert.py` (simply adding some dummy line like `foo = 1`)
- commit the change
```bash
git add tests/models/bert/test_modeling_bert.py
git commit -m "dummy commit"
```
- run the test fetcher against the previous commit
```bash
python utils/tests_fetcher.py --diff_with_last_commit
```
- Now, the results:
TL;DR: `test_modeling_bert.py` is not included by the new version of test fetcher. But I think it should be included.
- on `main`
(`tests/models/bert/test_modeling_bert.py` is in `TEST TO RUN` and in the file `test_list.txt`)
```
### DIFF ###
### MODIFIED FILES ###
- tests/models/bert/test_modeling_bert.py
### IMPACTED FILES ###
- tests/models/auto/test_modeling_auto.py
- tests/models/auto/test_modeling_tf_auto.py
- tests/models/bert/test_modeling_bert.py
- tests/models/encoder_decoder/test_modeling_encoder_decoder.py
- tests/models/speech_encoder_decoder/test_modeling_speech_encoder_decoder.py
- tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py
- tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py
### TEST TO RUN ###
- tests/models/auto/test_modeling_auto.py
- tests/models/auto/test_modeling_tf_auto.py
- tests/models/bert/test_modeling_bert.py
- tests/models/encoder_decoder/test_modeling_encoder_decoder.py
- tests/models/speech_encoder_decoder/test_modeling_speech_encoder_decoder.py
- tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py
- tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py
```
- on `test_fetch_v2`
(`tests/models/bert/test_modeling_bert.py` is **NEITHER** in `TEST TO RUN`, **NOR** in the file `test_list.txt`)
```
### MODIFIED FILES ###
- tests/models/bert/test_modeling_bert.py
### IMPACTED FILES ###
- tests/models/auto/test_modeling_auto.py
- tests/models/auto/test_modeling_tf_auto.py
- tests/models/bert/test_modeling_bert.py
- tests/models/encoder_decoder/test_modeling_encoder_decoder.py
- tests/models/speech_encoder_decoder/test_modeling_speech_encoder_decoder.py
- tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py
- tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py
### TEST TO RUN ###
- tests/models/auto/test_modeling_auto.py
- tests/models/auto/test_modeling_tf_auto.py
- tests/models/encoder_decoder/test_modeling_encoder_decoder.py
- tests/models/speech_encoder_decoder/test_modeling_speech_encoder_decoder.py
- tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py
- tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py
```
<|||||>> As a comparison, the previous version stopped at transformers/__init__.py.
Is the following block (on `main`) what you mentioned by the above sentence?
```
# We ignore the main init import as it's only for the __version__ that it's done
# and it would add everything as a dependency.
if not imported_module.endswith("transformers/__init__.py"):
...
```
----------------------------
[Not a question - just recording something so I won't forget later]
I tried to change `src/transformers/models/bert/modeling_bert.py`, and I can see
- `src/transformers/__init__.py` is given as impacted in both versions
- `src/transformers/models/gpt2/xxx` is given as impacted in the version on `main` but not the version on this PR
- `tests/models/gpt2/xxx` is NOT given as impacted in the version on `main` but given in the version on this PR.
- but it's in tests to run in both versions
<|||||>Well, at least, when `src/transformers/models/bert/modeling_bert.py` is changed, the test file `tests/models/bert/test_modeling_bert.py` included 👍 . So the dependency detection seems to work well, and the above situation is just an edge case (to including self)<|||||>@ydshieh, good catch on a modified test file missing from the tests launched. I have only put the dependencies and forgot those. Will fix.<|||||>@ydshieh did you want to review more or is it good to merge?<|||||>> @ydshieh did you want to review more or is it good to merge?
Hi @sgugger If you feel urgent to merge, go ahead (I can leave comments afterward anyway). Otherwise, I would love to continue the review process despite I am slow.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22367). All of your documentation changes will be reflected on that endpoint. |
transformers | 22,366 | closed | VisionEncoderDecoderModel to work with CNN-based models | ### Feature request
Hello,
VisionEncoderDecoderModel works only with vision-transformers-based models.
Typically, using a ResNet as encoder would trigger an error in the forward:
`TypeError: forward() got an unexpected keyword argument 'output_attentions'`
I'm pretty sure making this pipeline work with CNN-based architectures would not be too much of a change. As a matter of fact, adding `**kwargs` in the ResNet forward might be enough.
### Motivation
Using CNN-based models with transformers-based language models in VisionEncoderDecoderModel
### Your contribution
/ | 03-24-2023 18:42:16 | 03-24-2023 18:42:16 | Hi @jbdel, thanks for raising this issue!
The `VisionEncoderDecoder` class is specifically designed to work with transformer architectures, and the decoder model expects a transformer encoder output for its `encoder_hidden_state`. These are activations in the shape `(batch_size, sequence_length, hidden_size)` where each vector `[i, j, :]` represents the final activation for that input token/image patch. The ResNet model has a different kind of output: feature maps. As such, there are several incompatibilities beyond being able to pass the `output_attentions` argument to the encoder.
With all architectures coming out at a fast pace nowadays, it's not practical and realistic to make composite modeling like VisionEncoderDecoder to handle all pairs of encoder and decoder models. But the good thing is the code is open source, and everyone can make changes to it :).
If this is still something you are interested in, it could make an interesting question and project to [share in the forums](https://discuss.huggingface.co/). <|||||>Hello,
I beg to differ on your explanation. The output of a ResNet is **_not_** a different kind of output, it is also : `(batch_size, sequence_length, hidden_size)`.
Call the vector [i, j, :] as you will: a token, an image patch, a slice of feature map, what matters in a pipeline is the compatibility of input/output, which is exactly what transformers and ResNet have in common.
As a matter of fact, the developers called the output of the ResNet "last_hidden_state": https://github.com/huggingface/transformers/blob/v4.27.2/src/transformers/models/resnet/modeling_resnet.py#L341
Architecture are surely coming out in a fast pace nowadays. Nonetheless this feature request is not about the latest fancy vision model published, but the very first architecture that enabled deep learning for computer vision.
Another thought: if huggingface is all about transformers, why implementing the resnet architecture available in torchvision ?
Finally, you suppose there will be several incompatibilities, again, i think not. A simple glance at the forward function of VisionEncoderDecoder shows you that the function cares only about the first output of the encoder:
https://github.com/huggingface/transformers/blob/v4.27.2/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L602
Which is exactly what ResNet provides.
<|||||>Hi,
You can use ResNet with the vision encoder-decoder framework, although it might not work out-of-the-box as shown by your first message (for the moment that requires forking the library and making the required changes). ResNets, like other CNNs, output feature maps of shape `(batch_size, num_channels, height, width)`, so they are by default 4D instead of 3D with the regular `last_hidden_state` of a model like ViT. See [here](https://huggingface.co/docs/transformers/model_doc/resnet#transformers.ResNetModel.forward.example) for an example: the final feature map is of shape (batch_size, 2048, 7, 7) for a 224x224 image.
However you can of course reshape the final feature map to get a 3D tensor which can be used for cross-attention with the decoder. This can be achieved by doing:
```
batch_size, num_channels, height, width = last_hidden_state.shape
last_hidden_state = last_hidden_state.permute(0, 2, 3, 1)
last_hidden_state = last_hidden_state.reshape(batch_size, height*width, num_channels)
```
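For context, an end-to-end sketch of that reshape with `ResNetModel` (shapes follow the 224x224 example mentioned above; illustrative, not library code):
```python
import torch
from transformers import ResNetModel

model = ResNetModel.from_pretrained("microsoft/resnet-50")
pixel_values = torch.randn(1, 3, 224, 224)  # dummy image batch

feature_map = model(pixel_values).last_hidden_state  # (1, 2048, 7, 7)
b, c, h, w = feature_map.shape
encoder_hidden_states = feature_map.permute(0, 2, 3, 1).reshape(b, h * w, c)  # (1, 49, 2048)
# `encoder_hidden_states` is now 3D and can be fed to a text decoder for cross-attention
```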
The reason ResNet is present in the library is because it is used as backbone for several Transformer-based frameworks like DETR, MaskFormer and Mask2Former, all of which are also available in the library.<|||||>Hello.
Thank you for your answer.
I do understand there is a straightforward way to modify the code so that you can have a resnet to transformer pipeline using huggingface.
I have submitted this as a feature request, with the hope that it will be considered for addition to the official library implementation. This would allow you to use that pipeline on the Huggingface hub.
Have a good day,
JB<|||||>I'll mark this request as a "good first issue" as I don't have the bandwidth for this atm.
However for this to work we would need to maintain a mapping which lists the models that output a 4D feature map, to make sure we permute and reshape the final hidden state as shown above. Additionally we need to take into account that some of those models don't accept an `output_attentions` keyword argument.<|||||>I do not think this is an issue that would be easy to tackle by a beginner so I have removed the "Good first issue" label. Having issues that are too hard labeled like this often backfires and makes beginners stop contributing instead of feeling empowered.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,365 | closed | Auto-translate GPTNeo to TensorFlow with GPT-4 | cc @gante @amyeroberts @ydshieh @sayakpaul
This is the preliminary result of auto-translating an entire module to TensorFlow using GPT-4.
The prompt I used was:
>
> Hi GPT, can you translate this class from Hugging Face Transformers from PyTorch to TensorFlow for me?
> Some pointers:
> - When creating layers, please pass their attribute name as the name kwarg.
> - Retain any docstrings attached to methods like forward and translate them, even when the method is being renamed to call.
> - If the new class inherits from tf.keras.layers.Layer, it should accept **kwargs and pass these to super.__init__ . It should also be renamed by adding "TF" to the start of its name.
> - You don't need to add any extra imports, you can assume that any other functions or classes you call will be imported for you.
> - If the class calls other classes in the same module, you can assume that these have already been converted. Please add "TF" to the start of their name if required.
I'm going to experiment with auto-translating the tests and seeing how successful this port was. Right now it's a WIP and there are likely issues, but I've had a lot of success avoiding problems just by mentioning them in the prompt and telling GPT what to do in those situations!
Other things to go in the prompt:
- PyTorch Embedding layers support a `padding_idx` initialization arg that TF does not. If you want to exactly match PT's behaviour, you need to manually zero out those positions after embedding in TF. | 03-24-2023 17:30:36 | 03-24-2023 17:30:36 | Shame Github doesn't have 🔥 as an available reaction<|||||>> Shame Github doesn't have 🔥 as an available reaction
@amyeroberts 🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥
<img width="666" alt="Screenshot 2023-03-24 185139" src="https://user-images.githubusercontent.com/2521628/227602659-692ebc96-a5d9-4b7b-9519-bac44916537a.png">
<|||||>@Rocketknight1 aren't the commits going a bit too fast? 😑<|||||>I don't want to be a party pooper but this is really not a use case where I would trust any kind of language model output. LLMs tend to produce content that look good on the surface but are not rigorous and usually full of nasty little bugs (that's the reason why I personally stopped using Copilot) and we will completely miss them in such PRs that are usually adding one or several thousands of lines. I don't think our CI will catch all such bugs.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22365). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger Absolutely agree on the potential for small bugs, but in the case of model porting doesn't the CI test equivalence with the PT original? If the model accepts various inputs and always yields equivalent output to the PT version, I think it's probably "good enough" that most users shouldn't notice any issues, right?<|||||>Well for now those tests fail even before returning a diff between the PT and TF model :-p <|||||>I'm still working on the prompt!!!!!<|||||>Ah ah, are you saying this should be the code exercise if we ever decide to open a Prompt Engineer position?<|||||>> Ah ah, are you saying this should be the code exercise if we ever decide to open a Prompt Engineer position?
That would be **THE** middleman between `PyTorch` and `TensorFlow`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,364 | closed | TensorFlow: pin maximum version to 2.12 | # What does this PR do?
TF Text 2.12 has been released (a few hours after TF 2.12), so that problem got sorted by itself.
Adding `cmake` to install onnx from source gets rid of the remaining problems :) | 03-24-2023 16:15:26 | 03-24-2023 16:15:26 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,363 | open | Multi-node training with Deepspeed hangs when `full_determinism = True` | Hey, as I've described below, I think there are problems training Deepspeed in a multi-node setting when `full_determinism = True` in the `TrainingArguments`. I've replicated this on multiple hardware configurations (i.e. different nodes and GPU types — specifically A6000, V100, RTX 3090 — on the same large cluster system). Please take a look, thank you very much!
### System Info
### `transformers-cli env`
- `transformers` version: 4.27.3
- Platform: Linux-3.10.0-1160.76.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.1
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Additional info
- `deepspeed` version: 0.8.3
- gcc: 10.2
- cuda: 11.7.1
- pdsh: 2.34
### Who can help?
@sgugger @stas00
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
### Shell
Please set the following environment variables appropriately:
```bash
export NODELIST="gpu1504 gpu1505"
export NUM_NODES=2
export GPUS_PER_NODE=1
export MASTER_ADDR=gpu1504
export MASTER_PORT=9901
```
Create `train.py` from the snippet below, then run with the following commands:
```bash
conda create -n ds-trainer python==3.8.1
conda activate ds-trainer
pip install transformers[deepspeed]
echo "PATH=$PATH" > .deepspeed_env
cat /dev/null >| hostfile
for i in $NODELIST; do
echo "$i slots=$GPUS_PER_NODE" >> hostfile;
done
deepspeed --num_gpus $GPUS_PER_NODE --num_nodes $NUM_NODES --master_addr $MASTER_ADDR --master_port $MASTER_PORT --hostfile hostfile train.py
```
### `train.py`
```python
import torch
from torch.utils.data import Dataset
from transformers import BertForMaskedLM, Trainer, TrainingArguments
import copy
## Model
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
## Dataset
class DummyDataset(Dataset):
def __init__(self, max_text_length=16, num_samples=20000) -> None:
super().__init__()
self.input_ids = torch.randint(0, 30522, (num_samples, max_text_length))
self.labels = copy.deepcopy(self.input_ids)
def __len__(self):
return len(self.input_ids)
def __getitem__(self, index):
return {
"input_ids": self.input_ids[index],
"labels": self.labels[index],
}
train_dataset = DummyDataset()
## Training
deepspeed_config = {
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto",
},
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
}
training_arguments = TrainingArguments(
full_determinism = True,
output_dir = "output",
do_train = True,
per_device_train_batch_size = 16,
max_steps = 100,
deepspeed = deepspeed_config
)
trainer = Trainer(
model=model,
args=training_arguments,
train_dataset=train_dataset
)
trainer.train()
```
### Expected behavior
**When I run the above code (a minimal example for DeepSpeed training) in a multi-node setting, training seems to hang after the following output:**
<details>
<summary>Output (not working)</summary>
```Shell
[2023-03-24 10:26:48,202] [INFO] [multinode_runner.py:67:get_cmd] Running on the following workers: gpu1504,gpu1505
[2023-03-24 10:26:48,202] [INFO] [runner.py:550:main] cmd = pdsh -S -f 1024 -w gpu1504,gpu1505 export PYTHONPATH=/gpfs/data/csun45/akhand10/projects/test; exp
ort PATH=/gpfs/runtime/opt/pdsh/2.34/bin:/gpfs/runtime/opt/cuda/11.7.1/cuda/bin:/gpfs/runtime/opt/gcc/10.2/bin:/users/akhand10/.local/machine/bin:/users/akhan
d10/.local/machine/bin:/users/akhand10/.local/scripts:/users/akhand10/.local/bin:/users/akhand10/.local/machine/bin:/users/akhand10/palm.h/.local/miniconda3/e
nvs/ds-trainer/bin:/users/akhand10/.local/miniconda3/condabin:/users/akhand10/.local/scripts:/users/akhand10/.local/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/
usr/bin:/usr/local/sbin:/usr/sbin:/usr/lpp/mmfs/bin:/usr/lpp/mmfs/sbin:/opt/ibutils/bin:/gpfs/runtime/bin:/opt/singularity/2.5.2/bin:/users/akhand10/bin; cd
/gpfs/data/csun45/akhand10/projects/test; /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/bin/python3.8 -u -m deepspeed.launcher.launch --world_info=
eyJncHUxNTA0IjogWzBdLCAiZ3B1MTUwNSI6IFswXX0= --node_rank=%n --master_addr=gpu1504 --master_port=9901 train.py
gpu1504: [2023-03-24 10:26:51,118] [INFO] [launch.py:142:main] WORLD INFO DICT: {'gpu1504': [0], 'gpu1505': [0]}
gpu1504: [2023-03-24 10:26:51,118] [INFO] [launch.py:148:main] nnodes=2, num_local_procs=1, node_rank=0
gpu1504: [2023-03-24 10:26:51,119] [INFO] [launch.py:161:main] global_rank_mapping=defaultdict(<class 'list'>, {'gpu1504': [0], 'gpu1505': [1]})
gpu1504: [2023-03-24 10:26:51,119] [INFO] [launch.py:162:main] dist_world_size=2
gpu1504: [2023-03-24 10:26:51,119] [INFO] [launch.py:164:main] Setting CUDA_VISIBLE_DEVICES=0
gpu1505: [2023-03-24 10:26:53,517] [INFO] [launch.py:142:main] WORLD INFO DICT: {'gpu1504': [0], 'gpu1505': [0]}
gpu1505: [2023-03-24 10:26:53,517] [INFO] [launch.py:148:main] nnodes=2, num_local_procs=1, node_rank=1
gpu1505: [2023-03-24 10:26:53,517] [INFO] [launch.py:161:main] global_rank_mapping=defaultdict(<class 'list'>, {'gpu1504': [0], 'gpu1505': [1]})
gpu1505: [2023-03-24 10:26:53,517] [INFO] [launch.py:162:main] dist_world_size=2
gpu1505: [2023-03-24 10:26:53,517] [INFO] [launch.py:164:main] Setting CUDA_VISIBLE_DEVICES=0
gpu1504: Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_rel
ationship.weight']
gpu1504: - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g.
initializing a BertForSequenceClassification model from a BertForPreTraining model).
gpu1504: - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a
BertForSequenceClassification model from a BertForSequenceClassification model).
gpu1504: [2023-03-24 10:26:55,478] [INFO] [comm.py:652:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
gpu1505: Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_r
elationship.bias']
gpu1505: - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g.
initializing a BertForSequenceClassification model from a BertForPreTraining model).
gpu1505: - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a
BertForSequenceClassification model from a BertForSequenceClassification model).
```
</details>
In particular, the last line of relevance is: `[INFO] [comm.py:652:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl`.
<details>
<summary>Extra NCCL output</summary>
If I provide the vars: `NCCL_DEBUG=INFO NCCL_DEBUG_SUBSYS=ALL CUDA_LAUNCH_BLOCKING=1`
```Shell
...
...
gpu1504: [2023-03-24 10:48:06,695] [INFO] [comm.py:652:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
gpu1504: gpu1504:30024:30024 [0] NCCL INFO Bootstrap : Using ib0:172.25.211.4<0>
gpu1504: gpu1504:30024:30024 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
gpu1504: gpu1504:30024:30024 [0] NCCL INFO cudaDriverVersion 11070
gpu1504: NCCL version 2.14.3+cuda11.7
gpu1505: gpu1505:22060:22060 [0] NCCL INFO cudaDriverVersion 11070
gpu1504: gpu1504:30024:30024 [0] NCCL INFO init.cc:1147 Cuda Host Alloc Size 4 pointer 0x7f62ffe00000
gpu1504: gpu1504:30024:30241 [0] NCCL INFO NET/IB : Using [0]mlx5_2:1/IB [1]mlx5_0:1/IB ; OOB ib0:172.25.211.4<0>
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Using network IB
gpu1505: gpu1505:22060:22060 [0] NCCL INFO Bootstrap : Using ib0:172.25.211.5<0>
gpu1505: gpu1505:22060:22060 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
gpu1505: gpu1505:22060:22060 [0] NCCL INFO init.cc:1147 Cuda Host Alloc Size 4 pointer 0x7f97a3e00000
gpu1505: gpu1505:22060:22171 [0] NCCL INFO NET/IB : Using [0]mlx5_2:1/IB [1]mlx5_0:1/IB ; OOB ib0:172.25.211.5<0>
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Using network IB
gpu1505: gpu1505:22060:22171 [0] NCCL INFO NET/IB : GPU Direct RDMA Disabled for HCA 0 'mlx5_2'
gpu1505: gpu1505:22060:22171 [0] NCCL INFO NET/IB : GPU Direct RDMA Disabled for HCA 1 'mlx5_0'
gpu1505: gpu1505:22060:22171 [0] NCCL INFO === System : maxBw 12.5 totalBw 24.0 ===
gpu1505: gpu1505:22060:22171 [0] NCCL INFO CPU/0 (1/2/-1)
gpu1505: gpu1505:22060:22171 [0] NCCL INFO + SYS[5000.0] - CPU/1
gpu1505: gpu1505:22060:22171 [0] NCCL INFO + PCI[24.0] - GPU/1000 (1)
gpu1505: gpu1505:22060:22171 [0] NCCL INFO + PCI[12.0] - NIC/23000
gpu1505: gpu1505:22060:22171 [0] NCCL INFO + NET[12.5] - NET/1 (3ad0a20003723f04/1/12.500000)
gpu1505: gpu1505:22060:22171 [0] NCCL INFO CPU/1 (1/2/-1)
gpu1505: gpu1505:22060:22171 [0] NCCL INFO + SYS[5000.0] - CPU/0
gpu1505: gpu1505:22060:22171 [0] NCCL INFO + PCI[24.0] - NIC/C2000
gpu1505: gpu1505:22060:22171 [0] NCCL INFO + NET[12.5] - NET/0 (82d0a20003723f04/1/12.500000)
gpu1505: gpu1505:22060:22171 [0] NCCL INFO ==========================================
gpu1505: gpu1505:22060:22171 [0] NCCL INFO GPU/1000 :GPU/1000 (0/5000.000000/LOC) CPU/0 (1/24.000000/PHB) CPU/1 (2/24.000000/SYS) NET/1 (3/12.000000/PHB) NET/0 (4/12.500000/SYS)
gpu1505: gpu1505:22060:22171 [0] NCCL INFO NET/1 :GPU/1000 (3/12.000000/PHB) CPU/0 (2/12.000000/PHB) CPU/1 (3/12.000000/SYS) NET/1 (0/5000.000000/LOC) NET/0 (5/12.000000/SYS)
gpu1505: gpu1505:22060:22171 [0] NCCL INFO NET/0 :GPU/1000 (4/12.500000/SYS) CPU/0 (3/12.500000/SYS) CPU/1 (2/12.500000/PHB) NET/1 (5/12.000000/SYS) NET/0 (0/5000.000000/LOC)
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Setting affinity for GPU 0 to 04
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Pattern 4, crossNic 0, nChannels 1, bw 12.000000/12.000000, type LOC/PHB, sameChannels 1
gpu1505: gpu1505:22060:22171 [0] NCCL INFO 0 : NET/1 GPU/1 NET/1
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Pattern 3, crossNic 0, nChannels 1, bw 24.000000/12.000000, type LOC/PHB, sameChannels 1
gpu1505: gpu1505:22060:22171 [0] NCCL INFO 0 : NET/1 GPU/1 NET/1
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Pattern 3, crossNic 0, nChannels 0, bw 0.000000/0.000000, type LOC/PIX, sameChannels 1
gpu1504: gpu1504:30024:30241 [0] NCCL INFO NET/IB : GPU Direct RDMA Disabled for HCA 0 'mlx5_2'
gpu1504: gpu1504:30024:30241 [0] NCCL INFO NET/IB : GPU Direct RDMA Disabled for HCA 1 'mlx5_0'
gpu1504: gpu1504:30024:30241 [0] NCCL INFO === System : maxBw 12.5 totalBw 24.0 ===
gpu1504: gpu1504:30024:30241 [0] NCCL INFO CPU/0 (1/2/-1)
gpu1504: gpu1504:30024:30241 [0] NCCL INFO + SYS[5000.0] - CPU/1
gpu1504: gpu1504:30024:30241 [0] NCCL INFO + PCI[24.0] - GPU/41000 (0)
gpu1504: gpu1504:30024:30241 [0] NCCL INFO + PCI[12.0] - NIC/23000
gpu1504: gpu1504:30024:30241 [0] NCCL INFO + NET[12.5] - NET/1 (b4e3ff0003a1420c/1/12.500000)
gpu1504: gpu1504:30024:30241 [0] NCCL INFO CPU/1 (1/2/-1)
gpu1504: gpu1504:30024:30241 [0] NCCL INFO + SYS[5000.0] - CPU/0
gpu1504: gpu1504:30024:30241 [0] NCCL INFO + PCI[24.0] - NIC/C2000
gpu1504: gpu1504:30024:30241 [0] NCCL INFO + NET[12.5] - NET/0 (d2cfa20003723f04/1/12.500000)
gpu1504: gpu1504:30024:30241 [0] NCCL INFO ==========================================
gpu1504: gpu1504:30024:30241 [0] NCCL INFO GPU/41000 :GPU/41000 (0/5000.000000/LOC) CPU/0 (1/24.000000/PHB) CPU/1 (2/24.000000/SYS) NET/1 (3/12.000000/PHB) NET/0 (4/12.500000/SYS)
gpu1504: gpu1504:30024:30241 [0] NCCL INFO NET/1 :GPU/41000 (3/12.000000/PHB) CPU/0 (2/12.000000/PHB) CPU/1 (3/12.000000/SYS) NET/1 (0/5000.000000/LOC) NET/0 (5/12.000000/SYS)
gpu1504: gpu1504:30024:30241 [0] NCCL INFO NET/0 :GPU/41000 (4/12.500000/SYS) CPU/0 (3/12.500000/SYS) CPU/1 (2/12.500000/PHB) NET/1 (5/12.000000/SYS) NET/0 (0/5000.000000/LOC)
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Setting affinity for GPU 0 to 10
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Pattern 4, crossNic 0, nChannels 1, bw 12.000000/12.000000, type LOC/PHB, sameChannels 1
gpu1504: gpu1504:30024:30241 [0] NCCL INFO 0 : NET/1 GPU/0 NET/1
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Pattern 3, crossNic 0, nChannels 1, bw 24.000000/12.000000, type LOC/PHB, sameChannels 1
gpu1504: gpu1504:30024:30241 [0] NCCL INFO 0 : NET/1 GPU/0 NET/1
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Pattern 3, crossNic 0, nChannels 0, bw 0.000000/0.000000, type LOC/PIX, sameChannels 1
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Tree 0 : -1 -> 0 -> 1/-1/-1
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Tree 1 : 1 -> 0 -> -1/-1/-1
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Channel 00/02 : 0 1
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Channel 01/02 : 0 1
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Ring 00 : 1 -> 0 -> 1
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Ring 01 : 1 -> 0 -> 1
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1
gpu1504: gpu1504:30024:30241 [0] NCCL INFO misc/utils.cc:235 memory stack hunk malloc(65536)
gpu1504: gpu1504:30024:30241 [0] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7f6301c00000
gpu1504: gpu1504:30024:30241 [0] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7f6301c00600
gpu1504: gpu1504:30024:30241 [0] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7f6301c00800
gpu1504: gpu1504:30024:30241 [0] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7f6301c00e00
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Tree 0 : 0 -> 1 -> -1/-1/-1
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Tree 1 : -1 -> 1 -> 0/-1/-1
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Ring 00 : 0 -> 1 -> 0
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Ring 01 : 0 -> 1 -> 0
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1
gpu1505: gpu1505:22060:22171 [0] NCCL INFO misc/utils.cc:235 memory stack hunk malloc(65536)
gpu1505: gpu1505:22060:22171 [0] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7f97a5c00000
gpu1505: gpu1505:22060:22171 [0] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7f97a5c00600
gpu1505: gpu1505:22060:22171 [0] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7f97a5c00800
gpu1505: gpu1505:22060:22171 [0] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7f97a5c00e00
gpu1505: gpu1505:22060:22174 [0] NCCL INFO Mem Realloc old size 0, new size 8 pointer 0x7f96f00009c0
gpu1505: gpu1505:22060:22174 [0] NCCL INFO Allocated 4194656 bytes of shared memory in /dev/shm/nccl-3elXjI
gpu1505:
gpu1505: gpu1505:22060:22174 [0] NCCL INFO New proxy recv connection 0 from local rank 0, transport 2
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f96f0004010
gpu1504: gpu1504:30024:30248 [0] NCCL INFO Mem Realloc old size 0, new size 8 pointer 0x7f62840008c0
gpu1504: gpu1504:30024:30248 [0] NCCL INFO Allocated 4194656 bytes of shared memory in /dev/shm/nccl-HznK2t
gpu1504:
gpu1504: gpu1504:30024:30248 [0] NCCL INFO New proxy recv connection 0 from local rank 0, transport 2
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f6284004010
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Channel 00/0 : 0[41000] -> 1[1000] [receive] via NET/IB/1
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Channel 00/0 : 1[1000] -> 0[41000] [receive] via NET/IB/1
gpu1505: gpu1505:22060:22174 [0] NCCL INFO New proxy recv connection 1 from local rank 0, transport 2
gpu1504: gpu1504:30024:30248 [0] NCCL INFO New proxy recv connection 1 from local rank 0, transport 2
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f96f0004050
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f6284004050
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Channel 01/0 : 0[41000] -> 1[1000] [receive] via NET/IB/1
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Channel 01/0 : 1[1000] -> 0[41000] [receive] via NET/IB/1
gpu1504: gpu1504:30024:30248 [0] NCCL INFO New proxy send connection 2 from local rank 0, transport 2
gpu1505: gpu1505:22060:22174 [0] NCCL INFO New proxy send connection 2 from local rank 0, transport 2
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f6284004090
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Channel 00/0 : 0[41000] -> 1[1000] [send] via NET/IB/1
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f96f0004090
gpu1504: gpu1504:30024:30248 [0] NCCL INFO New proxy send connection 3 from local rank 0, transport 2
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Channel 00/0 : 1[1000] -> 0[41000] [send] via NET/IB/1
gpu1505: gpu1505:22060:22174 [0] NCCL INFO New proxy send connection 3 from local rank 0, transport 2
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f62840040d0
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Channel 01/0 : 0[41000] -> 1[1000] [send] via NET/IB/1
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f96f00040d0
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Channel 01/0 : 1[1000] -> 0[41000] [send] via NET/IB/1
gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net_ib.cc:596 Ib Alloc Size 26560 pointer 0x7f6284020000
gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:596 Ib Alloc Size 26560 pointer 0x7f96f0020000
gpu1505: gpu1505:22060:22174 [0] NCCL INFO NET/IB: Dev 1 Port 1 qpn 173 mtu 5 LID 1038
gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:653 Ib Alloc Size 552 pointer 0x7f96f0034000
gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net.cc:571 Cuda Host Alloc Size 9641984 pointer 0x7f97a7400000
gpu1505: gpu1505:22060:22174 [0] NCCL INFO Mem Realloc old size 0, new size 768 pointer 0x7f96f0033ff0
gpu1504: gpu1504:30024:30248 [0] NCCL INFO NET/IB: Dev 1 Port 1 qpn 153 mtu 5 LID 1036
gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:596 Ib Alloc Size 26560 pointer 0x7f96f0035000
gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net_ib.cc:653 Ib Alloc Size 552 pointer 0x7f6284034000
gpu1505: gpu1505:22060:22174 [0] NCCL INFO NET/IB: Dev 1 Port 1 qpn 174 mtu 5 LID 1038
gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:653 Ib Alloc Size 552 pointer 0x7f96f003f000
gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net.cc:571 Cuda Host Alloc Size 9641984 pointer 0x7f97a9400000
gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net.cc:571 Cuda Host Alloc Size 9641984 pointer 0x7f6303400000
gpu1504: gpu1504:30024:30248 [0] NCCL INFO Mem Realloc old size 0, new size 768 pointer 0x7f6284034050
gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:683 Ib Alloc Size 21688 pointer 0x7f96f003f000
gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:696 Ib Alloc Size 552 pointer 0x7f96f0046000
gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net_ib.cc:596 Ib Alloc Size 26560 pointer 0x7f6284035000
gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:771 Ib Alloc Size 552 pointer 0x7f96f0049000
gpu1504: gpu1504:30024:30248 [0] NCCL INFO NET/IB: Dev 1 Port 1 qpn 154 mtu 5 LID 1036
gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net_ib.cc:653 Ib Alloc Size 552 pointer 0x7f628403f000
gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net.cc:698 Cuda Host Alloc Size 9641984 pointer 0x7f97ab400000
gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net.cc:571 Cuda Host Alloc Size 9641984 pointer 0x7f6305400000
gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net_ib.cc:683 Ib Alloc Size 21688 pointer 0x7f628403f000
gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net_ib.cc:696 Ib Alloc Size 552 pointer 0x7f6284046000
gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net_ib.cc:771 Ib Alloc Size 552 pointer 0x7f6284049000
gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net.cc:698 Cuda Host Alloc Size 9641984 pointer 0x7f6307400000
gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:683 Ib Alloc Size 21688 pointer 0x7f96f0049000
gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:696 Ib Alloc Size 552 pointer 0x7f96f0050000
gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:771 Ib Alloc Size 552 pointer 0x7f96f0053000
gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net.cc:698 Cuda Host Alloc Size 9641984 pointer 0x7f97ad400000
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Connected all rings
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Connected all trees
gpu1505: gpu1505:22060:22171 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
gpu1505: gpu1505:22060:22171 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
gpu1505: gpu1505:22060:22174 [0] NCCL INFO New proxy send connection 4 from local rank 0, transport 2
gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net_ib.cc:683 Ib Alloc Size 21688 pointer 0x7f6284049000
gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net_ib.cc:696 Ib Alloc Size 552 pointer 0x7f6284050000
gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net_ib.cc:771 Ib Alloc Size 552 pointer 0x7f6284053000
gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net.cc:698 Cuda Host Alloc Size 9641984 pointer 0x7f6309400000
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Connected all rings
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Connected all trees
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Latency/AlgBw | Tree/ LL | Tree/ LL128 | Tree/Simple | Ring/ LL | Ring/ LL128 | Ring/Simple | CollNetDirect/ LL | CollNetDirect/ LL128 | CollNetDirect/Simple | CollNetChain/ LL | CollNetChain/ LL128 | CollNetChain/Simple |
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Max NThreads | 512 | 640 | 512 | 512 | 640 | 256 | 0 | 0 | 512 | 0 | 0 | 512 |
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Broadcast | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 6.3/ 3.0 | 14.0/ 0.0 | 18.0/ 12.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 |
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Reduce | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 6.3/ 3.0 | 14.0/ 0.0 | 18.0/ 12.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 |
gpu1504: gpu1504:30024:30241 [0] NCCL INFO AllGather | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 6.3/ 6.0 | 14.0/ 0.0 | 18.0/ 24.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 |
gpu1504: gpu1504:30024:30241 [0] NCCL INFO ReduceScatter | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 6.3/ 6.0 | 14.0/ 0.0 | 18.0/ 24.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 |
gpu1504: gpu1504:30024:30241 [0] NCCL INFO AllReduce | 14.4/ 2.4 | 21.4/ 0.0 | 56.0/ 9.2 | 10.8/ 3.0 | 21.0/ 0.0 | 35.4/ 12.0 | 4.4/ 0.0 | 4.4/ 0.0 | 10.7/ 0.0 | 4.4/ 0.0 | 4.4/ 0.0 | 0.0/ 0.0 |
gpu1504: gpu1504:30024:30241 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
gpu1504: gpu1504:30024:30241 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
gpu1505: gpu1505:22060:22171 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f96f0004110
gpu1505: gpu1505:22060:22171 [0] NCCL INFO init.cc:367 Cuda Alloc Size 5168 pointer 0x7f97a5c01000
gpu1504: gpu1504:30024:30248 [0] NCCL INFO New proxy send connection 4 from local rank 0, transport 2
gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net.cc:376 Cuda Alloc Size 4194304 pointer 0x7f97af400000
gpu1505: gpu1505:22060:22171 [0] NCCL INFO init.cc:392 Cuda Host Alloc Size 33554432 pointer 0x7f96ca000000
gpu1505: gpu1505:22060:22171 [0] NCCL INFO init.cc:398 Cuda Host Alloc Size 128 pointer 0x7f97a3e00200
gpu1505: gpu1505:22060:22171 [0] NCCL INFO comm 0x560589dc07a0 rank 1 nranks 2 cudaDev 0 busId 1000 - Init COMPLETE
gpu1505: gpu1505:22060:22060 [0] NCCL INFO Broadcast: opCount 0 sendbuff 0x7f97bc000000 recvbuff 0x7f97bc000000 count 93763584 datatype 0 op 0 root 0 comm 0x560589dc07a0 [nranks=2] stream 0x560589bf2f70
gpu1505: gpu1505:22060:22060 [0] NCCL INFO misc/utils.cc:235 memory stack hunk malloc(65536)
gpu1504: gpu1504:30024:30241 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f6284004110
gpu1504: gpu1504:30024:30241 [0] NCCL INFO init.cc:367 Cuda Alloc Size 5168 pointer 0x7f6301c01000
gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net.cc:376 Cuda Alloc Size 4194304 pointer 0x7f630b400000
gpu1504: gpu1504:30024:30241 [0] NCCL INFO init.cc:392 Cuda Host Alloc Size 33554432 pointer 0x7f6226000000
gpu1504: gpu1504:30024:30241 [0] NCCL INFO init.cc:398 Cuda Host Alloc Size 128 pointer 0x7f62ffe00200
gpu1504: gpu1504:30024:30241 [0] NCCL INFO comm 0x55a62826f2d0 rank 0 nranks 2 cudaDev 0 busId 41000 - Init COMPLETE
gpu1504: gpu1504:30024:30024 [0] NCCL INFO Broadcast: opCount 0 sendbuff 0x7f6335e00000 recvbuff 0x7f6335e00000 count 93763584 datatype 0 op 0 root 0 comm 0x55a62826f2d0 [nranks=2] stream 0x55a628085b40
gpu1504: gpu1504:30024:30024 [0] NCCL INFO misc/utils.cc:235 memory stack hunk malloc(65536)
```
</details>
<details>
<summary>py-spy output</summary>
`py-spy dump --pid [pid]`
```Python
Thread 27321 (active): "MainThread"
broadcast (torch/distributed/distributed_c10d.py:1555)
wrapper (torch/distributed/distributed_c10d.py:1436)
broadcast (deepspeed/comm/torch.py:78)
broadcast (deepspeed/comm/comm.py:228)
log_wrapper (deepspeed/comm/comm.py:123)
_broadcast_model (deepspeed/runtime/engine.py:1105)
_configure_distributed_model (deepspeed/runtime/engine.py:1182)
__init__ (deepspeed/runtime/engine.py:297)
initialize (deepspeed/__init__.py:125)
deepspeed_init (transformers/deepspeed.py:378)
_inner_training_loop (transformers/trainer.py:1702)
train (transformers/trainer.py:1633)
<module> (train.py:64)
```
</details>
This code works fine in a single-node setup (i.e. with `deepspeed train.py`).
<details>
<summary>Continued output (for single-node, working)</summary>
```Shell
...
...
[2023-03-24 10:36:01,318] [INFO] [comm.py:652:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
Using /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117/fused_adam/build.ninja...
Building extension module fused_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module fused_adam...
Time to load fused_adam op: 0.304279088973999 seconds
Using /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117 as PyTorch extensions root...
Emitting ninja build file /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117/utils/build.ninja...
Building extension module utils...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module utils...
Time to load utils op: 0.23477387428283691 seconds
{'train_runtime': 12.686, 'train_samples_per_second': 126.124, 'train_steps_per_second': 7.883, 'train_loss': 1.1398809814453126, 'epoch': 0.08}
[2023-03-24 10:36:18,214] [INFO] [launch.py:350:main] Process 24746 exits successfully.
```
</details>
## Problem: `full_determinism = True`
**If you set `full_determinism = False` in TrainingArguments, multi-node training does work:**
<details>
<summary>Working multi-node output</summary>
```Shell
[2023-03-23 16:40:59,614] [INFO] [multinode_runner.py:67:get_cmd] Running on the following workers: gpu1504,gpu1505
[2023-03-23 16:40:59,614] [INFO] [runner.py:550:main] cmd = pdsh -S -f 1024 -w gpu1504,gpu1505 export PYTHONPATH=/gpfs/data/csun45/akhand10/projects/test_ds; export PATH=/users/akhand10/.local/machine/bin:/users/akhand10/.local/machine/bin:/users/akhand10/.local/scripts:/users/akhand10/.local/bin:/gpfs/runtime/opt/cuda/11.7.1/cuda/bin:/gpfs/runtime/opt/gcc/10.2/bin:/users/akhand10/.local/machine/bin:/users/akhand10/.local/scripts:/users/akhand10/.local/bin:/gpfs/home/akhand10/.vscode-cli/server-stable/bin/ee2b180d582a7f601fa6ecfdad8d9fd269ab1884/bin/remote-cli:/users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/bin:/users/akhand10/.local/miniconda3/condabin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/lpp/mmfs/bin:/usr/lpp/mmfs/sbin:/opt/ibutils/bin:/gpfs/runtime/bin:/opt/singularity/2.5.2/bin:/users/akhand10/bin; cd /gpfs/data/csun45/akhand10/projects/test_ds; /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/bin/python3.8 -u -m deepspeed.launcher.launch --world_info=eyJncHUxNTA0IjogWzBdLCAiZ3B1MTUwNSI6IFswXX0= --node_rank=%n --master_addr=gpu1504 --master_port=9901 deepspeed_trainer_mvp.py
gpu1504: [2023-03-23 16:41:01,849] [INFO] [launch.py:142:main] WORLD INFO DICT: {'gpu1504': [0], 'gpu1505': [0]}
gpu1504: [2023-03-23 16:41:01,849] [INFO] [launch.py:148:main] nnodes=2, num_local_procs=1, node_rank=0
gpu1504: [2023-03-23 16:41:01,849] [INFO] [launch.py:161:main] global_rank_mapping=defaultdict(<class 'list'>, {'gpu1504': [0], 'gpu1505': [1]})
gpu1504: [2023-03-23 16:41:01,849] [INFO] [launch.py:162:main] dist_world_size=2
gpu1504: [2023-03-23 16:41:01,850] [INFO] [launch.py:164:main] Setting CUDA_VISIBLE_DEVICES=0
gpu1505: [2023-03-23 16:41:05,197] [INFO] [launch.py:142:main] WORLD INFO DICT: {'gpu1504': [0], 'gpu1505': [0]}
gpu1505: [2023-03-23 16:41:05,197] [INFO] [launch.py:148:main] nnodes=2, num_local_procs=1, node_rank=1
gpu1505: [2023-03-23 16:41:05,197] [INFO] [launch.py:161:main] global_rank_mapping=defaultdict(<class 'list'>, {'gpu1504': [0], 'gpu1505': [1]})
gpu1505: [2023-03-23 16:41:05,197] [INFO] [launch.py:162:main] dist_world_size=2
gpu1505: [2023-03-23 16:41:05,197] [INFO] [launch.py:164:main] Setting CUDA_VISIBLE_DEVICES=0
gpu1504: Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_relationship.weight']
gpu1504: - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
gpu1504: - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
gpu1504: [2023-03-23 16:41:05,717] [INFO] [comm.py:652:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
gpu1505: Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
gpu1505: - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
gpu1505: - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
gpu1505: Using /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117 as PyTorch extensions root...
gpu1504: Using /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117 as PyTorch extensions root...
gpu1505: Detected CUDA files, patching ldflags
gpu1505: Emitting ninja build file /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117/fused_adam/build.ninja...
gpu1505: Building extension module fused_adam...
gpu1505: Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
gpu1505: [1/3] /gpfs/runtime/opt/cuda/11.7.1/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/deepspeed/ops/csrc/includes -I/users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/deepspeed/ops/csrc/adam -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include/TH -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include/THC -isystem /gpfs/runtime/opt/cuda/11.7.1/cuda/include -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -O3 -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -lineinfo --use_fast_math -gencode=arch=compute_86,code=sm_86 -gencode=arch=compute_86,code=compute_86 -std=c++17 -c /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/deepspeed/ops/csrc/adam/multi_tensor_adam.cu -o multi_tensor_adam.cuda.o
gpu1505: [2/3] c++ -MMD -MF fused_adam_frontend.o.d -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/deepspeed/ops/csrc/includes -I/users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/deepspeed/ops/csrc/adam -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include/TH -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include/THC -isystem /gpfs/runtime/opt/cuda/11.7.1/cuda/include -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++14 -g -Wno-reorder -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -c /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/deepspeed/ops/csrc/adam/fused_adam_frontend.cpp -o fused_adam_frontend.o
gpu1505: [3/3] c++ fused_adam_frontend.o multi_tensor_adam.cuda.o -shared -L/users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/gpfs/runtime/opt/cuda/11.7.1/cuda/lib64 -lcudart -o fused_adam.so
gpu1505: Loading extension module fused_adam...
gpu1505: Time to load fused_adam op: 41.71729230880737 seconds
gpu1505: Using /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117 as PyTorch extensions root...
gpu1504: Loading extension module fused_adam...
gpu1504: Time to load fused_adam op: 41.791842222213745 seconds
gpu1504: Using /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117 as PyTorch extensions root...
gpu1505: Emitting ninja build file /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117/utils/build.ninja...
gpu1505: Building extension module utils...
gpu1505: Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
gpu1505: [1/2] c++ -MMD -MF flatten_unflatten.o.d -DTORCH_EXTENSION_NAME=utils -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include/TH -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include/THC -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -c /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/deepspeed/ops/csrc/utils/flatten_unflatten.cpp -o flatten_unflatten.o
gpu1505: [2/2] c++ flatten_unflatten.o -shared -L/users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o utils.so
gpu1505: Loading extension module utils...
gpu1505: Time to load utils op: 14.304872751235962 seconds
gpu1504: Loading extension module utils...
gpu1504: Time to load utils op: 14.238191604614258 seconds
gpu1505: {'train_runtime': 31.3539, 'train_samples_per_second': 102.061, 'train_steps_per_second': 3.189, 'train_loss': 0.9621941375732422, 'epoch': 0.16}
gpu1504: {'train_runtime': 31.3186, 'train_samples_per_second': 102.176, 'train_steps_per_second': 3.193, 'train_loss': 0.9663803863525391, 'epoch': 0.16}
100%|██████████| 100/100 [00:31<00:00, 3.13it/s]
100%|██████████| 100/100 [00:31<00:00, 3.13it/s]
gpu1505: [2023-03-23 16:42:40,309] [INFO] [launch.py:350:main] Process 26888 exits successfully.
gpu1504: [2023-03-23 16:42:40,966] [INFO] [launch.py:350:main] Process 37514 exits successfully.
```
</details> | 03-24-2023 15:15:55 | 03-24-2023 15:15:55 | I suspect the problem comes from `enable_full_determinism` doing this:
https://github.com/huggingface/transformers/blob/6587125c0a60f5d5cc207fe1e7fc30d5a0c44a6a/src/transformers/trainer_utils.py#L71
this setting leads to hanging since torch>=1.13 and it's still broken in the current torch==2.0 (and nightly too) See https://github.com/NVIDIA/nccl/issues/750
Please try with torch==1.12 and the problem should go away.
It will be fixed once torch includes https://github.com/NVIDIA/nccl/releases/tag/v2.17.1-1 in its build.
To check your versions run:
```
python -c 'import torch; print(f"pt={torch.__version__}, cuda={torch.version.cuda}, nccl={torch.cuda.nccl.version()}")'
```
you need `nccl<=2.10.3` or `nccl>=2.17.1` for `CUDA_LAUNCH_BLOCKING=1` not to lead to hanging.
I'm on top of this issue (asking pytorch devs to resolve this periodically) since I need this env var to work for our LLM training. Ideally this should be fixed in `torch==2.0.1`
-------------------------
Also, you must realize that this setting could be slowing your training down, since it cancels out the async CUDA execution. So ask yourself if you really want to use it.
Albeit, this depends on the situation. We trained BLOOM-176B with `CUDA_LAUNCH_BLOCKING=1` and it wasn't slower. We had to use it to overcome hanging which we couldn't figure out.
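If bitwise reproducibility isn't strictly required, a lighter configuration (a sketch based on the reproduction snippet above; it keeps seeded runs but skips the `enable_full_determinism` / `CUDA_LAUNCH_BLOCKING=1` path) is:
```python
from transformers import TrainingArguments

training_arguments = TrainingArguments(
    output_dir="output",
    do_train=True,
    per_device_train_batch_size=16,
    max_steps=100,
    seed=42,                 # Trainer still calls set_seed(seed) for reproducible runs
    full_determinism=False,  # default; avoids setting CUDA_LAUNCH_BLOCKING=1
    deepspeed=deepspeed_config,  # same config dict as in the reproduction above
)
```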
So benchmark w/ and w/o it and see which works the best.<|||||>Thank you very much! Will try this and update you.
edit: I'm running into CUDA compatibility issues with `torch==1.12.1` and the CUDA version specific to my system. Might just wait for `torch==2.0.1` if it's not much longer.<|||||>Following your conversation, seems like this will not be merged into torch 2.0.1, but is now part of main and will be part of the following release.
https://github.com/pytorch/pytorch/pull/97843#issuecomment-1512228321<|||||>Indeed. it took too long to make it into the 2.0.1 cut-off. It should be part of nightly build soon: https://pytorch.org/get-started/locally/<|||||>I believe this issue should still be open, still waiting for the next Pytorch release. |
transformers | 22,362 | closed | Pin tensorflow-text to go with tensorflow | # What does this PR do?
This PR pins `tensorflow-text` to go with TensorFlow; otherwise it tries to install TF 2.12, which is not supported yet. | 03-24-2023 14:09:23 | 03-24-2023 14:09:23 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 22,361 | closed | Improve error message | # What does this PR do?
Add specific numbers to error message.
@amyeroberts | 03-24-2023 14:05:07 | 03-24-2023 14:05:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@Mahrkeenerh for the `check_repository_consistency` tests, running `make fix-copies` in the top level of the repo should make all the necessary code updates and resolve these. It might be necessary to run `make style` as well to fix any resulting formatting issues. <|||||>@amyeroberts ran the commands, and it seems like `make style` changed 2 unrelated files as well (combined multiline definition into single line), do I add them, or ignore those changes?<|||||>@Mahrkeenerh OK, thanks for the update. A few things to try (the full sequence is sketched as commands below):
* Rebase from main to include most recent changes
* Make sure the most recent style settings and libraries are in the environment `pip install -e .[quality]`
* Run `make style` again
If they're still being added, push them and I'll re-review to double check the diff's OK.
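For reference, assuming the `upstream` remote points at `huggingface/transformers`, that sequence is roughly:
```bash
git fetch upstream
git rebase upstream/main
pip install -e ".[quality]"
make style
make fix-copies
```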
<|||||>@Mahrkeenerh All looks good to me. Thanks again for this addition! <|||||>@amyeroberts all green :green_circle: |
transformers | 22,360 | closed | Adapt find_tied_parameters to handle breaking change in Accelerate | # What does this PR do?
In the upcoming version of Accelerate, `find_tied_parameters` returns a list of lists instead of a dictionary. While there is a hack in place to make sure the code in Transformers keeps working, it would be best to change the way we handle the result of `find_tied_parameters`.
This PR does just that. | 03-24-2023 13:59:58 | 03-24-2023 13:59:58 | _The documentation is not available anymore as the PR was closed or merged._ |
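For illustration, the kind of normalization this is about could look roughly like this (a sketch, not the actual code of the PR; the weight names in the comment are made up, and `model` is any `torch.nn.Module`):
```python
from accelerate.utils import find_tied_parameters

tied = find_tied_parameters(model)

# Newer Accelerate returns groups of tied weight names, e.g.
# [["lm_head.weight", "model.decoder.embed_tokens.weight"]];
# older versions returned a dict (parameter name -> tied parameter name(s)).
if isinstance(tied, dict):
    tied_groups = [
        [name] + (list(targets) if isinstance(targets, (list, tuple)) else [targets])
        for name, targets in tied.items()
    ]
else:
    tied_groups = [list(group) for group in tied]
```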
transformers | 22,359 | closed | self.offset=2 in Bart position_embedding | ### System Info
I think the code in `BartLearnedPositionalEmbedding` is not the same as the code used for BART pretraining.
The `offset = 2` seems to be there only to stay compatible with the pretrained checkpoints, and it is not related to the ids in `positions`.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
class BartLearnedPositionalEmbedding(nn.Embedding):
    """
    This module learns positional embeddings up to a fixed maximum size.
    """

    def __init__(self, num_embeddings: int, embedding_dim: int):
        # Bart is set up so that if padding_idx is specified then offset the embedding ids by 2
        # and adjust num_embeddings appropriately. Other models don't have this hack
        self.offset = 2
        super().__init__(num_embeddings + self.offset, embedding_dim)

    def forward(self, input_ids: torch.Tensor, past_key_values_length: int = 0):
        """`input_ids' shape is expected to be [bsz x seqlen]."""
        bsz, seq_len = input_ids.shape[:2]
        positions = torch.arange(
            past_key_values_length, past_key_values_length + seq_len, dtype=torch.long, device=self.weight.device
        ).expand(bsz, -1)  # [bsz, seq_len]
        print(f"positions: {positions}")
        # print(f"positions + self.offset: {positions + self.offset}")
        # encoder.embed_positions.weight[1026, 768]
        return super().forward(positions + self.offset)
### Expected behavior
There is no need for `offset=2` because we use the position indices, not the `input_ids`.
The reason why we are not using
```python
>>> embed_tokens = nn.Embedding(vocab_dim, hidden_dim, padding_idx)
```
is that this makes the position at index `padding_idx` un-learnable and zeros it out.
What if you change the padding index to something bigger? Say it is now `4`: the embedding at index `4` will be zeroed out (basically erased), but for the model that means it will never receive the embedding that should be at position 4 (which is position 6 now). The offset prevents that.
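As a quick standalone illustration of that zeroing behaviour (plain PyTorch, not BART code):
```python
import torch
import torch.nn as nn

emb = nn.Embedding(10, 4, padding_idx=4)
print(emb.weight[4])        # all zeros: that row is erased at init
emb(torch.tensor([4])).sum().backward()
print(emb.weight.grad[4])   # all zeros too: the row at `padding_idx` never gets a gradient
```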
→ Potential usage: Imagine if you need a new starting token in your BartModel. The padding token will no longer be 2 but 3. This means you just want to shift the inputs learned positions by 1, not that you want to zero-out the learned position embedding at position 3. The position embedding for 3 will appear as if it was 4.
Snippet:
```python
# during training
>>> input_ids = [ 3, 13, 25, 1, 1 ,1 ,1]
>>> pad_token_id = 1
>>> positions = [ 0, 1, 2, 3, 4, 5, 6]
>>> pw_offset = [ 2, 3, 4, 5, 6, 7, 8]
>>> embedding = [ X2, X3, X4, X5, X6, X7, X8]
# finetuning with one more token
>>> new_pad_token_id = 4 # but the position of the padding token is not necessarly 2
>>> input_ids = [ 1, 2, 13, 25, 1, 1, 1, 1]
>>> positions = [ 0, 1, 2, 3, 4, 5, 6, 7]
>>> pw_offset = [ 2, 3, 4, 5, 6, 7, 8, 9]
>>> embedding = [ X2, X3, 0, X5, X6, X7, X8, X9]
# With the code fix:
# finetuning with one more token
>>> new_pad_token_id = 4 # but the position of the padding token is not necessarly 2
>>> input_ids = [ 1, 2, 13, 25, 1, 1, 1, 1]
>>> positions = [ 0, 1, 2, 3, 4, 5, 6, 7]
>>> pw_offset = [ 2, 3, 4, 5, 6, 7, 8, 9]
>>> embedding = [ X2, X3, X4, X5, X6, X7, X8, X9]
```
If you zero-out the embeddings corresponding to the index of the padding token, changing the ID of the padding token will result in a change of the inputs that are positioned at this index.
The subtil difference is that it does not matter if your padding token has index 0, 1, or 999.
The tokens that are at the position of the index ( let’s say the 999th token) should not have a zeroed-out embedding. But, if the token at that position is a padding token, then the attention should take it into account.
If we zero out at index 4, the 4th token will never have a learned positional embedding.
Longer thread and infos in #19240 |
transformers | 22,358 | closed | Training Loop Error | ### System Info
- `transformers` version: 4.27.2
- Platform: Linux-5.15.89+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.0+cpu (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.4 (cpu)
- Jax version: 0.3.25
- JaxLib version: 0.3.25
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@amyeroberts
```
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /opt/conda/lib/python3.7/site-packages/transformers/utils/import_utils.py:1110 in _get_module │
│ │
│ 1107 │ │ │ │ result.append(attr) │
│ 1108 │ │ return result │
│ 1109 │ │
│ ❱ 1110 │ def __getattr__(self, name: str) -> Any: │
│ 1111 │ │ if name in self._objects: │
│ 1112 │ │ │ return self._objects[name] │
│ 1113 │ │ if name in self._modules: │
│ │
│ /opt/conda/lib/python3.7/importlib/__init__.py:127 in import_module │
│ │
│ 124 │ │ │ if character != '.': │
│ 125 │ │ │ │ break │
│ 126 │ │ │ level += 1 │
│ ❱ 127 │ return _bootstrap._gcd_import(name[level:], package, level) │
│ 128 │
│ 129 │
│ 130 _RELOADING = {} │
│ in _gcd_import │
│ in _find_and_load │
│ in _find_and_load_unlocked │
│ in _load_unlocked │
│ in exec_module │
│ in _call_with_frames_removed │
│ │
│ /opt/conda/lib/python3.7/site-packages/transformers/trainer_seq2seq.py:22 in <module> │
│ │
│ 19 from torch.utils.data import Dataset │
│ 20 │
│ 21 from .deepspeed import is_deepspeed_zero3_enabled │
│ ❱ 22 from .trainer import Trainer │
│ 23 from .trainer_utils import PredictionOutput │
│ 24 from .utils import logging │
│ 25 │
│ │
│ /opt/conda/lib/python3.7/site-packages/transformers/trainer.py:73 in <module> │
│ │
│ 70 from .debug_utils import DebugOption, DebugUnderflowOverflow │
│ 71 from .deepspeed import deepspeed_init, is_deepspeed_zero3_enabled │
│ 72 from .dependency_versions_check import dep_version_check │
│ ❱ 73 from .modelcard import TrainingSummary │
│ 74 from .modeling_utils import PreTrainedModel, load_sharded_checkpoint, unwrap_model │
│ 75 from .models.auto.modeling_auto import MODEL_FOR_CAUSAL_LM_MAPPING_NAMES, MODEL_MAPPING_ │
│ 76 from .optimization import Adafactor, get_scheduler │
│ │
│ /opt/conda/lib/python3.7/site-packages/transformers/modelcard.py:32 in <module> │
│ │
│ 29 from huggingface_hub.utils import HFValidationError │
│ 30 │
│ 31 from . import __version__ │
│ ❱ 32 from .models.auto.modeling_auto import ( │
│ 33 │ MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES, │
│ 34 │ MODEL_FOR_CAUSAL_LM_MAPPING_NAMES, │
│ 35 │ MODEL_FOR_CTC_MAPPING_NAMES, │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ImportError: cannot import name 'MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES' from
'transformers.models.auto.modeling_auto'
(/opt/conda/lib/python3.7/site-packages/transformers/models/auto/modeling_auto.py)
The above exception was the direct cause of the following exception:
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <module> │
│ │
│ ❱ 1 from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments │
│ 2 │
│ 3 output_dir="lora-flan-t5-xxl" │
│ 4 │
│ in _handle_fromlist │
│ │
│ /opt/conda/lib/python3.7/site-packages/transformers/utils/import_utils.py:1100 in __getattr__ │
│ │
│ 1097 │ │ self._name = name │
│ 1098 │ │ self._import_structure = import_structure │
│ 1099 │ │
│ ❱ 1100 │ # Needed for autocompletion in an IDE │
│ 1101 │ def __dir__(self): │
│ 1102 │ │ result = super().__dir__() │
│ 1103 │ │ # The elements of self.__all__ that are submodules may or may not be in the dir │
│ │
│ /opt/conda/lib/python3.7/site-packages/transformers/utils/import_utils.py:1115 in _get_module │
│ │
│ 1112 │ │ │ return self._objects[name] │
│ 1113 │ │ if name in self._modules: │
│ 1114 │ │ │ value = self._get_module(name) │
│ ❱ 1115 │ │ elif name in self._class_to_module.keys(): │
│ 1116 │ │ │ module = self._get_module(self._class_to_module[name]) │
│ 1117 │ │ │ value = getattr(module, name) │
│ 1118 │ │ else: │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: Failed to import transformers.trainer_seq2seq because of the following error (look up to see its
traceback):
cannot import name 'MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES' from
'transformers.models.auto.modeling_auto'
(/opt/conda/lib/python3.7/site-packages/transformers/models/auto/modeling_auto.py)
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForSeq2SeqLM
# huggingface hub model id
model_id = "philschmid/flan-t5-xxl-sharded-fp16"
# load model from the hub
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, device_map="auto")
```
```
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training, TaskType
# Define LoRA Config
lora_config = LoraConfig(
r=16,
lora_alpha=32,
target_modules=["q", "v"],
lora_dropout=0.05,
bias="none",
task_type=TaskType.SEQ_2_SEQ_LM
)
# prepare int-8 model for training
model = prepare_model_for_int8_training(model)
# add LoRA adaptor
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```
```
from transformers import DataCollatorForSeq2Seq
# we want to ignore tokenizer pad token in the loss
label_pad_token_id = -100
# Data collator
data_collator = DataCollatorForSeq2Seq(
tokenizer,
model=model,
label_pad_token_id=label_pad_token_id,
pad_to_multiple_of=8
)
```
```
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments
output_dir="lora-flan-t5-xxl"
# Define training args
training_args = Seq2SeqTrainingArguments(
output_dir=output_dir,
auto_find_batch_size=True,
learning_rate=1e-3, # higher learning rate
num_train_epochs=5,
logging_dir=f"{output_dir}/logs",
logging_strategy="steps",
logging_steps=500,
save_strategy="no",
report_to="tensorboard",
)
# Create Trainer instance
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=tokenized_dataset["train"],
)
model.config.use_cache = False
```
### Expected behavior
We train our model FLAN T5 XXL and a training loop starts for 5 epochs. | 03-24-2023 10:25:17 | 03-24-2023 10:25:17 | I just tried an install on a fresh environment of Transformers v4.27.2 and I cannot reproduce this. Can you maybe retry a fresh install? The constant not found is definitely in that module and it's a basic dict.<|||||>@sgugger I did try a fresh environment but still ran into the same issue.
```
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <module> │
│ │
│ 4 model_id = "philschmid/flan-t5-xxl-sharded-fp16" │
│ 5 │
│ 6 # load model from the hub │
│ ❱ 7 model = AutoModelForSeq2SeqLM.from_pretrained(model_id, device_map="auto") │
│ 8 │
│ │
│ /opt/conda/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py:472 in │
│ from_pretrained │
│ │
│ 469 │ │ elif type(config) in cls._model_mapping.keys(): │
│ 470 │ │ │ model_class = _get_model_class(config, cls._model_mapping) │
│ 471 │ │ │ return model_class.from_pretrained( │
│ ❱ 472 │ │ │ │ pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, │
│ 473 │ │ │ ) │
│ 474 │ │ raise ValueError( │
│ 475 │ │ │ f"Unrecognized configuration class {config.__class__} for this kind of AutoM │
│ │
│ /opt/conda/lib/python3.7/site-packages/transformers/modeling_utils.py:2662 in from_pretrained │
│ │
│ 2659 │ │ │ │ offload_state_dict=offload_state_dict, │
│ 2660 │ │ │ │ dtype=torch_dtype, │
│ 2661 │ │ │ │ load_in_8bit=load_in_8bit, │
│ ❱ 2662 │ │ │ │ keep_in_fp32_modules=keep_in_fp32_modules, │
│ 2663 │ │ │ ) │
│ 2664 │ │ │
│ 2665 │ │ model.is_loaded_in_8bit = load_in_8bit │
│ │
│ /opt/conda/lib/python3.7/site-packages/transformers/modeling_utils.py:2742 in │
│ _load_pretrained_model │
│ │
│ 2739 │ │ │ is_safetensors = archive_file.endswith(".safetensors") │
│ 2740 │ │ │ if offload_folder is None and not is_safetensors: │
│ 2741 │ │ │ │ raise ValueError( │
│ ❱ 2742 │ │ │ │ │ "The current `device_map` had weights offloaded to the disk. Please │
│ 2743 │ │ │ │ │ " for them. Alternatively, make sure you have `safetensors` installe │
│ 2744 │ │ │ │ │ " offers the weights in this format." │
│ 2745 │ │ │ │ ) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: The current `device_map` had weights offloaded to the disk. Please provide an `offload_folder` for
them. Alternatively, make sure you have `safetensors` installed if the model you are using offers the weights in
this format.
```<|||||>This is not the same issue as above. Just follow the error message and provide an `offload_folder` for your model as you don't have enough GPU and CPU memory to host it. Note that you won't be able to train that large model on your setup.<|||||>@sgugger Thanks, I got that. Also, how do I train large models in that case? Earlier I have also tried smaller models and also used the inference API. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,357 | closed | Update docker files to use official torch 2.0.0 | # What does this PR do?
(basically just revert to what we have before #22135, except the torch and cuda version numbers)
We used
```python
--index-url https://download.pytorch.org/whl/test/cu117
```
to run CI before the `torch 2.0.0` release. Now that the official release is out, let's use
```python
--index-url https://download.pytorch.org/whl/cu117
```
| 03-24-2023 08:17:12 | 03-24-2023 08:17:12 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,355 | closed | No module named transformers.onnx | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.5.1
- Platform: Linux-5.19.0-35-generic-x86_64-with-debian-bookworm-sid
- Python version: 3.6.13
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python -m transformers.onnx -help
### Expected behavior
Ubuntu : No module named transformers.onnx
I have always been able to use transformers without problems, but today I got the error: No module named transformers.onnx.
The same command works on Windows but fails on Ubuntu.
On both Windows and Ubuntu, transformers was installed through 'pip install transformers'
and onnxruntime through 'pip install onnxruntime'.
Only `transformers.onnx` fails to import. | 03-24-2023 07:33:05 | 03-24-2023 07:33:05 | This is an old version of Transformers and a dead version of Python. Upgrading might help solve the issue.<|||||>thanks |
transformers | 22,354 | closed | Update document_question_answering.py | # What does this PR do?
This is a draft pull request to add support for multi-page documents in document question answering.
Fixes #18926 (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ankrgyl
Also, anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 03-24-2023 04:08:26 | 03-24-2023 04:08:26 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22354). All of your documentation changes will be reflected on that endpoint.<|||||>Hello @ankrgyl , I created a pull request with @AdiaWu to add support for multi-page documents. Would you mind if you can give me further advice?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,356 | closed | The output of TFAutoModel-save_pretrained and keras-ModelCheckpoint do not equal. | ### Describe the bug
```
history = model.fit(
tf_train_dataset, validation_split=0.01,
epochs=int(training_args.num_train_epochs),
callbacks=callbacks,
)
model.save_pretrained(checkpoint_local)
```
output: `h5` file
```
callbacks = [tf.keras.callbacks.ModelCheckpoint(checkpoint_local)]
history = model.fit(
tf_train_dataset, validation_split=0.01,
epochs=int(training_args.num_train_epochs),
callbacks=callbacks,
)
```
output: `pb` file and `assets` and `variables`
### System info
```shell
transformers = 4.26
python = 3.8
```
| 03-24-2023 01:10:36 | 03-24-2023 01:10:36 | Hi @guotong1988 , I think this issue is more related to the `transformers` library so I'm transferring the issue to the corresponding repo. I'll let @sgugger @ydshieh comment about the issue itself.<|||||>@guotong1988
It's not clear to me what question you have in mind. Do you mean that one method outputs an `h5` file while the other outputs a `pb` file (plus other files), and that you think both methods should output the same (set of) file(s)? Or do you mean something else?
<|||||>Sorry for the late response.
Yes! @ydshieh Thank you!
These two methods should produce the same output.
An `h5` file is preferred.
In fact, I need to output the model file during training, while using the `callbacks`.<|||||>I am referring to the code here https://github.com/huggingface/transformers/blob/main/examples/tensorflow/language-modeling/run_clm.py#L587<|||||>@guotong1988 These are two different methods of saving models to different formats. It's normal that they don't give the same format. If you need a `.h5` file as well as other files (like the configuration file and tokenizer files from `transformers`), you can always add a line `model.save_pretrained(checkpoint_local)` in your script/notebook.
How can I put `model.save_pretrained` into `callbacks`,
so that I can save the model after each epoch?<|||||>There is [PushToHubCallback](https://huggingface.co/docs/transformers/main/en/main_classes/keras_callbacks#transformers.PushToHubCallback).
The goal of this callback is to save and push to the Hub; I am not sure whether it can only save without pushing, though.
It might be great if you also push the checkpoints to the Hub. If you don't want to push but just save, I will cc @Rocketknight1 :-)
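In the meantime, a minimal save-only callback could look roughly like this (untested sketch; it assumes the model passed to `fit` is a `transformers` TF model so that `save_pretrained` is available, and it reuses `checkpoint_local` from your snippet):
```python
import os
import tensorflow as tf

class SavePretrainedCallback(tf.keras.callbacks.Callback):
    """Save the model in `transformers` format after every epoch (no Hub push)."""

    def __init__(self, output_dir):
        super().__init__()
        self.output_dir = output_dir

    def on_epoch_end(self, epoch, logs=None):
        # Keras sets `self.model` to the model being fit, i.e. the transformers TF model here.
        self.model.save_pretrained(os.path.join(self.output_dir, f"epoch-{epoch}"))

callbacks = [SavePretrainedCallback(checkpoint_local)]
```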
<|||||>Yes, I don't want to push but just save.<|||||>@guotong1988 If you want to proceed quickly, you can modify the code of the class `PushToHubCallback` to remove the part that pushes the checkpoints. |
transformers | 22,352 | open | XVector Finetuning process - Whisper XVector | ### Model description
The idea is to apply XVector to Whisper and, in the process, generate documentation on how to fine-tune or adapt XVector (maybe something similar to SetFit, but for audio). @vaibva
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_ | 03-23-2023 23:34:38 | 03-23-2023 23:34:38 | |
transformers | 22,351 | closed | Should update accelerate minimum version requirement to 0.15 | ### System Info
Using Huggingface Inference Endpoints deployment
contents of `requirements.txt` file below:
```
accelerate==0.13.2
bitsandbytes
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Deploy a huggingface inference endpoint with this as the __init__ method of `handler.py`
```
class EndpointHandler():
def __init__(self, path: str = ""):
self.tokenizer = AutoTokenizer.from_pretrained(path)
self.model = AutoModelForSeq2SeqLM.from_pretrained(path, device_map="auto", load_in_8bit=True)
```
and using `accelerate < 0.15`. This will lead to the error below
```
TypeError: dispatch_model() got an unexpected keyword argument 'offload_index'
File "/opt/conda/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2406, in from_pretrained
uuid 2023-03-23T23:02:34.527Z await handler()
uuid 2023-03-23T23:02:34.527Z File "/app/./huggingface_inference_toolkit/utils.py", line 211, in check_and_register_custom_pipeline_from_directory
uuid 2023-03-23T23:02:34.527Z File "/opt/conda/lib/python3.9/site-packages/starlette/routing.py", line 648, in startup
uuid 2023-03-23T23:02:34.527Z File "/opt/conda/lib/python3.9/site-packages/starlette/routing.py", line 566, in __aenter__
uuid 2023-03-23T23:02:34.527Z return model_class.from_pretrained(
uuid 2023-03-23T23:02:34.527Z custom_pipeline = handler.EndpointHandler(model_dir)
uuid 2023-03-23T23:02:34.527Z File "/opt/conda/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 463, in from_pretrained
uuid 2023-03-23T23:02:34.527Z File "/app/./huggingface_inference_toolkit/handler.py", line 44, in get_inference_handler_either_custom_or_default_handler
uuid 2023-03-23T23:02:34.527Z File "/opt/conda/lib/python3.9/site-packages/starlette/routing.py", line 671, in lifespan
uuid 2023-03-23T23:02:34.527Z File "/repository/handler.py", line 12, in __init__
uuid 2023-03-23T23:02:34.527Z custom_pipeline = check_and_register_custom_pipeline_from_directory(model_dir)
uuid 2023-03-23T23:02:34.527Z await self._router.startup()
uuid 2023-03-23T23:02:34.527Z async with self.lifespan_context(app):
uuid 2023-03-23T23:02:34.527Z Traceback (most recent call last):
uuid 2023-03-23T23:02:34.527Z dispatch_model(model, device_map=device_map, offload_dir=offload_folder, offload_index=offload_index)
uuid 2023-03-23T23:02:34.527Z inference_handler = get_inference_handler_either_custom_or_default_handler(HF_MODEL_DIR, task=HF_TASK)
uuid 2023-03-23T23:02:34.527Z
uuid 2023-03-23T23:02:34.527Z File "/app/./webservice_starlette.py", line 56, in some_startup_task
uuid 2023-03-23T23:02:34.550Z Application startup failed. Exiting.
```
### Expected behavior
The minimum required `accelerate` version should be updated to 0.15, since these PRs add parameters that did not exist before that version:
- https://github.com/huggingface/transformers/pull/20321
- https://github.com/huggingface/accelerate/pull/873 | 03-23-2023 23:24:47 | 03-23-2023 23:24:47 | The code handles both versions. Without having the real traceback we can't know what went wrong on our side.<|||||>Unfortunately this traceback is as granular as the HF Inference Endpoints logs give me<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,350 | closed | :rotating_light: :rotating_light: :rotating_light: Fixing BPE spm converter. | # What does this PR do?
The spm BPE converter seemed to have been wrong (for quite a while if true).
The merges are recreated from the vocab, but were ordered by their vocab id instead
of the score within the spm vocab. This seems to be wrong for Llama.
This PR fixes it.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
--> | 03-23-2023 23:04:52 | 03-23-2023 23:04:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger
At least reformer and camembert are concerned (some tests failed when writing bogus code here).<|||||>It's breaking reformer and xlnet, so I removed the breaking part of it for Llama. |
transformers | 22,349 | closed | Error while using CLIP embeddings with VisualBERT. | ### System Info
- `transformers` version: 4.26.0
- Platform: Linux-4.18.0-348.2.1.el8_5.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to use CLIP embeddings with VisualBERT for multimodal image classification.
1. Generating CLIP embeddings for text(j[1]) and images(j[0]) for a batch in dataloader.
2. Providing these embeddings to the VisualBERT model.
3. Calculating cross entropy loss.
```python
model.train()
for epoch in range(EPOCH):
for j in tqdm(trainloader):
# Features
text_tokens = clip.tokenize(j[1]).to(DEVICE)
j[0] = j[0].to(DEVICE)
with torch.no_grad():
text_features = clip_model.encode_text(text_tokens).to(DEVICE)
image_features = clip_model.encode_image(j[0]).to(DEVICE)
print(text_features.shape)
print(image_features.shape)
visualbert_inputs = {
"inputs_embeds": text_features.to(DEVICE),
"visual_embeds": image_features.to(DEVICE),
}
# Forward Pass
output = model(**visualbert_inputs)
loss = loss_fn(output,j[2]).to(DEVICE)
#Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"EPOCH:{epoch}, LOSS:{loss.item()}")
```
Error:

### Expected behavior
The VisualBERT model requires `inputs_embeds` to be a `torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`.
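A rough sketch of the kind of shape adaptation I have in mind (the projection layers and the dimensions below are just my assumptions, not something from the docs):
```python
import torch
import torch.nn as nn

hidden_size = 768    # assumed VisualBERT hidden size
clip_dim = 512       # assumed CLIP feature size (e.g. ViT-B/32)
visual_dim = 2048    # assumed visual_embedding_dim of the VisualBERT checkpoint

text_proj = nn.Linear(clip_dim, hidden_size)
visual_proj = nn.Linear(clip_dim, visual_dim)

# CLIP returns one pooled vector per sample, so add a sequence dimension of length 1.
# (In the real pipeline, cast the CLIP features to float32 before projecting.)
text_features = torch.randn(8, clip_dim)
image_features = torch.randn(8, clip_dim)
inputs_embeds = text_proj(text_features).unsqueeze(1)     # (batch, 1, hidden_size)
visual_embeds = visual_proj(image_features).unsqueeze(1)  # (batch, 1, visual_dim)
```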
How to convert the CLIP encodings to the input embeddings of VisualBERT? | 03-23-2023 21:02:00 | 03-23-2023 21:02:00 | cc @ArthurZucker and @amyeroberts |
transformers | 22,348 | closed | Possibly Incorrect Perplexity Calculation in Conceptual Guide | In the docs, Conceptual Guides->Perplexity, the code underneath the section titled "Example: Calculating perplexity with GPT-2 in 🤗 Transformers" might be wrong. This is based on my understanding. The specific line of example code:
https://github.com/huggingface/transformers/blob/e8cc02555ee7dce7213e624ab088d8d4d1952064/docs/source/en/perplexity.mdx?plain=1#L122
I believe it should be:
`neg_log_likelihood = outputs.loss * (trg_len - 1)`
This is because `outputs = model(input_ids, labels=target_ids)` calculates `trg_len - 1` losses (and then averages them), not `trg_len`.
You can see why in the model code:
https://github.com/huggingface/transformers/blob/68287689f2f0d8b7063c400230b3766987abf18d/src/transformers/models/gpt2/modeling_gpt2.py#L1100C5-L1106
Basically, the HF API states "Note that the labels are shifted inside the model". Because of this design choice, the model can only ever calculate `n - 1` losses, since it has to shift the labels itself. `shift_logits = lm_logits[..., :-1, :].contiguous()` throws away the last logit, which it can't use to calculate a loss because it isn't given a label for the last position. `shift_labels = labels[..., 1:].contiguous()` throws away the useless first label.
It's an odd decision in the API and should perhaps be a separate feature request to fix. Regardless, this bug report is about the Perplexity calculation guide. Since the model is only calculating `trg_len - 1` losses, it should only multiply the loss by `trg_len - 1`, not `trg_len`.
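To make the off-by-one concrete, here is a small self-contained sketch of what the model effectively does with `labels` (random tensors stand in for real inputs):
```python
import torch
import torch.nn.functional as F

trg_len = 5
vocab_size = 10
logits = torch.randn(1, trg_len, vocab_size)          # model outputs for trg_len positions
labels = torch.randint(0, vocab_size, (1, trg_len))   # trg_len labels

# The model shifts internally, so only trg_len - 1 positions contribute to the loss.
shift_logits = logits[..., :-1, :]
shift_labels = labels[..., 1:]
loss = F.cross_entropy(shift_logits.reshape(-1, vocab_size), shift_labels.reshape(-1))
# `loss` is the mean over trg_len - 1 terms, so the summed NLL is loss * (trg_len - 1).
```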
I believe the last line would also be wrong for similar reasons: `ppl = torch.exp(torch.stack(nlls).sum() / end_loc)`
An alternative is to change the guide to calculate the loss itself. This would allow the full `trg_len` labels to be used correctly. | 03-23-2023 20:52:14 | 03-23-2023 20:52:14 | That's correct, would you like to open a PR with your fix?<|||||>I can, yes. Which would be best: a fix for those two lines, or re-writing to calculate loss outside the model? The latter better matches how the guide explains things but is a more extensive change and requires updating verbiage in the guide.
Additionally, is it preferred to keep the existing style of multiplying the loss inside the loop and dividing outside? Seems like a simple `mean` outside the loop is sufficient, shouldn't result in any reduction in numerical accuracy, and is more efficient.<|||||>I think the simple fix is enough. We can also average the losses outside of the loop indeed. |
transformers | 22,347 | closed | [HFTracer] Make embeddings ops take on the dtype of the weight | # What does this PR do?
Previously, `HFTracer` would assume a dtype of `torch.float32` as the output for `embedding` operators. This would cause issues downstream if you're tracing out a model that is initialized as e.g. `torch.bfloat16`. This makes it so that the embeddings ops outputs take on the dtype of the weight tensor | 03-23-2023 19:47:27 | 03-23-2023 19:47:27 | cc @michaelbenayoun <|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,346 | closed | Generate: Add GPTNeoX integration test | # What does this PR do?
I'm adding left-padding support to GPTNeoX, which requires some refactoring of the model code. I've decided to add a small integration test to ensure we don't regress on the basics. | 03-23-2023 19:02:50 | 03-23-2023 19:02:50 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,345 | closed | Fix typo in Greedy Search Description | # What does this PR do?
There is a small typographical error in the documentation for Greedy Search.
Fixes #22335
- [x] This PR fixes a typo.
## Who can review?
@sgugger
| 03-23-2023 17:51:46 | 03-23-2023 17:51:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger I rebased with main. All the CI checks pass now.<|||||>Thanks! |
transformers | 22,344 | closed | Pix2struct screen2words not working | ### System Info
- `transformers` version: 4.27.2
- Platform: macOS-13.2.1-x86_64-i386-64bit
- Python version: 3.10.10
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 2.0.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When running the code,
```
from transformers import AutoProcessor, AutoModelForSeq2SeqLM
processor = AutoProcessor.from_pretrained("google/pix2struct-screen2words-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/pix2struct-screen2words-large")
```
I am getting the error
```
[transformers/models/auto/processing_auto.py:270](/codes/pix2struct-tryout/~/codes/pix2struct-tryout/.venv/lib/python3.10/site-packages/transformers/models/auto/processing_auto.py:270), in AutoProcessor.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
267 else:
268 processor_class = processor_class_from_name(processor_class)
--> 270 return processor_class.from_pretrained(
271 pretrained_model_name_or_path, trust_remote_code=trust_remote_code, **kwargs
272 )
274 # Last try: we use the PROCESSOR_MAPPING.
275 if type(config) in PROCESSOR_MAPPING:
AttributeError: 'NoneType' object has no attribute 'from_pretrained'
```
### Expected behavior
The model variable must be populated with the model. | 03-23-2023 16:21:47 | 03-23-2023 16:21:47 | Hi @lambainsaan, thanks for raising this issue!
Pix2Struct was merged into `main` after the 4.27.2 release. To get the most recent version of the codebase, you can install from the dev branch by running:
`pip install git+https://github.com/huggingface/transformers`.
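Once on the dev version, loading with the dedicated Pix2Struct classes should work, roughly along these lines (a sketch; `screenshot.png` is a placeholder image path):
```python
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

processor = Pix2StructProcessor.from_pretrained("google/pix2struct-screen2words-large")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-screen2words-large")

image = Image.open("screenshot.png")  # any UI screenshot
inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(generated_ids[0], skip_special_tokens=True))
```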
Note: It is not possible to load `Pix2Struct` with `AutoModelForSeq2SeqLM` API<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I believe this issue is fixed, closing it now! |
transformers | 22,342 | closed | added biogpt token classification | # What does this PR do?
It add the class for BioGptForTokenClassification based on BioGpt model
Fixes #21786
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @younesbelkada @NielsRogge @sgugger
| 03-23-2023 15:39:48 | 03-23-2023 15:39:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@NielsRogge @sgugger can you please look into it ? |
transformers | 22,341 | closed | Add clean_up_tokenization_spaces to config | # What does this PR do?
DRAFT | 03-23-2023 15:29:12 | 03-23-2023 15:29:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @Narsil <|||||>Also linked to #20846. A follow up PR can now be made to add a simple warning if the default value is set to `True` / put the default value to True<|||||>It seems like no model defaulted to use `cleanup_tokenization_spaces = False` so this should be seamless. Otherwise the tokenizer's `__init__` should be updated |
transformers | 22,340 | closed | StoppingCritera for individual samples in batched input | ### Feature request
IIUC, if I'm running batched generation and one sample in the batch has hit the stopping criteria but others have not, there is no way to stop generation for **only that** sample. I.e. either I stop generating for all samples, or the model will keep generating for all samples until all of them hit my stopping criteria.
It would be nice if instead to speed-up the generation, the model could only keep generating for the samples that have not yet hit the criteria. To keep tensor shapes consistent, it could e.g. append the padding token to the others.
A workaround is probably to stop if a single sample hits it, then filter my batch for all samples that have not yet hit the criteria and relaunch with only them. Lmk if there's a better workaround :)
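Roughly, the workaround I have in mind looks like this (just a sketch, with an "any sample done" criteria and a simple substring check standing in for my real stopping condition):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList

tok = AutoTokenizer.from_pretrained("gpt2", padding_side="left")
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

stop_string = "\n"  # stand-in for the real stopping condition

class AnySampleDone(StoppingCriteria):
    """Stop as soon as *any* sample in the batch contains the stop string."""
    def __call__(self, input_ids, scores, **kwargs):
        return any(stop_string in t for t in tok.batch_decode(input_ids))

finished, pending = [], ["def foo():", "def bar():"]
for _ in range(10):  # safety cap on the number of relaunches
    if not pending:
        break
    inputs = tok(pending, return_tensors="pt", padding=True)
    out = model.generate(
        **inputs,
        max_new_tokens=32,
        stopping_criteria=StoppingCriteriaList([AnySampleDone()]),
        pad_token_id=tok.eos_token_id,
    )
    texts = tok.batch_decode(out, skip_special_tokens=True)
    # Keep the samples that hit the stop string, relaunch the rest from their partial text.
    finished += [t for t in texts if stop_string in t]
    pending = [t for t in texts if stop_string not in t]
finished += pending  # anything that never hit the stop string
```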
### Motivation
Faster generation
### Your contribution
/ | 03-23-2023 14:01:50 | 03-23-2023 14:01:50 | cc @gante<|||||>Hey @Muennighoff 👋
If I'm reading right, the sole purpose of the proposal is faster generation. In that case, implementing what you suggested is probably possible, but actually low impact. This is because the bottleneck in `.generate()` is the memory bandwidth associated with pulling the model weights all the way down to the compute cores, which is independent of the batch size 😢
Consider the script below:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
from tqdm import tqdm
import torch
import time
tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
print("Loading the model...")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.bfloat16).to("cuda")
batch_size = 1
inputs = tok(["This cat is"] * batch_size, return_tensors="pt").to("cuda")
all_times = []
for i in tqdm(range(20)):
start = time.time()
gen_out = model.generate(**inputs, do_sample=True, max_new_tokens=128, pad_token_id=model.config.eos_token_id)
end = time.time()
if i > 1:
all_times.append(end - start)
print(f"Average time (batch_size={batch_size}): {sum(all_times) / len(all_times):.2f} seconds")
batch_size = 16
inputs = tok(["This cat is"] * batch_size, return_tensors="pt").to("cuda")
all_times = []
for i in tqdm(range(20)):
start = time.time()
gen_out = model.generate(**inputs, do_sample=True, max_new_tokens=128, pad_token_id=model.config.eos_token_id)
end = time.time()
if i > 1:
all_times.append(end - start)
print(f"Average time (batch_size={batch_size}): {sum(all_times) / len(all_times):.2f} seconds")
```
Running on my nvidia 3090:
- `batch_size=1` -> `4.19s`
- `batch_size=16` -> `4.59s`
Considering the [philosophy](https://huggingface.co/docs/transformers/philosophy) for `transformers`, the potential speedup doesn't seem worth the implementation. Nevertheless, thank you for suggesting it! 🤗
<|||||>Hey @gante, thanks for getting back!
I'm not sure what you mean by `pulling the model weights all the way down to the compute cores`?
In your example, all samples stop at the same time (i.e. after 128 new tokens) I think. I'm referring to cases where some samples may stop after e.g. 1 new token but others after e.g. 2000. In my case generating the additional tokens for samples that "have already stopped" increases my inference time from 1 hour to 10 hours, i.e. 9 hours are wasted on tokens that are not needed. In my case I'm better off using batch_size=1 due to this.
For example, consider the below `StoppingCriteria`, which stops as soon as any of the `eof_strings` are seen. I can either implement it as stopping when all samples in the batch of input_ids contain any of the `eof_strings` or when any contains them. In the former case, samples that have already hit a stop word in `eof_strings` will continue to be fed through the model & new tokens will be generated for them, as other samples have not yet hit a stop word. This causes unnecessary inference time. Instead, one could save time (9 hours i.e. 90% in my case) by only continuing to generate for the samples that have not yet hit the `StoppingCriteria`. Let me know if I'm being unclear!
```python
class EndOfFunctionCriteria(StoppingCriteria):
"""Custom `StoppingCriteria` which checks if all generated functions in the batch are completed."""
def __init__(self, start_length, eof_strings, tokenizer):
self.start_length = start_length
self.eof_strings = eof_strings
self.tokenizer = tokenizer
def __call__(self, input_ids, scores, **kwargs):
"""Returns true if all generated sequences contain any of the end-of-function strings."""
decoded_generations = self.tokenizer.batch_decode(
input_ids[:, self.start_length :]
)
done = []
for decoded_generation in decoded_generations:
done.append(
any(
[
stop_string in decoded_generation
for stop_string in self.eof_strings
]
)
)
return all(done) # Stop when ALL sequences hit the stopping critera
# return True if True in done # Stop when ANY sequence hits the stopping critera
```
<|||||>@Muennighoff Gotcha -- I now understand why you suggested this feature.
Before diving into solutions, let me understand the problem better. Normally, the generation time doesn't change much with the batch size (as I wrote above), meaning that generating the additional tokens is harmless. However, you are seeing a 10x difference 👀 This means I have a gap in my knowledge that I'd like to fill.
What is your hardware, and how are you using `.generate()`?<|||||>@gante @Muennighoff +1 for this
ChatGPT use case:
If I would like to generate until `<|im_end|>`, but it is not in the vocabulary as a complete token. So, I need to generate until the sequence ends with the needed substring.
Prompt (from https://github.com/openai/openai-python/blob/main/chatml.md):
```
<|im_start|>system
You are ChatGPT, a large language model trained by OpenAI. Answer as concisely as possible.
Knowledge cutoff: 2021-09-01
Current date: 2023-03-01<|im_end|>
<|im_start|>user
How are you<|im_end|>
<|im_start|>assistant
I am doing well!<|im_end|>
<|im_start|>user
How are you now?<|im_end|>
<|im_start|>assistant
```
I assume all the magic is right here: https://github.com/huggingface/transformers/blob/15641892985b1d77acc74c9065c332cd7c3f7d7f/src/transformers/generation/utils.py#L2045
I believe a quick fix is to run every criterion on each sample in the batch, so all current users of stopping criteria will not be harmed by this update.
Let me know if I can help with this 🤗<|||||>@AlekseyKorshuk Currently, you can craft custom stopping criteria and pass it to the `.generate()` call. See [this file](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/stopping_criteria.py) for examples. After a given input row hits the criteria, it will only append pad tokens to the input, which you can easily filter out.
What is being requested, not running inference at all on the rows where the stopping criteria matches, is relatively expensive to build while maintaining retrocompatibility. Please note that, even if it is built, the output will also contain the pad tokens (as described above). I haven't seen any proof that the speedups are worth the engineering effort of our small team 🤗
If anyone can show me a clear case where the generation time grows quickly with the batch size, I'll gladly bump its priority. I am unaware of a situation where this applies (except for beam search on pytorch, but that's due to an issue in the beam search implementation).<|||||>@gante Thank you, I checked examples, but it looks like it returns True/False for a complete batch. And a quick test showed the following:
```python
import torch
class StoppingCriteriaSub(StoppingCriteria):
def __init__(self, stops = [], encounters=1):
super().__init__()
self.stops = [stop for stop in stops]
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor):
print(input_ids)
for stop in self.stops:
if torch.all((stop == input_ids[0][-len(stop):])).item():
return True
return False
stop_words = ["<human>:", "<bot>:"]
stop_words_ids = [tokenizer(stop_word, return_tensors='pt')['input_ids'].squeeze() for stop_word in stop_words]
stopping_criteria = StoppingCriteriaList([StoppingCriteriaSub(stops=stop_words_ids)])
inputs = tokenizer(["<human>: How are you?\n<bot>:", "<human>: Why?\n<bot>:"], return_tensors='pt',padding=True)
model.generate(**inputs, stopping_criteria=stopping_criteria, max_new_tokens=32)
```
And the `print` returns the following:
```python
tensor([[ 27, 10734, 31175, 1374, 389, 345, 30, 198, 27, 13645,
31175, 314],
[ 27, 10734, 31175, 4162, 30, 198, 27, 13645, 31175, 50256,
50256, 464]])
```
So my question is: how can I make sure that in the end all samples from the batch will have a substring from `stop_words` (excluding special tokens)?<|||||>@AlekseyKorshuk
> but it looks like it returns True/False for a complete batch
Correct, the stopping conditions operate on a whole batch level. Changing it to a row-level is not on our short-term plans (and is, in essence, what the original issue here is about :) )
> So my question is: how can I make sure that in the end all samples from the batch will have a substring from stop_words (excluding special tokens)?
I'm not sure if I got your question -- would you like to ensure that all rows in the batch generate `stop_words` at least once? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,339 | closed | Minor typo in pipeline FillMaskPipeline's documentation. | # What does this PR do?
Fixes a minor typo in the documentation for FillMaskPipeline.__call__().
## Before submitting
- [X ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@Narsil. @sgugger, @stevhliu and @MKhalusova | 03-23-2023 13:49:09 | 03-23-2023 13:49:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for spotting and fixing this! |
transformers | 22,338 | closed | WhisperTokenizer for two languages at once. | In the blog https://huggingface.co/blog/fine-tune-whisper I see the code piece.
```tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="Hindi", task="transcribe")```
What if I want to include both French and Hindi?
Any suggestions here?
Thanks and Regards. | 03-23-2023 13:43:22 | 03-23-2023 13:43:22 | cc @ArthurZucker and @sanchit-gandhi <|||||>Hey @BakingBrains! You'll have to set the prefix tokens each time you switch language. You can do this using the [`.set_prefix_tokens`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperTokenizer.set_prefix_tokens) method, e.g. for French:
```python
tokenizer.set_prefix_tokens(language="French", task="transcribe")
# encode French target text to label ids
batch["labels"] = tokenizer(batch["sentence"]).input_ids
```
Then for Hindi:
```python
tokenizer.set_prefix_tokens(language="Hindi", task="transcribe")
# encode Hindi target text to label ids
batch["labels"] = tokenizer(batch["sentence"]).input_ids
@sanchit-gandhi Thanks a lot. Can't we use both languages at once? For example, I have an audio file where a person speaks Hindi and in between switches to French. How can I use the tokenizer in this case?
Thanks and Regards<|||||>I remember trying with a file that contained both French and English, and Whisper just transcribed the French part as if it was English. The same happens when you try to transcribe some audio that is in English but force the language code to another language: it will write English phonemes corresponding to the sounds that it hears.
- Now if you want to have a batch, and in the batch you have different languages, you can use the `generate` method and provide a batch of `decoder_input_ids` and set the forced tokens to be None.
- If you have another language in the middle, I suggest trying with the `return_timestamps` option, which will split the audio. My best recommendation is to do something similar to what OpenAI's long-audio decoding strategy does: when you reach the end of a timestamp, you re-generate. But this is going to be slow, since you don't know whether the next sentence is in English or Hindi, so you cut at the next sentence. <|||||>@ArthurZucker Thanks, I have been thinking the same.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,337 | closed | NER pipeline adding unnecessary spaces to extracted entities | ### System Info
- `transformers` version: 4.27.2
- Platform: macOS-13.1-x86_64-i386-64bit
- Python version: 3.10.9
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 2.0.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@Narsil @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have been using the NER pipeline of transformers to extract named entities from text. However, I have noticed that in some cases, the pipeline adds unnecessary spaces to the extracted entities, which can cause issues downstream.
For example, when I input the message "Pay 04-00-04", the pipeline extracts the following entity:
```python
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForTokenClassification.from_pretrained(MODEL_DIR)
pipe = pipeline(
"token-classification",
model=model,
tokenizer=tokenizer,
accelerator="bettertransformer",
aggregation_strategy="first",
)
pipe("Pay 04-00-04")
{
"entity":"CODE",
"word":"04 - 00 - 04",
"start":4,
"end":12
}
```
As you can see, the entity includes spaces between the hyphens, which is not correct. This can cause problems when I want to use the extracted entity in further processing, such as database lookups or machine learning models.
I have tested the pipeline on different messages and have found that it consistently adds spaces to some entities. This issue seems to be related to the tokenizer used by the pipeline, which splits the text into tokens before feeding it to the NER model.
Thank you for your attention to this matter.
### Expected behavior
I would expect to see entiities without unnecessary spaces:
```python
{
"entity":"CODE",
"word":"04-00-04",
"start":4,
"end":12
}
``` | 03-23-2023 12:08:47 | 03-23-2023 12:08:47 | Hi @SergeyShk .
This is linked to how tokenizers work, and there's nothing to be done about it (the tokenizer sees no difference between having a space there or not, so during decoding it can arbitrarily choose to add one or not).
However, you do have `start` and `stop` which can help you recover the exact original string within your text.
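For example, something like this recovers the exact span from the offsets shown in your output above:
```python
text = "Pay 04-00-04"
for ent in pipe(text):
    original_span = text[ent["start"]:ent["end"]]  # "04-00-04", exactly as it appears in the input
    print(original_span)
```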
Would that be enough ?<|||||>I definitely could use `start` and `stop` manually, but why aren't they used in pipeline to get `word`? <|||||>Legacy.
This was created before using the `tokenizers` library, and therefore `offsets` were not even an option, so indexing back was not possible. Since we're keen to never break compatibility (until 5.0), it's staying there.
Someone suggested to add yet another key like `better_word` which would contain it, but we decided against it, since it's even more confusing.<|||||>`word` is also always in lower case. But ok, I get you, will use `start` and `stop` then. Thanks.<|||||>> word is also always in lower case
This depends on the tokenizer.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,336 | closed | Have you considered using flash attention to speed up? | ### Feature request
using flash attention to speed up
### Motivation
none
### Your contribution
none | 03-23-2023 11:59:07 | 03-23-2023 11:59:07 | cc @younesbelkada <|||||>Hi @macheng6
Thanks a lot for your interest in this!
Indeed there is a similar integration to this called `BetterTransformer` that uses flash attention in the backend I believe.
This supports most of the encoder and decoder models (if you use the `main` branch of `optimum`); please refer to this documentation page: https://huggingface.co/docs/optimum/bettertransformer/overview
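Roughly, the integration looks like this (see the docs linked above for the exact API and the list of supported models):
```python
from transformers import AutoModel
from optimum.bettertransformer import BetterTransformer

model = AutoModel.from_pretrained("bert-base-uncased")
model = BetterTransformer.transform(model)
```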
cc @fxmarty @michaelbenayoun <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,335 | closed | Typo in Greedy Search Description | ### System Info
- `transformers` version: 4.28.0.dev0
- Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.5.0 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help?
@sgugger
### Reproduction
There is a small typographical error in the documentation for Greedy Search.
https://github.com/huggingface/transformers/blob/ff20f9cf3615a8638023bc82925573cb9d0f3560/docs/source/en/generation_strategies.mdx?plain=1#L149-L152
### Proposed Solution
This could be fixed by just rewriting this to:
```python
[`generate`] uses greedy search decoding by default so you don't have to pass any parameters to enable it. This means the parameter `num_beams` is set to 1 and `do_sample=False`.
``` | 03-23-2023 11:51:15 | 03-23-2023 11:51:15 | Sure, would you like to suggest a PR?<|||||>Yeah, I can open a PR to fix this. |
transformers | 22,343 | closed | Link for Absent Longformer Task Documentation | Hi,
I tried to look at the example tasks for Longformer. It led to a page that does not exist and a non-ideal response page.
How to reproduce:
1) Go to https://huggingface.co/docs/transformers/model_doc/longformer#documentation-resources
2) Click on any of the documentation resources, and arrive at the next page.
Maybe a better notification saying this documentation doesn't exist yet would help.
I guess this is more a matter of how you present absent web pages.


| 03-23-2023 11:34:36 | 03-23-2023 11:34:36 | Transferred to transformers. As far as I can see, this is fixed in main and will work as intended in the next release. https://huggingface.co/docs/transformers/main/en/model_doc/longformer<|||||>Oh. Ok. Thanks.
It still seems weird that the resources on this page are about generic task demonstrations, not particularly related to Longformer.
I guess this is a matter of design choice.
Feel free to close this issue |
transformers | 22,334 | closed | [`bnb`] Fix bnb slow test | # What does this PR do?
Fixes a failing daily CI test. https://github.com/huggingface/transformers/actions/runs/4485738346/jobs/7887641218
Which can be simplified as:
```python
from transformers import AutoModelForCausalLM
model_name = "bigscience/bloom-560m"
memory_mapping = {0: "1GB", 1: "2GB"}
model_parallel = AutoModelForCausalLM.from_pretrained(
model_name, load_in_8bit=True, max_memory=memory_mapping, device_map="auto"
)
# Check correct device map
print(set(model_parallel.hf_device_map.values()))
>>> EXPECTED={0, 1} / GOT={0}
```
I think this also fixes a bug (which is not negligible).
It seems that we need to add a check that `max_memory is None`, otherwise `max_memory` will get overridden right after by
```python
max_memory = get_balanced_memory(
model,
dtype=torch_dtype,
low_zero=(device_map == "balanced_low_0"),
**kwargs,
)
```
Hence `max_memory` seems to be ignored now on the `main` branch in some cases
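A minimal sketch of the guard described above (simplified; `model`, `torch_dtype`, `device_map` and `kwargs` come from the surrounding `from_pretrained` logic):
```python
from accelerate.utils import get_balanced_memory

if max_memory is None:
    # Only compute a balanced max_memory when the user did not pass one explicitly.
    max_memory = get_balanced_memory(
        model,
        dtype=torch_dtype,
        low_zero=(device_map == "balanced_low_0"),
        **kwargs,
    )
```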
cc @sgugger @ydshieh | 03-23-2023 10:48:28 | 03-23-2023 10:48:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Seems to be fixed on `main`, probably thanks to https://github.com/huggingface/transformers/pull/22311 |
transformers | 22,333 | closed | Mention why one needs to specify max_steps in Trainer | Just a minor change following https://discuss.huggingface.co/t/streaming-dataset-into-trainer-does-not-implement-len-max-steps-has-to-be-specified/32893 about why max_steps is needed for iterable datasets | 03-23-2023 10:19:23 | 03-23-2023 10:19:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,331 | closed | whisper model's default task should be "transcribe" | ### System Info
- `transformers` version: 4.27.2
- Platform: Linux-3.10.0-1062.9.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.16
- Huggingface_hub version: 0.13.2
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sanchit-gandhi @ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In transformers v4.26.1, the following script outputs the right language (some Chinese text), i.e. the correct task, "transcribe".
However, in version 4.27.2, it outputs translated English text, which corresponds to the other task, "translate".
Reproduction script: https://gist.github.com/chenht2010/174f2480641b6780cbccd588431176b8
### Expected behavior
Do ASR and output Chinese. | 03-23-2023 07:07:12 | 03-23-2023 07:07:12 | cc @ArthurZucker and @sanchit-gandhi 🙏 <|||||>Hey! As you can see [here](https://github.com/ArthurZucker/transformers/blob/d2854e753bcfd62dce2f968d6088232d0fc41f8c/src/transformers/models/whisper/modeling_whisper.py#L1586), the default (if the generation_config does not have a `task` set) is still `transcribe`. What changed is the `configuration.json`; see this [commit](https://huggingface.co/openai/whisper-large-v2/commit/e823955b7861a1d66fef509b8601ada6d7762c03), where the default went from `transcribe` (50358) to `translate` (50359 in the forced_decoder_ids). The update in transformers just makes sure to properly use this, while the previous version did not take it into account. <|||||>This is more a fix than a breaking change IMO<|||||>Thank you for your explanation.
transformers | 22,330 | closed | Can't export Deformable Detr to ONNX | ### System Info
- `transformers` version: 4.27.2
- Platform: Linux-6.2.0-76060200-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Tried both, doesn't work
- Using distributed or parallel set-up in script?: No
### Who can help?
@amyeroberts @sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code to reproduce:
```
from transformers import DeformableDetrForObjectDetection
import torch
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr")
example = torch.Tensor(1, 3, 600, 600)
torch.onnx.export(
model,
(example, None),
f="./test-ddetr.onnx",
input_names=['pixel_values'],
output_names=['logits', 'pred_boxes'],
dynamic_axes={"pixel_values": {0: "batch_size", 1: "image_channel", 2: "image_height", 3: "image_width"}},
do_constant_folding=True,
opset_version=16
)
```
Error:
```
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/torch/onnx/utils.py:581, in _optimize_graph(graph, operator_export_type, _disable_torch_constant_prop, fixed_batch_size, params_dict, dynamic_axes, input_names, module)
579 _C._jit_pass_inline_fork_wait(graph)
580 _C._jit_pass_lint(graph)
--> 581 _C._jit_pass_onnx_autograd_function_process(graph)
582 _C._jit_pass_lower_all_tuples(graph)
584 # we now record some ops like ones/zeros
585 # into a trace where we previously recorded constants.
586 # use constant prop to maintain our current level of onnx support
587 # without implementing symbolics for all of them
RuntimeError: required keyword attribute 'Subgraph' is undefined
```
### Expected behavior
Should export an onnx model. I can export the Detr model but not Deformable Detr. I have tried it on PyTorch 2.0 too.
I don't know if I should post this in this issue or another one but Deformable Detr is not supported on optimum.ORTModelForObjectDetection.
Also, I tried to create an OnnxConfig (copied from detr source) and export it using `transformers.onnx.export` but that resulted in the above error too. | 03-23-2023 04:26:12 | 03-23-2023 04:26:12 | Detr export should be supported: https://github.com/huggingface/optimum/blob/8252f4b0c48183198f4bed54bd6e0822213ef78b/optimum/exporters/tasks.py#L344-L349
Can you try `pip install -U optimum transformers` and `optimum-cli export onnx --model SenseTime/deformable-detr --task object-segmentation detr_onnx/`?<|||||>`object-segmentation` is not available as a task. I assumed you mean `object-detection` instead and tried the command.
Command:
```
optimum-cli export onnx --model SenseTime/deformable-detr --task object-detection detr_onnx/
```
Error:
```
KeyError: "deformable-detr is not supported yet. Only {'speech-to-text', 'hubert', 'mobilenet-v1', 'xlm', 'blenderbot', 'camembert', 'mobilenet-v2', 'wav2vec2-conformer', 'donut-swin', 'xlm-roberta', 'marian', 'electra', 'm2m-100', 'mbart', 'perceiver', 'whisper', 'swin', 'bert', 'poolformer', 'audio-spectrogram-transformer', 'unispeech', 'gpt-neo', 'levit', 'layoutlmv3', 'segformer', 'codegen', 'deit', 'mpnet', 'vit', 'roberta', 'deberta-v2', 'mt5', 'wavlm', 'data2vec-vision', 'data2vec-text', 'flaubert', 'blenderbot-small', 'vision-encoder-decoder', 'nystromformer', 'sew-d', 'yolos', 'gpt-neox', 'detr', 'gpt2', 'layoutlm', 'mobilevit', 't5', 'splinter', 'roformer', 'bloom', 'convnext', 'resnet', 'convbert', 'mobilebert', 'distilbert', 'squeezebert', 'unispeech-sat', 'gptj', 'clip', 'wav2vec2', 'groupvit', 'sew', 'deberta', 'beit', 'pegasus', 'longt5', 'ibert', 'albert', 'bart', 'data2vec-audio'} are supported. If you want to support deformable-detr please propose a PR or open up an issue."
```<|||||>Thank you @ashim-mahara , apologies indeed this is not supported currently - was confused by deformable_detr / detr.
Would you like to submit a PR to add the support in the export?
This would entail (among others):
* Adding a relevant config in https://github.com/huggingface/optimum/blob/main/optimum/exporters/onnx/model_configs.py (with defined inputs/outputs and inputs generators)
* Adding `deformable_detr` in tasks.py: https://github.com/huggingface/optimum/blob/4bbcc1b1d077e9258649f39b752370ff70163c00/optimum/exporters/tasks.py#L388<|||||>Okay I'll try and status update here in ~3 days.<|||||>@fxmarty I Still had the same error when I added the configs and checked if it will then import the model with:
`ORTModel.from_pretrained("../savedModels/deformable-detr/", from_transformers= True)`.
Error:
```
581 _C._jit_pass_inline_fork_wait(graph)
582 _C._jit_pass_lint(graph)
--> 583 _C._jit_pass_onnx_autograd_function_process(graph)
584 _C._jit_pass_lower_all_tuples(graph)
586 # we now record some ops like ones/zeros
587 # into a trace where we previously recorded constants.
588 # use constant prop to maintain our current level of onnx support
589 # without implementing symbolics for all of them
RuntimeError: required keyword attribute 'Subgraph' is undefined
```
I am not an expert on this but I think the path tracing is failing. So probably will need the model author to give it a look.<|||||>@ashim-mahara Could you open a PR in optimum so that I can have a look?<|||||>@fxmarty here is the PR: https://github.com/huggingface/optimum/pull/931<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@fxmarty is there any way to make the pretrained deformable-detr models compatible with the new code? I tried exporting `SenseTime/deformable-detr` after changing the `disable_custom_kernels` to `True` but it still throws an error. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,329 | closed | Enable training Llama with model or pipeline parallelism | # What does this PR do?
This PR enables model and pipeline parallelism for Llama models.
The change moves the target tensor to the output device, if needed, for loss calculation.
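For reference, a minimal sketch of the idea (simplified, not the exact diff; `logits` and `labels` come from the forward pass):
```python
import torch

# With model parallelism the lm_head output can live on a different GPU than the
# labels, so move the (shifted) labels to the logits' device before computing the loss.
shift_logits = logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous().to(shift_logits.device)
loss = torch.nn.CrossEntropyLoss()(
    shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)
)
```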
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 03-23-2023 04:02:17 | 03-23-2023 04:02:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Great jobs!<|||||>Could you share a snippet of code that is failing prior to this PR?<|||||>Certainly, here's a relatively minimal example:
```python
import transformers, datasets
from peft import (
LoraConfig,
get_peft_model,
)
CHECKPOINT = "decapoda-research/llama-7b-hf"
model = transformers.LlamaForCausalLM.from_pretrained(
CHECKPOINT,
device_map="auto",
max_memory={0:"15GB", 1:"15GB"}
)
tokenizer = transformers.LlamaTokenizer.from_pretrained(CHECKPOINT, add_eos_token=True)
tokenizer.pad_token_id = 0
config = LoraConfig(
r=8,
lora_alpha=16,
target_modules=["q_proj","v_proj"],
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
data = datasets.load_dataset("laion/OIG", data_files="unified_chip2.jsonl", split="train")
def tokenize(examples):
return tokenizer(
examples["text"],
truncation=True,
max_length=256,
padding="max_length",
)
data = data.map(tokenize)
#Tell Trainer not to attempt DataParallel
model.is_parallelizable = True
model.model_parallel = True
trainer = transformers.Trainer(
model=model,
train_dataset=data,
args=transformers.TrainingArguments(
per_device_train_batch_size=1,
learning_rate=3e-4,
logging_steps=10,
evaluation_strategy="no",
save_strategy="no",
output_dir="/tmp/"
),
data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False
trainer.train()
```
Before the change, this fails with
`RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument target in method wrapper_CUDA_nll_loss_forward)`
I found the solution here: [Pytorch Pipeline Tutorial](https://pytorch.org/tutorials/intermediate/pipeline_tutorial.html#run-the-model)
> \# Need to move targets to the device where the output of the pipeline resides.
And after the change, the code example above runs as expected.
|
transformers | 22,328 | closed | PyTorch/XLA FSDP doesn't seem to work on TPU-v3-8 VM | ### System Info
GCP TPU-v3-8 VM
Operating System: Ubuntu 20.04.4 LTS
Kernel: Linux 5.13.0-1027-gcp
transformers 4.28.0.dev0 (pip install git+https://github.com/huggingface/transformers.git on 03/22/2023)
torch 2.0.0
torch-xla 2.0
### Who can help?
People from #21406 that is @AlexWertheim, possibly @pacman100 and @ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The [glue example with Trainer for TPUs](https://github.com/huggingface/transformers/tree/main/examples/pytorch#running-on-tpus) without FSDP worked flawlessly in my TPU-v3-8 VM with xlm-roberta-base (because the model and batch fit properly within each core).
Now that FSDP was integrated thanks to @AlexWertheim, I tried running facebook/xlm-roberta-xl on this example with the additional parameters.
```bash
python xla_spawn.py --num_cores 8 \
run_glue.py \
--model_name_or_path facebook/xlm-roberta-xl \
--task_name mnli \
--do_train \
--max_seq_length 128 \
--per_device_train_batch_size 4 \
--learning_rate 2e-5 \
--num_train_epochs 10.0 \
--output_dir mnli_output \
--report_to all \
--fsdp 'shard_grad_op' \
--fsdp_config '../fstp_config.json' \
--debug 'tpu_metrics_debug' \
--logging_steps 100 \
--gradient_accumulation_steps 8
```
fstp_config.json:
```json
{
"fsdp_min_num_params": 10000000,
"xla": true,
"xla_fsdp_settings": {}
}
```
I also tried using `"fsdp_transformer_layer_cls_to_wrap": ["XLMRobertaXLModel","XLMRobertaXLClassificationHead"]` instead of `"fsdp_min_num_params": 10000000`. Also `full_shard` instead of `shard_grad_op` and some other variations but they're all giving me the following error:
```bash
0%| | 1/3068000 [08:09<416756:07:35, 489.02s/it]2023-03-23 02:02:19.905715: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
2023-03-23 02:02:22.081681: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] StackTrace:
2023-03-23 02:02:22.081762: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] *** Begin stack trace ***
2023-03-23 02:02:22.081770: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] tsl::CurrentStackTrace()
2023-03-23 02:02:22.081777: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] xla::util::ReportComputationError(tsl::Status const&, absl::lts_20220623::Span<xla::XlaComputation const* const>, absl::lts_20220623::Span<xla::Shape const* const>)
2023-03-23 02:02:22.081783: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] xla::XrtComputationClient::ExecuteComputation(xla::ComputationClient::Computation const&, absl::lts_20220623::Span<std::shared_ptr<xla::ComputationClient::Data> const>, std::string const&, xla::ComputationClient::ExecuteComputationOptions const&)
2023-03-23 02:02:22.081790: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] torch_xla::XlaBackendImpl::ExecuteComputation(std::shared_ptr<torch::lazy::Computation>, c10::ArrayRef<std::shared_ptr<torch::lazy::BackendData> >, torch::lazy::BackendDevice const&) const
2023-03-23 02:02:22.081809: E tensorflow/compiler/xla/xla_client/xla_util.cc:90]
2023-03-23 02:02:22.081818: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] torch::lazy::MultiWait::Complete(std::function<void ()> const&)
2023-03-23 02:02:22.081825: E tensorflow/compiler/xla/xla_client/xla_util.cc:90]
2023-03-23 02:02:22.081831: E tensorflow/compiler/xla/xla_client/xla_util.cc:90]
2023-03-23 02:02:22.081836: E tensorflow/compiler/xla/xla_client/xla_util.cc:90]
2023-03-23 02:02:22.081842: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] clone
2023-03-23 02:02:22.081847: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] *** End stack trace ***
2023-03-23 02:02:22.081854: E tensorflow/compiler/xla/xla_client/xla_util.cc:90]
2023-03-23 02:02:22.081862: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Status: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0:
2023-03-23 02:02:22.081870: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2 root error(s) found.
2023-03-23 02:02:22.081878: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] (0) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
2023-03-23 02:02:22.081891: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] [[{{node XRTExecute}}]]
2023-03-23 02:02:22.081898: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
2023-03-23 02:02:22.081905: E tensorflow/compiler/xla/xla_client/xla_util.cc:90]
2023-03-23 02:02:22.081911: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] [[XRTExecute_G10]]
2023-03-23 02:02:22.081920: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
2023-03-23 02:02:22.081928: E tensorflow/compiler/xla/xla_client/xla_util.cc:90]
2023-03-23 02:02:22.081937: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] (1) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
2023-03-23 02:02:22.081944: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] [[{{node XRTExecute}}]]
2023-03-23 02:02:22.081951: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
2023-03-23 02:02:22.081959: E tensorflow/compiler/xla/xla_client/xla_util.cc:90]
2023-03-23 02:02:22.081967: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 0 successful operations.
2023-03-23 02:02:22.081975: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 0 derived errors ignored.
2023-03-23 02:02:22.081983: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Recent warning and error logs:
2023-03-23 02:02:22.081989: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/transformers/optimization.py:391: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
Exception in device=TPU:1: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0:
2 root error(s) found.
(0) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
[[{{node XRTExecute}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
[[XRTExecute_G10]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
(1) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
[[{{node XRTExecute}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
0 successful operations.
0 derived errors ignored.
Recent warning and error logs:
OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
Traceback (most recent call last):
File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 334, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 328, in _start_fn
fn(gindex, *args)
File "/datadrive/test/run_glue.py", line 622, in _mp_fn
main()
File "/datadrive/test/run_glue.py", line 534, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/transformers/trainer.py", line 1644, in train
return inner_training_loop(
File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/transformers/trainer.py", line 1881, in _inner_training_loop
for step, inputs in enumerate(epoch_iterator):
File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/parallel_loader.py", line 30, in __next__
return self.next()
File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/parallel_loader.py", line 42, in next
xm.mark_step()
File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/core/xla_model.py", line 949, in mark_step
torch_xla._XLAC._xla_step_marker(
RuntimeError: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0:
2 root error(s) found.
(0) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
[[{{node XRTExecute}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
[[XRTExecute_G10]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
(1) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
[[{{node XRTExecute}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
0 successful operations.
0 derived errors ignored.
Recent warning and error logs:
OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
2023-03-23 02:02:23.050198: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
https://symbolize.stripped_domain/r/?trace=7f7627be9376,7f7627bee41f,0&map=
*** SIGTERM received by PID 89268 (TID 89268) on cpu 51 from PID 89123; stack trace: ***
PC: @ 0x7f7627be9376 (unknown) pthread_cond_wait@@GLIBC_2.3.2
@ 0x7f74d8c2aa1a 1152 (unknown)
@ 0x7f7627bee420 (unknown) (unknown)
@ 0x1 (unknown) (unknown)
https://symbolize.stripped_domain/r/?trace=7f7627be9376,7f74d8c2aa19,7f7627bee41f,0&map=ceee8fa20ddf9c34af43f587221e91de:7f74cbd02000-7f74d8e41840
E0323 02:02:23.479201 89268 coredump_hook.cc:360] RAW: Remote crash gathering disabled for SIGTERM.
E0323 02:02:24.172933 89268 process_state.cc:784] RAW: Raising signal 15 with default behavior
2023-03-23 02:02:25.056856: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] StackTrace:
2023-03-23 02:02:25.056942: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] *** Begin stack trace ***
2023-03-23 02:02:25.056952: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] tsl::CurrentStackTrace()
2023-03-23 02:02:25.056959: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] xla::util::ReportComputationError(tsl::Status const&, absl::lts_20220623::Span<xla::XlaComputation const* const>, absl::lts_20220623::Span<xla::Shape const* const>)
2023-03-23 02:02:25.056967: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] xla::XrtComputationClient::ExecuteComputation(xla::ComputationClient::Computation const&, absl::lts_20220623::Span<std::shared_ptr<xla::ComputationClient::Data> const>, std::string const&, xla::ComputationClient::ExecuteComputationOptions const&)
2023-03-23 02:02:25.056976: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] torch_xla::XlaBackendImpl::ExecuteComputation(std::shared_ptr<torch::lazy::Computation>, c10::ArrayRef<std::shared_ptr<torch::lazy::BackendData> >, torch::lazy::BackendDevice const&) const
2023-03-23 02:02:25.056984: E tensorflow/compiler/xla/xla_client/xla_util.cc:90]
2023-03-23 02:02:25.056997: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] torch::lazy::MultiWait::Complete(std::function<void ()> const&)
2023-03-23 02:02:25.057005: E tensorflow/compiler/xla/xla_client/xla_util.cc:90]
2023-03-23 02:02:25.057011: E tensorflow/compiler/xla/xla_client/xla_util.cc:90]
2023-03-23 02:02:25.057018: E tensorflow/compiler/xla/xla_client/xla_util.cc:90]
2023-03-23 02:02:25.057025: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] clone
2023-03-23 02:02:25.057033: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] *** End stack trace ***
2023-03-23 02:02:25.057041: E tensorflow/compiler/xla/xla_client/xla_util.cc:90]
2023-03-23 02:02:25.057050: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Status: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0:
2023-03-23 02:02:25.057058: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2 root error(s) found.
2023-03-23 02:02:25.057067: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] (0) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
2023-03-23 02:02:25.057075: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] [[{{node XRTExecute}}]]
2023-03-23 02:02:25.057085: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
2023-03-23 02:02:25.057094: E tensorflow/compiler/xla/xla_client/xla_util.cc:90]
2023-03-23 02:02:25.057102: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] [[XRTExecute_G12]]
2023-03-23 02:02:25.057111: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
2023-03-23 02:02:25.057135: E tensorflow/compiler/xla/xla_client/xla_util.cc:90]
2023-03-23 02:02:25.057143: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] (1) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
2023-03-23 02:02:25.057151: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] [[{{node XRTExecute}}]]
2023-03-23 02:02:25.057160: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
2023-03-23 02:02:25.057168: E tensorflow/compiler/xla/xla_client/xla_util.cc:90]
2023-03-23 02:02:25.057176: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 0 successful operations.
2023-03-23 02:02:25.057186: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 0 derived errors ignored.
2023-03-23 02:02:25.057194: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Recent warning and error logs:
2023-03-23 02:02:25.057202: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
2023-03-23 02:02:25.057209: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/transformers/optimization.py:391: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
Exception in device=TPU:6: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0:
2 root error(s) found.
(0) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
[[{{node XRTExecute}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
[[XRTExecute_G12]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
(1) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
[[{{node XRTExecute}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
0 successful operations.
0 derived errors ignored.
Recent warning and error logs:
OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
Traceback (most recent call last):
File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 334, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 328, in _start_fn
fn(gindex, *args)
File "/datadrive/test/run_glue.py", line 622, in _mp_fn
main()
File "/datadrive/test/run_glue.py", line 534, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/transformers/trainer.py", line 1644, in train
return inner_training_loop(
File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/transformers/trainer.py", line 1881, in _inner_training_loop
for step, inputs in enumerate(epoch_iterator):
File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/parallel_loader.py", line 30, in __next__
return self.next()
File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/parallel_loader.py", line 42, in next
xm.mark_step()
File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/core/xla_model.py", line 949, in mark_step
torch_xla._XLAC._xla_step_marker(
RuntimeError: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0:
2 root error(s) found.
(0) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
[[{{node XRTExecute}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
[[XRTExecute_G12]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
(1) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
[[{{node XRTExecute}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
0 successful operations.
0 derived errors ignored.
Recent warning and error logs:
OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable.
2023-03-23 02:02:29.834867: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Socket closed" and grpc_error_string = "{"created":"@1679536949.834650343","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC
2023-03-23 02:02:29.835007: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Socket closed" and grpc_error_string = "{"created":"@1679536949.834795697","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC
2023-03-23 02:02:29.835038: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.834893793","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC
2023-03-23 02:02:29.835095: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.834956775","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC
2023-03-23 02:02:29.835197: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Socket closed" and grpc_error_string = "{"created":"@1679536949.835008010","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC
2023-03-23 02:02:29.835206: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.834976683","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC
2023-03-23 02:02:29.835408: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Socket closed" and grpc_error_string = "{"created":"@1679536949.835235487","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC
2023-03-23 02:02:29.835456: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.834964014","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC
2023-03-23 02:02:29.835480: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Socket closed" and grpc_error_string = "{"created":"@1679536949.835338354","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC
2023-03-23 02:02:29.835540: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.834899794","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC
2023-03-23 02:02:29.835614: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.834992684","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC
2023-03-23 02:02:29.835687: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.835345000","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC
2023-03-23 02:02:29.835752: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.835176851","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC
Traceback (most recent call last):
File "xla_spawn.py", line 83, in <module>
main()
File "xla_spawn.py", line 79, in main
xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 397, in spawn
result = torch.multiprocessing.start_processes(
File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 197, in start_processes
while not context.join():
File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 149, in join
raise ProcessExitedException(
torch.multiprocessing.spawn.ProcessExitedException: process 1 terminated with exit code 17
/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 6 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
```
### Expected behavior
From my understanding, the model was supposed to be split loaded onto the TPU cores, along with whatever `full_shard` entails, but it doesn't seem to be happening. | 03-23-2023 02:50:59 | 03-23-2023 02:50:59 | I still think this still needs to be addressed <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,327 | closed | Added type hints to TFDeiTModel | # What does this PR do?
This pull request adds type hints for modeling_tf_deit.py as outlined in Issue #16059
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Rocketknight1
| 03-23-2023 01:39:01 | 03-23-2023 01:39:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,326 | closed | torch_compile fail with multi-gpus on samples | ### System Info
platform: ubuntu 20.04
Pytorch version: nightly
transformers version: built from source
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. ubuntu 20.04, python 3.8, install Pytorch nightly, transformers from source
2. Install the necessary dependencies.
3. Go to the official example: transformers/examples/pytorch/text-classification
4. Run sample with torch_compile: python run_glue.py --model_name_or_path finiteautomata/bertweet-base-sentiment-analysis --task_name mnli --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 1 --learning_rate 2e-5 --num_train_epochs 1 --overwrite_output_dir --output_dir ./outputs/ --per_device_eval_batch_size 1 --seed 1137 --fp16 True --max_train_samples 1000 --**torch_compile**
5. Got exception from pytorch: Exception: Please convert all Tensors to FakeTensors first or instantiate FakeTensorMode with 'allow_non_fake_inputs'.
Try using a single GPU via `export CUDA_VISIBLE_DEVICES=0` and rerun the sample command; this time it runs without issue.
It looks like when multiple GPUs are found, the torch model is also wrapped with nn.DataParallel: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1387 and this seems to cause the issue with torch.compile.
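A hypothetical minimal reproduction of that interaction (not from the report; the Trainer wraps the model in DataParallel when it sees more than one GPU, and compiling the wrapped module appears to be what triggers the FakeTensor error):
```python
# Hypothetical minimal reproduction of the interaction described above (not from the report).
import torch

model = torch.nn.Linear(16, 2).cuda()
if torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model)  # what Trainer does on multi-GPU
compiled = torch.compile(model)  # reportedly fails under DataParallel with the FakeTensor error
out = compiled(torch.randn(8, 16).cuda())
```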
### Expected behavior
The sample runs without issue with torch.compile on multi-GPU. | 03-23-2023 01:31:20 | 03-23-2023 01:31:20 | I don't think `torch.compile` supports `DataParallel`. You should launch your script in a distributed fashion using `torchrun`.<|||||>Makes sense; in that case I think the transformers Trainer should be refined to let torch.compile work on multi-GPU.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,325 | closed | [gptj] support older pytorch version | Unbreak 2 issues introduced by https://github.com/huggingface/transformers/pull/22069 . I validated that this version works even with pt-1.9, which is the new lowest version supported by transformers since https://github.com/huggingface/transformers/pull/22291
Fixes:
```
E File "/mnt/nvme0/code/huggingface/transformers-master/examples/pytorch/language-modeling/run_clm.py", line 412, in main
E model = AutoModelForCausalLM.from_pretrained(
E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 470, in from_pretrained
E model_class = _get_model_class(config, cls._model_mapping)
E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 360, in _get_model_class
E supported_models = model_mapping[type(config)]
E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 602, in __getitem__
E return self._load_attr_from_module(model_type, model_name)
E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 616, in _load_attr_from_module
E return getattribute_from_module(self._modules[module_name], attr)
E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 561, in getattribute_from_module
E if hasattr(module, attr):
E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/utils/import_utils.py", line 1109, in __getattr__
E module = self._get_module(self._class_to_module[name])
E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/utils/import_utils.py", line 1121, in _get_module
E raise RuntimeError(
E RuntimeError: Failed to import transformers.models.gptj.modeling_gptj because of the following error (look up to see its traceback):
E module 'torch' has no attribute 'fx'
```
credits for the fix: @mrwyattii
and:
```
E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/gptj/modeling_gptj.py", line 61, in create_sinusoidal_positions
E return torch.concat((torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)), dim=1)
E AttributeError: module 'torch' has no attribute 'concat'
```
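A sketch of the version-compatible form of that line (paraphrased, not the exact diff): `torch.concat` is only an alias added in newer PyTorch, while `torch.cat` exists across the supported versions.
```python
# Paraphrased sketch (not the exact diff): use torch.cat, which exists in older PyTorch,
# instead of the newer torch.concat alias.
import torch

def create_sinusoidal_positions(num_pos: int, dim: int) -> torch.Tensor:
    inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2) / dim))
    sinusoid_inp = torch.einsum("i , j -> i j", torch.arange(num_pos, dtype=torch.float), inv_freq).float()
    return torch.cat((torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)), dim=1)
```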
credits for the fix: @njhill | 03-23-2023 00:21:51 | 03-23-2023 00:21:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for the quick review, Sylvain.<|||||>Thank you @stas00 @sgugger and sorry again for the breakage.<|||||>You haven't done anything wrong, Nick. It's just very difficult to instantly test all the different variations. We have more indepth multi-version CI running on a daily basis, so usually any missed problems get detected on the next day.
And thank you for your contribution! |
transformers | 22,324 | closed | GPT2ForSequenceClassification logits unmatched size | ### System Info
huggingface-hub-0.13.3 tokenizers-0.13.2 transformers-4.27.2
python3.9
Hi :)
Using GPT2ForSequenceClassification, I have num_labels > 1, but get logits with shape (batch_size,1) instead of (batch_size, config.num_labels) as written in the docs.
I verified that those values are correct:
config.num_classes
model.num_labels
and model.score Linear with the correct out_features.
I also debugged, and the shape of _logits_ inside the forward method is correct, so it seems like the problem is with this line:
pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths]
Maybe I didn't understand what it's supposed to do, but the result is that the logits come out in a different shape than needed or expected.
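For reference, a toy illustration of what I think that indexing is meant to do (my own sketch, not the model code):
```python
# Toy illustration: for each batch element, select the logits at the position of the last
# real (non-padding) token, which should give shape (batch_size, num_labels).
import torch

batch_size, seq_len, num_labels = 3, 5, 4
logits = torch.randn(batch_size, seq_len, num_labels)
sequence_lengths = torch.tensor([4, 2, 3])  # index of the last real token per sequence
pooled = logits[torch.arange(batch_size), sequence_lengths]
assert pooled.shape == (batch_size, num_labels)
```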
Will be glad for help, correction or clarification :)
Thank you!
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
config = GPT2Config(vocab_size = num_classes,
n_embd = input_size,
n_layer = 12,
n_head = 8,
num_labels = num_classes
)
model = GPT2ForSequenceClassification(config).to(device)
model.config.pad_token_id = model.config.eos_token_id
outputs = model(inputs_embeds=inputs).logits
```
### Expected behavior
outputs.shape == (batch_size, config.num_labels) | 03-22-2023 20:16:59 | 03-22-2023 20:16:59 | Hi!
I am not really sure about your usage (setting the vocabulary size to the number of classes?), but as shown in the documentation, this is how you should be using the `GPT2ForSequenceClassification` class:
```python
import torch
from transformers import GPT2Tokenizer, GPT2ForSequenceClassification
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels = 4)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
```
```
'LABEL_3'
```
(and the shape of the logits is `[1, 4]`, as expected).
If the model was not trained on the specific task, by default it will not have the correct shape in the last output layer, so you will see the following warning:
```python
Some weights of GPT2ForSequenceClassification were not initialized from the model checkpoint at gpt2 and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
Now regarding your issue, I simply suggest that there is something wrong with the shapes of the inputs that you are providing to your model.
The following works:
```
config = GPT2Config(num_labels=4)
model = GPT2ForSequenceClassification(config)
logits = model(**inputs).logits
assert logits.shape[-1] == 4
```
|
transformers | 22,323 | closed | Seq2seq trainer generation config arg | # What does this PR do?
`Seq2SeqTrainer` can load a `GenerationConfig` by calling the `from_pretrained` method. This is done with the `generation_config_from_pretrain` argument from `Seq2SeqTrainingArguments` (or in the `kwargs` of the `Seq2SeqTrainer.evaluate` and `Seq2SeqTrainer.predict` methods).
At first, we thought of using a `generation_config_file` argument (#22203). I thought it would be even more versatile to consider it as a "from_pretrained" approach. Hence, `generation_config_from_pretrain` can also handle model IDs and URLs.
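For illustration, the loading this builds on is the existing `GenerationConfig` API; only the `Seq2SeqTrainingArguments` argument name above is specific to this PR (a minimal sketch, names may still evolve during review):
```python
# Minimal sketch of the existing GenerationConfig loading this PR builds on.
from transformers import GenerationConfig

gen_config = GenerationConfig(max_new_tokens=64, num_beams=4)
gen_config.save_pretrained("my-gen-config")                    # writes generation_config.json
reloaded = GenerationConfig.from_pretrained("my-gen-config")   # also accepts model ids / URLs
```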
### Small suggestion
As `Seq2SeqTrainer` actually brings very little additional functionality or modification, would directly including these in `Trainer` be a good idea? (one trainer to rule them all 💍)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Yes: #22203
- [x] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @gante | 03-22-2023 19:02:08 | 03-22-2023 19:02:08 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The examples are broken here due to `Seq2SeqTrainingArguments.generation_max_length` and `Seq2SeqTrainingArguments.generation_num_beams` being removed.
From here, what do you suggest: putting them back (and sending a warning?), updating the examples, or both?<|||||>Hey guys, thanks for reviewing, it's my pleasure to contribute considering how useful transformers have been to me ! 😃
@gante
1. Noted. I have put them back.
2. Sounds good, you probably know better the demand / usages. One note though, I initially called `load_generation_config` from the `evaluate` and `predict` methods for in case users specify a `generation_config` `kwarg` for these methods. Should we get rid of this (then forcing users to override `trainer.generation_config` if they need to change it) ? If not, maybe we do not need a `__init__` whose purpose would solely be to create `self._gen_config`, which would also be done in `evaluate` and `predict` anyway ?<|||||>@Natooz I think so (forcing to override).
I'd rather have a simple solution now, and make it complex in the future if there is demand for it. Maintenance is a limitation on our side, our team is relatively small :)<|||||>That's totally understandable. You guys are already managing this ecosystem really well, and make a huge impact ! 🙌
I created the `__init__` method, overriding `model.generation_config`. Indeed the code is shorter and simpler.<|||||>Hey @gante,
Thanks, the last changes are done.
I'll take the instructions for the rebase, I just didn't do it right<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@Natooz I think everything went well, no further touches are needed in the rebase front :) Now we only need to add back the old arguments (`self.args.generation_max_length` and `self.args.generation_num_beams`) and their logic!<|||||>Good, evaluate and predict are back as original, it should be good now<|||||>Suggestions applied, sorry for these typos (copy / paste ...) 😅<|||||>@Natooz no worries. Thank you for all the contributions to this PR, they will help many LLM+trainer users! 🤗 <|||||>Sure, here it is |
transformers | 22,322 | closed | Generate: add test for left-padding support | # What does this PR do?
This PR adds a test to check whether a decoder-only model supports left padding. The test was somewhat tricky to design -- the reasoning is all commented in the code, for future reference.
Let me know if you agree with the decisions made in the test!
Hopefully we will be able to detect whether a model supports left-padding before merging 🤞 | 03-22-2023 18:36:49 | 03-22-2023 18:36:49 | Current failures -- including Llama and GPTNeoX, the two I had my eyes on:

<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger reduced to 10 runs, merging when CI gets green :) |
transformers | 22,321 | closed | line 714 bsz, seq_len = input_shape may crash in CLIPTextTransformer | ### System Info
Hi, there,
When input_ids is more than 2 dimentions , the input_shape = input_ids.size() will give more than 2 numbers. Then bsz, seq_len = input_shape will crash.
I don't know if I was right. Just face the issue and change the code in my machine and it works.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
when use CLIPTextTransformer, pass a input_ids with more than 2 dimentions.
### Expected behavior
crash: unpack more than expected | 03-22-2023 18:11:15 | 03-22-2023 18:11:15 | Hi @qilei123, thanks for raising this issue.
Could you share a minimal code sample which reproduces the error and information about the environment the code was run in (run `transformers-cli env` to get this info)?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,320 | closed | Fix PipelineTests skip conditions | # What does this PR do?
While I am starting to try to fix some pipeline tests that are skipped previously, I realized PR #21516 (removing the usage of meta class in pipeline testing) **accidentally skip more tests than it should**.
- previously, each combination of model/tokenizer/processor class has its own test case (being generated on the fly), so we can use `self.skipTest` to skip some cases.
- After #21516,each model + task has its test cases, but inside that test, it runs against all tokenizers/processors that are available. In particular, slow / fast tokenizer.
- As mentioned before, we have some slow tokenizer issues in pipeline testing (never tested before #20426), and we skip some failing cases for now.
- but after #21516, when we use `self.skipTest`, we **might** **skip the next combination(s)**. This is **NOT good/expected**.
This PR instead uses `logger.warning` to log the skipped cases and continue to the next combination in a test. This is not very pleasant, but as we don't want to use meta class, this is the only way I can think of for now.
It's better to prepare a report file that clearly indicate which test cases and/or combination being [skipped]. | 03-22-2023 17:41:04 | 03-22-2023 17:41:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,319 | closed | Hardware Auto-Setup for Examples | # What does this PR do?
Discussed with @sgugger and @LysandreJik . This PR introduces auto-setup functionality for tutorials and examples (we are sending a parallel PR to accelerate, and maybe Diffusers and Spaces shortly). This allows users to run transformers code, tutorials, and scripts on self-hosted hardware (either their own instance or a cloud instances) including on-demand allocation of the hardware itself on AWS, GCP, Azure, or Lambda Labs, and installation of dependencies. This introduces a similar level of turnkey usage and reproducibility that users only typically expect in Colab, but for any type of hardware on any cloud (we've tested on Paperspace and Coreweave as well, allocating the instance in their UI and then plugging in the IP as a static cluster). Note that Runhouse OSS is facilitating the setup (via SkyPilot) and rpc, but users don't don't need to create a Runhouse account or anything like that, this is strictly inside their own cloud accounts with their own credentials (or using their IP and ssh creds without a cloud account).
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). | 03-22-2023 17:21:26 | 03-22-2023 17:21:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Definitely, that makes sense, and thanks for the fast review! Updated per your suggestions.<|||||>Thanks for iterating! There is just the issue of the tests that are not running now. It seems there is an issue with your CircleCI permissions. Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>Oh sorry! I think I've granted it access. Do I need to trigger anything on my side?<|||||>Probably an empty commit to re-trigger the CI.<|||||>Ok, so it looks like we just need a quick `make style` to fix the formatting on the added examples and we should be good to go.<|||||>Oops, thought I already ran via `make fixup`. Pushed!<|||||>Ok, last failures are fixed on main so merging. Thanks!<|||||>Thank you, Sylvain!! |
transformers | 22,318 | closed | Hardware Auto-Setup for Tutorials and Examples | # What does this PR do?
Discussed with @sgugger and @LysandreJik . This PR introduces auto-setup functionality for tutorials and examples (we are sending a parallel PR to accelerate, and maybe Diffusers and Spaces shortly). This allows users to run transformers code, tutorials, and scripts on self-hosted hardware (either their own instance or a cloud instances) including on-demand allocation of the hardware itself on AWS, GCP, Azure, or Lambda Labs, and installation of dependencies. This introduces a similar level of turnkey usage and reproducibility that users only typically expect in Colab, but for any type of hardware on any cloud (we've tested on Paperspace and Coreweave as well, allocating the instance in their UI and then plugging in the IP as a static cluster). Note that Runhouse OSS is facilitating the setup (via SkyPilot) and rpc, but users don't don't need to create a Runhouse account or anything like that, this is strictly inside their own cloud accounts with their own credentials (or using their IP and ssh creds without a cloud account).
This PR is WIP and seeking feedback. A few open questions:
1. launch_auto_hardware.mdx is structured to be like a notebook, but how do I make it have the "launch in colab" etc. buttons on the top right? (I'm happy to refactor it into an ipynb if needed)
2. launch_auto_hardware.mdx shows only inference. I think it could be valuable to make it mirror the initial PyTorch parts of [the yelp fine-tuning tutorial](https://huggingface.co/docs/transformers/training), to show preprocessing, training, and inference all on remote hardware. Does that make sense?
3. We're generally using the term "remote hardware" with "auto setup". Would it be better to say "self-hosted" or something like that (to avoid confusion with hosted solutions)?
4. Should we showcase more examples? Right now we show just a few but can add more if you think that's valuable (e.g. @sgugger mentioned showing TPUs before). We can also tailor the auto-setup scripts to different examples if desired, rather than the one-size fits all approach.
Thank you for your input on this so far!!
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). | 03-22-2023 17:08:28 | 03-22-2023 17:08:28 | Whoops, shouldn't be on main, I'll close and reopen on a new branch.<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,317 | closed | Add `MegatronT5ForConditionalGeneration` | # What does this PR do?
This PR adds the `MegatronT5ForConditionalGeneration` class, which among standard applications can be used for pretrained T5 model from NVIDIA NeMo MegatronT5 :)
I also add converting script from NeMo MegatronT5 to Huggingface MegatronT5ForConditionalGeneration model
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #22315
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
cc @ArthurZucker and @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 03-22-2023 16:18:28 | 03-22-2023 16:18:28 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22317). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey! Thanks for contributing! In the current state I cannot really see the differences between this model and `T5`. Adding the `# Copied from` statements would help a lot. However if the model is very similar (and you still want to persue the PR!) I would recommend adding the model to the hub following [this](https://huggingface.co/docs/transformers/custom_models) tutorial! It will be simpler for you and you won't have to deal with all the red CIs! <|||||>@ArthurZucker
You are correct that the basic structure of the model is based on the existing T5. However, there are differences in the implementation between huggingface and MegatronLM (or NeMo) regarding the reshaping of tensors for attention computation, as well as various differences in normalization methods. Due to these differences, I decided to submit the pull request. Simply mapping the model weights wouldn't result in proper functioning, so a custom class was required. I will refer to the guide you provided and give it a try. Thank you :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey! Could you refer me to the link of the updated model if you already push it to the hub? 😉 This is in order to keep track of models on the hub!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,316 | closed | docs: Resolve incorrect type typo in trainer methods | # What does this PR do?
Replace incorrect `Lst[str]` with `List[str]` in docstrings in various locations in the `Trainer`.
## Before submitting
- [x] This PR fixes a typo or improves the docs
## Who can review?
Documentation: @sgugger
* Tom Aarsen | 03-22-2023 15:24:30 | 03-22-2023 15:24:30 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,315 | open | Add MegatronT5 | ### Model description
In NeMo Megatron, the T5 model is available, but there is currently no MegatronT5 class for huggingface, such as MegatronBERT or MegatronGPT2. I have recently finished the porting work and have tested the model internally. I would like to share this model with the community.
### Open source status
- [X] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
- [NeMo Megatron models](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html)
- [NeMo](https://github.com/NVIDIA/NeMo)
- [Megatron-LM T5 model](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/t5_model.py) | 03-22-2023 15:09:09 | 03-22-2023 15:09:09 | |
transformers | 22,314 | closed | Beef up Llama tests | # What does this PR do?
I was starting to work on left padding support for Llama, and I noticed it was missing the usual test mixins. This PR rectifies that before I introduce further changes on Llama. | 03-22-2023 15:02:19 | 03-22-2023 15:02:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,313 | closed | 🚨🚨🚨 `[NLLB Tokenizer]` Fix the prefix tokens 🚨🚨🚨 | # What does this PR do?
The NLLB tokenizer's suffix and prefix tokens were wrong with respect to the paper.
This breaking change fixes the tokenizer.
It could be made non-breaking if we added these to the configuration file, but the change itself is required.
The tests have to be updated, but otherwise this should be good.
The big problem was the `prefix` and `suffix` tokens.
The previous version adds `[self.eos_token_id, self.cur_lang_code]` at the end of the token sequence for both target and source tokenization. This is wrong as the `NLLB` paper mentions (page 48, 6.1.1. Model Architecture) :
> Note that we prefix the source sequence with the source language, as opposed to the target
language as previously done in several works (Arivazhagan et al., 2019; Johnson et al.,
2017). This is primarily because we prioritize optimizing zero-shot performance of our
model on any pair of 200 languages at a minor cost to supervised performance.
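Concretely, the fix moves the source language code from a suffix to a prefix. A rough sketch of the two layouts, using the ids shown in the examples below:
```python
# Sketch only: ids taken from the examples below
lang_code_id, eos_id = 256047, 2                 # "eng_Latn", "</s>"
token_ids = [13374, 1398, 4260, 4039, 248130]    # "How was your day?"

old_input_ids = token_ids + [eos_id, lang_code_id]     # previous (wrong) layout
new_input_ids = [lang_code_id] + token_ids + [eos_id]  # fixed layout: language code first

assert old_input_ids == [13374, 1398, 4260, 4039, 248130, 2, 256047]
assert new_input_ids == [256047, 13374, 1398, 4260, 4039, 248130, 2]
```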
Previous behaviour:
```python
>>> from transformers import NllbTokenizer
>>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
>>> tokenizer("How was your day?").input_ids
[13374, 1398, 4260, 4039, 248130, 2, 256047]
>>> # 2: '</s>'
>>> # 256047 : 'eng_Latn'
```
New behaviour
```python
>>> from transformers import NllbTokenizer
>>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
>>> tokenizer("How was your day?").input_ids
[256047, 13374, 1398, 4260, 4039, 248130, 2]
```
Enabling the old behaviour:
```python
>>> from transformers import NllbTokenizer
>>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", legacy_behaviour = True)
```
This parameter should be part of the `tokenizer_config.json`. | 03-22-2023 14:43:28 | 03-22-2023 14:43:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for the implementation, @ArthurZucker!
Could you please put in the PR description the gist of the breaking change with a code sample, and how to revert to the previous behavior if users would like that?
Thank you<|||||>Indeed thanks for the tip on how to enable that swiftly! <|||||>One test is failing with NLLB (running slow ones locally, `test_encode_decode_with_spaces`), fixing this before merging.
Edit: Fast and slow have a different behaviour! `space_between_special_tokens` does not exist in rust (yet, PR coming soon)<|||||>Cool, I like the flag :)
Can the doc be shown more prominently? Maybe to replace the disclaimer mentioning to tag me? A disclaimer mentioning that we changed it to what it is now, with the code snippet?
<|||||>Thanks both for proof reading! 👍🏻 |
transformers | 22,312 | closed | LlamaTokenizer has no `pad` token, leading to failure during batch-tokenization | ### System Info
System info:
- Code: Current `main` branch, installed via: `pip install git+https://github.com/huggingface/transformers` on 22nd March 2023
### Who can help?
@ArthurZucker @sgugger @zphang
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
- Code to reproduce:
```
from transformers import LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
print(repr(tokenizer.pad_token)) ## None
print(repr(tokenizer.bos_token)) ## ''
print(repr(tokenizer.eos_token)) ## ''
```
- Where this causes an issue:
```
batch = tokenizer(
[
"Singer Billy Joel yesterday ",
"The primary use of LLaMA is research on large language "
],
return_tensors="pt",
padding=True
)
```
The above statement raises an issue:
```
Using pad_token, but it is not set yet.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[53], line 1
----> 1 batch = tokenizer(
2 [
3 "Singer Billy Joel yesterday ",
4 "The primary use of LLaMA is research on large language "
5 ],
6 return_tensors="pt",
7 padding=True
8 )
File /home/ec2-user/anaconda3/envs/llm-gen/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2531, in PreTrainedTokenizerBase.__call__(self, text, text_pair, text_target, text_pair_target, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2529 if not self._in_target_context_manager:
2530 self._switch_to_input_mode()
-> 2531 encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
2532 if text_target is not None:
2533 self._switch_to_target_mode()
File /home/ec2-user/anaconda3/envs/llm-gen/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2617, in PreTrainedTokenizerBase._call_one(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2612 raise ValueError(
2613 f"batch length of `text`: {len(text)} does not match batch length of `text_pair`:"
2614 f" {len(text_pair)}."
2615 )
2616 batch_text_or_text_pairs = list(zip(text, text_pair)) if text_pair is not None else text
-> 2617 return self.batch_encode_plus(
2618 batch_text_or_text_pairs=batch_text_or_text_pairs,
2619 add_special_tokens=add_special_tokens,
2620 padding=padding,
2621 truncation=truncation,
2622 max_length=max_length,
2623 stride=stride,
2624 is_split_into_words=is_split_into_words,
2625 pad_to_multiple_of=pad_to_multiple_of,
2626 return_tensors=return_tensors,
2627 return_token_type_ids=return_token_type_ids,
2628 return_attention_mask=return_attention_mask,
2629 return_overflowing_tokens=return_overflowing_tokens,
2630 return_special_tokens_mask=return_special_tokens_mask,
2631 return_offsets_mapping=return_offsets_mapping,
2632 return_length=return_length,
2633 verbose=verbose,
2634 **kwargs,
2635 )
2636 else:
2637 return self.encode_plus(
2638 text=text,
2639 text_pair=text_pair,
(...)
2655 **kwargs,
2656 )
File /home/ec2-user/anaconda3/envs/llm-gen/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2799, in PreTrainedTokenizerBase.batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2782 """
2783 Tokenize and prepare for the model a list of sequences or a list of pairs of sequences.
2784
(...)
2795 details in `encode_plus`).
2796 """
2798 # Backward compatibility for 'truncation_strategy', 'pad_to_max_length'
-> 2799 padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(
2800 padding=padding,
2801 truncation=truncation,
2802 max_length=max_length,
2803 pad_to_multiple_of=pad_to_multiple_of,
2804 verbose=verbose,
2805 **kwargs,
2806 )
2808 return self._batch_encode_plus(
2809 batch_text_or_text_pairs=batch_text_or_text_pairs,
2810 add_special_tokens=add_special_tokens,
(...)
2825 **kwargs,
2826 )
File /home/ec2-user/anaconda3/envs/llm-gen/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2436, in PreTrainedTokenizerBase._get_padding_truncation_strategies(self, padding, truncation, max_length, pad_to_multiple_of, verbose, **kwargs)
2434 # Test if we have a padding token
2435 if padding_strategy != PaddingStrategy.DO_NOT_PAD and (not self.pad_token or self.pad_token_id < 0):
-> 2436 raise ValueError(
2437 "Asking to pad but the tokenizer does not have a padding token. "
2438 "Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` "
2439 "or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`."
2440 )
2442 # Check that we will truncate to a multiple of pad_to_multiple_of if both are provided
2443 if (
2444 truncation_strategy != TruncationStrategy.DO_NOT_TRUNCATE
2445 and padding_strategy != PaddingStrategy.DO_NOT_PAD
(...)
2448 and (max_length % pad_to_multiple_of != 0)
2449 ):
ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.
```
### Expected behavior
The following code should work:
```
from transformers import LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
batch = tokenizer(
[
"Singer Billy Joel yesterday ",
"The primary use of LLaMA is research on large language "
],
return_tensors="pt",
padding=True
)
``` | 03-22-2023 13:08:03 | 03-22-2023 13:08:03 | - Possible root cause:
I don't see padding token set anywhere: https://github.com/huggingface/transformers/blob/c07a02a4b7892edfee22cbe57d3cdd9e10ae7a4d/src/transformers/models/llama/convert_llama_weights_to_hf.py#L241
A bunch of LLaMa libraries seem to be setting the IDs from the sentencepiece `tokenizer.model`: https://github.com/markasoftware/llama-cpu/blob/main/llama/tokenizer.py#L24
For me, running the following yields:
```
>>> print(sp_model.bos_id(), sp_model.eos_id(), sp_model.pad_id())
1 2 -1
```
...which makes me believe the original tokenizer does not have a pad token? This is confirmed by the following:
```
sp_model.id_to_piece(1) ## '<s>', which is the bos token for LLaMa
sp_model.id_to_piece(2) ## '</s>', which is the eos token for LLaMa
sp_model.id_to_piece(-1) ## Throws: IndexError: piece id is out of range.
```
Additional confirmation:
```
vocab: Dict[str, int] = {sp_model.id_to_piece(id): id for id in range(sp_model.get_piece_size())}
print(vocab['<s>']) ## 1
print(vocab['</s>']) ## 2
print(vocab['<unk>']) ## 0
print(vocab['<pad>']) ## KeyError: '<pad>'
```
<|||||>Hey, indeed the original sentencepiece model does not have a padding token. You can probably pad using the `eos_token` like it is done for `GPT2`, need to check what is mentioned on the paper, but the llama code does not use the`pad_token` it seems. <|||||>Yes, I don't think the original model has a padding token. The same code with GPT-2 will fail, you need to add the pad token yourself as indicated by the error message.<|||||>So attempting to set the PAD token as the EOS token (i.e. `''`) fails with the same error message:
```
from transformers import LlamaTokenizer, LlamaForCausalLM
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
print(repr(tokenizer.pad_token)) ## None
print(repr(tokenizer.bos_token)) ## ''
print(repr(tokenizer.eos_token)) ## ''
print()
tokenizer.pad_token = tokenizer.eos_token
print(repr(tokenizer.pad_token)) ## ''
print(repr(tokenizer.bos_token)) ## ''
print(repr(tokenizer.eos_token)) ## ''
batch = tokenizer(
[
"Singer Billy Joel yesterday ",
"The primary use of LLaMA is research on large language "
],
return_tensors="pt",
padding=True
)
```
Error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[61], line 1
----> 1 batch = tokenizer(
2 [
3 "Singer Billy Joel yesterday ",
4 "The primary use of LLaMA is research on large language "
5 ],
6 return_tensors="pt",
7 padding=True
8 )
File /home/ec2-user/anaconda3/envs/llm-gen/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2531, in PreTrainedTokenizerBase.__call__(self, text, text_pair, text_target, text_pair_target, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2529 if not self._in_target_context_manager:
2530 self._switch_to_input_mode()
-> 2531 encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
2532 if text_target is not None:
2533 self._switch_to_target_mode()
File /home/ec2-user/anaconda3/envs/llm-gen/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2617, in PreTrainedTokenizerBase._call_one(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2612 raise ValueError(
2613 f"batch length of `text`: {len(text)} does not match batch length of `text_pair`:"
2614 f" {len(text_pair)}."
2615 )
2616 batch_text_or_text_pairs = list(zip(text, text_pair)) if text_pair is not None else text
-> 2617 return self.batch_encode_plus(
2618 batch_text_or_text_pairs=batch_text_or_text_pairs,
2619 add_special_tokens=add_special_tokens,
2620 padding=padding,
2621 truncation=truncation,
2622 max_length=max_length,
2623 stride=stride,
2624 is_split_into_words=is_split_into_words,
2625 pad_to_multiple_of=pad_to_multiple_of,
2626 return_tensors=return_tensors,
2627 return_token_type_ids=return_token_type_ids,
2628 return_attention_mask=return_attention_mask,
2629 return_overflowing_tokens=return_overflowing_tokens,
2630 return_special_tokens_mask=return_special_tokens_mask,
2631 return_offsets_mapping=return_offsets_mapping,
2632 return_length=return_length,
2633 verbose=verbose,
2634 **kwargs,
2635 )
2636 else:
2637 return self.encode_plus(
2638 text=text,
2639 text_pair=text_pair,
(...)
2655 **kwargs,
2656 )
File /home/ec2-user/anaconda3/envs/llm-gen/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2799, in PreTrainedTokenizerBase.batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2782 """
2783 Tokenize and prepare for the model a list of sequences or a list of pairs of sequences.
2784
(...)
2795 details in `encode_plus`).
2796 """
2798 # Backward compatibility for 'truncation_strategy', 'pad_to_max_length'
-> 2799 padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(
2800 padding=padding,
2801 truncation=truncation,
2802 max_length=max_length,
2803 pad_to_multiple_of=pad_to_multiple_of,
2804 verbose=verbose,
2805 **kwargs,
2806 )
2808 return self._batch_encode_plus(
2809 batch_text_or_text_pairs=batch_text_or_text_pairs,
2810 add_special_tokens=add_special_tokens,
(...)
2825 **kwargs,
2826 )
File /home/ec2-user/anaconda3/envs/llm-gen/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2436, in PreTrainedTokenizerBase._get_padding_truncation_strategies(self, padding, truncation, max_length, pad_to_multiple_of, verbose, **kwargs)
2434 # Test if we have a padding token
2435 if padding_strategy != PaddingStrategy.DO_NOT_PAD and (not self.pad_token or self.pad_token_id < 0):
-> 2436 raise ValueError(
2437 "Asking to pad but the tokenizer does not have a padding token. "
2438 "Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` "
2439 "or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`."
2440 )
2442 # Check that we will truncate to a multiple of pad_to_multiple_of if both are provided
2443 if (
2444 truncation_strategy != TruncationStrategy.DO_NOT_TRUNCATE
2445 and padding_strategy != PaddingStrategy.DO_NOT_PAD
(...)
2448 and (max_length % pad_to_multiple_of != 0)
2449 ):
ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.
```<|||||>Can you share a link on how GPT2 does it?<|||||>I can confirm that the following works:
```
from transformers import LlamaTokenizer, LlamaForCausalLM
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
print(repr(tokenizer.pad_token)) ## None
print(repr(tokenizer.bos_token)) ## ''
print(repr(tokenizer.eos_token)) ## ''
print()
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
print(repr(tokenizer.pad_token)) ## ''
print(repr(tokenizer.bos_token)) ## ''
print(repr(tokenizer.eos_token)) ## ''
batch = tokenizer(
[
"Singer Billy Joel yesterday ",
"The primary use of LLaMA is research on large language "
],
return_tensors="pt",
padding=True
)
```<|||||>Glad that it's now working.
As an explanation: the error arising when using `tokenizer.pad_token = tokenizer.eos_token` is because `self.pad_token` is set as an empty string which evaluates as `False` in [this check](https://github.com/huggingface/transformers/blob/5fd4e3c87c685fba2dd9615be62131748a8b5ee3/src/transformers/tokenization_utils_base.py#L2435). This seems like an expected exception as it's not possible to pad with an empty string.
In the working example, I think second print of pad token should show:
`print(repr(tokenizer.pad_token)) ## '[PAD]'`
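(Side note, a hedged sketch rather than something from the original report: if the pad token is *added* as a brand-new token, the model's embedding matrix usually needs to be resized to match the new vocab size.)
```python
from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")

num_added = tokenizer.add_special_tokens({"pad_token": "[PAD]"})  # returns 1
if num_added > 0:
    # grow the embedding matrix so the new token id (32000) has a row
    model.resize_token_embeddings(len(tokenizer))
```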
<|||||>Note that the EOS token returned by `tokenizer.eos_token` is wrong in any case (this is a known issue and @ArthurZucker should fix this). The EOS token is not `""` but `"<s>"`. Once this issue is fixed, doing `tokenizer.pad_token = tokenizer.eos_token` will be possible.<|||||>There is also a weird issue of increase in vocab size depending on how we add the pad token.
Method 1:
`from transformers import LlamaTokenizer, LlamaForCausalLM`
`tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")`
`tokenizer.pad_token='[PAD]'`
`print(f"pad_token_id={tokenizer.pad_token_id}") #prints 0`
`print(f"vocab length={len(tokenizer.get_vocab())}") #prints 32000`
Method 2
`from transformers import LlamaTokenizer, LlamaForCausalLM`
`tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")`
`num_spl_tokens_added=tokenizer.add_special_tokens({'pad_token': '[PAD]'}) #returns 1 `
`print(f"pad_token_id={tokenizer.pad_token_id}") #prints 32000`
`print(f"vocab length={len(tokenizer.get_vocab())}") #prints 32001`
Why is this discrepancy between `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` and `tokenizer.pad_token='[PAD]'` ?
Downstream issues:
The Stanford Alpaca model independently trained on decapoda-research/llama-7b-hf at "chavinlo/alpaca-native" uses `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` and hence the model's vocab size is set to 32001. <|||||>I think https://github.com/huggingface/transformers/pull/22402 should fix this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I am sorry if it is a wrong question, but don't we need padding token to train model with bs > 1, or are they concatenating sentences together, separated by eos token while training?<|||||>@basujindal
My general understanding is that for bs > 1 we need to pad during finetuning. However, in pretraining the input text is set to the max length -- you can think of a sliding window over a large text corpus.<|||||>Exactly! This was fixed in #22402 so keeping it closed!<|||||>> There is also a weird issue of increase in vocab size depending on how we add the pad token.
>
> Method 1:
>
> `from transformers import LlamaTokenizer, LlamaForCausalLM` `tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")` `tokenizer.pad_token='[PAD]'` `print(f"pad_token_id={tokenizer.pad_token_id}") #prints 0` `print(f"vocab length={len(tokenizer.get_vocab())}") #prints 32000`
>
> Method 2 `from transformers import LlamaTokenizer, LlamaForCausalLM` `tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")` `num_spl_tokens_added=tokenizer.add_special_tokens({'pad_token': '[PAD]'}) #returns 1 ` `print(f"pad_token_id={tokenizer.pad_token_id}") #prints 32000` `print(f"vocab length={len(tokenizer.get_vocab())}") #prints 32001`
>
> Why is this discrepancy between `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` and `tokenizer.pad_token='[PAD]'` ?
>
> Downstream issues: The Stanford Alpaca model independently trained on decapoda-research/llama-7b-hf at "chavinlo/alpaca-native" uses `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` and hence the model's vocab size is set to 32001.
It seems as if this discrepancy is intentional. With `transformers==4.30.0.dev0`,
```
from transformers import (
LlamaForCausalLM,
LlamaTokenizer
)
tokenizer = LlamaTokenizer.from_pretrained("/root/HF_llama")
model = LlamaForCausalLM.from_pretrained("/root/HF_llama").to("cuda")
tokenized_text = tokenizer(["some text", "this will cause padding"], padding = True, return_tensors='pt').to("cuda")
model.generate(tokenized_text['input_ids'])
```
### Output
```
ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token`
`(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token':
'[PAD]'})`.
```
What's the reasoning behind the distinction of the two methods?<|||||>Hey @christoukmaji , this kind of question should be asked on the [forum](https://discuss.huggingface.co/).
The first method will set `pad_token_id` to `2` while the other will give a different index. <|||||>> Note that the EOS token returned by `tokenizer.eos_token` is wrong in any case (this is a known issue and @ArthurZucker should fix this). The EOS token is not `""` but `"<s>"`. Once this issue is fixed, doing `tokenizer.pad_token = tokenizer.eos_token` will be possible.
I think that `bos_token = "<s>"` and `eos_token = "</s>"`, you have a mistake. <|||||>> There is also a weird issue of increase in vocab size depending on how we add the pad token.
>
> Method 1:
>
> `from transformers import LlamaTokenizer, LlamaForCausalLM` `tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")` `tokenizer.pad_token='[PAD]'` `print(f"pad_token_id={tokenizer.pad_token_id}") #prints 0` `print(f"vocab length={len(tokenizer.get_vocab())}") #prints 32000`
>
> Method 2 `from transformers import LlamaTokenizer, LlamaForCausalLM` `tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")` `num_spl_tokens_added=tokenizer.add_special_tokens({'pad_token': '[PAD]'}) #returns 1 ` `print(f"pad_token_id={tokenizer.pad_token_id}") #prints 32000` `print(f"vocab length={len(tokenizer.get_vocab())}") #prints 32001`
>
> Why is this discrepancy between `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` and `tokenizer.pad_token='[PAD]'` ?
>
> Downstream issues: The Stanford Alpaca model independently trained on decapoda-research/llama-7b-hf at "chavinlo/alpaca-native" uses `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` and hence the model's vocab size is set to 32001.
So what is the difference between the two and what would be the appropriate practice between the two?<|||||>Method 1 does not really work if you want to have a different token for padding and `<unk>`:
```python
>>> from transformers import LlamaTokenizer, LlamaForCausalLM
>>> tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
>>> tokenizer.pad_token='[PAD]'
>>> tokenizer.pad_token
'[PAD]'
>>> tokenizer.pad_token_id
0
>>> tokenizer.unk_token_id
0
```
The pad token was not `added` but just set, which means it is unknown and will always be encoded as 0. <|||||>the solution suggested here doesn't work afaik if the model doesn't have that token, right?
see: https://stackoverflow.com/questions/76633368/why-does-the-falcon-qlora-tutorial-code-use-eos-token-as-pad-token/76639568#76639568<|||||>Given recent release of Llama2,, and in the light of the fact that resizing from 32K to 32K+1 can make inference and training slower, will support `padding_index=-1`. I'll be working on this soon! <|||||>Curious what does padding_index=-1 mean and how does it solve the problem?
<|||||>If you set the padding index of the token embedding layer to -1, you don't need to change the size of the vocab, neither for the model nor for the tokenizer. The embeding layer will send zeros when it will see padding token, as it is supposed to and as it is implemented in the original Llama codebase! <|||||>If you want to follow the advances: #25088 |
transformers | 22,311 | closed | Enforce `max_memory` for device_map strategies | # What does this PR do?
In #22271, by removing the `max_memory` from the kwargs before it gets passed to `get_balanced_memory`, I effectively made the `max_memory` argument ignored when `device_map` is `"auto"`, `"balanced"` or `"balanced_low_0"` (as was caught in the multi-GPU tests). This PR fixes that. | 03-22-2023 12:57:38 | 03-22-2023 12:57:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,310 | closed | Generate: Export TF generate with a TF tokenizer | # What does this PR do?
See #22254
As the title says, this PR adds the possibility to export TF generate with a TF-native tokenizer -- the full thing in a single TF graph 🤯
The missing piece was removing a redundant `if` before `tf.while_cond` -- `tf.while_cond` checks the condition before running the body, so the existing `if` before it was redundant. It was also the root cause behind the error in #22254, so removing it was a double win 🎉
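For anyone who wants to try it end to end, here is a rough sketch (the class and tokenizer here are assumptions for illustration, not the exact test added in this PR) of compiling a TF-native tokenizer and `generate` into one `tf.function`:
```python
import tensorflow as tf


class EndToEndGenerator(tf.Module):
    """Hypothetical sketch: tokenization + generation inside a single TF graph."""

    def __init__(self, tf_tokenizer, model):
        super().__init__()
        self.tokenizer = tf_tokenizer  # a TF-native tokenizer (runs in-graph)
        self.model = model             # a TF model exposing `generate`

    @tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
    def generate(self, text):
        tokenized = self.tokenizer(text)         # tensors in, tensors out -- no Python work
        return self.model.generate(**tokenized)  # generation runs in the same graph
```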
A test was added to ensure we don't regress. | 03-22-2023 12:24:52 | 03-22-2023 12:24:52 | @Rocketknight1 no need to review, but FYI -- we can now compile the whole thing into a graph :D<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>This is amazing! |
transformers | 22,309 | closed | [`MBart`] Add `accelerate` support for MBart | # What does this PR do?
Partially fixes: #22305
This PR adds `accelerate` support for `MBart` models.
To run the `accelerate` tests:
```bash
RUN_SLOW=1 pytest -m accelerate_tests tests/models/mbart/test_modeling_mbart.py
```
A fix similar to https://github.com/huggingface/transformers/pull/19927 needs to be applied in order for the tests to pass
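For context, a minimal sketch of the usual pattern behind this kind of `accelerate` support (the module name listed here is an assumption for illustration, not necessarily the exact change in this PR):
```python
from transformers import MBartConfig
from transformers.modeling_utils import PreTrainedModel


# Simplified, hypothetical sketch: `device_map="auto"` support hinges on the model
# class declaring which submodules must not be split across devices.
class SketchMBartPreTrainedModel(PreTrainedModel):
    config_class = MBartConfig
    base_model_prefix = "model"
    _no_split_modules = ["MBartDecoderLayer"]  # assumed value, for illustration only
```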
cc @amyeroberts | 03-22-2023 11:21:44 | 03-22-2023 11:21:44 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,308 | closed | Using FNet model in Encoder Decoder Models | Hello everyone,
I want to train a model with an FNet encoder and a decoder from another transformer model like GPT. I searched and found the EncoderDecoderModel in the Hugging Face library, which makes such combinations easier. I put the link below:
https://huggingface.co/transformers/v3.5.1/model_doc/encoderdecoder.html#transformers.EncoderDecoderModel
I want to use the FNet model in Encoder Decoder Models but cannot, because I face this error:
> TypeError: forward() got an unexpected keyword argument 'attention_mask'
I understand that this is because FNet does not have attention, but I do not know how to resolve it.
I searched the internet and found out that EncoderDecoderModel does not work for all transformer models and I wanted to know why and wanted to suggest adding FNet. | 03-22-2023 10:23:01 | 03-22-2023 10:23:01 | Hi @Parmida-Granfar
I would suggest making the necessary changes to the `EncoderDecoderModel` and/or `FNetModel` code according to your own needs.
As you have already observed `FNet does not have attention` (very nice finding 💯 ), you can remove the following line
https://github.com/huggingface/transformers/blob/5fd4e3c87c685fba2dd9615be62131748a8b5ee3/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L592
and see if everything works then. If you still need any further help, I am more than happy to answer (but in a thread on [Hugging Face Forums](https://discuss.huggingface.co/) instead)
With all the modeling architectures coming out at a fast pace nowadays, it's not practical and realistic to make composite modeling like `EncoderDecoder` to handle all pairs of encoder and decoder models. But the good thing is the code is open source, and everyone can make changes to it :-).
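For illustration only (not part of the library), a minimal sketch of a wrapper that filters out keyword arguments such as `attention_mask` that an attention-free encoder like FNet does not accept:
```python
import inspect

import torch.nn as nn


class DropUnsupportedKwargs(nn.Module):
    """Hypothetical wrapper: forwards only the kwargs the wrapped encoder accepts."""

    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder
        self.config = encoder.config
        self._accepted = set(inspect.signature(encoder.forward).parameters)

    def forward(self, **kwargs):
        # e.g. `attention_mask` is silently dropped for FNet, which has no attention
        filtered = {k: v for k, v in kwargs.items() if k in self._accepted}
        return self.encoder(**filtered)
```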
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,307 | closed | Fix --bf16 option support for Neuron after PR #22300 | This PR fixes the "RuntimeError: No CUDA GPUs are available" when running with --bf16 option on Neuron.
Related PRs:
https://github.com/huggingface/transformers/pull/20684
https://github.com/huggingface/transformers/pull/22300
# What does this PR do?
While PR #22300 restores fp16 option on XLA GPU device, it causes "RuntimeError: No CUDA GPUs are available" when running with --bf16 option on Neuron. This PR fixes this error.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? (Manual test below)
```
export TASK_NAME=mrpc
python3 ./run_glue.py \
--model_name_or_path bert-large-uncased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--bf16 \
--max_seq_length 128 \
--per_device_train_batch_size 8 \
--learning_rate 2e-5 \
--num_train_epochs 5 \
--overwrite_output_dir \
--output_dir /tmp/$TASK_NAME/ |& tee log_run
```
```
***** train metrics *****
epoch = 5.0
train_loss = 0.2675
train_runtime = 0:09:46.82
train_samples = 3668
train_samples_per_second = 31.253
train_steps_per_second = 3.911
100%|██████████| 51/51 [00:03<00:00, 14.66it/s]
***** eval metrics *****
epoch = 5.0
eval_accuracy = 0.8676
eval_combined_score = 0.8869
eval_f1 = 0.9062
eval_loss = 0.7155
eval_runtime = 0:00:14.42
eval_samples = 408
eval_samples_per_second = 28.289
eval_steps_per_second = 3.536
```
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @ymwangg @Lokiiiiii | 03-22-2023 04:43:13 | 03-22-2023 04:43:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> This means no mixed precision at all will be used during training as this variable controls the autocast context manager.
@sgugger could you help point me to the autocast context manager? Is there a way to make it use [PyTorch autocast](https://pytorch.org/docs/stable/amp.html) instead of cuda.amp.autocast?<|||||>The autocast context manager is defined [here](https://github.com/huggingface/transformers/blob/f48d3314e42bf54accc9dd8fd8dc1bf4197b34c6/src/transformers/trainer.py#L2604).
As for your question on `torch.autocast`, we can't use it as it's only in very recent versions of PyTorch and we support PyTorch >= 1.9<|||||>> The autocast context manager is defined [here](https://github.com/huggingface/transformers/blob/f48d3314e42bf54accc9dd8fd8dc1bf4197b34c6/src/transformers/trainer.py#L2604).
>
> As for your question on `torch.autocast`, we can't use it as it's only in very recent versions of PyTorch and we support PyTorch >= 1.9
Ok. Thanks @sgugger . Please see my revised PR. It does resolve the runtime error while keeping the autocast functionality.<|||||>Mmm we cannot patch torch like this in Transformers as it's too magical and might yield to hard-to-debug issues for the users.<|||||>> Mmm we cannot patch torch like this in Transformers as it's too magical and might yield to hard-to-debug issues for the users.
Thanks. Please take a look at the new revision. I switched to cpu_amp.<|||||>> Mmm we cannot patch torch like this in Transformers as it's too magical and might yield to hard-to-debug issues for the users.
@sgugger looks like using cpu_amp did not yield expected result, as the XLA/HLO graphs generated still all have fp32 ports so effectively bf16 flag has no effect. The only way I can get it to work is to use gpu_amp with the override "torch.cuda.is_bf16_supported = lambda: True" which is limited to Neuron (if is_torch_neuroncore_available) and thus will be using torch_neuronx package and not using torch.cuda anyways so it is safe. Let me know if it is still acceptable, and I will resubmit a revision.<|||||>I don't understand why it is necessary to patch torch.cuda for something you are telling me will not use torch.cuda anyway. Looks like there is some specific neuroncore tests that are necessary to fix the issue, but as I said before, patching torch.cuda is too magical to be accepted in Transformers. The only patch to other modules we accept are those done briefly inside a context manager.<|||||>> I don't understand why it is necessary to patch torch.cuda for something you are telling me will not use torch.cuda anyway. Looks like there is some specific neuroncore tests that are necessary to fix the issue, but as I said before, patching torch.cuda is too magical to be accepted in Transformers. The only patch to other modules we accept are those done briefly inside a context manager.
By "not using torch.cuda anyways" I meant we use the GPU AMP feature to autocast to bfloat16, but once that's done, the rest is executed on Neuron. I will keep debugging, but the CPU AMP feature is not working well with pytorch XLA. <|||||>@sgugger I have posted a revert here https://github.com/huggingface/transformers/pull/22451 . Apologies for the extra work. |
transformers | 22,306 | closed | Malfunctioning of PreTrainedTokenizer's tokenize method | ### System Info
* transformers.__version__
'4.25.1'
I found that at least one tokenizer's `tokenize` method shows an unexpected return value.
Below is code to reproduce it:
```python
transformers.__version__
'4.25.1'
tokenizer = transformers.BartTokenizer.from_pretrained("facebook/bart-large")
tokenizer.tokenize("example input")
['example', 'Ġinput']
tokenizer.decode(tokenizer.encode("example input", add_special_tokens=False))
'example input'
```
As you can see, if I use the `tokenize` method, it prefixes some tokens with a strange character. However, the encode-decode round trip gives the correct answer.
If I'm right, this should be fixed.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
transformers.__version__
'4.25.1'
tokenizer = transformers.BartTokenizer.from_pretrained("facebook/bart-large")
tokenizer.tokenize("example input")
['example', 'Ġinput']
tokenizer.decode(tokenizer.encode("example input", add_special_tokens=False))
'example input'
```
### Expected behavior
```
transformers.__version__
'4.25.1'
tokenizer = transformers.BartTokenizer.from_pretrained("facebook/bart-large")
tokenizer.tokenize("example input")
['example', 'input']
``` | 03-22-2023 04:39:01 | 03-22-2023 04:39:01 | Hi @chless, thanks for raising this issue.
The `Ġ` symbol is a special character which represents a space. The reason it's seen here is that the sentence is tokenized into `"example"` and `" input"` i.e. `Ġ` indicates there's a space in front of `input`. We see the symmetric if the words are reversed:
```py
>>> tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
>>> tokenizer.tokenize("input example")
['input', 'Ġexample']
```
The reverse mapping, `tokenizer.decode(tokenizer.encode(...))`, is consistent as a space is added in front of `input` in the decoded sentence.
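If you simply want the tokens back as plain text without the `Ġ` marker, a small illustrative snippet:
```python
>>> tokens = tokenizer.tokenize("example input")
>>> tokenizer.convert_tokens_to_string(tokens)
'example input'
```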
More discussion about the `Ġ` symbol and tokenization can be found [here in the forums](https://discuss.huggingface.co/t/bpe-tokenizers-and-spaces-before-words/475/2?u=amyeroberts). <|||||>It looks like the behavior you're seeing is actually expected, and the Ġ symbol is a special character used by the tokenizer to represent spaces.
When you call tokenizer.tokenize("example input"), the tokenizer is splitting the input string into two tokens: "example" and "Ġinput". The Ġ symbol in front of "input" indicates that there is a space between "example" and "input".
Similarly, when you call tokenizer.decode(tokenizer.encode("example input", add_special_tokens=False)), the tokenizer is encoding the string into two tokens: "example" and "Ġinput", and then decoding those tokens back into the original string with a space between "example" and "input".
So the behavior you're seeing is actually expected, and there's no need to fix anything. If you want to turn the tokens back into readable text without the Ġ symbol, use the tokenizer's convert_tokens_to_string method rather than joining the tokens yourself.
For example:
tokens = tokenizer.tokenize("example input")
string = tokenizer.convert_tokens_to_string(tokens)
This should give you the output without the Ġ symbol
<|||||>Thanks for your information.
Now I understand why the Ġ symbol comes, and it is proper usage.
I will close this issue. |
transformers | 22,305 | closed | MarianMTModel/MBartForConditionalGeneration does not support `device_map='auto'` yet | ### System Info
Hi, I'm experimenting with some Transformer models for a translation task. These models are [vinai-translate-en2vi](https://huggingface.co/vinai/vinai-translate-en2vi) and [wmt19-ru-en](https://huggingface.co/facebook/wmt19-ru-en).
In an attempt to optimize Transformer inference time on a single GPU, I tried to follow the instructions in this [document](https://huggingface.co/docs/transformers/perf_infer_gpu_one#running-mixedint8-models-single-gpu-setup) but stumbled on this error. I found a similar case [here](https://github.com/huggingface/transformers/issues/22188) where the solution was to add `accelerate` support for the corresponding model. Is that the solution for my problem too? Can anyone share your experience optimizing Transformer inference time? Thanks a lot.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I use this code like the [example](https://huggingface.co/docs/transformers/perf_infer_gpu_one#running-mixedint8-models-single-gpu-setup)
` from transformers import AutoModelForSeq2SeqLM
model_name = "vinai-translate-en2vi"
model_8bit = AutoModelForSeq2SeqLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)`
and
` from transformers import AutoModelForSeq2SeqLM
model_name = "wmt19-ru-en"
model_8bit = AutoModelForSeq2SeqLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)`
Error
ValueError: MarianMTModel does not support `device_map='auto'` yet.
and
ValueError: MBartForConditionalGeneration does not support `device_map='auto'` yet.
### Expected behavior
The code in the instruction should be working | 03-22-2023 04:05:29 | 03-22-2023 04:05:29 | Hi @TranPhu1999, thanks for raising this issue.
Yes, it seems an equivalent update to the MBart and MarianMT models would need to be added, as the one added [to XGLM](https://github.com/huggingface/transformers/pull/22207/). Would you like to open a PR to add these changes?
cc @younesbelkada <|||||>Hi @TranPhu1999
You should be now able to use 8bit models for MBart, you can just do:
```bash
pip install git+https://github.com/huggingface/transformers.git
```
I will work later on adding the same support for Marian as well<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,304 | closed | Why is there no data sent to the data_collator? | ### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-3.10.0-514.el7.x86_64-x86_64-with-centos-7.3.1611-Core
- Python version: 3.7.13
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
**I'm trying to use the code below**
And I get the error `KeyError: 'seq2seq'`. After printing the features passed to the data_collator, I get output like the following:
`[{}, {}, {}, {}]`
Why did this happen? When I print the train dataset I get the correct result, but when using `trainer.train()` the data_collator does not get the data I need.
[train.txt](https://github.com/huggingface/transformers/files/11035839/train.txt)
### Expected behavior
How can I send data in training process? Thanks for your help. | 03-22-2023 03:15:25 | 03-22-2023 03:15:25 | That's related to the data you are preprocessing, not the Transformers library or its examples. There is simply no `"seq2seq2"` in the features you prepare with your function. I suggest posting on the [forums](https://discuss.huggingface.co/) to get help from the larger community.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,303 | closed | "LlamaTokenizer" in transformers._import_structure["models.llama"] │ │ 9 ), "LLaMA is now in HuggingFace's main branch.\nPlease reinstall it: pip uninstall trans │ │ 10 from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig | ### System Info
transformers main does have LlamaTokenizer
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
pip uninstall transformers && pip install git+https://github.com/huggingface/transformers.git
### Expected behavior
Have it | 03-22-2023 02:55:08 | 03-22-2023 02:55:08 | Solved.<|||||>> Solved.
How?
Could you tell us more?<|||||>> > Solved.
>
> How? Could you tell us more?
pip install sentencepiece && pip install git+https://github.com/huggingface/transformers.git<|||||>Related: #22222 ?<|||||>I do not understand. It was not working with the latest version, but now it seems to work. Maybe somebody updated something very recently. Still the name change is confusing: LlamaTokenizer -> LLaMATokenizer |
transformers | 22,302 | closed | Fixed bug to calculate correct xpath_sub_list in MarkupLMTokenizer | Fixed the problem where xpath_sub_list was computed incorrectly due to bug in code of MarkupLM Tokenizers.
# What does this PR do?
This PR fixes a bug in the `get_xpath_seq` method of the `MarkupLMTokenizer` class inside the `src.transformers.models.markuplm.tokenization_markuplm` module.
Earlier, on line 304, `xpath_sub_list` was assigned the same instance as `xpath_tag_list`, which made the embeddings of tags with different subscripts effectively identical, e.g. `li[0]` was treated like `li[1]`.
It also fixes the same bug present in the `src.transformers.models.markuplm.tokenization_markuplm_fast` module.
Fixes # (issue)
Fixes the issue with incorrect xpath_sub_list computation inside get_xpath_seq method of MarkupLMTokenizer class inside src.transformers.models.markuplm.tokenization_markuplm module and also fixes the same issue in get_xpath_seq method of MarkupLMTokenizerFast class inside src.transformers.models.markuplm.tokenization_markuplm_fast module.
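For clarity, an illustrative sketch of the aliasing bug being fixed (simplified, not the library's exact code):
```python
def get_xpath_seq_sketch(xpath_tags, xpath_subs, buggy=False):
    """Toy illustration: the sub list must come from the subscripts, not alias the tag list."""
    xpath_tag_list = list(xpath_tags)
    if buggy:
        xpath_sub_list = xpath_tag_list      # bug: li[0] and li[1] end up indistinguishable
    else:
        xpath_sub_list = list(xpath_subs)    # fix: keep the real subscripts
    return xpath_tag_list, xpath_sub_list


tags, subs = ["html", "body", "li"], [0, 0, 1]
assert get_xpath_seq_sketch(tags, subs)[1] == [0, 0, 1]
```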
## Before submitting
- [ No] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [Yes ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ No] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ Yes] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ No] Did you write any new necessary tests?
## Who can review?
@NielsRogge
| 03-21-2023 19:58:23 | 03-21-2023 19:58:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for fixing! |
transformers | 22,301 | closed | BlenderbotSmall incorrect usage of start and end tokens | ### System Info
- `transformers` version: 4.27.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.3
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker @younesbelkada @Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
As stated in the documentation: https://huggingface.co/docs/transformers/model_doc/blenderbot-small#transformers.BlenderbotSmallForConditionalGeneration.forward.example
the model should use `</s>` and `<s>` for separating the user input and response:
```python
from transformers import AutoTokenizer, BlenderbotSmallForConditionalGeneration
mname = "facebook/blenderbot_small-90M"
model = BlenderbotSmallForConditionalGeneration.from_pretrained(mname)
tokenizer = AutoTokenizer.from_pretrained(mname)
UTTERANCE = "My friends are cool but they eat too many carbs."
print("Human: ", UTTERANCE)
inputs = tokenizer([UTTERANCE], return_tensors="pt")
reply_ids = model.generate(**inputs)
print("Bot: ", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
REPLY = "I'm not sure"
print("Human: ", REPLY)
NEXT_UTTERANCE = (
"My friends are cool but they eat too many carbs.</s> <s>what kind of carbs do they eat? "
"i don't know much about carbs</s> "
"<s> I'm not sure."
)
inputs = tokenizer([NEXT_UTTERANCE], return_tensors="pt")
next_reply_ids = model.generate(**inputs)
print("Bot: ", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=True)[0])
```
However, these tokens are not present in the [vocabulary](https://huggingface.co/facebook/blenderbot_small-90M/blob/main/vocab.json) or [special tokens](https://huggingface.co/facebook/blenderbot_small-90M/blob/main/special_tokens_map.json)
I assume they should be replaced with `__start__` and `__end__`?
---
I have also tried to use the [ConversationPipeline](https://huggingface.co/docs/transformers/v4.27.2/en/main_classes/pipelines#transformers.ConversationalPipeline), and follow steps outlined [here](https://huggingface.co/tasks/conversational#inference), but I always get nonsensical results.
Even when trying the hosted inference API for the model (https://huggingface.co/facebook/blenderbot_small-90M), it either repeats itself, or doesn't follow in conversation.
### Expected behavior
The tokens should be correct, and the chatbot should engage in more realistic conversation | 03-21-2023 19:27:43 | 03-21-2023 19:27:43 | Hey! Thanks for reporting, will investigate! <|||||>Hey! When I use the Conversational pipeline I get the same outputs as you.
Regarding the content of the special tokens, it does not really matter as long as the mapping is correct. If the model's bos_id is 1, then as long as `<s>` maps to `1` then the generation will make sense.
And indeed we have:
```python
In [35]: tokenizer.encode("<s>")
Out[35]: [3, 330, 1360]
In [36]: tokenizer.encode("__start__")
Out[36]: [1]
```
The doc example should be updated, or the tokenizer only should be updated.
Nice catch (however, this does not seem to really change the output for this example).
Also I am not entirely sure how these `eos` and `bos` should be used in the context of BlenderBot. They should mark the start and end of a conversation when training the model on different conversations, while `\n` is used to separate different prompts (so from the user and the bot).
I could not find anything online, gonna take a while to check with the messy original codebase
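In the meantime, here is a possible corrected version of the multi-turn example from the issue, using the `__start__`/`__end__` tokens that actually exist in the vocab (illustrative only, since the exact separator convention is still being confirmed):
```python
from transformers import AutoTokenizer, BlenderbotSmallForConditionalGeneration

mname = "facebook/blenderbot_small-90M"
model = BlenderbotSmallForConditionalGeneration.from_pretrained(mname)
tokenizer = AutoTokenizer.from_pretrained(mname)

NEXT_UTTERANCE = (
    "My friends are cool but they eat too many carbs.__end__ __start__what kind of carbs do they eat? "
    "i don't know much about carbs__end__ "
    "__start__ I'm not sure."
)
inputs = tokenizer([NEXT_UTTERANCE], return_tensors="pt")
next_reply_ids = model.generate(**inputs)
print("Bot: ", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=True)[0])
```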
<|||||>Just bumping this again (in response to being marked as stale)<|||||>When I checked the original PR that added BlenderBot (could not really find anything on the original repo ... ) it seems like the doc example should be updated to use `__end__` and `__start__`. See #4803. <|||||>Closed in #24092 |
transformers | 22,300 | closed | Restore fp16 support on xla gpu device | ERROR: type should be string, got "https://github.com/huggingface/transformers/pull/20684 accidentally disabled fp16 support on xla gpu device, which leads to significant performance regression. This PR restores this feature.\r\n\r\ncc @jeffhataws @sgugger @Lokiiiiii\r\n\r\nTested with \r\n```\r\nGPU_NUM_DEVICES=1 python run_mlm.py \\\r\n --model_name_or_path bert-base-uncased \\\r\n --dataset_name wikitext \\\r\n --dataset_config_name wikitext-2-raw-v1 \\\r\n --overwrite_output_dir true \\\r\n --output_dir /tmp/test-mlm \\\r\n --per_gpu_train_batch_size 24 \\\r\n --do_eval \\\r\n --fp16 true \\\r\n --do_train \\\r\n --num_train_epochs 3 \\\r\n --optim adamw_torch_xla\r\n```\r\n\r\n```\r\n***** train metrics *****\r\n epoch = 3.0\r\n train_loss = 1.7725\r\n train_runtime = 0:04:58.00\r\n train_samples = 4627\r\n train_samples_per_second = 46.58\r\n train_steps_per_second = 1.943\r\nINFO:__main__:*** Evaluate ***\r\n[INFO|trainer.py:739] 2023-03-21 19:05:53,483 >> The following columns in the evaluation set don't have a corresponding argument in `BertForMaskedLM.forward` and have been ignored: special_tokens_mask. If special_tokens_mask are not expected by `BertForMaskedLM.forward`, you can safely ignore this message.\r\n[INFO|trainer.py:3072] 2023-03-21 19:05:53,487 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:3074] 2023-03-21 19:05:53,487 >> Num examples = 479\r\n[INFO|trainer.py:3077] 2023-03-21 19:05:53,487 >> Batch size = 8\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 60/60 [00:07<00:00, 8.38it/s]\r\n***** eval metrics *****\r\n epoch = 3.0\r\n eval_loss = 1.5811\r\n eval_runtime = 0:00:29.83\r\n eval_samples = 479\r\n eval_samples_per_second = 16.055\r\n eval_steps_per_second = 2.011\r\n perplexity = 4.8601\r\n```" | 03-21-2023 19:08:10 | 03-21-2023 19:08:10 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Failure is unrelated and due to the branch being old, it's fixed on main, so merging. |
transformers | 22,299 | closed | Final update of doctest | # What does this PR do?
Fix the remaining doc examples. The doc example in `feature_extraction_markuplm` is un-fixable (unless we remove the example), so not added to the list for testing.
Follow up PR of #22268 and #22292 | 03-21-2023 18:54:50 | 03-21-2023 18:54:50 | There is a style issue for `src/transformers/models/bertweet/tokenization_bertweet.py`. Will check it tomorrow instead. Sorry.<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,298 | closed | Correct NATTEN function signatures and force new version | # What does this PR do?
This complements #22229. (Sorry for breaking it into two; I was traveling when I realized the issue so I only started a quick PR to fix circleci builds.)
We're releasing a new [NATTEN](https://github.com/SHI-Labs/NATTEN/pull/24) build that corrects the signature inconsistency between the function calls (see [my comment in the previous PR for more](https://github.com/huggingface/transformers/pull/22229#issuecomment-1474015157).)
Rather than wait for a future build, we decided to do it right now because we could end up forgetting to open a PR to transformers.
We're finishing up testing the new build, but I figured I'd open this PR before I push to PyPI. If circleci tries to get the latest version from PyPI it would fail the unit tests associated with models depending on this package.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh @amyeroberts @sgugger
| 03-21-2023 18:45:13 | 03-21-2023 18:45:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Unsure why a Flax test is failing. I'm assuming I'll have to wait for another PR to merge and then rebase?<|||||>Hi, Thank you for the fix and PR. Regarding failing flax test, you can ignore it🤗<|||||>This is needed to fix the CI on the main branch (broken by the new release of `natten`) so merging. |
transformers | 22,297 | closed | Training wav2vac2 requires a lot of compute power | I am trying to Fine tune wav2vac2 model for my national language and I have 15k data points but while trainings my system could only train on 1k datapoints and if I increase the datapoints my system either crashes or I get CUDA out of memory. so I am wonder is their any other alternatives ??
Secondly, can I first train on 1k data points, save the model locally, then load it again and continue training with another 1k new data points to improve the model? Will that actually work? | 03-21-2023 17:53:16 | 03-21-2023 17:53:16 | Hi @ngawang88, thanks for opening an issue!
This is a question that is better suited to the [forums](https://discuss.huggingface.co/). We try and keep github issues reserved for bugs and feature requests. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
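As an illustration of the workflow asked about in the question above (train on a subset, save locally, reload and continue), here is a rough, hedged sketch. The base checkpoint, the dataset subsets (`train_subset_1`, `train_subset_2`) and the omission of a processor/data collator are assumptions for brevity, not a complete wav2vec2 recipe.

```python
from transformers import Wav2Vec2ForCTC, Trainer, TrainingArguments

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-xls-r-300m")  # assumed base checkpoint
model.gradient_checkpointing_enable()  # trades extra compute for a lower memory peak

args = TrainingArguments(
    output_dir="wav2vec2-finetuned",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,  # keeps the effective batch size while lowering memory use
    fp16=True,
    num_train_epochs=3,
)

# First pass on a 1k subset, then save the partially fine-tuned model locally.
trainer = Trainer(model=model, args=args, train_dataset=train_subset_1)  # train_subset_1 is assumed to exist
trainer.train()
trainer.save_model("wav2vec2-finetuned/part1")

# Later: reload the saved model and continue training on the next 1k examples.
model = Wav2Vec2ForCTC.from_pretrained("wav2vec2-finetuned/part1")
trainer = Trainer(model=model, args=args, train_dataset=train_subset_2)  # train_subset_2 is assumed to exist
trainer.train()
```

Incremental training in this style does work, although later chunks can cause some forgetting of earlier ones; shuffling all available data into one run is usually preferable when memory allows.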
transformers | 22,296 | closed | Add translation perf_infer_gpu_one for it |
See issue [https://github.com/huggingface/transformers/issues/17459]
Add Italian translation of perf_infer_gpu_one.mdx and update _toctree.yml.
It's my first pull request, so I hope it's ok
The GitHub-related issue is https://github.com/huggingface/transformers/issues/22294
@omarespejel
@sgugger | 03-21-2023 16:20:24 | 03-21-2023 16:20:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @nickprock The PR is good to merge with your seal of approval :) <|||||>@amyeroberts LGTM
Thanks @davidegazze |
transformers | 22,295 | closed | Translate perf_infer_gpu one | See issue [https://github.com/huggingface/transformers/issues/17459]
Add Italian translation of perf_infer_gpu_one.mdx and update _toctree.yml.
It's my first pull request, so I hope it's ok
The GitHub related issue is [here](https://github.com/huggingface/transformers/issues/22294)
@omarespejel
@sgugger | 03-21-2023 15:26:58 | 03-21-2023 15:26:58 | |
transformers | 22,294 | closed | Add perf_infer_gpu_one.mdx italian translation | See issue [https://github.com/huggingface/transformers/issues/17459]
Add Italian translation of perf_infer_gpu_one.mdx and update _toctree.yml.
It's my first pull request, so I hope it's ok | 03-21-2023 15:23:16 | 03-21-2023 15:23:16 | Thanks again for the contribution and congrats on your first PR @davidegazze 🔥 ! Feel free to close the issue if all of the relevant pieces of work have been merged in. |
transformers | 22,293 | closed | fix: Allow only test_file in pytorch and flax summarization | # What does this PR do?
Fixes #22276 for the `flax` and `pytorch` run_summarization scripts. Is this wanted for `tensorflow`'s as well? I see an option in the TensorFlow script to provide a test_file, but not for do_predict.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 03-21-2023 14:55:34 | 03-21-2023 14:55:34 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,292 | closed | fix more doctests | # What does this PR do?
Add missing `python` in docstrings, then add some more files to `documentation_tests.txt` - following #22268 | 03-21-2023 14:36:56 | 03-21-2023 14:36:56 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 22,291 | closed | Time to Say Goodbye, torch 1.7 and 1.8 | # What does this PR do?
We have been together for more than 2 years ❤️
(see this [discussion](https://github.com/huggingface/transformers/issues/18817)) | 03-21-2023 13:48:40 | 03-21-2023 13:48:40 | Thank you @sgugger . I will try to make a clean breakup<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Hope I don't miss anything <|||||>ok, the deepspeed CI is running pt-1.8 - how do we solve that then?
I have passed this change to the DeepSpeed team; let's see what they say.
edit: they followed suit https://github.com/microsoft/DeepSpeed/pull/3082 |
transformers | 22,290 | open | Native support of ChatGLM-6b | ### Feature request
Support https://huggingface.co/THUDM/chatglm-6b (and its int4 variants) in the Transformers library instead of relying on remote code execution.
### Motivation
This model performs really well (despite being a small model compared to large ones) and got a LOT of attention recently. It might be the SD moment for LLM IMO as it runs perfectly on consumer GPUs.
It would be great if Transformers can have native support for this model, instead of relying on remote code execution. A native integration will also make it much easier to use the model on Inference API / Endpoints.
### Your contribution
cc @sgugger @osanseviero | 03-21-2023 10:31:36 | 03-21-2023 10:31:36 | Transformers does have native support for it even if it's not in the lib itself ;-) I see this as a chance to better support models with code on the Hub since that is the way the author chose, and since it will be more and more the norm as we cannot have the library grow exponentially.
Of course, if the authors prefer to integrate the model in the library directly, we would be happy to look at the PR and help them merge it. We can also revisit if the issue gets a lot of traction and integrate it ourselves directly.<|||||>I echo what Sylvain is saying above.
Additionally, for readers, if you would like this model to be integrated within the library nonetheless for it to be constantly tested and up-to-date with our API, please upvote the original post or add a comment mentioning so in this issue as this will help us identify models that should be more actively tested.
Thanks!<|||||>Thanks for all great inputs! Let's see how much demand we gathered for this one.
Just for your information ChatGLM-6b is the No. 1 model on the trending page now.
<img width="288" alt="image" src="https://user-images.githubusercontent.com/38108242/226628729-b4cf69e8-8fe1-45bc-b03f-ebceb2bfce2c.png">
<|||||>
> This model performs really well (despite being a small model compared to large ones) and got a LOT of attention recently. It might be the SD moment for LLM IMO as it runs perfectly on consumer GPUs.
It does seem quite good but for it to be the true SD moment I think the license would have to allow commercial use, which it doesn't.
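For context, a minimal sketch of how the checkpoint is typically used today through the "code on the Hub" path discussed above (per the model card at the time; `trust_remote_code=True` downloads and runs the modeling code from the `THUDM/chatglm-6b` repository, which is exactly what a native integration would remove the need for):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()

# chat() is provided by the remote code in the repo, not by transformers itself
response, history = model.chat(tokenizer, "Hello", history=[])
print(response)
```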
|
transformers | 22,289 | closed | Add Beit3 model | Fixes #22178 | 03-21-2023 10:13:30 | 03-21-2023 10:13:30 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22289). All of your documentation changes will be reflected on that endpoint.<|||||>I don't know much about the details of the transformers library but isn't it confusing that it refers to the model as microsoft/beit-base-patch16-224-pt22k etc which is the name for beit v1, not beit v3?<|||||>>
Hi @MetaB0y , All I have done is to pull in different modules needed for beit3 into single file. I will start working on cleaning it up. <|||||>Hi @raghavanone, just wanted to know updates on this PR. If required, I would like to help.<|||||>> Hi @raghavanone, just wanted to know updates on this PR. If required, I would like to help.
@atharvakavitkar You can for sure contribute, I am out till mid of april, will pick this up if not done by anyone else by that time . Start a new PR, if you wanted to work on this .<|||||>Hey @raghavanone , I was thinking about working on this this weekend and see that you have a WIP already and are probably back now. Would it help if I contributed on this PR or better to leave it to you?<|||||>> Hey @raghavanone , I was thinking about working on this this weekend and see that you have a WIP already and are probably back now. Would it help if I contributed on this PR or better to leave it to you?
Hey @JonathanRayner , Thanks for asking, I have some changes locally, making changes to MOE layers are bit tricky. We can collaborate in this PR, But we have to find a way to communicate (use HF slack maybe) .<|||||>cc @alaradirik so you can help when needed.<|||||>> > Hey @raghavanone , I was thinking about working on this this weekend and see that you have a WIP already and are probably back now. Would it help if I contributed on this PR or better to leave it to you?
>
> Hey @JonathanRayner , Thanks for asking, I have some changes locally, making changes to MOE layers are bit tricky. We can collaborate in this PR, But we have to find a way to communicate (use HF slack maybe) .
Hi @raghavanone, thanks for working on this! The easiest way to collaborate would be to add @JonathanRayner as a collaborator to your forked transformers repo. I'd also be happy to create a Slack channel and invite you both to make it easier to communicate.
I took a quick look at the PR and it'd be great if you could follow naming conventions in line with Beit such that the class and folder names are Beit3 and beit3 respectively. Other than this, BEiT-3 is a multi-modal model so you'll need to create `image_processing_beit3.py`, `tokenizer_beit3.py` and `processing_beit3.py` scripts, where the latter wraps the text and image preprocessor classes into a single class. BEiT-3 uses the transformers `XLMRobertaTokenizer` class to preprocess text so you can use that instead of creating `tokenizer_beit3.py`. You can refer to [OWL-ViT](https://github.com/huggingface/transformers/tree/v4.28.1/src/transformers/models/owlvit) to see an example of a multi-modal model that uses an existing tokenizer class (`CLIPTokenizer`).
In order to check if the PR passes the CI tests, you can run the following commands:
```
make style
make quality
make repo-consistency
pytest tests/models/beit3/test_image_processor_beit3.py
pytest tests/models/beit3/test_processor_beit3.py
RUN_SLOW=True pytest tests/models/beit3/test_modeling_beit3.py
```
Hope this helps, I can add invite you to Slack if you send me your email addresses :)<|||||>> > > Hey @raghavanone , I was thinking about working on this this weekend and see that you have a WIP already and are probably back now. Would it help if I contributed on this PR or better to leave it to you?
> >
> >
> > Hey @JonathanRayner , Thanks for asking, I have some changes locally, making changes to MOE layers are bit tricky. We can collaborate in this PR, But we have to find a way to communicate (use HF slack maybe) .
>
> Hi @raghavanone, thanks for working on this! The easiest way to collaborate would be to add @JonathanRayner as a collaborator to your forked transformers repo. I'd also be happy to create a Slack channel and invite you both to make it easier to communicate.
>
> I took a quick look at the PR and it'd be great if you could follow naming conventions in line with Beit such that the class and folder names are Beit3 and beit3 respectively. Other than this, BEiT-3 is a multi-modal model so you'll need to create `image_processing_beit3.py`, `tokenizer_beit3.py` and `processing_beit3.py` scripts, where the latter wraps the text and image preprocessor classes into a single class. BEiT-3 uses the transformers `XLMRobertaTokenizer` class to preprocess text so you can use that instead of creating `tokenizer_beit3.py`. You can refer to [OWL-ViT](https://github.com/huggingface/transformers/tree/v4.28.1/src/transformers/models/owlvit) to see an example of a multi-modal model that uses an existing tokenizer class (`CLIPTokenizer`).
>
> In order to check if the PR passes the CI tests, you can run the following commands:
>
> ```
> make style
> make quality
> make repo-consistency
>
> pytest tests/models/beit3/test_image_processor_beit3.py
> pytest tests/models/beit3/test_processor_beit3.py
> RUN_SLOW=True pytest tests/models/beit3/test_modeling_beit3.py
> ```
>
> Hope this helps, I can add invite you to Slack if you send me your email addresses :)
Thanks @alaradirik , The PR is almost ready. I am already in HF slack under email [email protected] / username Raghavan . I have some questions, please ping me in slack. <|||||>@alaradirik I am unable to understand the purpose of a processor, the Beit3 model takes in token ids and image as tensor. Please help me understand this more.
<|||||>> @alaradirik I am unable to understand the purpose of a processor, the Beit3 model takes in token ids and image as tensor. Please help me understand this more.
Hi @raghavanone, the models expect the input images to be preprocessed. Hence, the image_processing_beit3.py script should contain the `Beit3ImageProcessor` class that takes in the raw input image and preprocesses it to the format expected as input to the model (resizing to a fixed input size, normalization, cropping, etc.).
processing_beit3.py script should contain the `Beit3Processor` class that wraps this image processor class and tokenizer class into a single instance such that users can use it preprocess text or images or both. Please take a look at other multi-modal model processors such as OWL-ViT and CLIP to see how that works.<|||||>> > @alaradirik I am unable to understand the purpose of a processor, the Beit3 model takes in token ids and image as tensor. Please help me understand this more.
>
> Hi @raghavanone, the models expect the input images to be preprocessed. Hence, the image_processing_beit3.py script should contain the `Beit3ImageProcessor` class that takes in the raw input image and preprocesses it to the format expected as input to the model (resizing to a fixed input size, normalization, cropping, etc.).
>
> processing_beit3.py script should contain the `Beit3Processor` class that wraps this image processor class and tokenizer class into a single instance such that users can use it preprocess text or images or both. Please take a look at other multi-modal model processors such as OWL-ViT and CLIP to see how that works.
@alaradirik Thanks, I have added both the class and added tests for them . Requesting you to review the PR.<|||||>>
@alaradirik All the PR feedback has been addressed. There are a few open items, and I have put my questions in the comments. On the conversion script I have the following questions:
- There are 22 variations of model checkpoints released, should I test out for each of them ?
- How to upload the checkpoints to hf ? <|||||>@NielsRogge Following are the open questions to be resolved :
Q1. How should be the config uploaded ?
Q2. How should be the checkpoints uploaded ?
Q3. Comment from Alara : "Passing a module to the class is not very optimal. I see that you are initializing and passing various modules in Beit3EncoderLayer and MultiheadAttention to MultiwayNetwork and creating deep copies.
I think it'd make more sense to create separate classes (e.g. Beit3Dense, Beit3FeedForwardNetwork) as variable names such as first and second are confusing and make the code more difficult to be adapted for works that build upon this.
I'm cc'ing @sgugger for his opinion."
For reference look at here https://github.com/huggingface/transformers/blob/1cda50bd12d7d454f56fbdd9f8fe32aee1eae5b3/src/transformers/models/beit3/modeling_beit3.py#L382<|||||>cc @amyeroberts <|||||>Hi @raghavanone,
I'll try to answer your questions as best as possible. For future Q's it's best to ping me rather than @NielsRogge or @sgugger.
1. When you say 'uploaded' - are you referring to uploading onto the hub e.g. [like this for bert](https://huggingface.co/bert-base-uncased/blob/main/config.json)? If so, this should be uploaded alongside the model weights. When calling `model.push_to_hub(repo_path)`, both the model's checkpoint and configuration will be uploaded. You can look at some of the conversion scripts to see the weight loading / converting / uploading logic e.g. [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/swin/convert_swin_simmim_to_pytorch.py). Whilst the PR is still underdevelopment, I suggest having the models under a personal repo. Then, once ready to merge, we can transfer the weights and configs to under the official orgs'.
2. See above.
3. What's the question? Are you asking what @alaradirik's comment means, or asking whether this is something that should be done?
<|||||>> Hi @raghavanone,
>
> I'll try to answer your questions as best as possible. For future Q's it's best to ping me rather than @NielsRogge or @sgugger.
>
> 1. When you say 'uploaded' - are you referring to uploading onto the hub e.g. [like this for bert](https://huggingface.co/bert-base-uncased/blob/main/config.json)? If so, this should be uploaded alongside the model weights. When calling `model.push_to_hub(repo_path)`, both the model's checkpoint and configuration will be uploaded. You can look at some of the conversion scripts to see the weight loading / converting / uploading logic e.g. [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/swin/convert_swin_simmim_to_pytorch.py). Whilst the PR is still underdevelopment, I suggest having the models under a personal repo. Then, once ready to merge, we can transfer the weights and configs to under the official orgs'.
> 2. See above.
> 3. What's the question? Are you asking what @alaradirik's comment means, or asking whether this is something that should be done?
Thanks for the answers, for Q3 , I am not sure how to incorporate the feedback, I need some support on what needs to be done .<|||||>@raghavanone For 3. what I believe Alara was saying (and I agree with) is that the layer `Beit3MultiwayNetwork` is trying to do too much, resulting in patterns which don't match with the rest of the library and is less interpretable. We should instead implement individual blocks which avoids hacky tricks like copying then reseting layer parameters.
To be explicit, an example would be for `self.self_attn_layer_norm = Beit3MultiwayNetwork(LayerNorm(self.embed_dim, eps=config.layernorm_eps))` on L491. Instead of using `Beit3MultiwayNetwork` we could instead define a model specific layernorm layer:
```python
class Beit3LayerNorm(nn.Module):
def __init__(self, config):
super().__init__()
self.layernorm_1 = nn.LayerNorm(config.embed_dim, eps=config.layernorm_eps)
self.layernorm_2 = nn.LayerNorm(config.embed_dim, eps=config.layernorm_eps)
def forward(self, hidden_states, split_position=-1):
if split_position == -1:
return self.layernorm_1(hidden_states)
if split_position == 0:
return self.layernorm_2(hidden_states)
text_hidden, image_hidden = torch.split(
hidden_states, [split_position, hidden_states.size(1) - split_position], dim=1,
)
text_hidden = self.layernorm_1(text_hidden)
image_hidden = self.layernorm_2(image_hidden)
hidden_states = torch.cat([text_hidden, image_hidden], dim=1)
return hidden_states
```
And then L491 would become:
```python
self.self_attn_layer_norm = Beit3LayerNorm(config)
```
It's OK for us to have some of the `if split_position` logic repeated if it means a clearer architecture and having layers take the config to instantiate themselves.
A note regarding the general design of the layer above:
* The `set_split_position` approach is very hacky and requires iterating over all of the layers of the model each time we do a forward pass with `multiway_split_position` set. Instead, let's pass this to the layers in the forward pass (see the sketch at the end of this thread)
* The layers shouldn't accept arbitrary *args or **kwargs in their methods
* AFAICT `dim` was never changed or set, so we can remove this attribute. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
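As an illustration of the "pass the split position through the forward pass" point in the review notes above, here is a hedged sketch. The class name is illustrative, it reuses the `Beit3LayerNorm` sketched earlier in this thread, and it is not the merged implementation:

```python
from torch import nn

class Beit3EncoderLayer(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.self_attn_layer_norm = Beit3LayerNorm(config)  # two-branch layer norm from the earlier sketch
        self.final_layer_norm = Beit3LayerNorm(config)

    def forward(self, hidden_states, multiway_split_position=-1):
        # The split index travels with the call, so there is no need to walk every
        # sub-module and call set_split_position() before each forward pass.
        hidden_states = self.self_attn_layer_norm(hidden_states, split_position=multiway_split_position)
        hidden_states = self.final_layer_norm(hidden_states, split_position=multiway_split_position)
        return hidden_states
```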
transformers | 22,288 | closed | add low_cpu_mem_usage option in run_clm.py example which will benefit… | … LLM loading
Add a `low_cpu_mem_usage` option to the run_clm example; setting the option to True helps reduce peak memory usage and loading time when fine-tuning or running inference with LLMs.
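For illustration, the new script flag maps to the existing `from_pretrained` argument of the same name (which needs a recent torch and, in newer versions, `accelerate`); the model name below is just an example:

```python
from transformers import AutoModelForCausalLM

# Default loading materialises randomly initialised weights first and then overwrites them
# with the checkpoint. With low_cpu_mem_usage=True the model is created empty and the
# checkpoint weights are loaded directly, lowering peak CPU RAM and speeding up loading.
model = AutoModelForCausalLM.from_pretrained("gpt2-xl", low_cpu_mem_usage=True)
```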
| 03-21-2023 08:09:00 | 03-21-2023 08:09:00 | @sgugger please help review<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,287 | closed | the line 32 in convert_llama_weights_to_hf is LlamaTokenizer not LlamaForTokenizer | https://github.com/huggingface/transformers/blob/c07a02a4b7892edfee22cbe57d3cdd9e10ae7a4d/src/transformers/models/llama/convert_llama_weights_to_hf.py#L37
from transformers import LlamaForCausalLM, LlamaForTokenizer
to
from transformers import LlamaForCausalLM, LlamaTokenizer
| 03-21-2023 06:40:38 | 03-21-2023 06:40:38 | Hi @zhl5842, thanks for raising this! Would you like to open a PR to fix this doc example? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,286 | closed | Tokenizer class LLaMATokenizer does not exist or is not currently imported. | Hi, I want to run the LLaMA model. But I am facing issues on AutoTokenizer. I am running the following command.
```
tokenizer = AutoTokenizer.from_pretrained(Path(f"/root/text-generation-webui/models/{shared.model_name}/"))
```
But it is giving me the following error:
```
ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported.
```
My transformers version is: 4.28.0.dev0
Python version: 3.10.9
Could you please help me in this regard. | 03-21-2023 06:39:57 | 03-21-2023 06:39:57 | I have found the following answer that solved my issue:
https://github.com/huggingface/transformers/issues/22222#issuecomment-1477171703 |
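A note for readers hitting the same error: a commonly reported cause at the time was a `tokenizer_config.json` containing `"tokenizer_class": "LLaMATokenizer"` (an older spelling written by earlier conversion scripts), while the class in `transformers` is `LlamaTokenizer`. This is an assumption about what the linked fix does, not a quote of it. A minimal sketch of the usual workaround (the path placeholder is hypothetical; editing the JSON field to `LlamaTokenizer` is the other option):

```python
from transformers import LlamaTokenizer

# Loading the slow tokenizer class directly side-steps the AutoTokenizer class lookup
# that fails on the misspelled class name in tokenizer_config.json.
tokenizer = LlamaTokenizer.from_pretrained("/root/text-generation-webui/models/<model-name>/")
```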
transformers | 22,285 | closed | Guard imports of PreTrainedTokenizerFast on is_tokenizers_available | # What does this PR do?
This PR guards the imports of `PreTrainedTokenizerFast` which depends on [huggingface/tokenizers](https://github.com/huggingface/tokenizers). This class could be imported when the `tokenizers` library isn't installed in which case an exception is raised.
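A minimal sketch of the guarding pattern (assuming the existing `is_tokenizers_available()` helper in `transformers.utils`; the actual PR may structure this differently):

```python
from transformers.utils import is_tokenizers_available

if is_tokenizers_available():
    from transformers import PreTrainedTokenizerFast
else:
    PreTrainedTokenizerFast = None  # callers must check before using the fast tokenizer

def require_fast_tokenizer():
    if PreTrainedTokenizerFast is None:
        raise ImportError("`tokenizers` is not installed; run `pip install tokenizers` to use fast tokenizers.")
    return PreTrainedTokenizerFast
```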
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 03-21-2023 05:47:09 | 03-21-2023 05:47:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @hvaara, thanks for creating this PR!
Am I correct in saying this PR is to address an error when running: `from transformers import AutoTokenizer` if `tokenizers` isn't installed?
I can see there's a similar import of `PreTrainedTokenizerFast` in `pipelines/__init__.py`. Could you add a safe import there as well? It seems it's just for types so it can probably be placed in the `if TYPE_CHECKING` logic. <|||||>Hi @amyeroberts!
> Am I correct in saying this PR is to address an error when running: from transformers import AutoTokenizer if tokenizers isn't installed?
Yes, it will attempt to import `tokenizers` when it is not installed and raise an error.
> I can see there's a similar import of PreTrainedTokenizerFast in `pipelines/__init__.py`. Could you a safe import there as well?
SG! I created this PR in WIP mode mainly to see if the tests would pass. Handling this in `pipelines/__init__.py` was on my TODO list assuming this PR passed tests.
> It seems it's just for types so can probably be placed in the if TYPE_CHECKING logic.
Good feedback. I'll look into updating the logic to reflect this and also handle the case in `pipelines/__init__.py`.
Perhaps I should also create an additional (set of) test(s) to verify the test suite can be run when certain dependencies (like `tokenizers`) does not exist? Are there similar tests like this already? If so, can you please point me to them, and if not, how do you propose I create a test like that?<|||||>@hvaara Great! Looking forward to having this added and safe imports.
For the test questions, I'll hand this over to our expert @ydshieh who will know :) <|||||>Hi @hvaara
Thank you for the PR 🚀
> Perhaps I should also create an additional (set of) test(s) to verify the test suite can be run when certain dependencies (like tokenizers) does not exist? Are there similar tests like this already? If so, can you please point me to them, and if not, how do you propose I create a test like that?
Regarding the testing, did you see the test suite failed to collect or run some tests when `tokenizers` is not installed? (rather than being skipped).
<|||||>Hi @hvaara Regarding
> Perhaps I should also create an additional (set of) test(s) to verify the test suite can be run when certain dependencies (like tokenizers) does not exist? Are there similar tests like this already? If so, can you please point me to them, and if not, how do you propose I create a test like that?
I think we can keep the PR simple as it is.
On our CI, we make sure the required dependencies are installed. And if we see some errors caused by this issue, we will fix by installing the dependencies.
For community contributors, it's still much better for them to install the dependencies if they want/need to run the tests locally. Otherwise, the test results may be all green (i.e. pass), but (a lot) of the test methods are actually skipped. This may lead to a gap in the communication.
Thank you for the PR again!.<|||||>> > Perhaps I should also create an additional (set of) test(s) to verify the test suite can be run when certain dependencies (like tokenizers) does not exist? Are there similar tests like this already? If so, can you please point me to them, and if not, how do you propose I create a test like that?
>
> I think we can keep the PR simple as it is.
SGTM.
I'll update the commit one more time, but that should be the last one assuming the tests pass and it LGTY. Sorry for the spam.<|||||>Thanks a lot for the help, and for merging the PR! |
transformers | 22,284 | closed | RuntimeError: "topk_cpu" not implemented for 'Half' | ### System Info
transformers 4.27.1
WSL2 with Ubuntu 20.04
GPU: 4090
CUDA VERSION: 11.8
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
**When I run 8-bit inference with BLOOM-7B1**
```
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(model_path, device_map = "auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)
input_ids = tokenizer(inputs, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=300, do_sample = True, top_k = 30, top_p = 0.85, temperature = 0.35, repetition_penalty=1.2)
```
### Expected behavior
```
/home/miniconda3/lib/python3.10/site-packages/transformers/generation/utils.py:1374: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`.
warnings.warn(
Traceback (most recent call last):
File "/mnt/d/project/aigc/belle/test_infer_int8.py", line 13, in <module>
outputs = model.generate(input_ids, max_new_tokens=300, do_sample = True, top_k = 30, top_p = 0.85, temperature = 0.35, repetition_penalty=1.2)
File "/home/miniconda3/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/miniconda3/lib/python3.10/site-packages/transformers/generation/utils.py", line 1452, in generate
return self.sample(
File "/home/miniconda3/lib/python3.10/site-packages/transformers/generation/utils.py", line 2482, in sample
next_token_scores = logits_warper(input_ids, next_token_scores)
File "/home/miniconda3/lib/python3.10/site-packages/transformers/generation/logits_process.py", line 92, in __call__
scores = processor(input_ids, scores)
File "/home/miniconda3/lib/python3.10/site-packages/transformers/generation/logits_process.py", line 302, in __call__
indices_to_remove = scores < torch.topk(scores, top_k)[0][..., -1, None]
RuntimeError: "topk_cpu" not implemented for 'Half'
``` | 03-21-2023 04:02:04 | 03-21-2023 04:02:04 | cc @younesbelkada <|||||>Hello @MarvinLong
The reason behind that issue is that you forgot to pass `input_ids` on the same device as the model (here GPU)
The script:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained(model_path, device_map = "auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)
input_ids = tokenizer(inputs, return_tensors="pt").input_ids
outputs = model.generate(input_ids.to(0), max_new_tokens=300, do_sample = True, top_k = 30, top_p = 0.85, temperature = 0.35, repetition_penalty=1.2)
```
Should work
Also, a warning has been triggered to warn you about this:
```
/home/miniconda3/lib/python3.10/site-packages/transformers/generation/utils.py:1374: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`.
```
Thanks! <|||||>For more context (in case you are interested):
This is because `device_map="auto"` will load the model using `accelerate`. Loading a model with `accelerate` will place several "forward hooks" to it. That will apply some post-processing to the input such as placing the output of the model on the same device as the input.
Here on the snippet you have shared, the input is placed on CPU, and you are loading the model in 8bit that will produce half-precision logits under the hood. In addition to that you are calling a sampling strategy that will involve calling some functions from pytorch such as [topk](https://pytorch.org/docs/stable/generated/torch.topk.html) on these logits that are not supported on CPU in half-precision, hence the error.<|||||>@younesbelkada
Thanks a lot, this works |
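The CPU/half-precision limitation is easy to reproduce in isolation (on the torch build reported in this issue; newer builds may implement the kernel):

```python
import torch

scores = torch.randn(1, 250880).half()  # CPU half-precision logits, like the 8-bit model produces
try:
    torch.topk(scores, k=30)
except RuntimeError as err:
    print(err)  # "topk_cpu" not implemented for 'Half'

# Keeping the tensors on the GPU (i.e. passing input_ids.to(0) so generate() samples on CUDA)
# avoids the CPU kernel entirely, which is why the fix above works.
```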
transformers | 22,283 | closed | Is biogpt's tokenizer bugged? | ### System Info
- `transformers` version: 4.27.1
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1 (True)
### Who can help?
@ArthurZucker and @younesbelkada could you please confirm this behavior is intended? Sorry if I mistagged. Thanks!
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer
tokenizer_name = "microsoft/BioGPT-Large"
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
print('bos token: ', tokenizer.bos_token, 'id: ', tokenizer.bos_token_id)
print('eos token: ', tokenizer.eos_token, 'id: ', tokenizer.eos_token_id)
print('token ids: ', tokenizer("this is a test")['input_ids'])
print('tokens: ', tokenizer.decode(tokenizer("this is a test")['input_ids']))
```
Output:
```
bos token: <s> id: 0
eos token: </s> id: 2
token ids: [2, 54, 34, 21, 229]
tokens: </s>this is a test
```
### Expected behavior
I would expect the tokenizer to prepend the BOS token (i.e. 0) and append the EOS token (i.e. 2) while currently the tokenizer prepends the EOS token, and does not add a special token to the end of the sequence of tokens. | 03-21-2023 03:48:39 | 03-21-2023 03:48:39 | @fedshyvana I believe this is how biogpt is trained on fairseq . For more information , you check into official repo of BioGpt.<|||||>@upjabir thanks for pointing it out! I am looking at https://github.com/microsoft/BioGPT/blob/main/src/language_model_prompt_dataset.py which I believe is the code you're referring to. If I understand correctly, they use:
[EOS] token_1, ..., token_n as input
and
token_1, ..., token_n [EOS] as target
i.e. it seems like they just don't use a separate BOS token at all.
But in the HF BioGPT model config it says:
"bos_token_id": 0
"eos_token_id": 2
Should we change it to:
"bos_token_id": 2
"eos_token_id": 2
Or would it not make any difference at all? Thank you!<|||||>@fedshyvana `bos_token_id` and `eos_token_id` are added to the vocabulary as we always do for every tokenizer, but when building inputs with special tokens we only use `eos_token_id`. Although the BOS token is not used when handling special tokens, I believe it can still be helpful in some rare cases.
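Putting the fairseq convention described above into code, using the ids shown earlier in this issue (`</s>` = 2); this is a small illustrative sketch, not the tokenizer's internal implementation:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/BioGPT-Large")
ids = tokenizer("this is a test", add_special_tokens=False)["input_ids"]  # [54, 34, 21, 229] per the output above

eos = tokenizer.eos_token_id  # 2
input_ids = [eos] + ids       # </s> token_1 ... token_n  (what the tokenizer produces by default)
labels = ids + [eos]          # token_1 ... token_n </s>  (the shifted target used during training)
```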
transformers | 22,282 | closed | Getting exception to trace t5 model in torchScript | ### System Info
```
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large", torchscript=True)
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", torchscript=True)
tokenized_dict = tokenizer(
["please answer the following question: what is the boiling point of nitrogen",], ["-320.4F",],
return_tensors="pt"
)
input_tuple = (tokenized_dict['input_ids'], tokenized_dict['attention_mask'])
traced_model = torch.jit.trace(model, input_tuple)
torch.jit.save(traced_model, "flan-t5-large.pt")
```
I was trying to trace `google/flan-t5-large` model in torchScript. But I'm facing following exception:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [29], in <cell line: 13>()
7 tokenized_dict = tokenizer(
8 ["please answer the following question: what is the boiling point of nitrogen",], ["-320.4F",],
9 return_tensors="pt"
10 )
11 input_tuple = (tokenized_dict['input_ids'], tokenized_dict['attention_mask'])
---> 13 traced_model = torch.jit.trace(model, input_tuple)
14 torch.jit.save(traced_model, "flan-t5-large.pt")
File ~/Library/Python/3.9/lib/python/site-packages/torch/jit/_trace.py:759, in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)
756 return func
758 if isinstance(func, torch.nn.Module):
--> 759 return trace_module(
760 func,
761 {"forward": example_inputs},
762 None,
763 check_trace,
764 wrap_check_inputs(check_inputs),
765 check_tolerance,
766 strict,
767 _force_outplace,
768 _module_class,
769 )
771 if (
772 hasattr(func, "__self__")
773 and isinstance(func.__self__, torch.nn.Module)
774 and func.__name__ == "forward"
775 ):
776 return trace_module(
777 func.__self__,
778 {"forward": example_inputs},
(...)
785 _module_class,
786 )
File ~/Library/Python/3.9/lib/python/site-packages/torch/jit/_trace.py:976, in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)
972 argument_names = get_callable_argument_names(func)
974 example_inputs = make_tuple(example_inputs)
--> 976 module._c._create_method_from_trace(
977 method_name,
978 func,
979 example_inputs,
980 var_lookup_fn,
981 strict,
982 _force_outplace,
983 argument_names,
984 )
985 check_trace_method = module._c._get_method(method_name)
987 # Check the trace against new traces created from user-specified inputs
File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)
1190 # If we don't have any hooks, we want to skip the rest of the logic in
1191 # this function, and just call forward.
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1182, in Module._slow_forward(self, *input, **kwargs)
1180 recording_scopes = False
1181 try:
-> 1182 result = self.forward(*input, **kwargs)
1183 finally:
1184 if recording_scopes:
File ~/Library/Python/3.9/lib/python/site-packages/transformers/models/t5/modeling_t5.py:1660, in T5ForConditionalGeneration.forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1657 decoder_attention_mask = decoder_attention_mask.to(self.decoder.first_device)
1659 # Decode
-> 1660 decoder_outputs = self.decoder(
1661 input_ids=decoder_input_ids,
1662 attention_mask=decoder_attention_mask,
1663 inputs_embeds=decoder_inputs_embeds,
1664 past_key_values=past_key_values,
1665 encoder_hidden_states=hidden_states,
1666 encoder_attention_mask=attention_mask,
1667 head_mask=decoder_head_mask,
1668 cross_attn_head_mask=cross_attn_head_mask,
1669 use_cache=use_cache,
1670 output_attentions=output_attentions,
1671 output_hidden_states=output_hidden_states,
1672 return_dict=return_dict,
1673 )
1675 sequence_output = decoder_outputs[0]
1677 # Set device for model parallelism
File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)
1190 # If we don't have any hooks, we want to skip the rest of the logic in
1191 # this function, and just call forward.
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1182, in Module._slow_forward(self, *input, **kwargs)
1180 recording_scopes = False
1181 try:
-> 1182 result = self.forward(*input, **kwargs)
1183 finally:
1184 if recording_scopes:
File ~/Library/Python/3.9/lib/python/site-packages/transformers/models/t5/modeling_t5.py:949, in T5Stack.forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
947 else:
948 err_msg_prefix = "decoder_" if self.is_decoder else ""
--> 949 raise ValueError(f"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds")
951 if inputs_embeds is None:
952 assert self.embed_tokens is not None, "You have to initialize the model with valid token embeddings"
ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds
```
I also tried following way:
```
from transformers import T5ForConditionalGeneration
import torch
tokens_tensor = torch.ones(1, 10, dtype=torch.long)
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", torchscript=True)
model.eval()
scripted_model = torch.jit.trace(model, (tokens_tensor, tokens_tensor))
torch.jit.save(traced_model, "flan-t5-large.pt")
```
But this giving me following error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [34], in <cell line: 7>()
5 model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", torchscript=True)
6 model.eval()
----> 7 scripted_model = torch.jit.trace(model, (tokens_tensor, tokens_tensor))
8 torch.jit.save(traced_model, "flan-t5-large.pt")
File ~/Library/Python/3.9/lib/python/site-packages/torch/jit/_trace.py:759, in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)
756 return func
758 if isinstance(func, torch.nn.Module):
--> 759 return trace_module(
760 func,
761 {"forward": example_inputs},
762 None,
763 check_trace,
764 wrap_check_inputs(check_inputs),
765 check_tolerance,
766 strict,
767 _force_outplace,
768 _module_class,
769 )
771 if (
772 hasattr(func, "__self__")
773 and isinstance(func.__self__, torch.nn.Module)
774 and func.__name__ == "forward"
775 ):
776 return trace_module(
777 func.__self__,
778 {"forward": example_inputs},
(...)
785 _module_class,
786 )
File ~/Library/Python/3.9/lib/python/site-packages/torch/jit/_trace.py:976, in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)
972 argument_names = get_callable_argument_names(func)
974 example_inputs = make_tuple(example_inputs)
--> 976 module._c._create_method_from_trace(
977 method_name,
978 func,
979 example_inputs,
980 var_lookup_fn,
981 strict,
982 _force_outplace,
983 argument_names,
984 )
985 check_trace_method = module._c._get_method(method_name)
987 # Check the trace against new traces created from user-specified inputs
File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)
1190 # If we don't have any hooks, we want to skip the rest of the logic in
1191 # this function, and just call forward.
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1182, in Module._slow_forward(self, *input, **kwargs)
1180 recording_scopes = False
1181 try:
-> 1182 result = self.forward(*input, **kwargs)
1183 finally:
1184 if recording_scopes:
File ~/Library/Python/3.9/lib/python/site-packages/transformers/models/t5/modeling_t5.py:1660, in T5ForConditionalGeneration.forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1657 decoder_attention_mask = decoder_attention_mask.to(self.decoder.first_device)
1659 # Decode
-> 1660 decoder_outputs = self.decoder(
1661 input_ids=decoder_input_ids,
1662 attention_mask=decoder_attention_mask,
1663 inputs_embeds=decoder_inputs_embeds,
1664 past_key_values=past_key_values,
1665 encoder_hidden_states=hidden_states,
1666 encoder_attention_mask=attention_mask,
1667 head_mask=decoder_head_mask,
1668 cross_attn_head_mask=cross_attn_head_mask,
1669 use_cache=use_cache,
1670 output_attentions=output_attentions,
1671 output_hidden_states=output_hidden_states,
1672 return_dict=return_dict,
1673 )
1675 sequence_output = decoder_outputs[0]
1677 # Set device for model parallelism
File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)
1190 # If we don't have any hooks, we want to skip the rest of the logic in
1191 # this function, and just call forward.
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1182, in Module._slow_forward(self, *input, **kwargs)
1180 recording_scopes = False
1181 try:
-> 1182 result = self.forward(*input, **kwargs)
1183 finally:
1184 if recording_scopes:
File ~/Library/Python/3.9/lib/python/site-packages/transformers/models/t5/modeling_t5.py:949, in T5Stack.forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
947 else:
948 err_msg_prefix = "decoder_" if self.is_decoder else ""
--> 949 raise ValueError(f"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds")
951 if inputs_embeds is None:
952 assert self.embed_tokens is not None, "You have to initialize the model with valid token embeddings"
ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds
```
How should I trace t5 model? Can you provide any example? Thanks
### Who can help?
@ArthurZucker @patric
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large", torchscript=True)
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", torchscript=True)
tokenized_dict = tokenizer(
["please answer the following question: what is the boiling point of nitrogen",], ["-320.4F",],
return_tensors="pt"
)
input_tuple = (tokenized_dict['input_ids'], tokenized_dict['attention_mask'])
traced_model = torch.jit.trace(model, input_tuple)
torch.jit.save(traced_model, "flan-t5-large.pt")
```
### Expected behavior
I should be able to trace the model in torchscript file. | 03-20-2023 22:13:06 | 03-20-2023 22:13:06 | Hey! As the error mentions, you have to provided `decoder_input_ids` .
The following works:
```python
input_tuple = (tokenized_dict['input_ids'], tokenized_dict['attention_mask'], torch.Tensor([[2]]).long())
traced_model = torch.jit.trace(model, input_tuple)
torch.jit.save(traced_model, "flan-t5-small.pt")
```<|||||>Hi @ArthurZucker ,
Thanks for your reply. I'm able to trace the model now. But how can I load the model back and get the prediction from the traced model?
I tried to use this:
```
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
model = torch.jit.load("flan-t5-large.pt")
model.eval()
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large")
t_input = "translate English to French: The universe is a dark forest."
token = tokenizer(t_input, return_tensors="pt")
tokens = model.generate(
input_ids=token["input_ids"],
attention_mask=token["attention_mask"],
decoder_input_ids=token["input_ids"],
)
print(tokens)
output = tokenizer.decode(tokens[0].squeeze(), skip_special_tokens=True)
print(output)
```
Then I'm seeing error like:
```
Traceback (most recent call last):
File "/Volumes/workplace/opensearch-py-ml/src/opensearch-py-ml/test1.py", line 13, in <module>
tokens = model.generate(
File "/Users/dhrubo/Library/Python/3.9/lib/python/site-packages/torch/jit/_script.py", line 785, in __getattr__
return super(RecursiveScriptModule, self).__getattr__(attr)
File "/Users/dhrubo/Library/Python/3.9/lib/python/site-packages/torch/jit/_script.py", line 502, in __getattr__
return super(ScriptModule, self).__getattr__(attr)
File "/Users/dhrubo/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py", line 1269, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'RecursiveScriptModule' object has no attribute 'generate'
```
I tried with the `forward` method too:
```
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
model = torch.jit.load("flan-t5-large.pt")
model.eval()
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large")
t_input = "translate English to French: The universe is a dark forest."
token = tokenizer(t_input, return_tensors="pt")
tokens = model.forward(
input_ids=token["input_ids"],
attention_mask=token["attention_mask"],
decoder_input_ids=token["input_ids"],
)
print(tokens)
output = tokenizer.decode(tokens[0].squeeze(), skip_special_tokens=True)
print(output)
```
Then I'm facing error like:
```
Traceback (most recent call last):
File "/Volumes/workplace/opensearch-py-ml/src/opensearch-py-ml/test1.py", line 21, in <module>
output = tokenizer.decode(tokens[0].squeeze(), skip_special_tokens=True)
File "/Users/dhrubo/Library/Python/3.9/lib/python/site-packages/transformers/tokenization_utils_base.py", line 3471, in decode
return self._decode(
File "/Users/dhrubo/Library/Python/3.9/lib/python/site-packages/transformers/tokenization_utils.py", line 931, in _decode
filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)
File "/Users/dhrubo/Library/Python/3.9/lib/python/site-packages/transformers/tokenization_utils.py", line 906, in convert_ids_to_tokens
index = int(index)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
```
<|||||>1. Not entirely sure that the way to script the generate function is correct as the error mentions. I suspect that only the forward path is supported. I am guessing that you should try something like `torch.jit.trace(model.generate,...)` but pinging @gante here as I am not very familiar with our current jit support
2. You are not decoding correctly. The output of the model are not individual tokens but `logits` which is a distribution of probability over what the next token is. This means that you first have to extract the argmax, and then decode the index.<|||||>Hey @dhrubo-os 👋
`model.generate` is not fully exportable with `torch.jit`, but the model forward pass is. We have just added it to our examples, the workaround is to create an ad hoc model class with the jitted model + the generate function -- see [here](https://github.com/huggingface/transformers/blob/5fd4e3c87c685fba2dd9615be62131748a8b5ee3/examples/pytorch/text-generation/run_generation.py#L384)
I hope this helps 🤗 <|||||>@ArthurZucker Could you kindly give an example of how to correctly decode?
> You are not decoding correctly. The output of the model are not individual tokens but logits which is a distribution of probability over what the next token is. This means that you first have to extract the argmax, and then decode the index.<|||||>@gante Hi, I tried out the ad-hoc model class with the jitted model you mentioned [here](https://github.com/huggingface/transformers/blob/5fd4e3c87c685fba2dd9615be62131748a8b5ee3/examples/pytorch/text-generation/run_generation.py#L384) on a `AutoModelForSeq2SeqLM` for FLAN-T5. But I am getting an error. Is there any adjustment needed to the script you linked to be compatible with T5 models?
Env:
```
Optimum - 1.8.8
Transformers - 4.29.2
Torch - 1.13.1
onnxruntime-gpu - 1.15.1
onnx - 1.14.0
Linux
Python 3.8.16
```
Error:
```
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/dir/ai/models/py/summarization/modeling/flan/torchscript.py", line 79, in <module>
outputs = fallback_model.generate(**tokenized_dict)
File "/home/.virtualenvs/ai/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/.virtualenvs/ai/lib/python3.8/site-packages/transformers/generation/utils.py", line 1515, in generate
return self.greedy_search(
File "/home/.virtualenvs/ai/lib/python3.8/site-packages/transformers/generation/utils.py", line 2332, in greedy_search
outputs = self(
File "/home/dir/ai/models/py/summarization/modeling/flan/torchscript.py", line 26, in __call__
outputs = self._optimized(*trace_graph_inputs)
File "/home/.virtualenvs/ai/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
RuntimeError: forward() expected at most 4 argument(s) but received 5 argument(s). Declaration: forward(__torch__.transformers.models.t5.modeling_t5.___torch_mangle_2224.T5ForConditionalGeneration self, Tensor input_ids, Tensor attention_mask, Tensor decoder_input_ids) -> ((Tensor, ((Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor)), Tensor))
```<|||||>Hi @shannonphu 👋
Our support for torchscript is mainly hands-off, I'm afraid I don't have the bandwidth to dive deeper on bugs :) Looking at the trace, it seems like there is something wrong with the input preprocessing. |
transformers | 22,281 | closed | Fix various imports | # What does this PR do?
While building the v2 of the test fetcher, I discovered (I mean the util discovered) that some imports in the source code are wrong. This PR fixes all of them. | 03-20-2023 18:57:52 | 03-20-2023 18:57:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,280 | closed | fix: Text splitting in the BasicTokenizer | # What does this PR do?
Note: The tests are failing because of repo-consistency, but I intentionally haven't made the change across the repo until we confirm what change we want to make. Will remove [Don't merge] tag once done.
This fixes an issue related to the BasicTokenizer.
Initially this looked to fix #22166 by updating the `_run_split_on_punc` method in the BasicTokenizer so that apostrophes are split off without starting a new word. In the issue, it was noted that apostrophes weren't being split properly: `should've` was being converted to `should`, `'`, and `ve` instead of `should` and `'ve`.
However, when adding tests it became apparent that there are other cases where the BasicTokenizer fails too, such as capturing `!!` as two separate tokens with id 256 instead of the single token 748, which is in the [vocab](https://huggingface.co/openai/clip-vit-base-patch32/resolve/main/vocab.json).
To address these, I modified `_run_split_on_punc` in the BasicTokenizer to also split on passed patterns, renamed it `_split_on_punc_or_pattern`, and added tests for it.
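To give a concrete feel for the behaviour described above, here is a rough, illustrative sketch of the general idea — this is not the actual patch (the real method also handles things like `never_split`):
```python
import re
import unicodedata


def _is_punctuation(char):
    """ASCII symbol ranges and Unicode P* categories count as punctuation."""
    cp = ord(char)
    if (33 <= cp <= 47) or (58 <= cp <= 64) or (91 <= cp <= 96) or (123 <= cp <= 126):
        return True
    return unicodedata.category(char).startswith("P")


def split_on_punc_or_pattern(text, patterns=None):
    """Split on punctuation, but keep any span matching one of `patterns` intact."""
    pieces = re.split("(" + "|".join(patterns) + ")", text) if patterns else [text]
    output = []
    for piece in pieces:
        if patterns and any(re.fullmatch(p, piece) for p in patterns):
            output.append(piece)  # e.g. keep "'ve" or "!!" together
            continue
        start_new_word = True
        for char in piece:
            if _is_punctuation(char):
                output.append(char)  # every punctuation char becomes its own piece
                start_new_word = True
            elif start_new_word:
                output.append(char)
                start_new_word = False
            else:
                output[-1] += char
    return [piece for piece in output if piece]


# without patterns this reproduces the reported behaviour: ['should', "'", 've']
print(split_on_punc_or_pattern("should've!!", patterns=[r"'(?:ve|s|t|re|d|ll|m)", r"!!"]))
# ['should', "'ve", '!!']
```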
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@ArthurZucker | 03-20-2023 17:26:39 | 03-20-2023 17:26:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I think this was still a fix we were interested in having for users who don't have `ftfy` installed.<|||||>> I think this was still a fix we were interested in having for users who don't have `ftfy` installed.
Oh ok reopening in that case<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your comments Arthur! Looking back at it to make your changes I realized two things:
- we actually don't need to add the pattern splitting to the BasicTokenizer, we just need to *not* split on punctuation (just need to get out of the way of the byte pair encoding happening later)
- the already written [test_check_encoding_slow_fast](https://github.com/huggingface/transformers/blob/b29fd6971d9cd6ba2a824628effe243f543b8f61/tests/models/clip/test_tokenization_clip.py#L78) test is a bit more comprehensive, so I used that and it passes locally now without ftfy, I'll call out the additional edits I had to make below for this
Will wait for your review again before replicating it in the other BasicTokenizers.
Also, would you like me to add additional testing? I.e., should tokenizers that use BasicTokenizer each have a test that checks that slow and fast match, like the comprehensive check mentioned above that's in CLIP, or should they have a simpler, truncated version of it?<|||||>Okay! I'll review again, can you make sure `make quality` and `make repo-consistency` both pass?
<|||||>Hey @ArthurZucker just checking in, anything else wanted here?<|||||>Nope thanks for the ping, it is just that it is a lot of changes on a lot of models (a lot of old models too 😉 ). Getting to it! <|||||>Update: just keeping this PR to the punc splitting param, reasoning below. Lmk if you have other thoughts!
Wrote a [script](https://colab.research.google.com/drive/1tz4yZ_tHsGFvMlhQoihW4kCbCsxAGXco#scrollTo=bIaelmkHsWie) I ran locally to get a directional sense of how much of a difference each of these 3 changes (punctuation split, remove control chars, normalizing) was having to help choose how to address the above. It appears the punc split edit this PR was primarily addressing does help a fair bit, seems to increase compatibility with the CLIP fast tokenizer ~20% to near 100% for the languages tested. The control chars and normalizing edits don’t appear to make much of a difference at all (0% and ~0.1% improvement, respectively). Again this analysis was imperfect but I figure from this the cost-benefit for this PR suggests just keeping it to the punctuation splitting.
Also, I misspoke earlier saying control chars are in the CLIP vocab, they aren’t. Instead the discrepancy between the basic tokenizer and the fast one I was addressing was that the former strips them and the latter marks them as UNK. I don’t believe having this is likely to make much of a difference for inference or tokenization, as the script run suggests, since control chars are rare and UNK tokens don’t provide much info to the model.<|||||>Looking at the output for `ar` it seems NEW + normalize is the best match isn't it ?
I think this proves that `NFC` is indeed a good addition which was previously missing !
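For reference, the normalization being discussed boils down to something like this (illustrative only):
```python
import unicodedata

decomposed = "cafe\u0301"  # "cafe" + combining acute accent
print(unicodedata.normalize("NFC", decomposed) == "caf\u00e9")  # True: folded into a single codepoint
```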
Thanks a lot for this testing script, this adds a lot of value for future potential changes !<|||||>>I think this proves that NFC is indeed a good addition which was previously missing !
Thanks and sounds good, I'll put it back in. I had removed it only because I couldn't test all languages due to the time it would take so I wasn't certain if there could be other issues with it, and the improvements were somewhat modest.<|||||>Hey @Narsil just checking in, anything else wanted here?<|||||>No we just need a core maintainer's approval.
Sorry I forgot about this PR.
@sgugger for final review.<|||||>Noticed the linked issue was marked stale, this PR probably will be soon too. Any other action wanted here?
I think as the script shows this will significantly improve how well the Basic Tokenizer matches up with the fast one for CLIP, the lingering question was just around whether the NFC normalizing change was approved or whether that part should be removed. @Narsil @sgugger <|||||>Just waited for a clarification on whether @Narsil was fine with merging this or not. :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Since @Narsil is not commenting, I'm guessing we can merge :man_shrugging: <|||||>Oh sorry ! Missed this one it's OK ! |
transformers | 22,279 | closed | Move torch.compile() wrapping after DDP/FSDP wrapping to ensure correct graph breaks during training |
# What does this PR do?
This is a simple PR that moves the wrapper for torch.compile() after those for DDP and FSDP, given that the order is important for those two pieces to work together during training.
Fixes #22215
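For context, a minimal sketch of the ordering this PR enforces (simplified pseudo-setup, not the actual `Trainer` code; `MyModel`, `device` and `local_rank` are placeholders and a process group is assumed to already be initialized):
```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

model = MyModel().to(device)                 # placeholder model / device
model = DDP(model, device_ids=[local_rank])  # 1) wrap for distributed training first
model = torch.compile(model)                 # 2) then compile, so graph breaks line up with the DDP wrapper
```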
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 03-20-2023 17:01:52 | 03-20-2023 17:01:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I don't have permissions to merge the PR, so I don't know what the process looks like from here<|||||>I was just waiting for the tests to complete ;-) |
transformers | 22,278 | closed | Example of pad_to_multiple_of for padding and truncation guide & docstring update | This PR adds a minor update to the docs as previously it was not clear that `pad_to_multiple_of` has to be used with `padding=True`.
Based on https://huggingface.slack.com/archives/C027NLU6CE9/p1679325764920509
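For example (the checkpoint here is just an illustration):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# pad_to_multiple_of only takes effect when padding is enabled
batch = tokenizer(
    ["a short sentence", "a slightly longer example sentence"],
    padding=True,
    pad_to_multiple_of=8,
    return_tensors="pt",
)
print(batch["input_ids"].shape)  # the sequence length is rounded up to a multiple of 8
```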
| 03-20-2023 15:58:44 | 03-20-2023 15:58:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for iterating! |
transformers | 22,277 | closed | deploy whisper by passing last transcribed sentences to decoder's past_key values | I'm working on using the Whisper model for real-time live transcription. I have to run the model on audio chunks to get a sense of real-time transcription, i.e. every 1 second I feed in the audio of the last 5 seconds.
For such a task, I have to merge the transcribed text, since the audio has 4 seconds of overlap with the previous samples. Thus, the output transcription at each 1-second time-step has some words in common with the previous ones.
There are several solutions for merging such transcribed texts, such as using a language model or dynamic programming.
But I have an idea to use the Whisper model itself for merging the text, since it already contains a language model.
I want to pass the previous transcriptions to its decoder's past key values and do the generation conditioned on the text generated at previous time-steps.
Do you have any idea how I could implement this?
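One rough sketch of what I have in mind (this assumes a `transformers` version that exposes Whisper prompt conditioning via `get_prompt_ids` and the `prompt_ids` argument of `generate`; `audio_chunk` is a placeholder for the latest 5-second waveform):
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

previous_text = "transcription produced for the earlier, overlapping chunk"  # placeholder
inputs = processor(audio_chunk, sampling_rate=16000, return_tensors="pt")

# condition generation on the previous transcription instead of merging strings afterwards
prompt_ids = processor.get_prompt_ids(previous_text, return_tensors="pt")
generated_ids = model.generate(inputs.input_features, prompt_ids=prompt_ids)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```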
@sanchit-gandhi | 03-20-2023 15:58:37 | 03-20-2023 15:58:37 | Hi @hannan72, thanks for raising an issue!
Questions like this should be asked in the [forum](https://discuss.huggingface.co/) as we try to reserve github issues for bugs and specific feature requests. <|||||>> Hi @hannan72, thanks for raising an issue!
>
> Questions like this should be asked in the [forum](https://discuss.huggingface.co/) as we try to reserve github issues for bugs and specific feature requests.
Thanks for informing me. I posted there. <|||||>Hey @hannan72 - do you have the link to the forum post? I can reply directly there :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |